
NVIDIA's $590 Billion Flash Crash and Rebound: How Did DeepSeek Trigger a Misjudgment of AI Computing Demand?


RockFlow Shayne

April 18, 2025 · 14 min read


Highlights:

1) DeepSeek claims to train models at one-tenth the cost of GPT-4, sparking fears that demand for computing power would collapse. However, "underwater" demand, such as multimodal models and real-time inference, is gradually surfacing, and together with accelerating model iteration it has actually pushed up the market's long-term demand for computing power.

2) Beyond the blowout numbers in its recent financial reports, NVIDIA's true competitive edge lies in the ecosystem moat built on 280 million lines of CUDA code. Its software stack has become an industry standard with extremely high migration costs; even if a competitor's hardware performance comes close, the software ecosystem gap remains difficult to bridge.

3) Current data shows that, significant as DeepSeek's breakthrough is, no tech giant has cut capital expenditure on computing and data centers. The upcoming Q4 2024 financial report may once again lift investor sentiment. NVIDIA's 2025 revenue outlook remains highly optimistic, and new highlights should emerge at the GTC conference on March 17 (watch the GB300, Rubin, and physical-AI projects such as robotics).

Introduction

On January 27, 2025, the US stock market witnessed a historic scene: NVIDIA plunged 17% in a single day, erasing $590 billion in market value and setting the record for the largest one-day market value loss in US stock market history.

The epicenter of this earthquake was a Chinese AI company, DeepSeek. It claimed to have trained a model of comparable performance at one-tenth the cost of GPT-4, instantly shattering the market consensus on demand for AI computing power.

What is even more dramatic is that in less than a month, NVIDIA's stock price recovered essentially all the lost ground. Bulls who stayed optimistic and bought the dip were vindicated.

RockFlow's research team believes this NVIDIA roller coaster reflects the market's overreaction to short-term sentiment, while the subsequent rebound confirms the irreplaceability of its core competitiveness and industry position. Last August, we analyzed NVIDIA's true competitive edge, which the market had not yet fully appreciated, and explained why we believe NVIDIA is not only a great company but also an investment target with high potential returns.

This article takes a deep look at why the $590 billion flash crash was a misjudgment, as well as the long-term reasons and short-term catalysts behind our optimism about NVIDIA.

The essence of market panic: misjudging the technical path and demand structure

The release of the DeepSeek R1 model landed on the computing power market like a bomb: it was reportedly trained on only 2,000 H800 GPUs for about $6 million, compared with GPT-4's training cost of as much as $100 million and Llama 3's 16,000 H100s. This order-of-magnitude cost difference abruptly threw the market into a panic over a supposed "computing demand collapse."

Some investors mistakenly believe that improving model efficiency will directly reduce reliance on NVIDIA GPUs, leading to short-term pressure on its stock price.


However, there is a misjudgment here: the illusion of cost accounting. The $6 million figure covers only GPU rental and electricity, while the real training cost includes many implicit expenses such as data cleaning, algorithm engineers' salaries, and failed experiments. Research firm SemiAnalysis pointed out that with the full cost included, DeepSeek's actual expenditure may reach the level of $30 million.
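As a rough illustration of this accounting gap, the headline number versus a fuller cost picture can be sketched in a few lines. Only the roughly $6 million GPU/electricity figure and SemiAnalysis's roughly $30 million full-cost estimate come from the text; every individual line item below is a hypothetical placeholder:

```python
# Hypothetical breakdown of "headline" vs. full training cost (in $ millions).
# Only the ~$6M and ~$30M totals are cited in the article; the split is invented.
costs_musd = {
    "gpu_lease_and_electricity": 6.0,    # the widely quoted headline figure
    "data_collection_and_cleaning": 8.0,
    "algorithm_engineer_salaries": 10.0,
    "failed_experiments": 6.0,
}

headline = costs_musd["gpu_lease_and_electricity"]
full_cost = sum(costs_musd.values())

print(f"Headline cost: ${headline:.0f}M")
print(f"Full cost:     ${full_cost:.0f}M ({full_cost / headline:.0f}x the headline)")
```

Under these assumed line items, the full cost comes out at five times the headline figure, which is the shape of the gap SemiAnalysis describes.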

More importantly, the panic overlooked the "iceberg structure" of AI computing demand: visible model-training demand may account for only 10%, while the underwater portion includes the exponential growth in computing consumed by multimodal models (video/3D generation), the hard requirement for low-latency hardware in real-time inference (such as Tesla's autonomous driving), and the repeated training runs driven by ever-faster model iteration.

What is genuinely disruptive about DeepSeek is that its open-source strategy (the free and open R1 model) directly challenges OpenAI's subscription model ($20-200/month), shaking the valuation framework for AI-as-a-service. It may also divert the long tail of the market: small and medium-sized developers may turn to cost-effective alternatives such as Trainium 2. NVIDIA's core customers (hyperscale data centers), however, are hard to migrate because they are bound to the CUDA ecosystem.

Overall, the short-term market panic rests on two core misjudgments:

First, although model optimization (such as sparse computation and algorithmic improvements) reduces training costs, the accelerating iteration of large models (multimodal capabilities, real-time learning) actually increases total long-term computing demand.

Second, the market underestimated the resilience of NVIDIA's demand structure: its core customers are concentrated among cloud vendors (AWS, Azure) and leading AI companies (OpenAI, Meta), whose capital expenditure plans center on "infrastructure expansion" rather than minimizing the cost of any single task.

NVIDIA's rebound driving force: fundamentals and ecological barriers

While the market buzzed over DeepSeek, don't forget the reality revealed by NVIDIA's Q3 2024 financial report: Hopper-architecture chips (the H200) are taking over data centers at the fastest pace in the company's history. Core highlights of last quarter's report include, but are not limited to:

- Data center revenue: $30.8 billion (+112% YoY), with cloud service providers (CSPs) contributing over 50%
- Blackwell demand: the first batch of samples sparked a buying frenzy; Jensen Huang said "demand exceeds the most optimistic expectations"
- Margin trade-off: although Blackwell's gross margin will dip as mass production begins, it should recover to the mid-70s as capacity ramps


Hopper's strong sales in the third quarter helped data center revenue grow 112% year-over-year to $30.8 billion. CFO Colette Kress said on the earnings call:

H200 sales have grown continuously into the billions of dollars, making it the fastest-ramping product in the company's history. The H200 doubles inference performance and improves TCO by 50%. Cloud service providers account for about half of data center sales, and their revenue has more than doubled year-on-year.

Today, AWS, CoreWeave, and Microsoft Azure all offer cloud instances based on the NVIDIA H200, with Google Cloud and Oracle Cloud Infrastructure (OCI) launching soon. Beyond the strong growth of the large CSPs, as North America, EMEA, and Asia-Pacific build out NVIDIA cloud instances and sovereign clouds, NVIDIA's regional GPU cloud revenue has doubled year-on-year.

Beyond the CSPs, revenue from consumer internet companies has also more than doubled, as they purchase Hopper chips to support training of next-generation AI models, multimodal and agentic AI, deep learning recommendation engines, generative AI, and content creation.

Kress stated on the call that the company delivered the first Blackwell samples to customers in the third quarter. Blackwell is NVIDIA's latest architecture family, and its performance has driven intense demand. Two or three years ago, training a large AI model on existing hardware took weeks or even months; Blackwell can dramatically shorten that time. In a fast-moving AI industry, the faster developers bring innovative products to market, the greater their chance of success.

Management forecast total revenue of $37.5 billion for Q4. If this expectation is met, Q4 revenue would grow 69.7% YoY, and full-year 2024 revenue would grow 111% over 2023 to $128.66 billion.

Beyond the explosive numbers in its financial reports, NVIDIA's true competitive edge is hidden in the ecosystem moat built on 280 million lines of CUDA code. Its software stack advantages, CUDA and AI libraries such as TensorRT, have become industry standards, and migration costs are extremely high. Even if a competitor (such as AMD) achieves similar hardware performance, the software ecosystem gap remains difficult to bridge.

In addition, NVIDIA's system-level solutions (full-stack offerings such as DGX SuperPOD and OVX servers) are deeply bound to customer infrastructure; replacing them would mean rebuilding the entire technology stack. This ecosystem control is further strengthened in the Blackwell era: NVLink 5.0 delivers ultra-high-speed chip-to-chip interconnect bandwidth several times that of AMD's MI300X. When the hardware performance gap passes a certain critical point, cost-effectiveness comparisons lose their meaning.

NVIDIA was originally a hardware company that made GPUs, but it is evolving into a provider of end-to-end AI solutions, supplying software tools that let customers build chatbots, AI virtual assistants, and virtual agents. This is no longer just a chip vendor, but a mature AI giant.


The company also emphasizes the total cost of ownership (TCO) of its entire AI infrastructure solution rather than just the chips it manufactures, which makes it harder for competitors selling only cost-effective chips to compete. NVIDIA's TCO calculation includes the full hardware and software ecosystem, support, operating expenses, and the ability to deploy AI solutions quickly. Management's message to customers is: "Our AI chips may cost more upfront, but over the long run the complete AI solution saves money."
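The TCO pitch can be made concrete with a toy calculation. All of the numbers below (chip prices, chip counts, power draw, integration cost) are hypothetical, chosen only to show how a pricier chip backed by a mature software ecosystem can still win on total cost of ownership:

```python
# Toy TCO model: hardware + electricity + software/integration over a period.
# Every figure is hypothetical; this is an illustration, not real pricing.
def tco(chip_price: float, n_chips: int, power_kw: float,
        integration_cost: float, years: float = 4.0,
        usd_per_kwh: float = 0.10) -> float:
    """Total cost of ownership in dollars over `years` of 24/7 operation."""
    hours = years * 365 * 24
    electricity = n_chips * power_kw * hours * usd_per_kwh
    return chip_price * n_chips + electricity + integration_cost

# Hypothetical: the incumbent's chips cost twice as much, but the workload
# needs fewer of them, and its mature software stack needs far less
# integration engineering than the cheaper challenger's.
incumbent = tco(chip_price=30_000, n_chips=1_000, power_kw=0.70,
                integration_cost=2_000_000)
challenger = tco(chip_price=15_000, n_chips=1_800, power_kw=0.75,
                 integration_cost=15_000_000)

print(f"Incumbent TCO:  ${incumbent / 1e6:.1f}M")   # higher sticker price
print(f"Challenger TCO: ${challenger / 1e6:.1f}M")  # cheaper chips, costlier total
```

Under these assumed inputs, the incumbent's solution ends up roughly $12 million cheaper over four years despite a 2x chip price, which is the shape of the argument NVIDIA's management is making to customers.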

Therefore, although DeepSeek and the broader trend of falling AI costs pose a certain threat, NVIDIA has little reason for deep concern.

The "arms dealer" logic of AI competition remains unchanged

The current theme among tech giants is that business growth is still constrained by their ability to build data centers and supply computing power. None of them has cut capital expenditure on computing and data centers, and their cloud-services growth remains strong.

Over the past 12 months, the top three CSPs have invested $186 billion to expand their computing power. For the new year, Meta expects capital expenditures of $60-65 billion, Microsoft plans to invest about $80 billion in data center expansion, Amazon has set a baseline of at least $100 billion, and Alphabet expects capital expenditures of $75 billion.

Meanwhile, the recently launched "Stargate" project claims to cost $500 billion and aims to promote the development of US AI. Oracle alone has identified 100 data centers for future development. In the field of physical AI, Tesla completed the assembly of 50,000 H100 clusters at the Texas Gigafactory in 2024, which will be used for autonomous driving. Musk said that the computing power of the Optimus humanoid robot needs to be increased by 10 times.

Morgan Stanley research director Vishwanath Tirupattur believes that although DeepSeek's breakthrough is significant, it will not lead to a collapse in capital expenditures of giants with significant influence in AI and related fields.

He noted that the sharp decline in computing costs in the 1990s offers a useful reference. The investment boom of that era was driven mainly by two factors: the pace at which companies replaced depreciated capital, and the sustained, significant decline in the price of computing capital relative to output prices. If DeepSeek's efficiency gains reflect a similar phenomenon, the cost of AI capital may fall in a way that supports, rather than undermines, corporate spending.

Tech giants continue to increase spending, undoubtedly providing strong support for computing companies such as Nvidia. Amazon CEO Andy Jassy pointed out in a conference call that most companies rely on Nvidia's chips for AI computing, and in the foreseeable future, Amazon will continue to maintain a cooperative relationship with Nvidia.

Moreover, Jensen Huang repeatedly stated last year that countries around the world plan to build and operate their own AI infrastructure domestically, which will drive demand for NVIDIA products.

He emphasized in an interview with Bloomberg that countries such as India, Japan, France, and Canada are discussing the importance of investing in "sovereign AI capabilities". "Their natural resources - data - should be refined and produced for their countries. The recognition of sovereign AI capabilities is global."

Of course, it must be acknowledged that even as US tech giants invest heavily in AI, NVIDIA faces growing uncertainties: for example, the two AI ASIC giants that have benefited most recently are Broadcom and Marvell Technology.

With their technological lead in chip-to-chip communication and high-speed data transmission, Broadcom and Marvell Technology have become core forces in the AI ASIC market. Tech giants such as Microsoft, Amazon, Google, and Meta are working with Broadcom or Marvell to develop their own AI ASICs for massive inference deployments.

In addition, according to Reuters, OpenAI is advancing plans to reduce its reliance on NVIDIA by developing its first generation of in-house AI chips, opening a new chapter in its chip supply. Sources say it will complete the design of its first internal chip in the coming months and plans to send it to TSMC for manufacturing, with mass production targeted for 2026.

However, these factors do not undermine the "arms dealer" logic of NVIDIA's position in the AI race, and NVIDIA believers see the DeepSeek-driven selloff as a bottom-fishing opportunity. They point to three supporting factors.

Market confidence in the Hopper and Blackwell chips keeps rising. Although investors have worried about demand for large training clusters, there are signs that such clusters are still being built. Meanwhile, the inference market is expected to drive NVIDIA's growth for many years, and NVIDIA's position in inference is solid.

In the short term, the upcoming Q4 2024 financial report may once again lift investor sentiment. NVIDIA is expected to reconfirm Blackwell's execution, guide data center revenue to grow more than 60% year-on-year in 2025, and build momentum for the GTC conference on March 17 (focused on physical-AI projects such as the GB300, Rubin, and robotics).

Conclusion

The RockFlow research team believes the DeepSeek shock will be remembered in the history of technology not because it changed the rules of the game, but because it confirmed how unshakable those rules are: in the infinite war of AI, computing power is not an option but a necessity; not a cost item but an asset.

As the "infrastructure provider" of the global AI arms race, NVIDIA's irreplaceability will keep strengthening through the industry's expansion cycle. Despite long-term disruption from ASICs and in-house chips, NVIDIA will remain the biggest winner of the AI computing build-out over the next 3-5 years.

When the market cheered the "efficiency revolution," NVIDIA proved within a month that true competitive edge never lives in the gross margin line of a financial statement. It lives in the "import torch.cuda" typed by every developer, in the roaring DGX SuperPODs of every data center, and in humanity's eternal desire to push the boundaries of intelligence.

© 2025 Rockalpha Limited. All Rights Reserved.

Rockalpha Limited is registered on the New Zealand Financial Service Providers Register (FSP: 1001454). Rockalpha Limited's registration can be verified on the Financial Service Providers Register. Rockalpha Limited is a member of the Insurance & Financial Services Ombudsman Scheme, an independent dispute resolution service provider. Rockalpha Limited is not licensed by any New Zealand regulatory body to provide money or property services to clients, and Rockalpha Limited's registration on the New Zealand Financial Service Providers Register or its membership of the Insurance & Financial Services Ombudsman Scheme does not mean that Rockalpha Limited is subject to active regulation or oversight by any New Zealand regulator.