NVIDIA: The Quiet Shift from Chips to AI Economy

Summary
– At CES 2026, with inference overtaking training as the main cost driver of AI at scale, NVIDIA signaled that it is prioritizing efficiency, orchestration, power, and reliability.
– AI infrastructure decisions increasingly carry lock-in across software, networking, and operations, giving NVIDIA more durable switching-cost advantages than hardware alone would.
– The CoreWeave partnership gives NVIDIA visibility into demand, real-world inference data, and deeper system-level platform lock-in, validating its inference strategy.
– Despite a forward P/E above 40x, NVIDIA's PEG of 1.0 to 1.1 implies conservative earnings expectations given strong visibility and durable AI economics.
NVIDIA (NVDA) is no longer winning at AI simply by having the fastest chips; it is winning through the economics of large-scale AI. CES 2026 supported this claim: the Rubin platform was positioned not around FLOPS but around efficiency, power consumption, orchestration, and reliability. This marks a fundamental shift, as the dominant cost in large-scale AI moves from training to inference.
Decision-making about AI infrastructure is no longer just about hardware; it involves commitments to software and operations, with high switching costs accruing across the orchestration, networking, and reliability layers. NVIDIA designs with this model in mind, securing cost-per-token advantages.
CoreWeave (CRWV) serves as further evidence for this thesis in terms of demand, inference data, and lock-in. While the stock may appear expensive on a P/E basis, the PEG ratio and earnings sustainability suggest current expectations are already conservative. NVIDIA doesn't need to outperform its peers; it just needs to deliver.
Why OpenAI's Diversification in Inference Isn't a Threat to NVIDIA
The recent Reuters report that OpenAI is considering alternatives to NVIDIA's chips might cause some concern, but I think that reading is an oversimplification. OpenAI isn't abandoning NVIDIA. The report surfaced just days before NVIDIA announced a $20 billion strategic investment in OpenAI, suggesting the relationship between the two companies is strengthening rather than weakening.
The real discussion revolves around inference workloads, which account for about 80% of OpenAI's computing load. Inference is a memory-bound problem where responsiveness matters more than raw computational performance. I believe OpenAI is merely fine-tuning the latency of products like Codex, not abandoning NVIDIA.
OpenAI is considering running a small portion—about 10%—of its inference workload on different chips with more on-chip SRAM, such as Cerebras, AMD, and Google TPUs, to fine-tune latency. As an investor, I'm not concerned at all. NVIDIA dominates the inference market and continues to reduce the cost per token. Moreover, inference demand is growing so rapidly that it can absorb this diversification without impacting NVIDIA’s long-term growth story.
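Back-of-the-envelope arithmetic shows why this diversification is small. A minimal sketch using the article's figures (an assumption, not company guidance): inference is roughly 80% of OpenAI's total compute, and roughly 10% of that inference is being considered for non-NVIDIA chips.

```python
# Rough arithmetic using the article's figures (assumptions, not company guidance).
inference_share = 0.80       # inference as a share of OpenAI's total compute
diversified_share = 0.10     # slice of inference eyed for non-NVIDIA silicon

total_shift = inference_share * diversified_share
print(f"Potential non-NVIDIA share of total compute: ~{total_shift:.0%}")
```

In other words, even the full diversification under discussion would touch roughly 8% of total compute, a small slice set against inference demand that is still growing rapidly.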
Maia 200 is no NVIDIA killer
Microsoft is quietly raising the stakes in the AI computing war, and the impact goes beyond individual chips. The release of Maia 200 exemplifies a pattern of hyperscalers accelerating their in-house inference silicon to control costs and reduce reliance on NVIDIA's supply capacity and pricing power. In addition to deploying Maia 200 in Iowa and Arizona, Microsoft is also aiming to chip away at NVIDIA's software advantage through a Triton-based stack co-developed with OpenAI.
However, this isn't simply a matter of specifications. Maia 200 is tailored to Microsoft's internal data centers and workload profiles: first an internal solution, then expanding into broader use. The key is heterogeneous computing, because hyperscalers run multiple compute pathways. NVIDIA keeps dominating where it excels (frontier training, time-to-value), while in-house silicon is deployed where economics are most sensitive (steady-state inference). NVIDIA's challenge is long-term mix shift, not immediate replacement.
In essence, this reflects a realistic and enduring trend: hyperscalers want to own more of the inference economy. But rather than displacing NVIDIA's central long-term role in AI infrastructure, it reinforces it.
Source: microsoft.com/maia-200/
Why Rubin hints at NVIDIA’s next moat
What was most interesting at CES was Rubin's positioning, which focused not on peak FLOPS but on deployment, power, orchestration, and system reliability. This shift happened because the economics of AI have fundamentally changed. In the old AI world, peak performance was paramount; in the new one we are entering, efficiency, reliability, and total cost of ownership take center stage.
While training remains crucial, inference becomes the dominant cost as AI transitions from proof-of-concept to full-scale operation. Many hardware comparisons miss the bigger picture because switching AI infrastructure isn't just buying hardware; it means buying software and operations. Changing platforms requires reworking orchestration layers, network architecture, observability, reliability models, and many deeply embedded software and operational elements, a multi-year process, not one measured in quarters.
CES 2026 reiterated that NVIDIA is designing for this reality. The goal is not merely higher peak performance but viable inference economics across the entire lifecycle of AI systems. This is why the cost of switching platforms is orders of magnitude higher than the cost of the platform itself: switching is possible, but it is slow, costly, and operationally risky.
Source: www.nvidia.com/en-us/データセンター/テクノロジー/rubin/
P/E is high, but cheap where it matters
From a valuation perspective, NVIDIA looks expensive. Its forward price-to-earnings ratio (P/E) exceeds 40x and its enterprise value-to-sales ratio (EV/Sales) surpasses 21x, which justifies a relatively low valuation grade. On its own, that is understandable.
However, there is a missing piece in this framework. I believe the appropriate framework here is not P/E but PEG. NVIDIA's forward PEG is in the range of 1.0 to 1.1, which is significantly below the sector median PEG of 1.6 and also below its five-year average. This indicates that the market is already pricing in a sharp deceleration in earnings growth.
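PEG lets us back out the growth rate the market is implicitly pricing in. A minimal sketch, assuming the common definition PEG = forward P/E divided by expected EPS growth in percent, and plugging in the article's ~40x P/E and 1.0 to 1.1 PEG figures:

```python
def implied_growth(forward_pe: float, peg: float) -> float:
    """Growth (%) implied by a PEG ratio: PEG = P/E / growth, so growth = P/E / PEG."""
    return forward_pe / peg

# Article's figures: forward P/E ~40x, PEG between 1.0 and 1.1 (illustrative only).
for peg in (1.0, 1.1):
    print(f"PEG {peg:.1f} at 40x P/E implies ~{implied_growth(40.0, peg):.0f}% EPS growth")
```

A PEG near 1.0 at a 40x multiple implies the market expects roughly 36 to 40% forward EPS growth, which is what the article characterizes as conservative relative to NVIDIA's demand visibility.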
Data provided by YCharts
What is interesting, however, is that NVIDIA's forward outlook is unusually strong. Historically, NVIDIA traded at a premium multiple on far weaker demand visibility. Now the multiple is lower while the outlook is stronger, an unusual combination. On forward EPS, I believe the market may already be assuming an overly conservative normalization.
China: A tail risk or an undervalued option?
The most misunderstood risk for NVIDIA, and perhaps for the semiconductor industry as a whole, is likely China. China is excluded from official guidance, and I believe most sell-side models assume the risk is either negligible or small enough to ignore. On the surface this seems reasonable, but I think it misframes the risk.
On the demand side, the situation is more complex. Demand for approved products remains very robust, and the quality of orders over the past few months has been strong. Tariffs will clearly weigh on per-sale margins, but given NVIDIA's cost structure, Chinese revenue still translates into incremental operating profit.
The issue is not margins but interpretation. The prevailing view treats China as a tail risk rather than an option: it prices in only the downside while ignoring the upside entirely. Even partial reopening, short of a full one, would change how investors think. This matters. The risk exists, and so does the option.
Conclusion
NVIDIA is undergoing a quiet revolution. In the economics of computing, it is transforming from the dominant supplier of chips to dominant companies into a dominant force in the AI economy itself.
While the world debates valuations based on outdated metrics, NVIDIA is building the business. In my view, the ultimate question is no longer whether AI grows, but who will dominate the large-scale AI economy. I am confident NVIDIA is a few steps ahead.