Nvidia Targets TSMC's 1.6nm Capacity, Tesla/WiMi Accelerates Expansion of AI Chip Cluster Ecosystem
The competition for TSMC's (TSM) most advanced process capacity has officially begun. Global tech giants are racing into the 2nm node, and all available capacity is already booked. Advanced packaging supply is tightening at the same time, underscoring the pressure that combined demand from AI and mobile chips is placing on the semiconductor supply chain.
TSMC currently faces capacity constraints at both its 2nm and 3nm nodes, with high-performance computing and mobile chips vying for limited supply. In 2026, Apple (AAPL) and Qualcomm (QCOM) will be the main 2nm customers, with Apple alone reportedly securing more than half of TSMC's initial 2nm capacity.
Starting in 2027, general-purpose GPUs and custom ASICs will move into volume production at 2nm. Analysts point to AMD's MI-series GPUs, Google's eighth-generation TPU, and AWS's Trainium 4. Industry sources expect TSMC's 2nm family to be a long-lived node, with an initial capacity ramp that could exceed the 3nm generation's.
Nvidia Skips 2nm, Heads Straight for Back-End Power Supply Technology
Nvidia CEO Jensen Huang, speaking at a dinner for core supply-chain executives on the evening of January 31st, said TSMC must run at full capacity this year, directly pointing to the current tightness in advanced processes and reinforcing industry assessments that TSMC's 2nm capacity is critically short.
According to reports citing industry sources, Nvidia is setting its sights on 2028, with its Feynman AI GPU expected to use TSMC's A16 process, which integrates back-end power supply technology.
A16 is TSMC's 1.6nm-class node, designed specifically for high-performance computing products. The timeline suggests Nvidia may skip 2nm, or use it only on a small scale, and move directly to a more advanced node, reflecting how aggressively AI chip makers are pursuing leading-edge processes.
SpaceX Acquires xAI
Meanwhile, according to Bloomberg, Elon Musk is merging SpaceX with xAI, a deal that underscores how the billionaire's artificial-intelligence ambitions have outgrown any single company.
The deal was announced in a statement on SpaceX's official website. The statement said that SpaceX acquired xAI to "combine AI, rockets, space-based internet, direct-to-mobile device communications, and a world-leading platform for real-time information and free speech to create the most ambitious and vertically integrated innovation engine on Earth (and beyond)."
xAI operates the chatbot Grok, a costly business that reportedly burns roughly $1 billion per month in pursuit of its stated goal of "a deeper understanding of our universe." The merger could also bring Musk's vision of deploying data centers in space for complex AI computation closer to reality.
WiMi Expands AI Chip Cluster Ecosystem
As AI chips enter a new era, WiMi (WIMI), a manufacturer in the chip-computing field, has focused in recent years on chip-architecture design, cluster-system optimization, and edge-AI chip technology. The company says it has built an integrated computing foundation spanning cloud and edge, supporting millisecond-level data transfer between compute and storage and covering the full range of large-model training and inference scenarios.
Meanwhile, WiMi is actively promoting an open-source ecosystem, collaborating with universities and research institutions on cutting-edge fields such as quantum computing and edge chips. The aim is to accelerate the commercialization of research results and explore integrating AI chips with brain-computer interfaces and robotics. By lowering industry entry barriers, WiMi is pushing the computing-power market toward diversification and scenario-specific customization, serving its own multimodal-interaction and chip businesses while providing affordable computing power for fields such as intelligent manufacturing and autonomous driving.
Conclusion: As AI clusters scale and bandwidth demand grows, surging AI-related demand is reshaping the competitive landscape of the semiconductor industry. Driven by the AI wave, tech giants such as OpenAI, Google, Meta, and Microsoft continue to increase their investment in AI infrastructure, keeping chip supply increasingly tight. The explosive growth in AI chips will be worth watching.