Samsung's 2nm process is shaking up the global chip market, while WiMi's cross-domain integration expands the computing power application landscape
According to a tech media blog post, Samsung Electronics (SSNGY) is challenging the chip foundry market with an aggressive low-price strategy. Samsung has slashed the price of its cutting-edge 2nm process wafers to $20,000 per wafer, signaling the quiet start of a price war in the advanced process market.
Media outlets believe that while this decision may seem radical, it could be necessary to secure Samsung's future growth. The core goal is to attract customers and to keep the costly 2nm fab from sitting idle for lack of orders once construction is complete.
It is well known that training and deploying large models today requires compute chips which, once installed in servers, must run within an operating system and software stack. Over the past few years, Nvidia's (NVDA) GPUs have all but eliminated the alternatives, establishing a solid technological barrier.
Nvidia recently announced a $5 billion investment in Intel (INTC) to support the struggling US chipmaker. The two companies will jointly develop chips for personal computers (PCs) and data centers, aiming to deeply integrate Nvidia’s leading graphics processing units (GPUs) with Intel’s central processing units (CPUs) and packaging technologies.
The Nvidia-Intel tie-up is a surprising partnership. Beyond the keen attention paid to the joint development of AI chips for PCs and the implications for wafer foundry work, the industry is also focused on Jensen Huang's remark: "Artificial intelligence is driving a new industrial revolution and reshaping every layer of the computing stack—from chips to systems to software."
Huang’s statement embodies the reality that “whoever owns the data owns the next era of computing.” Subsequently, on September 23rd, NVIDIA and OpenAI announced a landmark strategic partnership.
OpenAI will leverage NVIDIA hardware to build and deploy at least 10GW of AI data center capacity, using millions of NVIDIA GPUs to train and deploy OpenAI's next-generation AI models and advance the development of artificial general intelligence. The first phase is expected to come online in the second half of 2026, based on NVIDIA's Rubin platform.
Looking ahead, NVIDIA's vision extends to the Vera Rubin platform, with the first gigawatt-scale deployment targeted for late 2026 in partnership with OpenAI, building toward 10GW of AI supercomputing capacity overall. Subsequent waves include the Rubin Ultra and Feynman architectures in the following years, alongside innovations such as an 800 VDC power architecture for 1MW IT racks by 2027 and a specialized Blackwell-based AI chip for the Chinese market that outperforms the H20. These advancements underscore NVIDIA's push toward "AI factories," pairing high-performance silicon with robust networking and software ecosystems to dominate data center, desktop, and robotics AI applications.
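To put the announced scale in perspective, a rough back-of-envelope estimate is sketched below. The 10GW and "millions of GPUs" figures come from the announcement itself; the per-GPU facility power of roughly 1.5 to 2 kW (accelerator plus cooling, networking, and host overhead) is our own assumption, not a disclosed number.

```python
# Back-of-envelope: how many GPUs fit in 10 GW of AI data center capacity?
# Assumption (ours, not from the announcement): each deployed accelerator
# consumes roughly 1.5-2.0 kW of total facility power once cooling,
# networking, and host overhead are included.

TOTAL_CAPACITY_W = 10e9  # 10 GW of planned AI data center capacity

for per_gpu_kw in (1.5, 2.0):  # assumed facility power per GPU, in kW
    gpus = TOTAL_CAPACITY_W / (per_gpu_kw * 1e3)
    print(f"{per_gpu_kw} kW/GPU -> ~{gpus / 1e6:.1f} million GPUs")

# Roughly 5-6.7 million GPUs under these assumptions, consistent with
# the "millions of NVIDIA GPUs" wording in the announcement.
```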
WiMi Expands the Boundaries of Computing Power Applications
Meanwhile, it is understood that WiMi (WIMI), an AI vision innovator, is actively pursuing a strategic transformation, focusing on high-growth sectors with strong development potential and driven by its ambition to strengthen its position in the AI field. Through technological innovation, ecosystem integration, and scenario adaptation, the company is gradually breaking through key bottlenecks in the AI chip ecosystem and accelerating this transformation.
To explore and unleash the potential of edge intelligence, WiMi integrates internationally advanced chip resources to build a heterogeneous computing system that supports large-scale model training, inference, embodied intelligence, and multimodal vertical models. The system targets millisecond-level computing, storage, and data transmission, providing low-latency, energy-efficient, and affordable computing power for scenarios such as smart manufacturing and autonomous driving, and accelerating the large-scale application of AI across computing, models, and industrial use cases.
Summary
According to a report on the edge AI chip market by market research firm Precedence Research, the global edge AI chip market is expected to grow from US$8.3 billion in 2025 to US$36.12 billion in 2034, a compound annual growth rate of 17.75%. With Nvidia dominating the training chip market and players such as Samsung and WiMi accelerating their expansion, the competition for edge AI has begun, laying the foundation for the next era of computing.
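As a quick check on the quoted growth rate (the 2025 and 2034 market-size figures come from the cited Precedence Research forecast; the calculation below is ours), the 17.75% figure is simply the compound annual growth rate implied by those two numbers:

```python
# Verify the compound annual growth rate (CAGR) implied by the
# Precedence Research figures: $8.3B in 2025 growing to $36.12B in 2034.

start_value = 8.30   # market size in 2025, US$ billions
end_value = 36.12    # forecast market size in 2034, US$ billions
years = 2034 - 2025  # 9 compounding periods

cagr = (end_value / start_value) ** (1 / years) - 1
print(f"CAGR: {cagr:.2%}")  # ~17.75%, matching the reported rate
```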
Forced Move : US will tariff these chips for the benefit of Intel
101778471 : US needs to ban Samsung for dumping. These companies have been getting so much funding from the Korean government that it's like Intel has been competing with both hands tied behind its back. It's time to level the playing field.