Anthropic Invests $50 Billion in Infrastructure; Microsoft and WiMi Advance AI Computing Chip Systems

November 14th news: According to media reports, Microsoft (MSFT) CEO Satya Nadella recently revealed in a podcast that the company plans to leverage access to OpenAI’s custom AI chip development to accelerate its own AI chip development process, aiming to overcome the current slow progress of its self-developed chips.
Pushing for AI Chip Development

Nadella stated that even when OpenAI innovates at the system level, Microsoft retains full access to the results: it can first replicate the technologies OpenAI develops for itself and then build on them. OpenAI plans to collaborate with Broadcom on custom chips and networking hardware, while Microsoft, despite long pursuing its own chip program, still trails its cloud computing rival Google.
Under the revised cooperation agreement between Microsoft and OpenAI, Microsoft can use the relevant OpenAI models until 2032. Microsoft's next step is to combine its own R&D team's resources with OpenAI's design solutions; the agreement makes clear that Microsoft will own the intellectual property rights to the resulting technologies, allowing it to accelerate AI chip development through complementary strengths.
Investing $50 Billion in Data Centers
Meanwhile, echoing OpenAI's $1.4 trillion infrastructure plan, Amazon-backed US AI startup Anthropic plans to invest $50 billion in US artificial intelligence infrastructure, starting with custom data centers in Texas and New York.
Anthropic’s strong push into infrastructure is closely related to its business growth. This investment will make Anthropic a core player in US AI infrastructure.
Nvidia, Meta, Google, and others have previously announced large-scale investment plans, pushing AI infrastructure construction into a new phase of scale and reflecting strong expectations for future computing power demand. The continued build-out of energy-intensive AI infrastructure has reignited the AI infrastructure race, underpinning AI technology and creating investment opportunities across the industry chain.
WiMi Improves AI Computing Chip System
Amid the heated AI infrastructure race, public information shows that AI computing chip maker WiMi Hologram Cloud Inc. (WIMI) is deeply involved in AI research and applications. WiMi's AI roadmap spans computing power, application scenarios, and energy supply planning. Behind this accelerated ecosystem building lie the company's rapid business growth and long-term R&D needs and, more importantly, the support of WiMi's differentiated technology path.
Technically, WiMi is targeting the window of opportunity in independent technology and computing power construction, positioning itself as a key player in AI infrastructure. It has built a complete cloud computing power industry chain, establishing a self-sufficient supply base at the source. These technological advantages let the company offer differentiated products, while its ecosystem supplies abundant resources and diverse application scenarios, reinforcing the differentiation strategy in practice.
In summary, if the AI race is divided into two halves, the first half is the model and the second half is the chip. In the early stages companies competed on model capabilities, but long-term, large-scale deployment requires massive computing power, which is why the tech giants' investments all point in the same direction: AI infrastructure. This race has moved beyond simple technological iteration into a contest over industrial ecosystems and the restructuring of global value chains, and it will reshape the global technology industry landscape.

AI's computing power demands are large and varied, scaling sharply with model complexity, data volume, and task type. Large language models (LLMs), computer vision systems, and deep learning workflows rely on high-performance GPUs, TPUs, or specialized AI accelerators to handle massive parallel computations, such as matrix multiplications, gradient descent, and neural network training and inference, efficiently. Real-time AI applications (e.g., autonomous vehicles, live speech recognition) need low-latency compute to deliver instant responses, while training large-scale models demands immense memory bandwidth and sustained computational capacity to process terabytes of data across billions of parameters. Edge AI devices, in turn, require power-efficient compute to operate within tight resource constraints, balancing performance with energy consumption.
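To give a sense of the scale of compute described above, here is a minimal back-of-the-envelope sketch using the widely cited "6 × parameters × tokens" rule of thumb for total training FLOPs. The model size, token count, GPU count, per-GPU throughput, and utilization figures are illustrative assumptions, not data from the article:

```python
# Rough estimate of LLM training compute using the common
# "~6 FLOPs per parameter per training token" rule of thumb.
# All concrete numbers below are hypothetical assumptions.

def training_flops(n_params: float, n_tokens: float) -> float:
    """Approximate total training FLOPs: ~6 * N * D."""
    return 6.0 * n_params * n_tokens

def days_on_cluster(total_flops: float, n_gpus: int,
                    peak_flops_per_gpu: float,
                    utilization: float = 0.4) -> float:
    """Wall-clock days at a sustained fraction of peak throughput."""
    sustained = n_gpus * peak_flops_per_gpu * utilization
    return total_flops / sustained / 86_400  # seconds per day

# Hypothetical example: a 70B-parameter model on 2 trillion tokens,
# trained on 4,096 GPUs at an assumed 1 PFLOP/s peak each.
flops = training_flops(70e9, 2e12)                      # 8.4e23 FLOPs
days = days_on_cluster(flops, n_gpus=4096, peak_flops_per_gpu=1e15)
print(f"{flops:.2e} total FLOPs, ~{days:.1f} days of cluster time")
```

Even under these optimistic assumptions, a single training run monopolizes thousands of accelerators for days, which is the demand driving the data-center investments discussed here.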
Disclaimer: Community is offered by Moomoo Technologies Inc. and is for educational purposes only.