
The AMD (AMD.US) vs. Nvidia battle spreads to AI! Meta and Oracle officially announce big spending on AMD AI chips

Zhitong Finance ·  Dec 6, 2023 20:30

AMD officially launched its flagship AI GPU accelerator, the MI300X, extending the intense competition between AMD and Nvidia from the PC field across the board into the AI field.

The Zhitong Finance App learned that AMD (AMD.US), a giant in both the CPU and GPU industries and one of Nvidia's (NVDA.US) main rivals, held its "Advancing AI" press conference on Wednesday EST. AMD announced the official launch of its flagship AI GPU accelerator, the MI300X, extending the intense competition between AMD and Nvidia from the PC field to the AI field. AMD's test data shows overall performance up to 60% higher than Nvidia's H100 in some benchmarks. In addition to releasing the expected new products such as the Instinct MI300X and MI300A, global leaders in the AI field, including OpenAI, Microsoft, and Meta, also appeared on stage and indicated that they will deploy large numbers of AMD Instinct MI300X accelerators in the future.

The Instinct MI300X accelerator made its first appearance at the "Advancing AI" press conference. Since the parameters of the MI300 series AI chips were announced at a product promotion event half a year ago, this press conference focused more on full-system performance in real applications and a comprehensive comparison with the Nvidia H100, the most popular AI chip in the AI training/inference field. Furthermore, AMD sharply raised its forecast for the global AI chip market through 2027, from 150 billion US dollars to 400 billion US dollars.

Global leaders in the AI field, including OpenAI, Microsoft (MSFT.US), and Meta (META.US), said at AMD's event on Wednesday that they will use AMD's latest AI chip, the Instinct MI300X. This is the clearest sign so far that global tech companies are looking for alternatives to the expensive Nvidia H100, the AI chip essential for creating and deploying generative artificial intelligence (generative AI) applications such as OpenAI's ChatGPT. Today the Nvidia H100, which has held a near monopoly in the AI chip field, finally has a strong competitor: the AMD Instinct MI300X.

If AMD's latest high-end AI chips, which start shipping early next year, prove sufficient to meet the computing needs of technology companies and cloud service providers building large artificial intelligence models, while lowering the cost for technology companies to develop those models, they will inevitably put tremendous competitive pressure on Nvidia's AI chips, whose sales continue to soar.

AMD CEO Lisa Su said on Wednesday: "Basically, all the interest from potential big customers is focused on large processors and large GPUs in the cloud computing sector."

According to AMD, the MI300X is based on a brand-new architecture, which usually brings significant performance improvements. The most prominent feature of AMD's new AI chip is its 192 GB of cutting-edge HBM3 high-bandwidth memory, which transfers data faster and can accommodate larger artificial intelligence models.
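As a back-of-the-envelope illustration (not from the article) of why 192 GB matters, the sketch below estimates the weight footprint of a 70-billion-parameter model in 16-bit precision. Under that assumption the weights fit in a single MI300X, whereas they exceed the 80 GB of a single H100.

```python
# Back-of-the-envelope check: do the weights of a 70B-parameter model
# (e.g. Llama 2 70B) fit in a single accelerator's memory?
params = 70e9          # 70 billion parameters
bytes_per_param = 2    # fp16/bf16 weights

weights_gb = params * bytes_per_param / 1e9
print(f"~{weights_gb:.0f} GB of weights")  # ~140 GB
print(weights_gb <= 192)  # True: fits in the MI300X's 192 GB of HBM3
print(weights_gb <= 80)   # False: exceeds a single H100's 80 GB
```

Actual deployments also need memory for activations and the KV cache, so the real headroom is smaller, but the comparison shows why larger on-chip memory lets one GPU hold models that would otherwise have to be split across several.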

Lisa Su, affectionately known as "Su Ma" ("Mama Su") by AMD fans, directly compared the MI300X and the systems built around it with Nvidia's flagship H100 AI chip at the press conference. "This performance directly translates into a better user experience," Su said. "When you make a request to the model, you want it to respond faster, especially as the responses become more complex."

The main question facing AMD is whether companies that have always built on Nvidia's software and hardware will invest the time and money to trust another AI chip supplier. AMD told investors and partners on Wednesday that it has improved its supporting software suite, ROCm, to compete with Nvidia's industry-standard CUDA software. This addresses, to some extent, one of AMD's key weaknesses in the AI chip field: the hardware and software ecosystem, which has long been one of the main reasons AI developers prefer Nvidia's products.

AMD did not disclose the price of the MI300X on Wednesday, but Nvidia's AI chips sell for up to 40,000 US dollars each. Lisa Su said that AMD's chips must cost less to buy and operate than Nvidia's in order to convince potential major customers to buy them.

Although the content of the press conference was broadly in line with market expectations, it did not seem to boost AMD's stock price. Against the backdrop of a broad pullback in US technology stocks on Wednesday, AMD's shares reversed from gains to losses before the press conference had even ended, ultimately closing down more than 1%, though they rose more than 1% after hours.

With AMD about to launch a high-performance AI chip to compete with Nvidia's H100, Wall Street analysts are generally bullish on AMD's stock. According to the consensus ratings and target prices compiled by Seeking Alpha, Wall Street's consensus rating for AMD is "buy," with an average target price of 132.24 US dollars, implying potential upside of about 13% over the next 12 months. The highest target price is $200.

Better performance than the Nvidia H100! AMD drastically raises its market size forecast

The latest performance comparison with the Nvidia H100, the most popular AI chip in the AI training/inference field, shows that in general LLM kernel TFLOPs, the MI300X delivers up to a 20% performance advantage over the Nvidia H100 on FlashAttention-2 and Llama 2 70B. At the platform level, comparing an 8x MI300X system with an 8x Nvidia H100 system, AMD found a much larger gain on Llama 2 70B, up to 40%, while the gain on the BLOOM 176B benchmark was as high as 60%.

The specific performance comparison between the AMD Instinct MI300X and the Nvidia H100 shows:

In a 1v1 comparison, overall performance is 20% higher than the H100 (Llama 2 70B)

In a 1v1 comparison, overall performance is 20% higher than the H100 (FlashAttention-2)

In an 8v8 server, overall performance is 40% higher than the H100 (Llama 2 70B)

In an 8v8 server, overall performance is 60% higher than the H100 (BLOOM 176B)

The software driving the latest MI300 AI chips is AMD ROCm 6.0. The software stack has been updated to the latest version with powerful new features, including support for a variety of AI workloads such as generative artificial intelligence and large language models (LLMs).
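As a concrete illustration of what ROCm support means in practice for AI developers, the sketch below shows how a ROCm build of PyTorch exposes AMD GPUs through the same torch.cuda interface used for Nvidia hardware (the HIP backend reports itself as "cuda"). This is a minimal sketch based on PyTorch's public API, not something demonstrated at the press conference.

```python
import torch

# On a ROCm build of PyTorch, AMD GPUs appear through the familiar
# torch.cuda namespace, so existing CUDA-targeted code can run largely
# unchanged on an AMD accelerator such as the MI300X.
if torch.cuda.is_available():
    print("GPU:", torch.cuda.get_device_name(0))
    print("HIP version:", torch.version.hip)  # set on ROCm builds, None on CUDA builds
    x = torch.randn(1024, 1024, device="cuda")  # "cuda" maps to the HIP device here
    y = x @ x  # matrix multiply executed on the AMD GPU via ROCm libraries
    print(y.shape)
```

This compatibility layer is a large part of how ROCm competes with CUDA: the less code developers have to change, the lower the switching cost the article describes.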

Memory is another area of huge upgrades for AMD. The MI300X's HBM3 capacity is 50% higher than that of its predecessor, the MI250X (128 GB). To reach a memory pool of up to 192GB, AMD equipped the MI300X with 8 HBM3 stacks, each 12-Hi and built from 16Gb ICs; each IC has a capacity of 2GB, giving each stack a capacity of up to 24GB.
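The capacity figures above are internally consistent; here is a quick arithmetic check (illustrative only, using the stack configuration quoted in this article):

```python
# Illustrative arithmetic check of the MI300X memory configuration quoted above.
GBIT_PER_IC = 16       # each HBM3 IC is 16 Gb
ICS_PER_STACK = 12     # 12-Hi stacks
STACKS = 8             # eight HBM3 stacks per MI300X

gb_per_ic = GBIT_PER_IC / 8               # 16 Gb = 2 GB per IC
gb_per_stack = gb_per_ic * ICS_PER_STACK  # 2 GB x 12 = 24 GB per stack
total_gb = gb_per_stack * STACKS          # 24 GB x 8 = 192 GB total
print(gb_per_ic, gb_per_stack, total_gb)  # 2.0 24.0 192.0
```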

This memory configuration delivers up to 5.3 TB/s of bandwidth, alongside 896 GB/s of Infinity Fabric bandwidth. In contrast, Nvidia's upcoming H200 AI chip offers 141GB of capacity, while Intel's Gaudi3 will provide 141GB as well. In terms of power consumption, the AMD Instinct MI300X is rated at 750W, 50% higher than the Instinct MI250X's 500W and 50W more than the Nvidia H200.
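The power comparison is easy to verify (illustrative arithmetic using the figures above; the H200's 700W rating is implied by the "50W more" claim):

```python
mi300x_w = 750
mi250x_w = 500
h200_w = mi300x_w - 50           # 700W, implied by the comparison above
print(mi300x_w / mi250x_w - 1)   # 0.5 -> 50% higher than the MI250X's rating
print(mi300x_w - h200_w)         # 50 -> 50W more than the Nvidia H200
```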

On Wednesday, AMD also made a bold prediction about the future size of the AI chip market, arguing that it will expand rapidly. Specifically, AMD now expects the overall AI chip market to exceed 400 billion US dollars by 2027, up sharply from the 150 billion US dollars the company forecast just a few months ago. The revision highlights how quickly the world's major companies are raising their expectations for artificial intelligence hardware as they race to deploy new AI products.

Which tech giants will use the MI300X?

At the press conference on Wednesday, AMD said it has signed agreements to supply the chip to some of the technology companies with the greatest need for GPUs. Meta and Microsoft were the biggest buyers of Nvidia's H100 AI chips in 2023, according to a recent report by research firm Omdia.

At the press conference, Facebook and Instagram parent company Meta publicly stated that it will make extensive use of MI300X GPUs for artificial intelligence inference workloads, such as processing AI stickers, image editing, and operating its AI assistant, and said it will pair the chips with the ROCm software stack to support those inference workloads.

Microsoft Chief Technology Officer Kevin Scott publicly stated that Microsoft will provide access to MI300X chips through its Azure cloud service. In addition, reports the same day indicated that Microsoft will assess demand for AMD's AI chip products going forward and evaluate the feasibility of adopting the new product.

ChatGPT developer OpenAI said it will support GPUs such as the AMD MI300 in an important software product called Triton. Triton is not a large language model like GPT-4 but a GPU programming language and compiler, and a very important tool for the field of artificial intelligence research.
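To make concrete what Triton is, the sketch below is a minimal vector-add kernel in the style of Triton's public tutorials. Kernels like this are compiled down to vendor GPU code, which is why backend support for chips such as the MI300 matters; the specific names here are illustrative, not from AMD's or OpenAI's announcements.

```python
import torch
import triton
import triton.language as tl

@triton.jit
def add_kernel(x_ptr, y_ptr, out_ptr, n_elements, BLOCK_SIZE: tl.constexpr):
    # Each program instance handles one BLOCK_SIZE-wide slice of the vectors.
    pid = tl.program_id(axis=0)
    offsets = pid * BLOCK_SIZE + tl.arange(0, BLOCK_SIZE)
    mask = offsets < n_elements  # guard against out-of-bounds lanes
    x = tl.load(x_ptr + offsets, mask=mask)
    y = tl.load(y_ptr + offsets, mask=mask)
    tl.store(out_ptr + offsets, x + y, mask=mask)

def add(x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    out = torch.empty_like(x)
    n = x.numel()
    # Launch one program instance per BLOCK_SIZE chunk of the input.
    grid = lambda meta: (triton.cdiv(n, meta["BLOCK_SIZE"]),)
    add_kernel[grid](x, y, out, n, BLOCK_SIZE=1024)
    return out
```

Because researchers write kernels like this once in Python rather than in vendor-specific CUDA C++, a Triton backend for AMD hardware lowers the barrier to running existing research code on the MI300.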

Oracle, one of Nvidia's largest customers, said it will use Instinct MI300X accelerators in its cloud computing services and plans to develop generative AI services based on the AMD Instinct MI300X.

At present, AMD has not forecast long-term sales of this AI chip; it has only given a forecast for 2024, estimating that its data center GPUs will generate total revenue of about 2 billion US dollars next year. By comparison, Nvidia's data center business generated more than $14 billion in revenue in the most recent quarter alone, although that figure includes businesses other than GPUs. However, AMD said that over the next four years, the total AI chip market may rise to 400 billion US dollars, more than double the company's previous forecast.

Lisa Su also told reporters that AMD does not think it needs to beat Nvidia to do well in the market. Speaking about the AI chip market, she said: "I think it is obvious that Nvidia occupies the vast majority of it today." "We believe the AI chip market could exceed $400 billion by 2027, and we will play an important role in it."

Disclaimer: This content is for informational and educational purposes only and does not constitute a recommendation or endorsement of any specific investment or investment strategy.