
Everbright Overseas: Frequent Catalysts Confirm Sustained Strength in the AI Computing Power Industry Chain, Reinforcing Confidence in AI-Themed Investing

Zhitong Finance ·  Jan 21 17:41

Everbright Overseas released a research report stating that frequent recent industry catalysts confirm that the AI computing power industry chain remains highly prosperous, reinforcing confidence in AI-themed investments.

The Zhitong Finance App learned that Everbright Overseas released a research report stating that frequent recent industry catalysts confirm the sustained strength of the AI computing power industry chain, reinforcing confidence in AI-themed investments. The report recommends focusing on the Nvidia-related industry chain and the domestic AI computing power industry chain.

(1) Nvidia-related industry chain. ① AI chips: Nvidia (NVDA.US), AMD (AMD.US), Intel (INTC.US); ② servers: Super Micro Computer (SMCI.US), Lenovo Group (00992), Foxconn Industrial Internet (601138.SH); ③ cloud technology service providers: Oracle; ④ optical modules: Zhongji Innolight (300308.SZ), Eoptolink (300502.SZ); ⑤ HBM: Samsung Electronics (SSNLF.US), SK Hynix, Micron (MU.US); ⑥ advanced packaging and testing: ASE Technology, Amkor Technology. (2) Domestic AI computing power industry chain: Hygon Information (688041.SH), Cambricon (688256.SH), Inspur Information (000977.SZ), Gaoxin Development (000628.SZ), Runjian Shares (002929.SZ).

Events: Recently, (1) Super Micro Computer raised its FY24Q2 revenue and earnings-per-share (EPS) guidance; (2) Meta increased its investment in Nvidia H100 GPUs; and (3) TSMC's 23Q4 earnings call guided for strong AI chip demand. With the global tech giants' earnings season gradually kicking off, these events confirm that the AI computing power industry chain remains highly prosperous.

Everbright Overseas's main views are as follows:

Super Micro Computer (SMCI.US) raised its FY24Q2 revenue and EPS guidance, reflecting strong demand for AI computing power.

As a total-solution provider for artificial intelligence, cloud, storage, and 5G/edge, Super Micro Computer works closely with Nvidia, the leading global AI computing power vendor, and began large-scale shipments of GPU servers equipped with Nvidia's high-performance HGX H100 8U GPU systems in 23Q1. On January 18, US time, the company released its preliminary FY24Q2 (quarter ended December 31, 2023) results, and now expects for FY24Q2:

(1) Net sales of US$3.60–3.65 billion, above the previous guidance of US$2.70–2.90 billion;

(2) Diluted EPS of US$4.90–5.05, above the previous guidance of US$3.75–4.24;

(3) Adjusted diluted EPS of US$5.40–5.55, above the previous guidance of US$4.40–4.88. The company attributes the beat versus prior guidance and market expectations mainly to strong market and end-customer demand for its rack-scale, AI, and total IT solutions.

Meta is increasing its investment in Nvidia H100 GPUs; North American tech giants continue to invest heavily in the AI arms race.

On January 18, US time, Meta CEO Mark Zuckerberg announced that Meta is training its next open-source large model, Llama 3, and will purchase more than 350,000 Nvidia H100 GPUs.

(1) According to estimates from market research firm Omdia, Nvidia shipped 150,000 H100 GPUs to Meta in 2023, the same as to Microsoft and roughly three times the shipments to each of Google, Amazon, and Oracle.

(2) In addition, Zuckerberg said that, counting Nvidia A100 GPUs and other AI chips, Meta's total GPU computing power will reach the equivalent of nearly 600,000 H100 GPUs by the end of 2024.

(3) Meta previously stated on its 23Q3 earnings call that it expects full-year 2024 capital expenditure of US$30–35 billion, up 11.1%–20.7% yoy, with AI accounting for the largest share of the 2024 increase. Assuming a purchase cost of US$25,000–30,000 per H100 GPU, Meta's planned purchase of 350,000 new H100 GPUs alone would imply 2024 capital expenditure of US$8.75–10.5 billion.
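The capex figure above follows directly from the per-unit cost range and the GPU count quoted in the report; a minimal back-of-the-envelope sketch:

```python
# Back-of-the-envelope check of the H100 capex estimate: 350,000 GPUs
# at a quoted unit cost of US$25,000-30,000 each (figures from the report).
gpus = 350_000
unit_cost_low, unit_cost_high = 25_000, 30_000

capex_low = gpus * unit_cost_low    # US$8.75 billion
capex_high = gpus * unit_cost_high  # US$10.5 billion

print(f"${capex_low / 1e9:.2f}B - ${capex_high / 1e9:.1f}B")  # $8.75B - $10.5B
```

At $8.75–10.5 billion, the H100 purchases alone would absorb roughly a quarter to a third of the guided $30–35 billion capex range.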

TSMC's HPC revenue share rose sharply in 23Q4, and the company guided for strong AI chip demand.

TSMC held its 23Q4 earnings call on January 18.

(1) 23Q4 revenue slightly exceeded guidance: revenue was US$19.62 billion, -1.5% yoy and +13.6% qoq, slightly above the upper end of the US$18.8–19.6 billion guidance range, benefiting from the continued mass-production ramp of the 3nm process, which lifted both wafer volume and price; advanced processes at 7nm and below accounted for 67% of revenue.

(2) The HPC revenue share rose sharply to 43% in 23Q4, with AI GPUs and data-center CPUs the main contributors.

(3) CoWoS production capacity in 2024 will expand significantly versus 2023 but will still be unable to meet demand.

(4) Driven by AI and HPC demand, 2024 revenue is guided to grow by more than 20%.

(5) AI-related revenue is expected to compound at a long-term CAGR of around 50%, with AI processor-related revenue reaching a high-teens share (15%–20%) of total revenue within the next five years.
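The ~50% AI-revenue CAGR and the high-teens share target can be checked for rough consistency by compounding a starting AI-revenue share forward; note that the ~6% starting share and the ~20% total-revenue growth rate in the sketch below are illustrative assumptions, not figures from the earnings call:

```python
# Illustrative consistency check: if AI-related revenue compounds at ~50% per
# year while total revenue compounds at ~20%, an assumed ~6% AI-revenue share
# (assumption, not a figure from the call) lands in the high-teens after 5 years.
ai_cagr = 0.50      # long-term AI-related revenue CAGR (from the report)
total_cagr = 0.20   # assumed total-revenue growth rate
start_share = 0.06  # assumed starting AI-revenue share

share_after_5y = start_share * ((1 + ai_cagr) / (1 + total_cagr)) ** 5
print(f"Implied AI-revenue share after 5 years: {share_after_5y:.1%}")  # ≈ 18.3%
```

Under these assumptions the implied share comes out near 18%, i.e. within the high-teens (15%–20%) range the report cites.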

Risk analysis: (1) Downstream application development and scenario expansion are slow, causing AI commercialization progress to fall short of expectations;

(2) Production capacity expansion is limited, and data center business shipments fall short of expectations;

(3) There is a downside risk in the computational power requirements for training and inference of large AIGC models;

(4) The risk that geopolitics will affect future GPU sales.

Disclaimer: This content is for informational and educational purposes only and does not constitute a recommendation or endorsement of any specific investment or investment strategy.