AMD’s new AI chip, the MI300X, is about to enter mass production, intensifying the competition over advanced packaging and high-bandwidth memory in the server market; Nvidia has also upgraded its memory specifications in response. This battle is making the AI server supply chain increasingly prosperous.
“Financial News” reported that while Nvidia remains the mainstream supplier of AI chips, challenger AMD will begin shipping its MI300X AI chip in the middle of the year, taking part of Nvidia’s market and once again reshaping the AI supply chain, from wafer foundries to servers.
AMD fights back with new products in the second half of the year
Japan’s Mizuho Securities reported that Nvidia’s AI chip market share is as high as 95%, “far higher than AMD and Intel combined.” According to Nvidia’s financial report, in the fourth quarter of 2023 alone, data center revenue reached US$18.4 billion (approximately NT$540 billion), an increase of 409% over the same period last year.
But AMD’s counterattack will come in the second half of the year. Industry insiders interviewed by “Financial News” said that the MI300X, a product in the same class as Nvidia’s H100, has already shipped in small quantities and is being tested by Microsoft and others. “This chip will be available in volume in the second half of the year.” The main buyers are companies with large data centers, such as Microsoft. “Starting in May or June, there will be a wave of shipments.”
Microsoft, Meta, Oracle, OpenAI, and others will purchase the chip to provide services in their data centers, and server makers such as Supermicro, ASUS, Gigabyte, Hongbai Technology, Inventec, and Wenda are designing solutions around it. On last year’s third-quarter earnings call, AMD CEO Lisa Su said data center GPU revenue could reach US$2 billion in 2024, which would make the MI300 the fastest product in AMD’s history to reach US$1 billion in revenue.
AMD’s weapon against Nvidia is the use of high-bandwidth memory and advanced packaging to improve AI computing efficiency. The key challenge for AI chips is moving large amounts of data between memory and the processor. AMD therefore uses TSMC’s advanced packaging to bring memory that would otherwise sit outside the chip directly into the package, so data no longer has to travel over external links and can be moved from memory to the processor for computation far more quickly.
“Financial News” pointed out that at the launch event, Lisa Su demonstrated a single chip running an AI model of up to 40GB, having it write a poem about San Francisco. “This chip can run AI models of up to 80GB,” Su said.
Mizuho Securities reported that the MI300X carries up to 192GB of high-bandwidth memory, more than double the 80GB of Nvidia’s H100 SXM, and that it beats the H100 SXM in many compute benchmarks. Nvidia countered with its own figures showing the H100 significantly outperforming the MI300X, and published test details so users could compare the two chips themselves. The rivalry between the two powers has clearly intensified.
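Why on-package memory capacity matters can be illustrated with a back-of-envelope calculation (this sketch is not from the article; the 70-billion-parameter example model and FP16 precision are illustrative assumptions): a model’s weights alone occupy roughly parameter count times bytes per parameter, which determines whether it fits on a single accelerator’s HBM.

```python
# Illustrative sketch: estimate whether a model's weights fit in one
# accelerator's HBM. Parameter counts and precisions are assumptions;
# real deployments also need memory for activations and KV caches.

def weights_gb(num_params: float, bytes_per_param: int = 2) -> float:
    """Memory needed for weights alone, in GB (FP16 = 2 bytes/param)."""
    return num_params * bytes_per_param / 1e9

def fits(num_params: float, hbm_gb: float, bytes_per_param: int = 2) -> bool:
    """True if the weights alone fit within the given HBM capacity."""
    return weights_gb(num_params, bytes_per_param) <= hbm_gb

# A hypothetical 70-billion-parameter model in FP16 needs ~140 GB of
# weights: too large for a single 80 GB H100 SXM, but within the
# MI300X's 192 GB, so it could run on one chip instead of two.
print(weights_gb(70e9))   # 140.0
print(fits(70e9, 80))     # False
print(fits(70e9, 192))    # True
```

Capacity is only half the story — bandwidth and interconnect speed matter too — but this is the basic arithmetic behind the race to pack more HBM into each package.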
The market welcomes competition and does not want Nvidia to dominate
Another advantage for AMD is that buyers do not want the market dominated by Nvidia. Industry insiders observe that “the problem with AI chips is not insufficient production capacity, but that Nvidia is very expensive.” Microsoft and others do not want only one supplier in the market; since their self-developed chips cannot sustain the leading performance that semiconductor companies achieve, they naturally hope a competitor will appear whose performance and price can rival Nvidia’s.
In the second half of the year, AMD’s new chips will accelerate competition across the AI supply chain. Nvidia will launch the H200, which raises high-bandwidth memory capacity from the previous generation’s 80GB to 141GB and switches to faster HBM3E; the B100, with twice the performance of the H200, will also launch this year. AMD’s next-generation MI350 will likewise adopt the new HBM3E high-bandwidth memory and is due next year. Competition between the two giants has kept high-bandwidth memory in tight supply, and SK Hynix has already sold out this year’s production.
TSMC’s advanced-packaging supply chain has also expanded rapidly. Combining more memory stacks and accelerators inevitably requires advanced packaging, and chipmakers have begun qualifying the advanced-packaging capabilities of additional suppliers, sending the stock prices of packaging companies such as ASE soaring. TSMC has even formed an alliance with SK Hynix to aggressively expand into high-bandwidth memory.
Meanwhile, AMD’s entry will add new momentum and demand across the entire supply chain, from wafer manufacturing to server production. Industry insiders predict that AI chips will become increasingly diversified, spanning high-end to mid- and low-end parts and covering needs from model training to AI edge computing. “If used for inference, x86 and Arm processors also have opportunities.”
“Financial News” reported that the keynote speaker at this year’s Taipei International Computer Show is AMD CEO Lisa Su; Intel CEO Pat Gelsinger will also give a speech there, and Jensen Huang will come to Taiwan for the event. Together with the series of new products Nvidia announced at its GTC conference in March, the AI competition between AMD and Nvidia can be expected to reach a new peak, and the market will expand because of the rivalry.
(This article is reprinted with permission from Financial News; first image source: AMD)