News of Meta’s talks with Google over a multibillion-dollar TPU agreement sent Nvidia shares sharply lower while lifting Alphabet’s. As Meta negotiates a TPU deal with Google, competition in the AI chip market is being reshaped.
Nvidia’s shares fell sharply following news that Meta is discussing a deal worth billions of dollars for Google’s AI accelerators. According to The Information, Meta is considering deploying Google’s custom AI chips, known as TPUs (Tensor Processing Units), in its data centers by 2027. The company is also reportedly weighing renting TPUs via Google Cloud as early as next year.
Meta Turns to Google
The development signals that competition is accelerating in an area where Nvidia has long been the undisputed leader. Following the news, Alphabet’s shares rose by up to 2.7% in after-hours trading, while Nvidia’s fell by roughly the same amount. Alphabet, Google’s parent company, had previously signed an agreement to supply up to 1 million TPUs to Anthropic.
Alphabet-linked companies in Asian markets also rallied on the news. Shares of IsuPetasys, a South Korean supplier of multilayer circuit boards to Alphabet, surged 18% to a record high, while Taiwan’s MediaTek gained nearly 5%.
A potential supply agreement with Meta would be a significant win for Google, as Meta is among the companies spending the most on data center investments and AI development worldwide.
Google Finds Success with TPU
The TPU architecture, which Google first developed about a decade ago for AI workloads, is attracting growing interest from major players outside the company, driven in part by tech giants’ concerns about becoming overly dependent on Nvidia.
While the GPUs underpinning Nvidia’s dominance were originally designed for graphics processing, they proved well suited to large datasets and computationally intensive AI training. TPUs, by contrast, are ASICs purpose-built for specific tasks, offering a more specialized alternative to Nvidia’s general-purpose GPUs.
Google has developed TPUs over the years to accelerate its internal AI and machine learning models. Researchers on the company’s DeepMind team and Gemini models contributed directly to the chip design, creating a development loop that spans both hardware and software.
Google also introduced its most powerful AI processor yet, Ironwood, in recent months and opened it to general availability this month. The new TPU delivers up to 4,614 TFLOPs of inference compute per chip, and chips communicate directly with one another over Google’s next-generation Inter-Chip Interconnect (ICI). With their liquid-cooled design, the processors can operate in clusters of up to 9,216 units, reaching a combined computing power of 42.5 exaflops.
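The cluster figure follows directly from the per-chip number: 9,216 chips at 4,614 TFLOPs each works out to roughly 42.5 exaflops. A quick back-of-the-envelope check, using only the figures quoted above:

```python
# Sanity check of the Ironwood cluster figures cited in the article.
PER_CHIP_TFLOPS = 4_614      # peak inference compute per Ironwood chip
CHIPS_PER_CLUSTER = 9_216    # maximum cluster size

total_tflops = PER_CHIP_TFLOPS * CHIPS_PER_CLUSTER
total_exaflops = total_tflops / 1_000_000  # 1 exaflop = 1,000,000 TFLOPs

print(f"{total_exaflops:.1f} exaflops")  # → 42.5 exaflops
```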