Google Launches New TPUs for Enhanced AI Performance
Google unveils two new TPUs designed for the "agentic era"
Ars Technica
Google has introduced two new Tensor Processing Units (TPUs), the TPU8t for training and TPU8i for inference, aimed at improving AI model efficiency. Designed for the evolving "agentic era," these chips significantly reduce training time and boost computational power, marking a leap in AI hardware capabilities.
- Google's TPU8t and TPU8i are designed for training and inference, respectively.
- The TPU8t reduces AI model training time from months to weeks.
- Each TPU pod contains 9,600 chips with two petabytes of shared memory.
- The TPU8t offers 121 FP4 EFlops of compute power per pod.
- These advancements aim to support the growing demands of AI development in the "agentic era."
Google has unveiled its latest Tensor Processing Units (TPUs), the TPU8t and TPU8i, specifically designed to cater to the evolving needs of artificial intelligence in what the company calls the "agentic era." The TPU8t is focused on training AI models, significantly reducing the time required for training from months to just weeks. Each TPU pod is equipped with 9,600 chips and features two petabytes of shared high-bandwidth memory, allowing for scalable AI model development. The TPU8t delivers 121 FP4 EFlops of compute power per pod, nearly tripling the performance of its predecessor, the Ironwood TPU. This innovation is set to enhance the efficiency of building large-scale AI models, making it easier for developers to harness the power of AI technologies.
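To put the pod-level specs in perspective, a quick back-of-the-envelope calculation divides them across the 9,600 chips. This is a sketch based only on the figures reported above; it assumes decimal units (1 PB = 10^15 bytes), and the actual per-chip breakdown has not been confirmed by Google.

```python
# Rough per-chip figures for a TPU8t pod, derived from the article's
# pod-level specs: 9,600 chips, 2 PB shared HBM, 121 FP4 EFlops.
# Decimal units are an assumption; real memory sizes may be binary.

CHIPS_PER_POD = 9_600
POD_MEMORY_BYTES = 2e15        # 2 petabytes of shared high-bandwidth memory
POD_COMPUTE_FLOPS = 121e18     # 121 EFlops (FP4) per pod

memory_per_chip_gb = POD_MEMORY_BYTES / CHIPS_PER_POD / 1e9
compute_per_chip_pflops = POD_COMPUTE_FLOPS / CHIPS_PER_POD / 1e15

print(f"HBM per chip:     ~{memory_per_chip_gb:.0f} GB")
print(f"Compute per chip: ~{compute_per_chip_pflops:.1f} FP4 PFlops")
```

By this arithmetic, each chip would carry on the order of 200 GB of shared memory and roughly 12–13 FP4 PFlops of compute, which gives a sense of the scale Google is targeting per device.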
These new TPUs will enable developers to create advanced AI models more quickly and efficiently, potentially leading to faster innovations in AI applications.