Google launches AI chip for faster LLM training

Google has announced that it is expanding its AI-optimised infrastructure portfolio with Cloud TPU v5e, which it describes as “the most cost-efficient, versatile, and scalable Cloud TPU to date.” With the new tensor processing unit (TPU), Google aims to address computing infrastructure that cannot keep up with increasingly demanding workloads such as generative AI and LLMs.

“The number of parameters in LLMs has increased by 10x per year over the past five years. As a result, customers need AI-optimised infrastructure that is both cost-effective and scalable,” Google said.
