Google Splits Its AI Chip in Two to Cut Inference Costs by 80%
At Google Cloud Next, Alphabet unveiled TPU 8t for training and TPU 8i for inference — the first time Google has shipped purpose-built dies for each workload. The company claims 80% better inference economics, with a supply chain spanning Broadcom, MediaTek, and TSMC's 2nm node.