Here’s how our TPUs power increasingly demanding AI workloads.

Behind the Google products you use every day are custom chips designed for one job: doing math at massive scale. They’re called TPUs, or Tensor Processing Units.

We designed TPUs from the ground up more than a decade ago specifically to run AI models. Basically, AI models take a lot of math to work, and TPUs can do that complex math super quickly: The newest generation of TPUs can deliver 121 exaflops of compute with double the bandwidth of previous generations.

Learn more about these tiny but mighty processors in the video below.