Qualcomm and Nvidia vie for leadership in AI chip efficiency tests

Qualcomm’s artificial intelligence chips outperformed Nvidia’s in two of three measures of energy efficiency in a new set of test data published on Wednesday, while a Taiwanese startup beat both companies in one category.

Nvidia dominates the market for training AI models with large amounts of data. But once those models are trained, they are used more broadly in what’s called “inference,” performing tasks such as generating text responses to prompts or deciding whether an image contains a cat.

Analysts believe the market for data center inference chips will grow rapidly as companies build AI technologies into their products, but companies like Alphabet Inc’s Google are already exploring how to rein in the additional costs that doing so will bring.

One of those big costs is electricity, and Qualcomm has used its history of designing chips for battery-powered devices like smartphones to create a chip called the Cloud AI 100, which aims for parsimonious power consumption.

In test data published on Wednesday by MLCommons, an engineering consortium that maintains widely used benchmarks in the industry, Qualcomm’s AI 100 outperformed Nvidia’s flagship H100 chip at image classification, based on how many data center server queries each chip can carry out per watt.

Qualcomm’s chips achieved 197.6 server queries per watt versus Nvidia’s 108.4 queries per watt. Neuchips, a startup founded by veteran Taiwanese chip academic Youn-Long Lin, took first place with 227 queries per watt.
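For readers unfamiliar with the metric, queries per watt can be read as sustained query throughput divided by power draw. The short Python sketch below illustrates that arithmetic only; the function name and the throughput and power figures in the example are hypothetical, with just the 108.4 queries-per-watt result taken from the numbers reported above.

```python
def queries_per_watt(queries_per_second: float, avg_power_watts: float) -> float:
    """Efficiency expressed as sustained query throughput per watt of average power."""
    return queries_per_second / avg_power_watts

# Hypothetical illustration: a chip sustaining 54,200 queries/second at an
# average draw of 500 W would score 108.4 queries per watt, matching the
# H100 figure cited above (throughput and power here are assumed values,
# not published benchmark results).
print(queries_per_watt(54_200, 500))  # -> 108.4
```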

Source: CNN Brasil
