Nvidia said its A100 GPUs recently won all the MLPerf benchmark tests for AI inference, and that a wide variety of companies are using its products even as most companies worry about financial returns from AI. (Nvidia)
The GPU manufacturer swept all the tests in the latest round of MLPerf benchmarks. It noted that eight A100 GPUs in a single DGX A100 system could deliver the same computed output as nearly 1,000 dual-socket CPU servers on some applications.
Nvidia GPUs are used in systems from various server providers, including Cisco, Dell EMC, Fujitsu, and Lenovo. In a company blog post, Paresh Kharya, Nvidia's senior director of product management and marketing, noted that MLPerf benchmarks are used by Arm, Facebook, Google, Intel, Lenovo, and Microsoft.
In its blog, Nvidia said that AI breakthroughs are profoundly affecting natural language processing, medical imaging, and recommendation systems. The company's GPUs are used in aerospace, robotics, retail, manufacturing, and financial services by companies such as American Express, BMW, Capital One, Domino's, Ford, Kroger, and Toyota.
Separately on Wednesday, Synopsys announced a partnership with IBM Research's AI Hardware Center to advance AI compute performance by 1,000 times over the next decade, roughly equivalent to doubling AI compute performance every year.
Nvidia's marketing of its AI inference GPUs and other industry initiatives to boost AI compute performance stand in stark contrast to the recent finding that only 11 percent of companies say they have seen substantial financial returns on their AI investments. The results were based on a survey of 3,000 global managers and interviews conducted by Boston Consulting Group in collaboration with the MIT Sloan Management Review.
"Computer performance is important but not as important as how you have trained your AI software and how well you have defined AI parameters," said Jack Gold, an independent analyst at J. Gold Associates, in an email to Fierce Electronics.
When it comes to AI, Gold said, "What matters is how good your algorithms are, and more importantly, how useful your learning data is … AI metrics are troublesome because there are a lot of them and they might not be applied too close to what your AI process is doing. I'm taking all the benchmarks with a big grain of salt."