Throughout its onstage presentation, Intel provided new details about the Gaudi 3 architecture, its performance, and the OEMs committed to bringing it to market, and touted a number of new customers. The company cited more than a dozen "partners" using its Gaudi 3 accelerators, including Naver Corp., Bosch, NielsenIQ, and Seekr.
Historically, Nvidia has led the AI hardware market with its GPUs (graphics processing units), built to power and train large language models and AI applications. Intel positioned its Gaudi 3 as a direct competitor to Nvidia's H100 GPU.
The Gaudi 3 delivers on average 50% better inference and 40% better power efficiency than the Nvidia H100 — "at a fraction of the cost," Gelsinger said. According to Intel, the Gaudi 3 accelerator delivers four times the AI compute using the BF16 floating-point format and 1.5 times the memory bandwidth of Gaudi 2; it also offers twice the networking bandwidth of its predecessor.
Intel used TSMC's 5nm process to build the Gaudi 3 chips, which are now available to original equipment manufacturers (OEMs) including Dell, HPE, Lenovo, and Supermicro for the AI data center market. The chip is designed to be strung together with thousands of others in racks inside data centers.
Last year, Nvidia controlled about 83% of the data center chip market, with much of the remaining 17% dominated by Google's custom tensor processing units (TPUs).
Benjamin Lee, a professor at the University of Pennsylvania's School of Engineering and Applied Science, said Intel's trajectory isn't an easy one, and the company faces challenges in competing with Nvidia.