Top Stories

Nvidia updates its flagship processor to support more complex AI systems



The processor, dubbed the H200, will overtake Nvidia's current top-of-the-line H100 chip. Its main upgrade is more high-bandwidth memory, one of the costliest components of the processor, which determines how much data it can handle quickly.


On Monday, Nvidia announced an upgrade to its premium AI processor, saying the chip would begin rolling out with Amazon.com, Google and Oracle early next year.




Nvidia is the industry leader in AI processors, powering OpenAI's ChatGPT service and many other generative AI services that produce human-like written answers to questions. The additional high-bandwidth memory and a faster connection to the chip's processing elements will allow such services to respond more quickly.


The H200 has 141 gigabytes of high-bandwidth memory, up from 80 gigabytes in the H100. Nvidia did not name the companies supplying the memory on the new chip, although Micron Technology said in September that it was working to become an Nvidia supplier.


Nvidia also buys memory from SK Hynix, the Korean company that said last month AI processors were boosting its sales.


Nvidia said Amazon Web Services, Google Cloud, Microsoft Azure and Oracle Cloud Infrastructure would be among the first cloud service providers to offer access to H200 chips, along with specialized AI cloud providers CoreWeave, Lambda and Vultr.


