Hugging Face, an artificial intelligence firm, and Amazon.com's cloud division announced a partnership on Wednesday to make it easier to run thousands of AI models on Amazon's custom AI chips.
Hugging Face, which is valued at $4.5 billion and backed by Amazon, Alphabet's Google, and Nvidia, among others, has developed into a major hub where AI researchers and developers share chatbots and other AI software. It is where developers go to obtain and experiment with open-source AI models, including Llama 3 from Meta Platforms.
However, after modifying an open-source AI model, developers usually want to incorporate it into a software product. On Wednesday, Hugging Face and Amazon announced that they have teamed up to enable this on a custom-designed AWS chip known as Inferentia2.
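In practice, this workflow runs through Hugging Face's optimum-neuron package, which connects the Hugging Face libraries to AWS's Neuron SDK. The sketch below shows roughly what deploying a model on an Inferentia2 (inf2) instance looks like; the model name, batch size, and sequence length are illustrative choices, not details from the announcement.

```python
# A minimal sketch, assuming the optimum-neuron package is installed on an
# AWS Inferentia2 (inf2) instance. The model ID and input shapes below are
# examples, not specifics from the Hugging Face/AWS announcement.
from transformers import AutoTokenizer
from optimum.neuron import NeuronModelForSequenceClassification

model_id = "distilbert-base-uncased-finetuned-sst-2-english"  # example model

# export=True compiles the checkpoint into a Neuron-optimized graph.
# The Neuron compiler requires static input shapes, declared up front.
model = NeuronModelForSequenceClassification.from_pretrained(
    model_id,
    export=True,
    batch_size=1,
    sequence_length=128,
)
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Pad to the compiled sequence length so inputs match the static shape.
inputs = tokenizer(
    "Inferentia2 is built for cheap, high-volume inference.",
    return_tensors="pt",
    padding="max_length",
    max_length=128,
    truncation=True,
)
logits = model(**inputs).logits
print(logits.argmax(dim=-1))
```

The compiled model keeps the familiar transformers-style API, so swapping a GPU-backed model for an Inferentia2-backed one is mostly a matter of changing the class it is loaded with.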
Jeff Boudier, head of product and growth at Hugging Face, said, "One element that's very important to us is efficiency - making sure that as many people as possible can run models, and that they can run them in the most cost-effective way."
AWS, for its part, wants to draw more AI developers to its cloud services for delivering AI. Although Nvidia dominates the market for training models, AWS contends that its chips can run those trained models, a process known as inference, at a lower cost over time.
"You train these models maybe once a month, but you may be running inference against them tens of thousands of times an hour, and that's where Inferentia2 really shines," said Matt Wood, who oversees artificial intelligence products at AWS.
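A quick back-of-envelope calculation, using the illustrative rates from the quote rather than figures from either company, shows why the recurring inference bill is the one a cheaper chip is positioned to shrink:

```python
# Back-of-envelope sketch of why inference cost dominates. The rates are
# the hypothetical ones quoted above, not real pricing or usage data.
training_runs_per_month = 1
inference_calls_per_hour = 10_000      # "tens of thousands ... an hour"
hours_per_month = 24 * 30

inference_calls_per_month = inference_calls_per_hour * hours_per_month
print(inference_calls_per_month)       # 7,200,000 calls per month

# Even if one training run costs far more than one inference call, at
# roughly 7.2 million calls a month the steady inference spend is the
# recurring cost that a chip like Inferentia2 targets.
```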