Nvidia and Llama 3.1 help enterprises build supercomputers


Nvidia has announced a new service that helps enterprises and countries build customized, specialized supercomputers using Meta’s latest large language model (LLM), Llama 3.1.

Nvidia released two new offerings aimed at enterprises and countries: a service within Nvidia AI Foundry, and inference microservices within Nvidia NIM. Both leverage Meta’s latest open-source LLM family, Llama 3.1, and can be used to build generative AI supercomputers.

Nvidia AI Foundry will help enterprises and countries create LLM supermodels customized for specific industry requirements, combining Llama 3.1 with Nvidia’s software, hardware, and expertise. These supermodels can be trained on proprietary data or on synthetic data generated by Llama 3.1 and the Nvidia Nemotron reward model.


Llama 3.1, released today, has 405 billion parameters and is positioned to compete with closed-source AI models such as ChatGPT and Gemini. Meta is continuously improving Llama by providing additional components that work with the model. Meta and Nvidia partnered to integrate Llama 3.1 into Nvidia’s services, making the solution available from day one. Nvidia CEO Jensen Huang said,

“…NVIDIA AI Foundry has integrated Llama 3.1 throughout and is ready to help enterprises build and deploy custom Llama supermodels.”  

The Llama 3.1 generative AI models were trained on more than 16,000 Nvidia H100 Tensor Core GPUs. They are also optimized for Nvidia’s accelerated computing hardware and software, which enables deployment in data centers, in the cloud, and on GPU-powered personal computers.


Many companies worldwide already have access to NIM microservices for Llama. Accenture is the first client, building custom Llama supermodels for Aramco, AT&T, and Uber, which will in turn be among the first to access NIM microservices built on Llama 3.1. Once a custom model has been created, an enterprise can run it using Nvidia’s NIM microservices together with its choice of MLOps and cloud platforms.
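To illustrate the deployment path described above: NIM inference microservices expose an OpenAI-compatible HTTP API, so an enterprise application can query a deployed Llama model with an ordinary chat-completions request. The sketch below builds such a request using only the Python standard library; the endpoint URL, API key, and model identifier are placeholder assumptions, not values from this article.

```python
import json
import urllib.request

def build_chat_request(prompt: str,
                       model: str = "meta/llama-3.1-405b-instruct") -> dict:
    """Build an OpenAI-style chat-completions payload for a NIM endpoint."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 256,
        "temperature": 0.2,
    }

def send(payload: dict,
         base_url: str = "https://nim.example.com/v1",   # placeholder URL
         api_key: str = "YOUR_API_KEY") -> dict:
    """POST the payload to the microservice and return the parsed response."""
    req = urllib.request.Request(
        f"{base_url}/chat/completions",
        data=json.dumps(payload).encode(),
        headers={"Authorization": f"Bearer {api_key}",
                 "Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:  # real network call; not run here
        return json.load(resp)

payload = build_chat_request("Summarize Llama 3.1 in one sentence.")
print(payload["model"])  # → meta/llama-3.1-405b-instruct
```

Because the API surface is OpenAI-compatible, the same payload works whether the microservice runs in a data center, in the cloud, or on a local GPU workstation; only `base_url` changes.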

Last week, Mistral AI released a new 12B model named Mistral NeMo in collaboration with Nvidia. The model is available as an Nvidia NIM inference microservice. On the GPU hardware front, a leaker claimed that the company will release a next-generation RTX 5090D exclusively for the Chinese market. The new GPU would succeed the RTX 4090D.
