- Nvidia acquires SchedMD to continue offering Slurm as open source for workload management
- Slurm manages scheduling and resources for large clusters running parallel AI tasks
- Nvidia has launched the Nemotron 3 AI models in Nano, Super and Ultra formats
Nvidia announced a significant expansion of its open source effort, pairing an acquisition with the release of new open AI models.
The company announced that it has acquired SchedMD, the developer of Slurm, an open source workload management system widely used in high-performance computing and artificial intelligence.
Nvidia will continue to use Slurm as vendor-neutral software to ensure compatibility with various hardware and maintain support for existing HPC and AI clients.
Slurm and more
Slurm handles the scheduling, queuing, and resource allocation of large groups of computers running parallel tasks.
More than half of the top 10 and top 100 supercomputers on the TOP500 list rely on Slurm. The system is also used by enterprises, cloud providers, research labs and AI companies across industries including autonomous driving, healthcare, energy, financial services, manufacturing and government.
Slurm runs on the latest Nvidia hardware and continues to be tuned by its developers for advanced AI workloads.
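To illustrate what that scheduling looks like in practice, here is a minimal sketch of submitting a batch job to a Slurm cluster from Python by writing a job script and calling `sbatch`. The partition name, resource requests and training script are hypothetical placeholders, not values from Nvidia or SchedMD.

```python
import subprocess
import tempfile

# Hypothetical job script: requests two nodes with eight GPUs each.
# Partition name, time limit and train.py are placeholders.
job_script = """#!/bin/bash
#SBATCH --job-name=train-example
#SBATCH --partition=gpu
#SBATCH --nodes=2
#SBATCH --gpus-per-node=8
#SBATCH --time=04:00:00

srun python train.py
"""

# Write the script to a temporary file so sbatch can read it.
with tempfile.NamedTemporaryFile("w", suffix=".sh", delete=False) as f:
    f.write(job_script)
    script_path = f.name

# sbatch queues the job; Slurm allocates the requested nodes and GPUs
# when they become available and runs the script on that allocation.
result = subprocess.run(["sbatch", script_path], capture_output=True, text=True)
print(result.stdout.strip())  # e.g. "Submitted batch job 12345"
```

On a real cluster, the queued job could then be tracked with standard Slurm commands such as `squeue` or `scontrol show job`.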
In addition to the acquisition, Nvidia also presented the Nemotron 3 open model family, available in Nano, Super and Ultra formats.
The models use a hybrid mixture-of-experts architecture designed to support multi-agent AI systems.
Nemotron 3 Nano focuses on efficient task execution, Nemotron 3 Super supports collaboration between multiple AI agents, and Nemotron 3 Ultra handles complex reasoning workflows.
Nvidia provides these models with associated datasets, reinforcement learning libraries, and NeMo Gym training environments.
Nemotron 3 models run on Nvidia-accelerated computing platforms, including workstations and large AI clusters.
Developers can combine open models with proprietary systems in multi-agent workflows using public clouds or enterprise platforms.
Nvidia provides tools, libraries, and datasets to support training, testing, and deployment in various computing environments.
Nvidia has released three billion tokens of pre-training, post-training and reinforcement learning data for the Nemotron 3 models.
Additional AI tools, including NeMo RL and NeMo Evaluator, provide model evaluation and safety assessment.
Early adopters of the Nemotron 3 integration include companies in software, cybersecurity, media, manufacturing and cloud services.
Nvidia has made the open source models, AI tools and datasets available on GitHub and Hugging Face for developers building agentic AI applications.
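As a rough illustration of how a developer might pull one of these open models, the sketch below loads a model from Hugging Face with the transformers library. The repository ID is a hypothetical placeholder; actual Nemotron 3 model names should be taken from Nvidia's Hugging Face organization.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder repository ID -- not a confirmed Nemotron 3 model name.
model_id = "nvidia/Nemotron-3-Nano"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Run a short generation to confirm the model loads and responds.
inputs = tokenizer(
    "Summarize what a workload manager does:", return_tensors="pt"
).to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```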
“Open innovation is the foundation for progress in AI,” Nvidia founder and CEO Jensen Huang wrote in the company’s press release.
“With Nemotron, we’re turning advanced AI into an open platform that gives developers the transparency and efficiency they need to build scalable agent systems.”
