To operate efficiently in the real world, robots need to be adaptable, learn new skills readily, and adjust to their surroundings. Traditional training methods, however, can limit a robot's ability to apply learned skills to new situations, often because of the gap between perception and action and the difficulty of transferring skills across different contexts.
NVIDIA aims to address these limitations with Isaac Lab, an open-source, modular framework for robot learning. Isaac Lab provides high-fidelity simulations of diverse training environments, combining physical AI capabilities with GPU-powered physics simulation.
The platform supports both imitation learning, where robots learn by mimicking humans, and reinforcement learning, where robots learn through trial and error. Imitation learning is typically used for tasks with specific movements or behaviors; it requires less data and leverages human expertise. Support for imitation learning comes through the Robomimic learning framework, with demonstration data saved in the HDF5 format.
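To make the storage format concrete, here is a minimal sketch of saving and reloading a demonstration episode in HDF5 using the h5py library. The dataset layout and field names are illustrative examples, not Robomimic's exact schema.

```python
import h5py
import numpy as np

# Hypothetical demonstration episode: observations and actions from a
# teleoperated run (shapes and names are illustrative examples).
observations = np.random.rand(200, 32).astype(np.float32)  # 200 steps, 32-dim obs
actions = np.random.rand(200, 7).astype(np.float32)        # 7-DoF arm commands

with h5py.File("demos.hdf5", "w") as f:
    demo = f.create_group("data/demo_0")
    demo.create_dataset("obs", data=observations)
    demo.create_dataset("actions", data=actions)
    demo.attrs["num_samples"] = len(actions)

# Read the episode back, as an imitation-learning pipeline would at training time.
with h5py.File("demos.hdf5", "r") as f:
    demo = f["data/demo_0"]
    print(demo["obs"].shape, demo["actions"].shape)
```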
Reinforcement learning (RL), on the other hand, makes robots more adaptable to new situations and can exceed human performance on some tasks. However, RL can be slow and requires carefully designed reward functions to guide the robot's learning. Isaac Lab supports RL through wrappers for different libraries, which convert environment data into the function argument and return types each library expects.
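As a rough illustration of what such a wrapper does, the sketch below adapts a simulator's native step output into the (obs, reward, terminated, truncated, info) tuple that Gym-style RL libraries expect. Both classes are hypothetical stand-ins, not Isaac Lab's actual wrapper code.

```python
import numpy as np

class SimEnv:
    """Hypothetical stand-in for a framework-native environment."""
    def reset(self):
        return [0.0, 0.0]

    def step(self, action):
        return [0.1, 0.1], 1.0, False, {}

class GymStyleWrapper:
    """Converts native environment data into the argument and return
    types a Gym-style RL library expects."""
    def __init__(self, env):
        self.env = env

    def reset(self):
        return np.asarray(self.env.reset(), dtype=np.float32), {}

    def step(self, action):
        state, reward, done, info = self.env.step(action)
        obs = np.asarray(state, dtype=np.float32)  # array type RL libraries expect
        return obs, float(reward), done, False, info

env = GymStyleWrapper(SimEnv())
obs, info = env.reset()
obs, reward, terminated, truncated, info = env.step([0.0])
```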
The framework offers flexible training approaches for any robot embodiment and a user-friendly environment for building training scenarios, helping robot makers add or update robot skills as business needs change.
Inside Isaac Lab’s key features
Some key features of the system include:
Flexibility with task design workflows
Isaac Lab allows users to build robot training environments in two ways, NVIDIA said: manager-based or direct. With the manager-based workflow, you can switch out different parts of the environment. To optimize performance for complex logic, NVIDIA recommends the direct workflow.
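A toy sketch of the distinction, with class names that are hypothetical rather than Isaac Lab's actual API: the manager-based environment composes swappable pieces, while the direct environment owns its step logic in-line.

```python
# Manager-based: the environment is assembled from swappable components,
# so a reward term can be exchanged without touching the rest.
def distance_reward(state):
    return -abs(state["distance_to_goal"])

class ManagerBasedEnv:
    def __init__(self, reward_fn):
        self.reward_fn = reward_fn                   # swappable piece
        self.state = {"distance_to_goal": 1.0}

    def step(self):
        self.state["distance_to_goal"] -= 0.1
        return self.reward_fn(self.state)

# Direct: one class implements the full step logic itself, trading
# modularity for fine-grained control over complex, performance-critical logic.
class DirectEnv:
    def __init__(self):
        self.distance = 1.0

    def step(self):
        self.distance -= 0.1
        return -abs(self.distance)                   # reward computed in-line

print(ManagerBasedEnv(reward_fn=distance_reward).step(), DirectEnv().step())
```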
Tiled rendering
Isaac Lab offers high-fidelity rendering for robot learning, helping reduce the sim-to-real gap. Tiled rendering cuts rendering time by consolidating input from multiple cameras into a single large image. It provides an API for handling vision data, where the rendered output directly serves as observation data for learning in simulation.
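The packing idea itself can be illustrated with plain NumPy; the camera count and resolution below are arbitrary examples, not Isaac Lab's renderer internals.

```python
import numpy as np

# 16 cameras, each producing a 120x160 RGB frame.
num_cams, h, w, c = 16, 120, 160, 3
frames = np.random.randint(0, 255, (num_cams, h, w, c), dtype=np.uint8)

# Pack all cameras into one 4x4 grid: a single large (480, 640, 3) image
# that can be rendered in one pass instead of 16.
grid = 4
tiled = (frames.reshape(grid, grid, h, w, c)
               .transpose(0, 2, 1, 3, 4)
               .reshape(grid * h, grid * w, c))

# Slice camera 5's view back out as the observation for its environment.
row, col = divmod(5, grid)
obs_5 = tiled[row * h:(row + 1) * h, col * w:(col + 1) * w]
assert np.array_equal(obs_5, frames[5])
```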
Multi-GPU and multi-node support
For complex reinforcement learning environments, users may want to scale up training across multiple GPUs. NVIDIA said this is possible in Isaac Lab through the PyTorch distributed framework.
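A minimal sketch of that pattern, assuming a torchrun launch and a placeholder linear policy rather than an actual Isaac Lab model:

```python
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def main():
    dist.init_process_group(backend="nccl")           # one process per GPU
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    policy = torch.nn.Linear(32, 7).cuda(local_rank)  # toy stand-in for a policy network
    policy = DDP(policy, device_ids=[local_rank])     # synchronizes gradients across ranks

    obs = torch.randn(64, 32, device=local_rank)      # each rank trains on its own rollouts
    loss = policy(obs).pow(2).mean()
    loss.backward()                                   # DDP all-reduces gradients here

    dist.destroy_process_group()

if __name__ == "__main__":
    main()  # launch with, e.g.: torchrun --nproc_per_node=4 train.py
```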
Vectorized APIs
Users can tap into enhanced View APIs for improved usability, NVIDIA said. These eliminate the need for pre-initialized buffers, cache indices for the different objects in a scene, and support multiple view objects within a scene.
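As a rough illustration of the batched-access idea behind such APIs (the tensor layout and names here are hypothetical, not Isaac Lab's actual View API):

```python
import torch

# States for every parallel environment live in one batched tensor:
# position (3) + quaternion (4) + linear/angular velocity (6) per env.
num_envs = 1024
root_states = torch.zeros(num_envs, 13)

# Cache the indices of environments to reset once...
reset_ids = torch.tensor([3, 17, 512])

# ...then read or write all of them in a single vectorized operation,
# with no per-object Python loop or pre-initialized staging buffer.
root_states[reset_ids, :3] = torch.rand(len(reset_ids), 3)
```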
Easy deployment to public clouds
Isaac Lab supports deployment on AWS, GCP, Azure, and Alibaba Cloud, with Docker integration for efficient RL task execution in containers, as well as scaling of multi-GPU and multi-node jobs using OSMO. NVIDIA OSMO is a cloud-native workflow orchestration platform that helps to orchestrate, visualize, and manage a range of tasks. These include generating synthetic data, training foundation models, and implementing software-in-the-loop systems for any robot embodiment.
Accurate physics simulation
According to NVIDIA, users can tap into the latest GPU-accelerated PhysX version through Isaac Lab, including support for deformable bodies, ensuring fast and accurate physics simulation augmented by domain randomization.
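Domain randomization itself is simple to sketch: physical parameters are resampled from ranges each episode so that a policy trained in simulation stays robust to real-world variation. The parameter names and ranges below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

def sample_physics_params():
    return {
        "friction": rng.uniform(0.5, 1.5),       # contact friction coefficient
        "mass_scale": rng.uniform(0.8, 1.2),     # +/-20% link-mass perturbation
        "motor_strength": rng.uniform(0.9, 1.1), # actuator gain variation
    }

for episode in range(3):
    params = sample_physics_params()  # would be applied to the simulator here
    print(f"episode {episode}: {params}")
```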
Industry collaborators using Isaac Lab for humanoids, surgical robots, and more
NVIDIA’s industry collaborators are using Isaac Lab to train humanoid robots. These collaborators include Fourier Intelligence, whose GR-1 humanoid robot has human-like degrees of freedom, and Mentee Robotics, whose MenteeBot is built for household-to-warehouse applications.
NVIDIA has additional products for humanoid robot learning. NVIDIA Project GR00T is an initiative to develop general-purpose foundation models for humanoid robots. The complexity of modeling humanoid dynamics increases exponentially with each added degree of freedom, so RL and imitation learning are the only scalable ways to develop policies for humanoids that work across a wide variety of tasks and environments.
Isaac Lab is enabling industry collaborators to perform robot learning, including 1X, the AI Institute, Boston Dynamics, ByteDance Research, Field AI, Fourier, Galbot, LimX Dynamics, Mentee, NEURA Robotics, RobotEra, and Skild AI.
ORBIT-Surgical is a simulation framework based on Isaac Lab. It trains surgical robots, such as the da Vinci Research Kit (dVRK), to assist surgeons and reduce their mental workload. The framework uses reinforcement learning and imitation learning, running on NVIDIA RTX GPUs, to enable robots to manipulate both rigid and soft objects. Additionally, NVIDIA Omniverse helps generate high-fidelity synthetic data for training AI models that segment surgical tools in real-world hospital operating room videos.
Boston Dynamics is using Isaac Lab and NVIDIA Jetson AGX Orin to deploy simulation-trained policies directly for inference, simplifying the deployment process.