Robotic World Model Lite


This repository is a lightweight implementation intended for users who want to train dynamics models from offline data and model-based policies only, without the need to set up a full robotics simulator like Isaac Lab.

We provide a Google Colab notebook for a quick start: RWM Lite Colab Notebook.

⭐ For the full version with online simulator-based data collection, model and policy training and evaluation pipeline, please refer to our full Isaac Lab RWM Extension implementation.

Overview

This repository provides a lightweight training pipeline for Robotic World Model (RWM) and Uncertainty-Aware Robotic World Model (RWM-U), as well as related model-based reinforcement learning methods, without any simulator dependency.

It enables:

  • training of dynamics models with ensemble recurrent neural networks,
  • training of policies with learned neural network dynamics without any simulator,
  • WandB logging support for experiment tracking.
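To make the first two points concrete, below is a minimal, dependency-free sketch of an ensemble of recurrent dynamics models, where the disagreement across ensemble members serves as an uncertainty signal. All class and parameter names here are illustrative and do not reflect the actual rwm_lite API.

```python
import math
import random

random.seed(0)

class TinyRNN:
    """Minimal recurrent dynamics model: h' = tanh(W_x x + W_h h), out = W_o h'."""
    def __init__(self, state_dim, hidden_dim):
        rnd = lambda: random.uniform(-0.5, 0.5)
        self.W_x = [[rnd() for _ in range(state_dim)] for _ in range(hidden_dim)]
        self.W_h = [[rnd() for _ in range(hidden_dim)] for _ in range(hidden_dim)]
        self.W_o = [[rnd() for _ in range(hidden_dim)] for _ in range(state_dim)]

    def step(self, x, h):
        h_new = [math.tanh(sum(wx * xi for wx, xi in zip(row_x, x)) +
                           sum(wh * hi for wh, hi in zip(row_h, h)))
                 for row_x, row_h in zip(self.W_x, self.W_h)]
        out = [sum(w * hi for w, hi in zip(row, h_new)) for row in self.W_o]
        return out, h_new

class EnsembleDynamics:
    """Ensemble of recurrent models; per-dimension prediction variance
    across members is a cheap proxy for epistemic uncertainty."""
    def __init__(self, n_members, state_dim, hidden_dim):
        self.members = [TinyRNN(state_dim, hidden_dim) for _ in range(n_members)]
        self.hiddens = [[0.0] * hidden_dim for _ in range(n_members)]

    def predict(self, state):
        preds = []
        for i, m in enumerate(self.members):
            out, self.hiddens[i] = m.step(state, self.hiddens[i])
            preds.append(out)
        n = len(preds)
        mean = [sum(p[d] for p in preds) / n for d in range(len(state))]
        var = [sum((p[d] - mean[d]) ** 2 for p in preds) / n for d in range(len(state))]
        return mean, var

ens = EnsembleDynamics(n_members=5, state_dim=3, hidden_dim=8)
mean, var = ens.predict([0.1, -0.2, 0.3])
print(len(mean), len(var))  # 3 3
```

The actual implementation trains such models on offline trajectories; this sketch only shows the ensemble-prediction structure.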

Robotic World Model

Paper: Robotic World Model: A Neural Network Simulator for Robust Policy Optimization in Robotics
Project Page: https://sites.google.com/view/roboticworldmodel

Uncertainty-Aware Robotic World Model

Paper: Uncertainty-Aware Robotic World Model Makes Offline Model-Based Reinforcement Learning Work on Real Robots
Project Page: https://sites.google.com/view/uncertainty-aware-rwm

Authors: Chenhao Li, Andreas Krause, Marco Hutter
Affiliation: ETH AI Center, Learning & Adaptive Systems Group and Robotic Systems Lab, ETH Zurich
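The core idea behind the uncertainty-aware variant can be illustrated with a common offline model-based RL recipe: penalize imagined rewards in states where the ensemble disagrees, so the policy avoids regions the model cannot predict reliably. The function and the `beta` weight below are hypothetical; the exact formulation is given in the RWM-U paper.

```python
def uncertainty_penalized_reward(raw_reward, ensemble_std, beta=1.0):
    """Discourage the policy from exploiting states where ensemble
    members disagree (illustrative form, not the repository's API)."""
    return raw_reward - beta * ensemble_std

# Same raw reward, but high model disagreement makes the second state less attractive.
confident = uncertainty_penalized_reward(1.0, ensemble_std=0.05, beta=2.0)
uncertain = uncertainty_penalized_reward(1.0, ensemble_std=0.60, beta=2.0)
print(confident > uncertain)  # True
```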


Installation

  1. Create a Conda environment with python>=3.10 and activate it:

     conda create -n rwm_lite python=3.10 -y
     conda activate rwm_lite

  2. Clone this repository inside your Isaac Lab directory:

     git clone git@github.com:leggedrobotics/robotic_world_model_lite.git
     cd robotic_world_model_lite

  3. Install rwm_lite:

     python -m pip install -e .

Model-Based Policy Training & Evaluation

  1. Log in to WandB:

     wandb login

  2. Train a policy with RWM:

     python scripts/train.py --task anymal_d_flat

     The trained policy is saved under logs/.

  3. Evaluate the policy with a simulator or hardware:

     The learned policy can be played and evaluated with our full Isaac Lab RWM Extension or the original Isaac Lab task registry.

     python scripts/reinforcement_learning/rsl_rl/play.py --task Isaac-Velocity-Flat-Anymal-D-Play-v0 --checkpoint <checkpoint_path>
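The training step above optimizes the policy purely inside the learned world model. The sketch below shows the general shape of that loop: roll the policy out through a (here, stand-in) learned dynamics model, score the imagined trajectory, and improve the policy. The 1-D model, reward, and random-search update are all hypothetical simplifications, not the repository's algorithm.

```python
import random

random.seed(1)

def learned_model(state, action):
    """Stand-in for a trained RWM dynamics model (hypothetical 1-D system)."""
    return state + 0.1 * action - 0.05 * state

def reward(state):
    return -abs(state)  # drive the state toward zero

def imagined_return(policy_gain, horizon=20):
    """Roll the policy out inside the learned model only - no simulator."""
    s, total = 1.0, 0.0
    for _ in range(horizon):
        a = -policy_gain * s          # simple linear feedback policy
        s = learned_model(s, a)
        total += reward(s)
    return total

# Improve the policy by random search over imagined rollouts.
best_gain, best_ret = 0.0, imagined_return(0.0)
for _ in range(200):
    cand = best_gain + random.gauss(0, 0.5)
    ret = imagined_return(cand)
    if ret > best_ret:
        best_gain, best_ret = cand, ret

print("best imagined return:", best_ret)
```

In the actual pipeline the policy is a neural network trained with gradient-based RL against the ensemble model, but the simulator-free rollout structure is the same.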

Code Structure

We provide a reference pipeline that runs RWM and RWM-U on ANYmal D.

Key files:


Citation

If you find this repository useful for your research, please consider citing:

@article{li2025robotic,
  title={Robotic world model: A neural network simulator for robust policy optimization in robotics},
  author={Li, Chenhao and Krause, Andreas and Hutter, Marco},
  journal={arXiv preprint arXiv:2501.10100},
  year={2025}
}
@article{li2025offline,
  title={Offline Robotic World Model: Learning Robotic Policies without a Physics Simulator},
  author={Li, Chenhao and Krause, Andreas and Hutter, Marco},
  journal={arXiv preprint arXiv:2504.16680},
  year={2025}
}
