| id | title | status | source_sections | related_topics | key_equations | key_terms | images | examples | open_questions |
|---|---|---|---|---|---|---|---|---|---|
| dev-environment | Development Environment Setup | established | reference/sources/github-unitree-sdk2.md, reference/sources/github-unitree-sdk2-python.md, reference/sources/github-unitree-mujoco.md, reference/sources/github-unitree-rl-gym.md, reference/sources/github-groot-wbc.md | [getting-started sdk-programming simulation learning-and-ai whole-body-control motion-retargeting] | [] | [unitree_sdk2 cyclone_dds sim_to_real domain_id] | [] | [] | [Isaac Gym Preview 4 compatibility with RTX 5090 / Blackwell; WSL2 GPU passthrough latency impact on RL training throughput; GR00T-WBC Docker vs native install trade-offs] |
Development Environment Setup
Full software stack for G1 development on Windows 10 with WSL2. Install in layers — you don't need everything on day one.
Hardware assumed: Windows 10 PC with NVIDIA RTX 5090 (32GB VRAM), WSL2, developing for a G1 EDU with Jetson Orin NX.
Layer 1: WSL2 + Ubuntu (Install First)
Enable WSL2
Open PowerShell as Administrator on Windows:
wsl --install -d Ubuntu-22.04
Reboot when prompted. On first launch, create a username and password.
Verify WSL2
# From PowerShell
wsl --version # Should report WSL version 2.x
# Inside WSL2 Ubuntu
uname -r # Should show a Linux kernel ending in -microsoft-standard-WSL2
NVIDIA CUDA for WSL2
- Install the NVIDIA GPU driver on Windows (not inside WSL) — download from nvidia.com
- Inside WSL2, install the CUDA toolkit:
# Add NVIDIA package repo
wget https://developer.download.nvidia.com/compute/cuda/repos/wsl-ubuntu/x86_64/cuda-keyring_1.1-1_all.deb
sudo dpkg -i cuda-keyring_1.1-1_all.deb
sudo apt update
sudo apt install cuda-toolkit-12-6
- Verify:
nvidia-smi # Should show your RTX 5090
nvcc --version # Should show CUDA version
Note: Do NOT install NVIDIA drivers inside WSL2. The Windows driver handles GPU passthrough automatically. [T1]
Essential System Packages
sudo apt update && sudo apt install -y \
build-essential \
cmake \
git \
python3.10 \
python3.10-venv \
python3-pip \
libyaml-cpp-dev \
libeigen3-dev \
libboost-all-dev \
libspdlog-dev \
libfmt-dev \
libglfw3-dev \
wget \
curl \
unzip
Create a Project Directory
mkdir -p ~/unitree && cd ~/unitree
All repos will be cloned here.
Layer 2: CycloneDDS 0.10.2 (CRITICAL)
This MUST be version 0.10.2 exactly. Any other version causes silent DDS communication failures with the G1. [T1]
cd ~/unitree
git clone -b 0.10.2 https://github.com/eclipse-cyclonedds/cyclonedds.git
cd cyclonedds
mkdir build && cd build
cmake .. -DCMAKE_INSTALL_PREFIX=$HOME/unitree/cyclonedds/install
make -j$(nproc)
make install
Add to ~/.bashrc:
echo 'export CYCLONEDDS_HOME="$HOME/unitree/cyclonedds/install"' >> ~/.bashrc
echo 'export LD_LIBRARY_PATH="$HOME/unitree/cyclonedds/install/lib:$LD_LIBRARY_PATH"' >> ~/.bashrc
echo 'export PATH="$HOME/unitree/cyclonedds/install/bin:$PATH"' >> ~/.bashrc
source ~/.bashrc
Verify:
echo $CYCLONEDDS_HOME # Should print the install path
Layer 3: Unitree SDKs
Python SDK (Start Here)
cd ~/unitree
git clone https://github.com/unitreerobotics/unitree_sdk2_python.git
cd unitree_sdk2_python
pip3 install -e .
Verify:
python3 -c "import unitree_sdk2py; print('SDK OK')"
If it fails, it's almost always a CycloneDDS issue. Verify CYCLONEDDS_HOME is set.
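The same checks can be scripted. This diagnostic helper is an illustrative sketch, not part of the SDK — it just inspects the environment variables that Layer 2 sets up:

```python
import os

def cyclonedds_env_problems(env=os.environ):
    """Return a list of likely CycloneDDS setup problems (illustrative check)."""
    problems = []
    home = env.get("CYCLONEDDS_HOME")
    if not home:
        problems.append("CYCLONEDDS_HOME is not set (did you source ~/.bashrc?)")
    elif not os.path.isdir(home):
        problems.append(f"CYCLONEDDS_HOME points at a missing directory: {home}")
    # The SDK's DDS bindings need the shared libraries on the loader path
    if home and home + "/lib" not in env.get("LD_LIBRARY_PATH", ""):
        problems.append("LD_LIBRARY_PATH does not include $CYCLONEDDS_HOME/lib")
    return problems

for p in cyclonedds_env_problems():
    print("WARNING:", p)
```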
C++ SDK (For Production / Real-Time)
cd ~/unitree
git clone https://github.com/unitreerobotics/unitree_sdk2.git
cd unitree_sdk2
mkdir build && cd build
cmake ..
make -j$(nproc)
sudo make install
Layer 4: MuJoCo Simulation
MuJoCo Python
pip3 install mujoco
pip3 install pygame # For joystick control
Unitree MuJoCo Simulator
cd ~/unitree
git clone https://github.com/unitreerobotics/unitree_mujoco.git
Run the Simulator
cd ~/unitree/unitree_mujoco/simulate_python
python3 unitree_mujoco.py
This opens a MuJoCo window with the G1. If you're in WSL2, you need an X server or WSLg (Windows 11 has it built in; Windows 10 needs VcXsrv or similar).
WSL2 GUI on Windows 10
If MuJoCo can't open a window, install VcXsrv on Windows:
- Download VcXsrv from sourceforge
- Launch with "Disable access control" checked
- In WSL2:
echo 'export DISPLAY=$(cat /etc/resolv.conf | grep nameserver | awk "{print \$2}"):0' >> ~/.bashrc
source ~/.bashrc
Windows 11 alternative: WSLg handles this automatically — no extra setup needed.
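The DISPLAY trick above works because WSL2's /etc/resolv.conf lists the Windows host as its nameserver. The same extraction in Python, as an illustration (the function name is hypothetical):

```python
def wsl2_display_from_resolv(resolv_text: str) -> str:
    """Derive a DISPLAY value from /etc/resolv.conf contents (illustrative)."""
    for line in resolv_text.splitlines():
        parts = line.split()
        # WSL2 writes a single "nameserver <windows-host-ip>" line
        if len(parts) == 2 and parts[0] == "nameserver":
            return parts[1] + ":0"
    raise ValueError("no nameserver line found")

sample = "# generated by WSL\nnameserver 172.22.64.1\n"
print(wsl2_display_from_resolv(sample))  # -> 172.22.64.1:0
```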
Connecting SDK to Simulator
The simulator uses DDS just like the real robot. In your Python code:
# For simulation (localhost, domain ID 1)
ChannelFactoryInitialize(1, "lo")
# For real robot (your network interface)
ChannelFactoryInitialize(0, "enp2s0") # Change to your interface name
Same code, one line changes. [T0]
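One way to keep that one-line difference behind a flag — the helper below is an assumed convention, not SDK API; only ChannelFactoryInitialize (commented out) comes from unitree_sdk2py:

```python
# Illustrative helper: pick DDS settings for sim vs. real robot.
def select_dds_target(use_sim: bool, interface: str = "enp2s0"):
    """Return (domain_id, network_interface) for ChannelFactoryInitialize."""
    if use_sim:
        return 1, "lo"       # simulator: localhost, domain ID 1
    return 0, interface      # real robot: domain ID 0, your NIC name

domain_id, iface = select_dds_target(use_sim=True)
# from unitree_sdk2py.core.channel import ChannelFactoryInitialize
# ChannelFactoryInitialize(domain_id, iface)
```

Everything downstream (publishers, subscribers, control loops) stays identical between sim and real.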
Layer 5: RL Training (Install When Starting Phase C)
Python Virtual Environment (Recommended)
Keep RL dependencies separate to avoid conflicts:
cd ~/unitree
python3 -m venv rl_env
source rl_env/bin/activate
PyTorch with CUDA
pip3 install torch torchvision --index-url https://download.pytorch.org/whl/cu128
Note: the RTX 5090 (Blackwell, sm_120) needs the CUDA 12.8 wheels (PyTorch 2.7 or newer); older cu124 builds do not include Blackwell kernels.
Verify GPU access:
python3 -c "import torch; print(torch.cuda.is_available(), torch.cuda.get_device_name(0))"
# Should print: True NVIDIA GeForce RTX 5090
Isaac Gym (Preview 4)
Isaac Gym requires downloading from NVIDIA (free account required):
- Go to https://developer.nvidia.com/isaac-gym
- Download Isaac Gym Preview 4 (.tar.gz)
- Extract and install:
cd ~/unitree
tar -xzf IsaacGym_Preview_4.tar.gz
cd isaacgym/python
pip3 install -e .
Verify:
cd ~/unitree/isaacgym/python/examples
python3 joint_monkey.py # Should open a sim window with a robot
Note: Isaac Gym Preview 4 uses an older gym API. If you see gym version warnings, install: pip3 install gym==0.23.1 [T2]
RTX 5090 note: Isaac Gym Preview 4 was released before Blackwell GPUs. It should work via CUDA compatibility, but if you hit issues, Isaac Lab (see Layer 6) is the actively maintained alternative. [T3]
unitree_rl_gym (G1 RL Training)
cd ~/unitree
git clone https://github.com/unitreerobotics/unitree_rl_gym.git
cd unitree_rl_gym
pip3 install -e .
Train Your First Policy
cd ~/unitree/unitree_rl_gym
python3 legged_gym/scripts/train.py --task=g1
This trains a G1 locomotion policy using PPO. On an RTX 5090 with 32GB VRAM, you can run thousands of parallel environments. Training a basic walking policy takes 1-4 hours depending on settings. [T2]
Validate in MuJoCo (Sim2Sim)
python3 legged_gym/scripts/play.py --task=g1 # Replay in Isaac Gym
# Then Sim2Sim transfer to MuJoCo for cross-validation
Deploy to Real Robot (Sim2Real)
The deploy/ directory in unitree_rl_gym contains C++ deployment code. This runs the trained policy and sends commands via rt/lowcmd. See getting-started §8 and locomotion-control §9.
Layer 6: Whole-Body Control (Install When Starting Phase F)
GR00T-WBC (NVIDIA)
cd ~/unitree
git clone https://github.com/NVlabs/GR00T-WholeBodyControl.git
cd GR00T-WholeBodyControl
GR00T-WBC provides Docker-based setup:
# Docker approach (recommended — handles all dependencies)
docker build -t groot-wbc .
docker run --gpus all -it groot-wbc
Or native install (follow the repo's README for detailed dependency list).
Key files:
- deploy_g1.py — orchestration script for real G1 deployment
- Pre-trained locomotion models included
- LeRobot integration for data collection + behavior cloning
Pinocchio (Rigid Body Dynamics / IK)
# Via conda (recommended)
conda install -c conda-forge pinocchio
# Or via pip
pip3 install pin
Verify:
python3 -c "import pinocchio; print('Pinocchio', pinocchio.__version__)"
Isaac Lab (Optional — Alternative to Isaac Gym)
Isaac Lab is NVIDIA's actively maintained replacement for Isaac Gym. If Isaac Gym has issues with your RTX 5090:
- Install Isaac Sim 4.5.0 or 5.0.0 from NVIDIA Omniverse
- Clone unitreerobotics/unitree_sim_isaaclab
- Follow the repo's setup instructions
Isaac Lab provides better integration with GR00T-WBC and newer GPU support. [T0]
Layer 7: Motion Data (Install When Starting Phase G)
SMPL Body Model
pip3 install smplx
pip3 install trimesh # Mesh utilities
AMASS Dataset (Pre-Retargeted for G1)
pip3 install huggingface_hub
# Download the retargeted dataset
python3 -c "
import os
from huggingface_hub import snapshot_download
snapshot_download(repo_id='ember-lab-berkeley/AMASS_Retargeted_for_G1', local_dir=os.path.expanduser('~/unitree/data/amass_g1'))
"
This provides thousands of human motions already mapped to the G1's 29-DOF joint structure. Format: numpy arrays of shape [-1, 36] (29 joint positions + 7 base state). [T1]
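Assuming that [-1, 36] layout (29 joint positions followed by 7 base-state values; the exact base ordering is dataset-defined and not specified here), a clip splits like this — the array below is synthetic, standing in for a loaded file:

```python
import numpy as np

# Synthetic motion clip standing in for a retargeted AMASS file:
# N frames x 36 values (29 joint positions + 7 base-state values).
motion = np.zeros((120, 36), dtype=np.float32)

joints = motion[:, :29]   # 29-DOF joint positions per frame
base = motion[:, 29:]     # 7-value base state (ordering is dataset-defined)

print(joints.shape, base.shape)  # -> (120, 29) (120, 7)
```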
CMU Motion Capture Database (Optional — Raw Source Data)
Available at mocap.cs.cmu.edu in BVH/C3D/ASF+AMC formats. Use AMASS instead for G1 — it includes CMU data already retargeted.
Jetson Orin NX Setup (On the Robot)
The Jetson comes with most dependencies pre-installed. Key things to verify/add:
# SSH in
ssh unitree@192.168.123.164 # password: 123
# Verify CycloneDDS
echo $CYCLONEDDS_HOME # Should be /home/unitree/cyclonedds/install
# Verify Python SDK
python3 -c "import unitree_sdk2py; print('OK')"
# If SDK not installed:
cd ~
git clone https://github.com/unitreerobotics/unitree_sdk2_python.git
cd unitree_sdk2_python
pip3 install -e .
For production deployment, you'll copy your trained policy weights to the Jetson and run inference there. The Jetson's 100 TOPS handles RL policy inference easily (< 1ms per step). [T1]
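As a rough feel for that latency budget, a policy-sized MLP forward pass can be timed in plain numpy. The layer sizes below are assumptions typical of legged-robot PPO policies, not the actual unitree_rl_gym network:

```python
import time
import numpy as np

# Illustrative policy-sized MLP; sizes are assumptions, not the real network.
rng = np.random.default_rng(0)
sizes = [48, 512, 256, 128, 29]  # observation -> hidden layers -> 29 joint targets
weights = [rng.standard_normal((a, b)).astype(np.float32) * 0.01
           for a, b in zip(sizes[:-1], sizes[1:])]

def policy(obs):
    x = obs
    for w in weights[:-1]:
        x = np.maximum(x @ w, 0.0)  # ReLU hidden layers
    return x @ weights[-1]          # linear output layer

obs = rng.standard_normal(48).astype(np.float32)
t0 = time.perf_counter()
for _ in range(1000):
    policy(obs)
dt_ms = (time.perf_counter() - t0) / 1000 * 1e3
print(f"~{dt_ms:.3f} ms per inference on this machine")
```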
Quick Reference: What Runs Where
| Machine | What Runs | Why |
|---|---|---|
| Your PC (WSL2) | RL training, simulation, policy development | GPU power (RTX 5090), fast iteration |
| Jetson Orin NX | Policy inference, real-time control, deployment | On the robot, low DDS latency |
| Locomotion Computer (RK3588) | Stock controller OR passthrough in debug mode | Not user-programmable |
Your dev workflow: train on PC → validate in sim on PC → copy weights to Jetson → deploy on real robot.
Install Order Summary
Day 1:
- Layer 1 (WSL2 + Ubuntu + CUDA)
- Layer 2 (CycloneDDS 0.10.2)
- Layer 3 (Python SDK)
- Layer 4 (MuJoCo + unitree_mujoco)
Week 3+:
- Layer 5 (PyTorch, Isaac Gym, unitree_rl_gym)
Month 2+:
- Layer 6 (GR00T-WBC, Pinocchio)
- Layer 7 (AMASS dataset, SMPL)
Key Relationships
- Prereq for: getting-started (need SDK installed to talk to robot)
- Uses: sdk-programming (SDK installation details)
- Enables: simulation (MuJoCo + Isaac Gym environments)
- Enables: learning-and-ai (RL training pipeline)
- Enables: whole-body-control (GR00T-WBC framework)
- Enables: motion-retargeting (AMASS dataset, Pinocchio IK)