---
id: gb10-offboard-compute
title: Dell Pro Max GB10 — Offboard AI Compute
status: established
source_sections: Cross-referenced from git/spark context system
related_topics: [hardware-specs, networking-comms, learning-and-ai, sensors-perception, deployment-operations]
key_equations: []
key_terms: [dell-pro-max-gb10, dgx-spark, offboard-compute, llm, vlm, isaac-lab]
images: []
examples: []
open_questions:
  - DDS latency over Wi-Fi between GB10 and G1 under realistic conditions
  - Optimal LLM size for real-time task planning (latency vs. capability tradeoff)
---

Dell Pro Max GB10 — Offboard AI Compute

The Dell Pro Max GB10 (NVIDIA Grace Blackwell) can serve as an offboard AI brain for the G1, handling large model inference, training, and simulation that exceed the Jetson Orin NX's capabilities.

Full integration document: See git/spark/context/g1-integration.md in the Dell Pro Max GB10 knowledge base for complete architecture, code examples, and setup instructions.

1. Capability Comparison

| Capability | G1 Orin NX | Dell Pro Max GB10 |
|------------|------------|-------------------|
| AI compute | 100 TOPS | 1,000 TFLOPS (FP4) |
| Memory | 16 GB | 128 GB unified LPDDR5X |
| Max LLM | ~7B (quantized) | ~200B (FP4) |
| CUDA arch | sm_87 | sm_121 (Blackwell) |
| CPU | ARM (Orin) | ARM (Cortex-X925/A725) |
| Price | Included in G1 EDU | $3,699-$3,999 |

2. Connection

  • Wi-Fi: G1 Wi-Fi 6 ↔ GB10 Wi-Fi 7 (backward compatible). ~1 Gbps, 5-50 ms latency.
  • 10GbE: GB10 RJ45 to G1 Ethernet. 10 Gbps, <1 ms latency. Best for lab use.
  • Subnet: GB10 joins 192.168.123.0/24 (e.g., 192.168.123.100) or uses a router bridge.
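
A quick way to confirm which link is actually in use is to time a TCP handshake against the GB10's API port and compare it with the figures above. A minimal sketch, assuming the GB10 sits at 192.168.123.100 with the LLM server on port 30000 (both per this document; adjust for your subnet):

```python
import socket
import time

GB10_ADDR = ("192.168.123.100", 30000)  # assumed GB10 IP and LLM API port

def classify_link(rtt_ms: float) -> str:
    """Rough link classification using the latency figures from section 2."""
    if rtt_ms < 1.0:
        return "10GbE-class"
    if rtt_ms <= 50.0:
        return "Wi-Fi-class"
    return "degraded"

def tcp_rtt_ms(addr, timeout: float = 2.0) -> float:
    """Time a TCP handshake as a cheap round-trip probe (no payload)."""
    t0 = time.perf_counter()
    with socket.create_connection(addr, timeout=timeout):
        pass
    return (time.perf_counter() - t0) * 1000.0

if __name__ == "__main__":
    try:
        rtt = tcp_rtt_ms(GB10_ADDR)
        print(f"{rtt:.2f} ms -> {classify_link(rtt)}")
    except OSError as e:
        print("GB10 unreachable:", e)
```

A handshake-time probe slightly overstates raw RTT but is enough to distinguish a wired link from Wi-Fi.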

3. Key Use Cases

| Use Case | G1 Role | GB10 Role | Latency OK? |
|----------|---------|-----------|-------------|
| LLM task planning | Sends command, executes plan | Runs 70B+ LLM, returns plan | Yes (1-5 s) |
| Vision-language | Streams D435i frames | Runs large VLM | Yes (0.5-2 s) |
| RL policy training | Deploys trained policy | Runs Isaac Lab simulation | Offline |
| Imitation learning | Collects demo data | Trains LeRobot policies | Offline |
| Speech interaction | STT/TTS on Orin | LLM reasoning on GB10 | Yes (1-5 s) |

4. What Stays On-Robot

  • 500 Hz locomotion control loop (RK3588)
  • Balance and stability (real-time, cannot tolerate network latency)
  • Emergency stop
  • Basic perception (on Orin NX)

The GB10 handles only high-level reasoning with relaxed latency requirements.
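
The on-robot vs. offboard split reduces to a latency-budget check: a task can only offload if its tolerance comfortably absorbs a network round trip. The budgets below are illustrative values read off sections 2-4, not tuned numbers:

```python
# Hypothetical latency budgets (seconds) per task, from sections 3-4.
LATENCY_BUDGET_S = {
    "locomotion": 0.002,    # 500 Hz loop, must stay on the RK3588
    "balance": 0.002,       # real-time, cannot tolerate network latency
    "estop": 0.0,           # hard real-time, never offboard
    "perception": 0.05,     # basic perception stays on the Orin NX
    "llm_planning": 5.0,    # 1-5 s acceptable -> GB10
    "vlm": 2.0,             # 0.5-2 s acceptable -> GB10
}

NETWORK_RTT_S = 0.05  # worst-case Wi-Fi figure from section 2

def runs_offboard(task: str) -> bool:
    """A task offloads only if its budget exceeds a network round trip."""
    return LATENCY_BUDGET_S[task] > NETWORK_RTT_S
```

With these numbers only the LLM and VLM tasks clear the bar, matching the table in section 3.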

5. LLM API Access

GB10 runs an OpenAI-compatible API:

# From G1 Orin NX
curl http://192.168.123.100:30000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model":"llama","messages":[{"role":"user","content":"Walk to the table and pick up the red cup"}]}'
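
From Python on the Orin NX the same endpoint can be called with only the standard library. A minimal sketch, assuming the server at 192.168.123.100:30000 accepts the OpenAI chat-completions schema and that "llama" is the served model name:

```python
import json
import urllib.request

API_URL = "http://192.168.123.100:30000/v1/chat/completions"  # assumed GB10 endpoint

def build_request(prompt: str, model: str = "llama") -> bytes:
    """Build an OpenAI-style chat-completions payload as JSON bytes."""
    return json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }).encode()

def ask_gb10(prompt: str, timeout: float = 30.0) -> str:
    """Send a prompt to the GB10 and return the assistant's reply text."""
    req = urllib.request.Request(
        API_URL,
        data=build_request(prompt),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req, timeout=timeout) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]
```

The 30 s timeout leaves headroom for the 1-5 s planning latency quoted in section 3.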

6. ARM Compatibility

Both systems are ARM64-native. Model files (.pt, .onnx, .gguf) trained on GB10 deploy directly to Orin NX without architecture conversion. Container images are interoperable (both aarch64).

7. GR00T-WBC Deployment — Verified (2026-02-14/15) [T1]

GR00T-WBC runs successfully on the GB10 for both simulation and real robot control. The GB10 relays DDS commands to the G1 over Ethernet at 50 Hz.
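
A 50 Hz relay implies a fixed 20 ms command period. A minimal sketch of such a loop (the Rate helper and send_command hook are hypothetical, not part of GR00T-WBC):

```python
import time

class Rate:
    """Fixed-rate sleeper for a periodic command relay loop."""
    def __init__(self, hz: float):
        self.period = 1.0 / hz                    # 50 Hz -> 20 ms per cycle
        self._next = time.monotonic() + self.period

    def sleep(self):
        # Sleep until the next deadline; if we overran it, don't sleep at all.
        delay = self._next - time.monotonic()
        if delay > 0:
            time.sleep(delay)
        self._next += self.period

def relay_loop(send_command, n_cycles: int, hz: float = 50.0):
    """Send one command per cycle at a fixed rate (DDS publish goes here)."""
    rate = Rate(hz)
    for i in range(n_cycles):
        send_command(i)  # placeholder for the actual DDS publish call
        rate.sleep()
```

Scheduling against absolute deadlines (rather than sleeping a fixed interval after each send) keeps the average rate at 50 Hz even when individual cycles jitter.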

Network Configuration:

  • GB10 at 10.0.0.68 on LAN, also at 192.168.123.100 on robot subnet
  • SSH: ssh mitchaiet@10.0.0.68 (password: Strat3*gb10)
  • Firewall (ufw) open for: SSH (22), VNC (5900), NoMachine (4000), Sunshine (47984-47990), web viewer (8080), robot subnet (192.168.123.0/24)

Software Stack on GB10:

  • Ubuntu 24.04.3 LTS (Noble), kernel 6.14.0-1015-nvidia
  • NVIDIA GB10 GPU, driver 580.95.05
  • Python 3.12.3, ROS2 Jazzy
  • GR00T-WBC cloned to ~/GR00T-WholeBodyControl with Python venv
  • CycloneDDS 0.10.4 (system) / 0.10.5 (ROS2) — ABI incompatible but works with FastRTPS RMW (default)

Remote Access (headless — no monitor):

  • NoMachine (NX protocol) on port 4000 — best for interactive desktop
  • Sunshine (NVENC game streaming) on ports 47984-47990 — installed but Moonlight client unstable on Win10
  • x11vnc on port 5900 — works for basic desktop but cannot stream OpenGL content
  • Xvfb virtual framebuffer on display :99 — used for headless rendering

Critical Patches Applied:

  • Removed <Tracing> XML from unitree_sdk2py/core/channel_config.py (aarch64 buffer overflow fix)
  • Created ros2.pth in venv for ROS2 package access
  • Patched sync mode sim thread check in run_g1_control_loop.py
  • Enabled auto-login in /etc/gdm3/custom.conf
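
The ros2.pth patch above can be reproduced with a short script. The site-packages path below is an assumption for ROS2 Jazzy on Python 3.12 and should be verified on the machine:

```python
import pathlib
import sysconfig

def ros2_pth_line(ros_distro: str = "jazzy", py: str = "python3.12") -> str:
    """Path a .pth file must contain to expose system ROS2 packages (assumed layout)."""
    return f"/opt/ros/{ros_distro}/lib/{py}/site-packages"

def install_pth() -> pathlib.Path:
    """Write ros2.pth into the active (venv) site-packages directory."""
    site = pathlib.Path(sysconfig.get_paths()["purelib"])
    target = site / "ros2.pth"
    target.write_text(ros2_pth_line() + "\n")
    return target
```

A .pth file in site-packages is appended to sys.path at interpreter startup, so the venv sees the system ROS2 packages without being created with --system-site-packages.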

Launch Commands:

# Real robot control (primary use case)
Xvfb :99 -screen 0 1024x768x24 &
tmux new-session -d -s groot "cd ~/GR00T-WholeBodyControl && source .venv/bin/activate && \
  export LD_LIBRARY_PATH=/opt/ros/jazzy/lib:\$LD_LIBRARY_PATH && \
  export DISPLAY=:99 && \
  export CYCLONEDDS_URI='<CycloneDDS><Domain><General><Interfaces>\
    <NetworkInterface address=\"192.168.123.100\"/></Interfaces></General></Domain></CycloneDDS>' && \
  python3 -u gr00t_wbc/control/main/teleop/run_g1_control_loop.py --no-with-hands --interface real \
  2>&1 | tee /tmp/groot_diag.log"

# Simulation with viewer (from NoMachine terminal)
bash ~/GR00T-WholeBodyControl/launch_sim.sh --sync

# Headless simulation with web viewer
python ~/GR00T-WholeBodyControl/launch_with_web_viewer.py \
  --interface sim --simulator mujoco --no-enable-onscreen \
  --no-with-hands --sim-sync-mode --keyboard-dispatcher-type ros

# Keyboard sender (separate terminal)
python /tmp/keysender.py
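
The CYCLONEDDS_URI in the real-robot launch above pins CycloneDDS to the robot-subnet interface. Because a malformed inline XML string is easy to produce when escaping quotes inside tmux, a quick parse check is a cheap safeguard (same string as in the tmux command):

```python
import xml.etree.ElementTree as ET

CYCLONEDDS_URI = (
    "<CycloneDDS><Domain><General><Interfaces>"
    '<NetworkInterface address="192.168.123.100"/>'
    "</Interfaces></General></Domain></CycloneDDS>"
)

root = ET.fromstring(CYCLONEDDS_URI)  # raises ParseError if malformed
iface = root.find("./Domain/General/Interfaces/NetworkInterface")
print("pinned interface:", iface.get("address"))
```

Running this before launch confirms both that the XML is well-formed and that the pinned address matches the GB10's robot-subnet IP.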

Key Relationships