You trained a robot arm to pick up objects in simulation. Success rate: 95%. You deploy it on a real robot. Success rate: 30%. Welcome to the sim-to-real gap.
This gap is the central challenge of robot learning, and in 2026 we have better tools and techniques to close it than ever before. This guide covers what causes the gap and how to fix it.
## Why Sim-to-Real Fails

### The Three Gaps
Visual gap: Simulation renders don’t look like the real world. Lighting, textures, shadows, camera noise — all differ between sim and real. Your visual encoder learned features that don’t exist in reality.
Physics gap: Simulated physics is an approximation. Contact dynamics, friction, deformable objects, cable behavior — these are notoriously hard to simulate accurately. A grasp that works perfectly in MuJoCo may fail when the real gripper meets real friction.
Dynamics gap: Your simulated robot has perfect motors with zero latency. Your real robot has motor backlash, communication delays, sensor noise, and wear. The control policy never learned to handle these imperfections.
## Domain Randomization: The Brute Force Approach
The idea is simple: if you can’t make simulation match reality perfectly, randomize the simulation parameters so broadly that reality falls within the training distribution.
### What to Randomize
Visual randomization:
- Lighting direction, color, and intensity
- Object textures and colors
- Background scenes
- Camera position and field of view
- Added noise (Gaussian, salt-and-pepper)
Physics randomization:
- Object mass (±30-50%)
- Friction coefficients (±50%)
- Joint damping and stiffness
- Contact parameters
- Table height and object positions
Dynamics randomization:
- Control frequency jitter
- Action delay (1-3 timesteps)
- Sensor noise injection
- Actuator strength scaling
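The randomizations above boil down to drawing fresh parameters at every episode reset. A minimal sketch, assuming illustrative parameter names and nominal values (not tied to any particular simulator's API):

```python
import random

def sample_episode_params(rng=random):
    """Draw one episode's worth of randomized parameters."""
    return {
        # physics: mass within +/-30% and friction within +/-50% of nominal
        "object_mass_kg": 0.2 * rng.uniform(0.7, 1.3),
        "friction": rng.uniform(0.5, 1.5),
        # dynamics: 1-3 timesteps of action delay, scaled actuator strength
        "action_delay_steps": rng.randint(1, 3),
        "actuator_scale": rng.uniform(0.9, 1.1),
        # visual: jittered lighting intensity
        "light_intensity": rng.uniform(0.5, 1.5),
    }
```

Apply the sampled values during environment reset, so every rollout sees a slightly different world.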
### How Much to Randomize
This is the art. Too little randomization and your policy overfits to simulation. Too much and it can’t learn anything useful.
Rule of thumb: start with ±20% on physical parameters and widen the ranges until performance in simulation starts to degrade. A well-randomized policy should score slightly worse in simulation than one trained without randomization; that is the price of robustness.
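One way to operationalize this rule is to widen the randomization band step by step and stop once simulated success drops more than a tolerance below the non-randomized baseline. A sketch, where `evaluate_in_sim` is a stand-in for your own evaluation routine (hypothetical, not a real API), and backing off one step on degradation is a simple heuristic:

```python
def tune_randomization(evaluate_in_sim, nominal=1.0, start=0.2, step=0.1,
                       max_drop=0.05):
    """Widen a +/-width randomization band until sim performance degrades."""
    baseline = evaluate_in_sim(lo=nominal, hi=nominal)  # no randomization
    width = start
    while width < 1.0:
        score = evaluate_in_sim(lo=nominal * (1 - width),
                                hi=nominal * (1 + width))
        if baseline - score > max_drop:   # degraded too much: back off a step
            return width - step
        width += step
    return width
```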
### NVIDIA Isaac Sim Implementation
Isaac Sim’s built-in domain randomization makes this straightforward:
```python
from omni.isaac.lab.envs import ManagerBasedRLEnv

# In your environment config:
class MyEnvCfg:
    class domain_rand:
        randomize_friction = True
        friction_range = [0.5, 1.5]   # friction coefficient multiplier
        randomize_mass = True
        mass_range = [0.8, 1.2]       # link mass multiplier
        push_robots = True            # random external pushes
        push_interval_s = 5.0         # seconds between pushes
        max_push_vel_xy = 0.5         # m/s
```
## System Identification: The Precise Approach
Instead of randomizing broadly, measure the real-world parameters and simulate them accurately.
### What to Measure
- Motor characteristics — Use a dynamometer to measure torque-speed curves
- Joint friction — Move each joint slowly and measure resistance
- Sensor calibration — Compare simulated and real camera intrinsics/extrinsics
- Latency — Measure the round-trip time from observation to action execution
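Latency, the last item above, is the easiest to probe in software. A minimal sketch, where `read_obs` and `send_action` are hypothetical stand-ins for your robot's I/O calls:

```python
import time
import statistics

def measure_latency(read_obs, send_action, n=100):
    """Median observation -> action round-trip time in seconds."""
    samples = []
    for _ in range(n):
        t0 = time.perf_counter()
        obs = read_obs()       # grab one observation
        send_action(obs)       # issue an action derived from it
        samples.append(time.perf_counter() - t0)
    return statistics.median(samples)
```

Feed the measured median back into simulation as a fixed action delay, or as the center of a randomized delay range.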
### When System ID Beats Domain Randomization
System ID works better when:
- You have a fixed, well-characterized robot
- The task requires precise manipulation
- You can afford the measurement time
Domain randomization works better when:
- You’re deploying across many robot instances
- The environment changes frequently
- You can’t easily measure all parameters
## Real-to-Sim-to-Real: The Modern Approach
The 2026 state-of-the-art combines both approaches:
1. Scan the real environment using depth cameras or LiDAR
2. Reconstruct it in simulation (NVIDIA Isaac Sim supports this natively)
3. Train with domain randomization on top of the realistic reconstruction
4. Fine-tune on real data — collect a small set of real-world demonstrations and fine-tune the sim-trained policy
This pipeline typically achieves 70-85% real-world success rates, compared to 30-50% from pure sim-to-real.
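The fine-tuning step is mechanically ordinary supervised learning: start from the sim-trained weights and take a few gradient steps on the real (observation, action) pairs. A toy one-parameter linear "policy" keeps the sketch dependency-free; real pipelines do the same thing with a neural policy:

```python
def finetune(w_sim, real_pairs, lr=0.05, epochs=200):
    """Fine-tune a scalar linear policy a = w * obs on real demos."""
    w = w_sim
    for _ in range(epochs):
        # mean-squared-error gradient over the real demonstrations
        grad = sum(2 * (w * x - y) * x for x, y in real_pairs) / len(real_pairs)
        w -= lr * grad
    return w
```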
## Practical Tips
Start simple. Get a basic policy working in sim before worrying about transfer. Many sim-to-real failures are actually just bad sim policies.
Log everything on the real robot. Record observations, actions, and outcomes. Compare these distributions to simulation — the divergences tell you exactly where the gap is.
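A crude but effective first pass at comparing those distributions is a per-channel shift score: how far the real mean has moved, in units of the simulated standard deviation. The one-number summary is a simplification; in practice you would also compare spreads or full histograms:

```python
import statistics

def channel_shift(sim_values, real_values):
    """Mean shift of real vs. sim, in units of the sim standard deviation."""
    mu_sim, mu_real = statistics.mean(sim_values), statistics.mean(real_values)
    sd_sim = statistics.pstdev(sim_values) or 1e-8  # guard constant channels
    return abs(mu_real - mu_sim) / sd_sim
```

Channels with the largest shift are where the gap lives: a big shift on force readings points at the physics gap, one on image statistics at the visual gap.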
Use a real-world validation set. Before deploying, run 20-50 real-world trials. If success rate is below 60%, don’t iterate on the real robot — go back to simulation.
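With only 20-50 trials, the raw success rate is noisy, so it helps to check a confidence lower bound rather than the point estimate. The standard Wilson score interval gives one:

```python
import math

def wilson_lower(k, n, z=1.96):
    """Lower ~95% Wilson bound on success probability from k/n successes."""
    if n == 0:
        return 0.0
    p = k / n
    denom = 1 + z * z / n
    center = p + z * z / (2 * n)
    margin = z * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n))
    return (center - margin) / denom
```

For example, 18 successes in 20 trials (90% observed) only guarantees about 70% with 95% confidence; that still clears a 60% bar, but 14/20 does not.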
Curriculum learning helps. Start with easy variants (centered objects, good lighting) and gradually increase difficulty. This works in both sim and real.
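A curriculum can be as simple as a scalar difficulty level that advances only when recent success clears a threshold. Names and thresholds here are illustrative:

```python
class Curriculum:
    """Difficulty level in [0, 1]: 0 = easy (centered objects, good
    lighting), 1 = full task difficulty."""
    def __init__(self, threshold=0.8, step=0.1):
        self.threshold = threshold
        self.step = step
        self.level = 0.0

    def update(self, success_rate):
        # advance only when the recent success rate clears the threshold
        if success_rate >= self.threshold:
            self.level = min(1.0, self.level + self.step)
        return self.level
```

Map `level` onto whatever you randomize: spawn radius, lighting jitter, object variety.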
The first 10 real-world demonstrations are gold. Even a small amount of real data for fine-tuning dramatically improves transfer. Tools like LeRobot make data collection straightforward.
## Tools for 2026
| Tool | Purpose | Best For |
|---|---|---|
| NVIDIA Isaac Sim | Photorealistic simulation + domain randomization | Full pipeline |
| MuJoCo | Fast physics simulation | Rapid iteration |
| Isaac Lab | RL training framework on Isaac Sim | Policy training |
| LeRobot | Real-world data collection + policy training | Fine-tuning |
| RealSense/ZED | Depth cameras for environment scanning | Real-to-sim reconstruction |
## What’s Next
The sim-to-real gap is shrinking but not disappearing. Key research frontiers include:
- Neural sim-to-real — Learning the transfer function itself
- Foundation model pre-training — VLA models pre-trained on diverse sim data transfer better
- Adaptive policies — Policies that detect and compensate for sim-real mismatch at runtime
The goal isn’t zero gap — it’s a gap small enough that a few real-world demonstrations can close it.