Squint: Fast Visual Reinforcement Learning for Sim-to-Real Robotics

2026-02-24

Categories: Robotics, Computer Vision and Pattern Recognition, Machine Learning
AI summary

The authors developed Squint, a method for teaching robots from images more quickly and efficiently. It combines several improvements that cut training time, such as smarter data processing and fast, parallelized simulation. Squint was evaluated on a suite of eight robot manipulation tasks and transferred successfully to a real robot. The approach trains policies in minutes on a single GPU, faster than previous visual reinforcement learning methods for robots.

Keywords: visual reinforcement learning, off-policy methods, on-policy methods, Soft Actor Critic, distributional critic, domain randomization, sim-to-real transfer, ManiSkill3, GPU acceleration, robotic manipulation
Authors
Abdulaziz Almuzairee, Henrik I. Christensen
Abstract
Visual reinforcement learning is appealing for robotics but expensive -- off-policy methods are sample-efficient yet slow; on-policy methods parallelize well but waste samples. Recent work has shown that off-policy methods can train faster than on-policy methods in wall-clock time for state-based control. Extending this to vision remains challenging, where high-dimensional input images complicate training dynamics and introduce substantial storage and encoding overhead. To address these challenges, we introduce Squint, a visual Soft Actor Critic method that achieves faster wall-clock training than prior visual off-policy and on-policy methods. Squint achieves this via parallel simulation, a distributional critic, resolution squinting, layer normalization, a tuned update-to-data ratio, and an optimized implementation. We evaluate on the SO-101 Task Set, a new suite of eight manipulation tasks in ManiSkill3 with heavy domain randomization, and demonstrate sim-to-real transfer to a real SO-101 robot. We train policies for 15 minutes on a single RTX 3090 GPU, with most tasks converging in under 6 minutes.
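The abstract names several ingredients without spelling them out. The numpy sketch below illustrates plausible readings of two of them: resolution squinting, interpreted here as average-pool downsampling of camera observations to a lower training resolution, and layer normalization of feature vectors for training stability. Function names and details are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def squint_resize(img: np.ndarray, factor: int) -> np.ndarray:
    """Downsample an HxWxC image by average pooling.

    A simple stand-in for "resolution squinting": training on
    lower-resolution observations to cut encoding and storage cost.
    """
    h, w, c = img.shape
    assert h % factor == 0 and w % factor == 0, "factor must divide H and W"
    return img.reshape(h // factor, factor, w // factor, factor, c).mean(axis=(1, 3))

def layer_norm(x: np.ndarray, eps: float = 1e-5) -> np.ndarray:
    """Normalize a feature vector to zero mean and unit variance,
    as layer normalization does for encoder/critic features."""
    return (x - x.mean()) / np.sqrt(x.var() + eps)

# Example: a 128x128 RGB frame squinted down to 64x64, then normalized.
obs = np.random.rand(128, 128, 3)   # simulated camera frame
small = squint_resize(obs, 2)       # shape (64, 64, 3)
feat = layer_norm(small.ravel())    # flat, normalized feature vector
```

In this reading, the downsampling halves each spatial dimension (a 4x reduction in pixels per frame), which directly shrinks replay-buffer storage and encoder compute; the normalization step is the standard trick for stabilizing off-policy updates at a higher update-to-data ratio.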