Tendon Force Modeling for Sim2Real Transfer of Reinforcement Learning Policies for Tendon-Driven Robots
2026-03-04 • Robotics
AI summary
The authors studied how to better control robots that use tendons to move, like fingers in soft or flexible robots. They created a new way to predict the forces tendons produce using a special test setup and a transformer-based model. This model helped improve the accuracy of simulated robot behavior, making it closer to what happens in real life. Using this improved simulation, their trained controllers performed much better on real robotic fingers. Their method can work with different robot types and helps make learning-based robot control more reliable.
tendon-driven actuation, reinforcement learning, sim-to-real gap, servo motors, transformer model, robotic manipulation, force prediction, rigid body simulation, dexterous manipulation, soft robotics
Authors
Valentin Yuryev, Josie Hughes
Abstract
Robots which make use of soft or compliant interactions often leverage tendon-driven actuation, which enables actuators to be placed more flexibly and compliance to be maintained. However, controlling complex tendon systems is challenging. Simulation paired with reinforcement learning (RL) could enable more complex behaviors to be generated. Such methods rely on torque- and force-based simulation rollouts which are limited by the sim-to-real gap, stemming from the actuator and system dynamics, resulting in poor transfer of RL policies onto real robots. To address this, we propose a method to model the tendon forces produced by typical servo motors, focusing specifically on the transfer of RL policies for a tendon-driven finger. Our approach extends existing data-driven techniques by leveraging contextual history and a novel data collection test-bench. This test-bench allows us to capture tendon forces during contact-rich interactions typical of real-world manipulation. We then utilize our force estimation model in a GPU-accelerated tendon force-driven rigid body simulation to train RL-based controllers. Our transformer-based model is capable of predicting tendon forces within 3% of the maximum motor force and is robot-agnostic. By integrating our learned model into simulation, we reduce the sim-to-real gap for test trajectories by 41%. An RL-based controller trained with our model achieves a 50% improvement in fingertip pose tracking tasks on real tendon-driven robotic fingers. This approach is generalizable to different actuators and robot systems, and can enable RL policies to be used widely across tendon systems, advancing the capabilities of dexterous manipulators and soft robots.
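To illustrate the core idea of predicting tendon forces from contextual history, the following is a minimal sketch, not the authors' architecture: a single-head causal self-attention layer over a short window of actuator states (the feature set, window length, and all weights here are illustrative stand-ins for a trained model).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: context length, input features per timestep
# (e.g. commanded position, measured position, velocity), model width.
T, D_IN, D_MODEL = 16, 3, 8

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

# Randomly initialised weights stand in for trained parameters.
W_in = rng.normal(scale=0.1, size=(D_IN, D_MODEL))
W_q = rng.normal(scale=0.1, size=(D_MODEL, D_MODEL))
W_k = rng.normal(scale=0.1, size=(D_MODEL, D_MODEL))
W_v = rng.normal(scale=0.1, size=(D_MODEL, D_MODEL))
W_out = rng.normal(scale=0.1, size=(D_MODEL, 1))

def predict_force(history):
    """history: (T, D_IN) window of actuator states -> scalar force estimate."""
    h = history @ W_in
    q, k, v = h @ W_q, h @ W_k, h @ W_v
    scores = q @ k.T / np.sqrt(D_MODEL)
    # Causal mask: each timestep attends only to itself and the past,
    # so the prediction uses contextual history, not future states.
    mask = np.triu(np.ones((T, T), dtype=bool), k=1)
    scores[mask] = -np.inf
    h = softmax(scores) @ v
    # Read the force prediction off the most recent timestep.
    return float(h[-1] @ W_out)

history = rng.normal(size=(T, D_IN))
print(predict_force(history))
```

In practice such a predictor would be trained on force measurements from a test-bench like the one described in the abstract, then queried inside the rigid-body simulator to convert motor commands into applied tendon forces at each step.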