Learning Adaptive Force Control for Contact-Rich Sample Scraping with Heterogeneous Materials

2026-03-11 · Robotics
AI summary

The authors developed a robot system to help scrape materials out of small laboratory vials, a task typically done by humans using spatulas. They created a special control method combining a stable robot arm controller with a learning agent that adjusts how much force to use based on what the robot sees. To train this system, they made a computer simulation where the materials behave differently in each spot, helping the robot learn how to handle various substances. After training in simulation, the authors tested the system on a real robot, showing it performed better than a simple fixed-force approach. This work helps bring robots closer to doing delicate tasks in human labs.

robotic manipulation, reinforcement learning, Cartesian impedance control, adaptive control, robot simulation, force control, material handling, lab automation, transfer learning, robotic scraping
Authors
Cenk Cetin, Shreyas Pouli, Gabriella Pizzuto
Abstract
The increasing demand for accelerated scientific discovery, driven by global challenges, highlights the need for advanced AI-driven robotics. Deploying robotic chemists in human-centric labs is key for the next horizon of autonomous discovery, as complex tasks still demand the dexterity of human scientists. Robotic manipulation in this context is uniquely challenged by handling diverse chemicals (granular, powdery, or viscous liquids) under varying lab conditions. For example, humans use spatulas for scraping materials from vial walls. Automating this process is challenging because it goes beyond simple robotic insertion tasks and traditional lab automation, requiring the execution of fine-grained movements within a constrained environment (the sample vial). Our work proposes an adaptive control framework to address this, relying on a low-level Cartesian impedance controller for stable and compliant physical interaction and a high-level reinforcement learning agent that learns to dynamically adjust interaction forces at the end-effector. The agent is guided by perception feedback, which provides the material's location. We first created a task-representative simulation environment with a Franka Research 3 robot, a scraping tool, and a sample vial containing heterogeneous materials. To facilitate the learning of an adaptive policy and model diverse characteristics, the sample is modelled as a collection of spheres, where each sphere is assigned a unique dislodgement force threshold, procedurally generated using Perlin noise. We train an agent to autonomously learn and adapt the optimal contact wrench for a sample scraping task in simulation and then successfully transfer this policy to a real robotic setup. Our method was evaluated across five different material setups, outperforming a fixed-wrench baseline by an average of 10.9%.
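The abstract's material model — spheres whose dislodgement thresholds vary smoothly across the sample via procedural noise — can be sketched as follows. This is illustrative only: the grid scale, the force range in newtons, and the simplified lattice value-noise function (a stand-in for true gradient-based Perlin noise) are assumptions, not details from the paper.

```python
import numpy as np

def value_noise_3d(points, grid_scale=2.0, seed=0):
    """Smooth pseudo-random field in [0, 1) sampled at 3D points.
    A simplified value-noise stand-in for Perlin noise: random values
    on an integer lattice, trilinearly blended with smoothstep weights."""
    def lattice(ix, iy, iz):
        # Deterministic integer hash per lattice cell, normalized to [0, 1)
        h = (ix * 73856093) ^ (iy * 19349663) ^ (iz * 83492791) ^ seed
        return (np.abs(h) % (2**31)) / 2**31

    p = np.asarray(points, dtype=float) * grid_scale
    i = np.floor(p).astype(int)      # lattice cell of each point
    f = p - i                        # fractional position within the cell
    w = f * f * (3.0 - 2.0 * f)      # smoothstep interpolation weights

    out = np.zeros(len(p))
    for dx in (0, 1):
        for dy in (0, 1):
            for dz in (0, 1):
                v = lattice(i[:, 0] + dx, i[:, 1] + dy, i[:, 2] + dz)
                wx = w[:, 0] if dx else 1.0 - w[:, 0]
                wy = w[:, 1] if dy else 1.0 - w[:, 1]
                wz = w[:, 2] if dz else 1.0 - w[:, 2]
                out += v * wx * wy * wz
    return out

# Hypothetical sphere centers inside a unit-cube vial region
centers = np.random.default_rng(1).uniform(0.0, 1.0, size=(200, 3))

# Map the noise field to a per-sphere dislodgement force threshold;
# the 0.5-5.0 N range is an assumed example, not the paper's values.
f_min, f_max = 0.5, 5.0
thresholds = f_min + (f_max - f_min) * value_noise_3d(centers)
```

Because the noise field is spatially smooth, neighbouring spheres receive similar thresholds, producing the heterogeneous-but-coherent material regions the policy must adapt to.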