Zero-shot Interactive Perception

2026-02-20

Robotics, Artificial Intelligence
AI summary

The authors developed Zero-Shot Interactive Perception (ZS-IP), a system that helps robots understand and interact with objects by physically pushing or grasping them. Their method uses a vision-language model whose input images are augmented with new visual markers called pushlines, designed to improve pushing actions, together with a memory module that helps the robot make better decisions from context. Tested on a 7-DOF Franka Panda arm, the system outperformed other methods, especially on pushing tasks and in scenes where objects were partially hidden or the environment was complex.

Interactive Perception, Vision Language Model, Pushlines, Robotic Manipulation, 7-DOF Franka Panda arm, Semantic Reasoning, Physical Interaction, Occlusion, Memory-Guided Actions
Authors
Venkatesh Sripada, Frank Guerin, Amir Ghalamzan
Abstract
Interactive perception (IP) enables robots to extract hidden information from their workspace and execute manipulation plans by physically interacting with objects and altering the state of the environment -- crucial for resolving occlusions and ambiguity in complex, partially observable scenarios. We present Zero-Shot IP (ZS-IP), a novel framework that couples multi-strategy manipulation (pushing and grasping) with a memory-driven Vision Language Model (VLM) to guide robotic interactions and resolve semantic queries. ZS-IP integrates three key components: (1) an Enhanced Observation (EO) module that augments the VLM's visual perception with both conventional keypoints and our proposed pushlines -- a novel 2D visual augmentation tailored to pushing actions, (2) a memory-guided action module that reinforces semantic reasoning through context lookup, and (3) a robotic controller that executes pushing, pulling, or grasping based on VLM output. Unlike grid-based augmentations optimized for pick-and-place, pushlines capture affordances for contact-rich actions, substantially improving pushing performance. We evaluate ZS-IP on a 7-DOF Franka Panda arm across diverse scenes with varying occlusions and task complexities. Our experiments demonstrate that ZS-IP outperforms passive and viewpoint-based perception techniques such as Mark-Based Visual Prompting (MOKA), particularly in pushing tasks, while preserving the integrity of non-target elements.
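To make the pushline idea concrete, the sketch below shows one plausible way such a 2D augmentation could be generated: given a binary object mask, it proposes labeled line segments that start outside the object, touch its boundary, and point through its centroid -- the direction a planar push from that side would move the object. A VLM could then be asked to pick a label (e.g. "P3") instead of regressing coordinates. This is an illustrative assumption on our part, not the authors' implementation; the function name `pushlines` and the centroid-based construction are hypothetical.

```python
import numpy as np

def pushlines(mask, n_dirs=8, length=40.0):
    """Generate candidate pushlines for a binary object mask.

    Returns a list of labeled 2D segments, each running from a start
    point outside the object to a contact point on its boundary, aimed
    through the centroid. (Illustrative sketch only.)
    """
    ys, xs = np.nonzero(mask)
    cy, cx = float(ys.mean()), float(xs.mean())  # object centroid
    h, w = mask.shape
    lines = []
    for k in range(n_dirs):
        theta = 2.0 * np.pi * k / n_dirs         # approach direction
        dy, dx = float(np.sin(theta)), float(np.cos(theta))
        # Walk outward from the centroid until we leave the object
        # (or the image); that crossing is the push contact point.
        r = 0.0
        while True:
            y, x = int(round(cy + r * dy)), int(round(cx + r * dx))
            if not (0 <= y < h and 0 <= x < w) or not mask[y, x]:
                break
            r += 1.0
        contact = (cy + r * dy, cx + r * dx)
        start = (cy + (r + length) * dy, cx + (r + length) * dx)
        lines.append({"label": f"P{k}", "start": start, "end": contact})
    return lines

# Toy usage: a 20x20 square object in a 100x100 scene.
scene = np.zeros((100, 100), dtype=bool)
scene[40:60, 40:60] = True
for line in pushlines(scene):
    print(line["label"], "push toward centroid from", line["start"])
```

In a full pipeline, these segments would be drawn onto the camera image with their labels before prompting the VLM, analogous to how keypoint markers are used in mark-based prompting.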