GCImOpt: Learning efficient goal-conditioned policies by imitating optimal trajectories
2026-04-24 • Robotics
AI summary
The authors introduce GCImOpt, a method to teach machines how to control systems by learning from many carefully planned example actions. Their technique quickly creates large datasets of high-quality demonstrations using trajectory optimization, which tells them the best steps to take. They also expand the data by treating points along these paths as extra goals to aim for, boosting learning. The trained small and fast neural networks can handle different control tasks like balancing poles and moving robot arms efficiently, and can run on devices with limited computing power.
Imitation learning, Trajectory optimization, Goal-conditioned policies, Neural networks, Control systems, Data augmentation, Cart-pole stabilization, Quadcopter control, 6-DoF robot arm
Authors
Jon Goikoetxea, Jesús F. Palacián
Abstract
Imitation learning is a well-established approach for machine-learning-based control. However, its applicability depends on having access to demonstrations, which are often expensive to collect and/or suboptimal for solving the task. In this work, we present GCImOpt, an approach to learn efficient goal-conditioned policies by training on datasets generated by trajectory optimization. Our approach for dataset generation is computationally efficient, can generate thousands of optimal trajectories in minutes on a laptop computer, and produces high-quality demonstrations. Further, by means of a data augmentation scheme that treats intermediate states as goals, we are able to increase the training dataset size by an order of magnitude. Using our generated datasets, we train goal-conditioned neural network policies that can control the system towards arbitrary goals. To demonstrate the generality of our approach, we generate datasets and then train policies for various control tasks, namely cart-pole stabilization, planar and three-dimensional quadcopter stabilization, and point reaching using a 6-DoF robot arm. We show that our trained policies can achieve high success rates and near-optimal control profiles, all while being small (fewer than 80,000 neural network parameters) and fast (up to more than 6,000 times faster than a trajectory optimization solver), so that they could be deployed onboard resource-constrained controllers. We provide videos, code, datasets, and pre-trained policies under a free software license; see our project website https://jongoiko.github.io/gcimopt/.
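The augmentation scheme mentioned in the abstract — treating intermediate states along an optimal trajectory as extra goals — can be sketched as follows. This is an illustrative reading of the idea, not the paper's actual implementation; the function name `augment_trajectory` and the tuple layout are assumptions.

```python
# Hedged sketch of goal relabeling for trajectory-optimization demonstrations:
# every later state on the same optimal trajectory is treated as an additional
# goal for the action taken at an earlier step. Names are illustrative and not
# taken from the GCImOpt codebase.

def augment_trajectory(states, actions):
    """Turn one trajectory into many (state, goal, action) training tuples.

    states:  sequence of T+1 states visited by the trajectory
    actions: sequence of T actions, where actions[t] is taken at states[t]
    """
    tuples = []
    for t in range(len(actions)):
        # The optimal trajectory passes through every later state, so each
        # one is a plausible goal for the action taken at step t.
        for g in range(t + 1, len(states)):
            tuples.append((states[t], states[g], actions[t]))
    return tuples

demo = augment_trajectory(states=[0, 1, 2, 3], actions=["a0", "a1", "a2"])
# A 3-step trajectory yields 3 + 2 + 1 = 6 tuples instead of 3.
```

Relabeling all later states in this way grows each trajectory's contribution quadratically in its length, which is consistent with the order-of-magnitude dataset increase the abstract reports.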