Berlin, Germany (SPX) Feb 13, 2026
Researchers at Universidad Carlos III de Madrid (UC3M) have developed a new way for an assistive robot to learn how to move its arms autonomously by combining observational learning with communication between its limbs. The approach lets the robot watch how people perform everyday tasks and then adapt those movements so its two arms can work together safely and efficiently in domestic settings such as kitchens and dining rooms.
The work uses the ADAM platform, an Autonomous Domestic Ambidextrous Manipulator designed to support older adults in their homes or in care facilities. ADAM can already carry out a range of assistive tasks, including setting and clearing a table, tidying a kitchen, and bringing a user a glass of water, medication, or clothing at the right time. The main goal is to provide help with simple but important daily actions like fetching a drink or laying out cutlery, which can significantly improve comfort and independence for people who need assistance.
Coordinating two robotic arms so they can operate together without collisions is one of the hardest problems in current robotics research. In their study, UC3M scientists first teach each arm to carry out its part of a task independently using imitation learning, where the robot learns by example from human demonstrations. Once each arm has learned its own motion, the system links them using a mathematical framework called Gaussian Belief Propagation that allows the arms to exchange information during movement.
This coordination method acts like a continuous, invisible dialogue between the arms, allowing them to adjust their trajectories in real time and avoid bumping into each other or nearby objects. Because the robot does not need to stop and recompute its motion plan whenever the environment changes slightly, its movements remain fluid and natural. Tests in both simulations and real domestic assistance scenarios show that the robot can perform bimanual tasks more reliably and with smoother motion than with traditional planning techniques.
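To make the idea concrete, the toy sketch below couples two 1-D arm trajectories with Gaussian Belief Propagation: an imitation prior anchors each arm to its demonstrated path, while a separation factor passes messages between the arms until they settle on a safe spacing. Everything here, from the 1-D simplification to the parameter values, is an illustrative assumption rather than a detail of the UC3M implementation.

```python
import numpy as np

# Toy 1-D sketch of Gaussian Belief Propagation coupling two arm
# trajectories. Each arm has a learned reference path (from imitation
# learning); a pairwise separation factor lets the arms negotiate a
# safe lateral distance at every timestep. Names and parameters are
# illustrative, not taken from the UC3M system.

T = 50                                           # timesteps
t = np.linspace(0.0, 1.0, T)
mu_left = 0.10 * np.sin(2 * np.pi * t)           # learned left-arm path
mu_right = 0.05 + 0.10 * np.sin(2 * np.pi * t)   # learned right-arm path

P_PRIOR = 10.0    # precision pulling each arm toward its demonstration
P_SEP = 100.0     # precision of the inter-arm separation factor
D_SAFE = 0.15     # desired lateral separation (metres)

x_left, x_right = mu_left.copy(), mu_right.copy()

for _ in range(20):                              # message-passing iterations
    for k in range(T):
        # Message from the right arm: "stay D_SAFE below my position."
        # The belief update fuses the imitation prior with that message
        # by precision-weighted averaging (the Gaussian product rule).
        x_left[k] = (P_PRIOR * mu_left[k]
                     + P_SEP * (x_right[k] - D_SAFE)) / (P_PRIOR + P_SEP)
        # Symmetric message from the left arm to the right arm.
        x_right[k] = (P_PRIOR * mu_right[k]
                      + P_SEP * (x_left[k] + D_SAFE)) / (P_PRIOR + P_SEP)

print("min separation:", np.min(x_right - x_left))
```

Because each update is just a precision-weighted average of the demonstration prior and the message from the other arm, this kind of negotiation is cheap enough to run continuously rather than triggering a full replanning step.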
The UC3M team frames its work within a broader shift in robotics from hard-coded motion to learning from demonstration. Instead of programming thousands of lines of instructions to define each gesture, imitation learning lets the robot observe how a person performs an action, either by having the human guide the robot's arm directly or by recording human movements. Tasks such as serving water, arranging objects on a shelf, or placing dishes on a table can then be transferred from human to robot through examples rather than manual coding.
However, simple copying is not enough for robust operation in real homes, where object positions and conditions constantly change. If a robot only memorizes a single trajectory to pick up a bottle at a fixed location, it fails when the bottle is shifted even slightly. The researchers therefore focus on learning movement patterns that capture the essence or intent of the action, so the robot can adapt to variations in start and end positions while preserving key constraints such as keeping a bottle upright to avoid spills.
To achieve this, the team develops motion representations that behave like a deformable rubber band. When the target or environment changes, the learned trajectory is smoothly reshaped so the robot still reaches the goal while maintaining important features of the motion. This allows ADAM to generalize from a limited set of demonstrations to a wide range of practical situations, such as moving around new obstacles on a cluttered table or handing over objects at slightly different heights and angles.
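The article does not name the exact motion representation behind this rubber-band behaviour; Dynamic Movement Primitives (DMPs) are one standard way to get it, and the minimal 1-D sketch below uses a simplified DMP (no phase variable or goal scaling). A spring-damper system pulls the motion toward the goal while a learned forcing term reproduces the demonstrated shape, so moving the goal deforms the whole trajectory smoothly.

```python
import numpy as np

# Simplified 1-D Dynamic Movement Primitive: one common way to encode
# the "deformable rubber band" behaviour described above. This is an
# illustrative sketch, not ADAM's actual motion representation.

def learn_dmp(demo, dt, alpha=25.0, beta=6.25):
    """Recover the forcing term that makes the DMP replay `demo`."""
    yd = np.gradient(demo, dt)
    ydd = np.gradient(yd, dt)
    g = demo[-1]
    # Transformation system: ydd = alpha * (beta * (g - y) - yd) + f
    f = ydd - alpha * (beta * (g - demo) - yd)
    return f, demo[0], yd[0], g

def rollout_dmp(f, y0, yd0, g, dt, alpha=25.0, beta=6.25):
    """Integrate the DMP toward a (possibly new) goal g."""
    y, yd, out = y0, yd0, []
    for fk in f:
        ydd = alpha * (beta * (g - y) - yd) + fk
        yd += ydd * dt
        y += yd * dt
        out.append(y)
    return np.array(out)

dt = 0.01
tt = np.arange(0.0, 1.0, dt)
demo = 0.3 * np.sin(np.pi * tt) + 0.5 * tt       # demonstrated reach

f, y0, yd0, g = learn_dmp(demo, dt)
replay = rollout_dmp(f, y0, yd0, g, dt)          # reproduces the demo
shifted = rollout_dmp(f, y0, yd0, g + 0.2, dt)   # bottle moved 20 cm

print("demo end %.3f | replay end %.3f | shifted end %.3f"
      % (demo[-1], replay[-1], shifted[-1]))
```

Replaying with the original goal reproduces the demonstration, while shifting the goal by 20 cm bends the same learned shape toward the new target instead of failing outright.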
The robot's operation is organized into three main phases: perception, reasoning, and action. In the perception phase, ADAM collects information from its surroundings using 2D and 3D laser sensors that measure distances, detect obstacles, and locate objects, as well as RGB-D cameras that build three-dimensional models of the scene. During reasoning, the system processes these sensor data to identify relevant objects, estimate their positions, and determine how the environment is changing.
In the action phase, ADAM chooses how to move its mobile base and how to coordinate its two arms to execute specific tasks. The new dual-arm coordination method is integrated at this stage so that the robot can plan and update its movements as it interacts with people and objects. This pipeline is designed to work in real homes and care environments, where furniture, people, and objects are constantly shifting.
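As a rough illustration of how these phases fit together, the skeleton below runs a perception-reasoning-action loop. All class and method names are hypothetical; the source does not document ADAM's actual software interfaces.

```python
from dataclasses import dataclass, field

# Hypothetical skeleton of the perception-reasoning-action loop
# described above; illustrative only, not ADAM's real software stack.

@dataclass
class Scene:
    obstacles: list = field(default_factory=list)  # from 2D/3D laser scans
    objects: dict = field(default_factory=dict)    # name -> 3-D pose (RGB-D)

class AssistiveRobot:
    def perceive(self) -> Scene:
        # Fuse laser and RGB-D readings into one 3-D scene model.
        return Scene(objects={"bottle": (0.6, 0.1, 0.9)})

    def reason(self, scene: Scene) -> dict:
        # Pick out task-relevant objects and decide what to do next.
        return {"task": "fetch", "target": scene.objects.get("bottle")}

    def act(self, plan: dict) -> None:
        # Move the base and the coordinated dual arms to execute the plan.
        print("executing", plan["task"], "at", plan["target"])

robot = AssistiveRobot()
for _ in range(3):            # the real loop runs continuously
    robot.act(robot.reason(robot.perceive()))
```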
A key challenge is going beyond simply seeing objects to understanding how they are used and what users might need next. Earlier approaches often relied on fixed databases of common-sense knowledge, which struggled to capture the variety and nuance of human environments. The UC3M group is now exploring the integration of generative models and artificial intelligence tools that let the robot infer likely uses of objects and adapt its behavior to the current situation in a more context-aware way.
Although ADAM is currently an experimental platform with an estimated cost between 80,000 and 100,000 euros, the researchers argue that the underlying technology is mature enough to move toward future commercial systems. They estimate that in about 10 to 15 years, robots with similar capabilities could become much more affordable and be deployed widely in homes to support everyday living. As components become cheaper and learning methods become more efficient, assistive robots may transition from research prototypes to household helpers.
The project also highlights how robotics can help address population aging, a major social and economic issue in many countries. With the number of elderly people increasing and fewer caregivers available, technological support systems will be increasingly important to maintain quality of life and autonomy. The UC3M researchers see assistive robots like ADAM as part of a toolkit that can complement human care, reduce the burden on family members and professionals, and allow older adults to stay in their own homes longer and more safely.
In their presentation at the IROS 2025 conference, members of the Mobile Robots Group at UC3M emphasized that the long-term ambition is for robots to become genuine coworkers rather than simple recorders of movement. They envision systems that can perceive their surroundings, anticipate human actions, and collaborate safely in shared spaces. By combining imitation learning, dual-arm coordination, and context-aware perception, the team aims to bring that vision closer to reality in domestic environments.
Research Report: Coordination of Learned Decoupled Dual-Arm Tasks through Gaussian Belief Propagation
Related Links
Universidad Carlos III de Madrid