Robotics and Biology Laboratory

Interactive Perception

Manipulating objects as dexterously as humans remains an open problem in robotics - not so much in carefully controlled environments such as factories, but in everyday household environments, so-called unstructured environments.

Interactive Perception, as the name suggests, is about acting to improve perception. The fundamental assumption is that the domains of perception and action cannot be separated; they form a complex that must be studied in its entirety. Following this approach, we design robots that explore their environment actively, in a way reminiscent of how a baby explores a new toy.

Ongoing projects

Intelligent Kinematic Problem Solving

Robots need to be able to understand and manipulate kinematic structures such as windows, doors, or drawers. We can draw inspiration from animals such as Goffin's cockatoos to teach robots these skills. Although these cockatoos certainly did not evolve to solve kinematic puzzles, they show remarkable success in such tasks. We want to find out how this is possible and how we can equip robots with similarly robust manipulation skills.


Capabilities and consequences of recursive, hierarchical information processing in visual systems

Robotic vision benefits from insights about human visual perception. But how about the other way around? Could robot visual perception help understand human visual perception better? Using a hierarchical functional architecture for synthetic perceptual systems, we study human performance and derive principles of robust information processing in perceptual systems. With this, we simultaneously advance our understanding of human vision and incorporate the underlying principles in robot vision.

Previous Projects

Online Interactive Perception

Project Description

We developed an RGB-D-based online algorithm for the interactive perception of articulated objects. In contrast to existing solutions to this problem, the online nature of the algorithm permits perception during the interaction itself and addresses a number of shortcomings of existing methods. Our algorithm consists of three interconnected recursive estimation loops. The interplay of these loops is key to the robustness of our approach: the feedback they produce can be used to adapt the robot's behavior.
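The recursive-estimation idea underlying such an online algorithm can be illustrated with a minimal one-dimensional filter. This is a generic predict/update sketch, not the published algorithm; all names and noise parameters are illustrative. The point is that each new measurement refines a running estimate, so perception happens during the interaction rather than after it.

```python
# Minimal 1D recursive (Kalman-style) estimator: a generic sketch of
# the predict/update loop that online perception builds on.
# All parameters here are illustrative, not from the paper.

def predict(mean, var, process_noise=0.1):
    # Motion model: state assumed roughly constant; uncertainty grows.
    return mean, var + process_noise

def update(mean, var, measurement, meas_noise=0.5):
    # Fuse the prediction with a new measurement via the Kalman gain.
    gain = var / (var + meas_noise)
    new_mean = mean + gain * (measurement - mean)
    new_var = (1.0 - gain) * var
    return new_mean, new_var

# Track a (hypothetical) joint angle from noisy measurements,
# one at a time -- i.e., online.
mean, var = 0.0, 1.0
for z in [0.9, 1.1, 1.0, 0.95, 1.05]:
    mean, var = predict(mean, var)
    mean, var = update(mean, var, z)

print(f"estimate: {mean:.2f}, variance: {var:.2f}")
```

Interconnecting several such loops, so that the output of one serves as a measurement or prior for another, is what gives an architecture of this kind its feedback structure.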

Contact Persons

Roberto Martín-Martín, Manuel Baum, Aravind Battaje, Vito Mengers

Acquiring Kinematic Background Knowledge with Relational Reinforcement Learning

Project Description

If a robot faces a novel, unseen object, it must first acquire information about the object's kinematic structure by interacting with it. But there are infinitely many possible ways to interact with an object. The robot therefore needs kinematic background knowledge: knowledge about the regularities that hint at the kinematic structure.

We developed a method for the efficient extraction of kinematic background knowledge from interactions with the world. We use relational model-based reinforcement learning, an approach that combines concepts from first-order logic (a relational representation) with reinforcement learning. Relational representations allow the robot to conceptualize the world in terms of object parts and their relationships, and reinforcement learning enables it to learn from the experience it collects by interacting with the world. Using this approach, the robot can collect experiences and extract kinematic background knowledge that generalizes to previously unseen objects.
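Why relational representations generalize can be shown with a toy sketch. Here a state is a set of ground predicates over object parts; abstracting away the part identities makes experience collected on one object apply to any object with the same structure. The predicates, objects, and value table below are hypothetical illustrations, not the method from the project.

```python
# Sketch: relational states for kinematic structures.
# Predicates over object parts let experience generalize across
# objects that share structure. All names here are illustrative.

from collections import defaultdict

# A relational state is a set of ground predicates (predicate, part).
drawer = frozenset({("prismatic", "handle1"), ("graspable", "handle1")})
door   = frozenset({("revolute", "handle2"), ("graspable", "handle2")})

def lift(state):
    # Abstract away part identities, keeping only predicate names.
    # Objects with the same structure map to the same abstract state.
    return frozenset(pred for (pred, _part) in state)

# A value table indexed by abstract (lifted) states and actions.
q = defaultdict(float)
q[(lift(drawer), "pull")] = 1.0   # e.g., learned from drawer interactions

# The same entry now applies to an unseen prismatic, graspable part,
# while the door (a revolute joint) maps to a different abstract state.
new_obj = frozenset({("prismatic", "handle7"), ("graspable", "handle7")})
print(q[(lift(new_obj), "pull")])
```

The design choice being illustrated is the lifting step: by learning values over abstract relational states rather than concrete objects, the robot reuses its experience on any previously unseen object that instantiates the same relational pattern.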


Contact Persons

Sebastian Höfer, Manuel Baum, Aravind Battaje

Reducing uncertainty with motion planning


Uncertainty is the major obstacle for robots manipulating objects in the real world. A robot can never perfectly know its position in the world, the positions of objects, or the outcomes of its actions. A particularly hard challenge is motion planning under uncertainty. How should the robot move if its model of the world might be wrong or incomplete?

Our approach addresses this by reasoning about uncertainty and contact. A robot can significantly reduce uncertainty if it uses contact sensing to establish controlled contact with the environment. Moreover, the robot's capabilities increase if it anticipates contact events that can happen during the execution of a plan. Our goal is to develop algorithms that plan under uncertainty for high-dimensional motion problems while exploiting contact and reasoning about sensor events during planning.
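The core effect, that a single contact observation can collapse positional uncertainty, can be shown with a toy particle filter. This is an illustrative sketch under assumed numbers (wall positions, sensor tolerance), not the project's planner: hypotheses inconsistent with a sensed contact are simply discarded.

```python
# Sketch: a contact observation reduces positional uncertainty.
# A robot uncertain about a wall's position moves until it feels
# contact; hypotheses inconsistent with the contact are discarded.
# The setup and numbers are illustrative, not from the project.

import statistics

# Hypotheses (particles) for the wall position along one axis, in meters.
particles = [0.40 + 0.01 * i for i in range(21)]   # 0.40 .. 0.60
before = statistics.pstdev(particles)

# The robot senses contact with its gripper at position 0.50.
# With a contact-sensing tolerance of +/- 1.5 cm, only nearby
# hypotheses remain consistent with the observation.
contact_pos, tol = 0.50, 0.015
particles = [p for p in particles if abs(p - contact_pos) <= tol]
after = statistics.pstdev(particles)

print(f"std before: {before:.3f} m, after: {after:.3f} m")
```

A planner that anticipates such contact events can deliberately steer toward them, trading a small detour for a large reduction in uncertainty.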

Contact Persons

Arne Sieverling, Előd Páll


Science of Intelligence

Funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany's Excellence Strategy - EXC 2002/1 "Science of Intelligence" - project number 390523135.

Soft Manipulation (SoMa)

Funded by the European Commission in the Horizon 2020 program, award number 645599, May 2015 - April 2019.

Alexander von Humboldt Professorship

Awarded by the Alexander von Humboldt Foundation and funded through the Federal Ministry of Education and Research (BMBF), July 2009 - June 2014.



Publications

Battaje, Aravind; Brock, Oliver; Rolfs, Martin
An interactive motion perception tool for kindergarteners (and vision scientists)
i-Perception, 14(2):20416695231159182, March 2023
ISSN: 2041-6695

Mengers, Vito; Battaje, Aravind; Baum, Manuel; Brock, Oliver
Combining Motion and Appearance for Robust Probabilistic Object Segmentation in Real Time
2023 IEEE International Conference on Robotics and Automation (ICRA), pages 683-689

Battaje, Aravind; Brock, Oliver
One Object at a Time: Accurate and Robust Structure From Motion for Robots
Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)

Battaje, Aravind; Brock, Oliver
Interconnected Recursive Filters in Artificial and Biological Vision
Proceedings of the DGR Days, page 32

Baum, Manuel; Brock, Oliver
Achieving Robustness by Optimizing Failure Behavior
Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), pages 5806-5811