Robotics and Biology Laboratory

Robot learning aims to create robots that adapt and improve their performance over time through machine learning. Machine learning algorithms rely on data to train models that make predictions and select actions in real-world scenarios. However, if the embodiment of the machine learning system is not taken into account, these models may struggle to perform effectively in real-world situations or to generalize to new environments.


Embodiment refers to the physical form and structure of the machine learning system and its interaction with the environment. In the case of robots, this encompasses factors such as the robot's body shape, movement abilities, and sensory input. These elements can greatly affect a robot's ability to perform tasks and interact with its surroundings. To optimize machine learning for the field of robotics, it is crucial to consider both learning and embodiment together. We are approaching this by discovering robotics-specific prior knowledge and incorporating it into our learning algorithms. As an example, our research has shown that robots can efficiently learn versatile manipulation skills from just a single human demonstration by utilizing the benefits of embodiment to generate complementary information. Additionally, incorporating knowledge of physical laws helps us to learn state representations from data more efficiently, as robots interact with the physical world through their bodies.

Ongoing Projects


Learning To Manipulate From Demonstration

We will develop a novel approach to Learning from Demonstration for teaching a robot complex, contact-rich manipulation tasks. This approach will enable robots to physically operate complex locks and other mechanisms. The scientific challenge lies in reliably operating such multi-degree-of-freedom mechanisms, which require transitions between different multi-contact situations. Rather than programming these manipulation actions directly, we will have a human demonstrate the motions to the robot.


Rational Selection of Exploration Strategies in an Escape Room Task

How do humans select the right strategy to solve a task? We aim to unlock the secrets of the mind's toolbox. In this project, we explore the mechanisms behind strategy selection in solving cognitive and behavioral tasks. Focusing on how the trade-off between accuracy and costs is inferred, we aim to provide a deeper understanding of the ecologically rational strategy selection process and how it can be improved.

Previous Projects

State Representation Learning

Contact Persons

Rico Jonschkowski

Project description

We want to enable robots to learn a broad range of tasks. Learning means generalizing knowledge from experienced situations to new situations. But in order to do so, the robots must already know what makes situations similar or different with respect to their current task. They need to be able to extract the right information from their sensory input that characterizes these situations. This information is what we call "state".

The information that should be included in the state differs depending on the task. For driving a car, for example, the state representation of the environment must include the road, other cars, traffic lights and so on. For cooking dinner in a kitchen, it must focus on completely different aspects of the environment.

Instead of relying on human-defined perception (a mapping from observations to the current state) for a specific task, robots must be able to autonomously learn which patterns in their sensory input are important. We think that they can learn this by interacting with the world: performing actions, observing how the sensory input changes, and noticing which situations are rewarding. From such experience, robots can learn task-specific state representations by making them consistent with prior knowledge about the physical world, e.g., that changes in the world are proportional to the magnitude of the robot's actions, or that the state and the action together determine the reward.
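To make this concrete, such priors can be written as loss terms on a learned observation-to-state encoder. Below is a minimal PyTorch sketch of two of these priors, temporal coherence and proportionality; the encoder, dimensions, and names are illustrative assumptions, not our published implementation.

```python
import torch
import torch.nn as nn

# Encoder that maps raw observations to a low-dimensional state.
# Dimensions are placeholders: 32-dim observations, 2-dim state.
encoder = nn.Linear(32, 2)

def temporal_coherence_loss(obs, next_obs):
    # Prior: the world changes gradually over time, so consecutive
    # states should lie close together.
    return (encoder(next_obs) - encoder(obs)).pow(2).sum(dim=1).mean()

def proportionality_loss(obs_a, next_obs_a, obs_b, next_obs_b):
    # Prior: two experiences that share the same action should
    # produce state changes of equal magnitude.
    mag_a = (encoder(next_obs_a) - encoder(obs_a)).norm(dim=1)
    mag_b = (encoder(next_obs_b) - encoder(obs_b)).norm(dim=1)
    return ((mag_a - mag_b) ** 2).mean()

# Training minimizes a weighted sum of such prior losses over
# transition pairs sampled from the robot's own experience.
optimizer = torch.optim.Adam(encoder.parameters(), lr=1e-3)
```

The reward-based priors, such as causality, can be added as further loss terms in the same way.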

Action and Forward Model Learning

Contact Persons

Sebastian Höfer

Project description

In order to learn suitable state representations, the robot requires a set of task-relevant actions, and must know how to execute them. But we can also look at the orthogonal problem: how can the robot learn suitable actions?

In our work, we study how to use knowledge about the state to learn better actions. This motivates our approach of coupled action parameter and effect learning (CAPEL): we jointly learn the parametrizations of actions and a forward model for each action. These forward models predict the effects of each action, given the state of the world, and allow the robot to select the right action for a task.

Why do we try to solve these two complex learning problems together? We argue that they are tightly coupled: a forward model is only valid if the underlying action parametrization reliably evokes the effects the model predicts. Conversely, an action is only relevant if the robot can predict its effects with high certainty. Thus, the two learning problems are intrinsically coupled and should be solved jointly.
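To illustrate the forward-model half of this coupling, here is a minimal PyTorch sketch: one predictive model per parametrized action, and action selection by comparing predicted effects against a goal state. All names, dimensions, and the network architecture are illustrative assumptions, and the sketch omits the part of CAPEL that adapts the action parametrizations from execution outcomes.

```python
import torch
import torch.nn as nn

STATE_DIM = 4  # placeholder state dimensionality

class ActionModel(nn.Module):
    """One parametrized action together with its own forward model."""
    def __init__(self, param_dim=2):
        super().__init__()
        # Learnable parametrization of the action (e.g., a push direction).
        self.params = nn.Parameter(torch.randn(param_dim))
        # Forward model: predicts the next state from state + parameters.
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM + param_dim, 64), nn.ReLU(),
            nn.Linear(64, STATE_DIM))

    def forward(self, state):
        p = self.params.expand(state.size(0), -1)
        return self.net(torch.cat([state, p], dim=1))

def select_action(models, state, goal_state):
    # Pick the action whose predicted effect brings the state
    # closest to the goal.
    errors = [(m(state) - goal_state).pow(2).sum().item() for m in models]
    return min(range(len(models)), key=errors.__getitem__)
```

Each model is fit to transitions in which its action was executed, so a model with low prediction error signals an action parametrization that reliably evokes the predicted effect.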

Learning with Side Information

Project description

These approaches for learning state and action representations all follow a common theme: they exploit information that is relevant for the task but that is neither the input nor the output of the function being learned (e.g., the actions are used to learn a mapping from observations to states, but they are not required for estimating the state). This kind of information is termed side information.
Our work shows that learning with side information subsumes a variety of related approaches, e.g., multi-task learning, multi-view learning, and learning using privileged information. This provides us with (i) a new perspective that connects these previously isolated approaches, (ii) insights into how these methods incorporate different types of prior knowledge and hence implement different patterns, and (iii) an easier path to applying these methods to novel tasks.
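As a minimal sketch of this pattern (illustrative only, and not the API of our concarne library linked below): actions serve as side information that shapes the learned state representation through an auxiliary dynamics loss, while the learned observation-to-state mapping never takes actions as input.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Placeholder dimensions for the sketch.
OBS_DIM, STATE_DIM, ACT_DIM, N_LABELS = 32, 4, 2, 3

encoder = nn.Linear(OBS_DIM, STATE_DIM)               # what we want to learn
task_head = nn.Linear(STATE_DIM, N_LABELS)            # main task: state -> label
dynamics = nn.Linear(STATE_DIM + ACT_DIM, STATE_DIM)  # consumes side information

def training_loss(obs, next_obs, action, label):
    s, s_next = encoder(obs), encoder(next_obs)
    # Main task loss: uses the state only, never the action.
    main = F.cross_entropy(task_head(s), label)
    # Side-information loss: the executed action must explain the
    # change between consecutive states.
    aux = (dynamics(torch.cat([s, action], dim=1)) - s_next).pow(2).mean()
    return main + aux
```

At prediction time only the encoder and task head are used; the actions have done their work during training by constraining what the state representation must capture.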
We have made our code for learning with side information publicly available: github.com/tu-rbo/concarne

Funding

 

Deutsche Forschungsgemeinschaft (DFG) under Germany's Excellence Strategy - EXC 2002/1 "Science of Intelligence" - project number 390523135

Robotics-Specific Machine Learning (R-ML) funded by Deutsche Forschungsgemeinschaft (DFG), award number: 329426068, April 2017 - April 2020

Alexander von Humboldt Professorship - awarded by the Alexander von Humboldt Foundation and funded by the Federal Ministry of Education and Research (BMBF), July 2009 - June 2014

Publications

2018

Jonschkowski, Rico; Rastogi, Divyam; Brock, Oliver
Differentiable Particle Filters: End-to-End Learning with Algorithmic Priors
Proceedings of Robotics: Science and Systems
2018

2016

Jonschkowski, Rico; Brock, Oliver
End-To-End Learnable Histogram Filters
Workshop on Deep Learning for Action and Interaction at NIPS
December 2016

Höfer, Sebastian; Raffin, Antonin; Jonschkowski, Rico; Brock, Oliver; Stulp, Freek
Unsupervised Learning of State Representations for Multiple Tasks
Workshop on Deep Learning for Action and Interaction at NIPS
December 2016

Jonschkowski, Rico; Höfer, Sebastian; Brock, Oliver
Patterns for Learning with Side Information
February 2016

2015

Jonschkowski, Rico; Brock, Oliver
Learning State Representations with Robotic Priors
Autonomous Robots, 39(3):407-428
2015
Publisher: Springer US
ISSN: 0929-5593
