Robotics and Biology Laboratory

Robotics-Specific Machine Learning

Project Description

This project will develop robotics-specific machine learning methods. The need for such methods follows directly from the no-free-lunch theorems (Wolpert, 1996), which prove that no machine learning method performs better than random guessing when its performance is averaged over all possible problems. The only way to improve over random guessing is to restrict the problem space and to incorporate prior knowledge about this problem space into the learning method.

Of course, there are machine learning methods that apply to a wide range of real-world problems by incorporating fairly general priors such as parsimony, smoothness, hierarchical structure, or distributed representations. However, even for relatively simple problems, such methods already require huge amounts of data and computation. The overall problem of robotics, learning behavior that maps a stream of high-dimensional sensory input to a stream of high-dimensional motor output from sparse feedback, is too complex to be solved by generic machine learning methods with realistic amounts of data and computation.

To tailor machine learning to the problem space of robotics, we have to do two things: a) discover robotics-specific prior knowledge and b) incorporate these priors into machine learning methods. Since robots interact with the physical world, physics is the most direct source of prior knowledge. To incorporate such priors, we will relate them to state representations, which are an intermediate result of the mapping from the robot’s sensory input to its motor output. The intuition is the following: since intermediate state representations must reflect properties of the world, the same physical laws that apply to the real world must also apply to these internal state representations. Knowledge about these laws therefore allows us to find state representations that facilitate learning of robot behavior.
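
As a rough illustration of this idea, and not the project's actual implementation, the sketch below writes two physics-based priors in the spirit of the robotic-priors publications listed under 2014 and 2015 as loss terms on a learned state representation: temporal coherence (physical states change gradually over time) and proportionality (the same action causes state changes of similar magnitude). The encoder architecture, dimensions, function names, and the use of PyTorch are assumptions made for the example.

```python
# Hedged sketch: physics-based priors expressed as loss terms on a learned
# state representation. Architecture, dimensions, and names are illustrative
# assumptions, not the project's actual code.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Maps high-dimensional observations o_t to low-dimensional states s_t."""
    def __init__(self, obs_dim: int, state_dim: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, 128), nn.ReLU(),
            nn.Linear(128, state_dim),
        )

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        return self.net(obs)

def temporal_coherence_loss(s_t, s_next):
    # Physical states change gradually over time, so consecutive learned
    # states should stay close to each other.
    return ((s_next - s_t) ** 2).sum(dim=1).mean()

def proportionality_loss(s_t, s_next, s_u, s_u_next, same_action):
    # The same action should cause state changes of similar magnitude,
    # no matter where it is applied; `same_action` is a 0/1 mask marking
    # pairs of samples that share the same action.
    d1 = (s_next - s_t).norm(dim=1)
    d2 = (s_u_next - s_u).norm(dim=1)
    return (same_action * (d1 - d2) ** 2).mean()

# Illustrative training step: encode pairs of consecutive observations and
# minimize the sum of the prior-based loss terms.
encoder = Encoder(obs_dim=768, state_dim=5)
optimizer = torch.optim.Adam(encoder.parameters(), lr=1e-3)

def training_step(o_t, o_next, o_u, o_u_next, same_action):
    s_t, s_next = encoder(o_t), encoder(o_next)
    s_u, s_u_next = encoder(o_u), encoder(o_u_next)
    loss = (temporal_coherence_loss(s_t, s_next)
            + proportionality_loss(s_t, s_next, s_u, s_u_next, same_action))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

Minimizing such losses over recorded interaction data shapes the learned states so that they obey the encoded physical regularities; this is the sense in which prior knowledge about physics is built into the learning method.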

Funding

This project is funded by the Deutsche Forschungsgemeinschaft (DFG), award number 329426068.

Publications

2019

Morik, Marco; Rastogi, Divyam; Jonschkowski, Rico; Brock, Oliver
State Representation Learning with Robotic Priors for Partially Observable Environments
IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), accepted
2019

2018

Eppner, Clemens; Höfer, Sebastian; Jonschkowski, Rico; Martín-Martín, Roberto; Sieverling, Arne; Wall, Vincent; Brock, Oliver
Four aspects of building robotic systems: lessons from the Amazon Picking Challenge 2015
Autonomous Robots, 42(7): 1459–1475
October 2018
Publisher: Springer US

Jonschkowski, Rico; Rastogi, Divyam; Brock, Oliver
Differentiable Particle Filters: End-to-End Learning with Algorithmic Priors
Proceedings of Robotics: Science and Systems
2018

2017

Jonschkowski, Rico; Hafner, Roland; Scholz, Jonathan; Riedmiller, Martin
PVEs: Position-Velocity Encoders for Unsupervised Learning of Structured State Representations
New Frontiers for Deep Learning in Robotics Workshop at RSS
2017

2016

Jonschkowski, Rico; Brock, Oliver
End-To-End Learnable Histogram Filters
Workshop on Deep Learning for Action and Interaction at NIPS
December 2016

Höfer, Sebastian; Raffin, Antonin; Jonschkowski, Rico; Brock, Oliver
Unsupervised Learning of State Representations for Multiple Tasks
Workshop on Deep Learning for Action and Interaction at NIPS
December 2016

Jonschkowski, Rico; Eppner, Clemens; Höfer, Sebastian; Martín-Martín, Roberto; Brock, Oliver
Probabilistic Multi-Class Segmentation for the Amazon Picking Challenge
IEEE/RSJ International Conference on Intelligent Robots and Systems, pages 1–7
October 2016

Jonschkowski, Rico; Brock, Oliver
Towards Combining Robotic Algorithms and Machine Learning: End-To-End Learnable Histogram Filters
Workshop on Machine Learning Methods for High-Level Cognitive Capabilities in Robotics at IROS
October 2016

Eppner, Clemens; Höfer, Sebastian; Jonschkowski, Rico; Martín-Martín, Roberto; Sieverling, Arne; Wall, Vincent; Brock, Oliver
Lessons from the Amazon Picking Challenge: Four Aspects of Building Robotic Systems
Proceedings of Robotics: Science and Systems
Ann Arbor, Michigan
June 2016

Jonschkowski, Rico; Höfer, Sebastian; Brock, Oliver
Patterns for Learning with Side Information
February 2016

2015

Jonschkowski, Rico; Brock, Oliver
Learning State Representations with Robotic Priors
Autonomous Robots, 39(3): 407–428
2015
Publisher: Springer US
ISSN: 0929-5593

2014

Jonschkowski, Rico; Brock, Oliver
State Representation Learning in Robotics: Using Prior Knowledge about Physical Interaction
Proceedings of Robotics: Science and Systems
July 2014

2013

Jonschkowski, Rico; Brock, Oliver
Learning Task-Specific State Representations by Maximizing Slowness and Predictability
Proceedings of the 6th International Workshop on Evolutionary and Reinforcement Learning for Autonomous Robot Systems (ERLARS)
September 2013