How do humans select the right strategy to solve a task? We aim to unlock the secrets of the mind's toolbox. In this project, we explore the mechanisms behind strategy selection in solving cognitive and behavioral tasks. Focusing on how the trade-off between accuracy and costs is inferred, we aim to provide a deeper understanding of the ecologically rational strategy selection process and how it can be improved.
The no-free-lunch theorems (Wolpert, 1996) prove that no machine learning method performs better than random guessing when averaged over all possible problems. The only way to improve on random guessing is to restrict the problem space and incorporate prior knowledge about this problem space into the learning method. This is especially important in robotics, where data is high-dimensional and scarce.
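The averaging argument behind these theorems can be made concrete with a toy enumeration. The sketch below is an illustration of the idea, not the formal proof: over every possible binary labeling of a three-point domain, any learner trained on two points predicts the held-out third point correctly exactly half the time, no matter how it predicts.

```python
from itertools import product

def majority_learner(train_labels):
    # Predict the most common training label (ties broken toward 1).
    return 1 if sum(train_labels) >= len(train_labels) / 2 else 0

def minority_learner(train_labels):
    # Deliberately predict the opposite of the majority learner.
    return 1 - majority_learner(train_labels)

def average_accuracy(learner):
    # Average held-out accuracy over ALL 8 possible target functions
    # on a 3-point domain: train on points 0 and 1, test on point 2.
    labelings = list(product([0, 1], repeat=3))
    correct = 0
    for labels in labelings:
        prediction = learner(labels[:2])
        correct += int(prediction == labels[2])
    return correct / len(labelings)

print(average_accuracy(majority_learner))  # 0.5
print(average_accuracy(minority_learner))  # 0.5
```

Because a learner's prediction depends only on the training labels, exactly half of the labelings consistent with any training set agree with it, so every learner averages to chance level.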
Inspired by human grasping and manipulation capabilities, we build anthropomorphic soft robotic hands with a high degree of dexterity to enable robust interactions with the environment. We develop new sensor technologies that work with the highly compliant hands, while still providing useful sensor feedback. At the same time, we further increase the robustness of soft hands by devising control methods that reduce perceptual, model, and motion uncertainty through haptic feedback.
The behavior of a robotic agent is determined by its control program, shape, and material composition, as well as external factors from its environment. Both control and morphology affect the behavior and thus must be chosen carefully to ensure robust and general behavior in various operating environments. Thus, given a set of tasks, control and morphology must be considered as one combined aspect in designing soft robots. Moreover, we can divide responsibilities between morphology and control simultaneously and synergistically to ensure robust behavior in the physical world. This joint programming of morphology and control is called co-design.
Robots need to be able to understand and manipulate kinematic structures such as windows, doors, or drawers. We can draw inspiration from animals such as Goffin's cockatoos to teach robots these skills. Although these cockatoos certainly did not evolve to solve kinematic puzzles, they show remarkable success in such tasks. We want to find out how this is possible and how we can equip robots with similarly robust manipulation skills.
We will develop a novel approach to Learning from Demonstration for teaching a robot complex, contact-rich manipulation tasks. This approach will enable the robots to physically operate complex locks and other mechanisms. The scientific challenge involves reliably operating such multi-degree-of-freedom mechanisms that require transitions between different multi-contact situations. Rather than programming these manipulation actions directly, we will have a human demonstrate motions to the robot.
Robotic vision benefits from insights about human visual perception. But how about the other way around? Could robot visual perception help understand human visual perception better? Using a hierarchical functional architecture for synthetic perceptual systems, we study human performance and derive principles of robust information processing in perceptual systems. With this, we simultaneously advance our understanding of human vision and incorporate the underlying principles in robot vision.
We propose a computational principle for mapping sensory inputs to suitable actions, consisting of three building blocks: recursive estimators, interconnections, and differentiable programming. This integrated system can extract task-relevant information from the sensory input and generate suitable actions to achieve complex goals. We seek to study it as a model for different intelligent behaviors, thereby proposing it as a more general principle of intelligence.
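As a rough illustration of how these building blocks can fit together, here is a minimal sketch: a scalar exponential filter serves as the recursive estimator, and its gain is tuned end-to-end in the spirit of differentiable programming, with finite differences standing in for automatic differentiation. The names `filter_estimate` and `tune_gain` are our own illustrative choices, not taken from the project.

```python
def filter_estimate(gain, observations):
    # Recursive estimator: each step blends the previous estimate
    # with the new observation, weighted by the gain.
    estimate = 0.0
    for z in observations:
        estimate = estimate + gain * (z - estimate)
    return estimate

def loss(gain, observations, target):
    # Task loss on the final estimate.
    return (filter_estimate(gain, observations) - target) ** 2

def tune_gain(observations, target, gain=0.5, lr=0.1, steps=200, eps=1e-5):
    # Gradient descent on the gain; a finite-difference gradient
    # stands in for automatic differentiation in this sketch.
    for _ in range(steps):
        grad = (loss(gain + eps, observations, target)
                - loss(gain - eps, observations, target)) / (2 * eps)
        gain -= lr * grad
        gain = min(max(gain, 0.0), 1.0)  # keep the gain in [0, 1]
    return gain

obs = [1.0, 0.9, 1.1, 1.0, 0.95]
tuned = tune_gain(obs, target=1.0)
```

In the full architecture, many such estimators would be interconnected and trained jointly; the point of the sketch is only that the estimator's recursion is differentiable, so task-level objectives can shape its parameters directly.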
This project aims to enhance complex, robust, and general robot manipulation learning through inductive biases based on structured regularities in the perception/action space. The biases will be hierarchical and composed of regularities at different levels of abstraction. They will be validated in a contact-rich manipulation task using a highly capable hand/arm system with multi-modal sensors, resulting in a powerful and data-efficient learning approach.
We explore specialist and generalist approaches in artificial intelligence through the outfielder ball catching problem. Our analysis shows that these views lie on a spectrum, and the choice of problem representation is key. We find that, for this problem, the two views collapse to a single point on the spectrum. These findings have important implications for building smarter machines that can tackle complex decision-making problems more effectively.
This project seeks to develop new methods for protein structure determination to tackle the challenge of analyzing certain elusive protein systems. The proposed approach utilizes high-density cross-link/mass spectrometry data and custom computational algorithms to interpret the data. The project aims to advance cross-linking for structure determination by increasing the density and distribution of CLMS data and combining it with tailored conformational space search algorithms.
Manipulation systems are difficult to deploy across a wide range of industrial applications because of their complexity, fragility, lack of strength, and difficulty of use. The Soma project describes a path of disruptive innovations for the development of simple, compliant, yet strong, robust, and easy-to-program manipulation systems. The core idea is the use of soft-bodied robots and deliberate contact with the environment.
Actively seeking information, exploring the environment, and thereby acquiring a model of the environment is a crucial aspect of intelligent behavior. Such behavior is also described in terms of curiosity, or the intrinsic motivation to learn about the environment. The goal of this project was to develop methods that realize such behavior concretely in the physical world, where a robot must physically explore and interact with its environment to uncover its physical and kinematic structure.
In May 2015, our Team RBO won the Amazon Picking Challenge. This challenge addressed one of the remaining open problems in warehouse automation: autonomously perceiving and grasping a diverse range of objects from a cluttered warehouse shelf without any human intervention. Our robot won the competition by picking 10 out of 12 objects, outperforming 25 teams from Europe, the USA, and Asia.
Motion Generation is concerned with the planning and execution of motion tasks for possibly complex robotic systems. We are especially interested in motion generation for mobile manipulators operating in the real world. Unstructured environments pose a significant challenge for mobile manipulation tasks, because accurate knowledge about the surrounding environment cannot be assumed.
The parrobots project was a seed-funded project that we used to bootstrap our research and our application for the project "Intelligent Kinematic Problem Solving". In this project, we started our interdisciplinary cooperation to find out how Goffin's cockatoos learn to solve mechanical puzzles. To this end, we developed a novel experimental setup called the Modular Lockbox. It allows new kinematic puzzles to be set up in a short time frame, so that new experiments can be performed quickly.