Humans navigate and interact with the 3D world using only the 2D sensors in their eyes, exploiting regularities in 3D space. Through gaze fixation and specific movements, humans extract the relevant 3D properties of the world from 2D sensors that measure changes, somewhat like an event camera. In this project, you will find out how event cameras can help robots interact with the 3D world as effortlessly as humans do.
This thesis aims to develop a system that allows robots to acquire manipulation skills directly from human demonstration videos. The novelty of this system lies in actively commanding the robot to perform exploratory actions and gather additional sensory information, rather than relying solely on the passively observed information from demonstrations.
It is very useful if robots know the pose of objects not only when they see them lying on a table, but also while these objects are grasped. While an object is grasped, however, its pose often cannot easily be perceived visually, either because the hand itself obstructs the view or because the task requires visual attention elsewhere. We therefore propose to estimate the in-hand pose of objects using acoustic sensing, a novel sensing technique that enables contact estimation.
Humans interact with objects in the 3D world robustly without complicated 3D sensors such as lidars; they rely only on the 2D sensors in their eyes. Compared (rather naively) to widely available camera sensors, the human retina has vastly diminished capabilities in terms of resolution, refresh rate, and so on. How, then, can humans interact with the 3D world so robustly?
Air-mass control is the proper actuation scheme for soft pneumatic actuators, as it avoids compromising the intrinsic compliance of the system during control: unlike the pressure, the enclosed air mass in a soft system is independent of shape changes during interaction with the environment. In this work, we investigate different data-driven techniques to increase the accuracy of a given air-mass controller.
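As a brief illustration of why the enclosed air mass is a convenient controlled variable, the minimal sketch below (hypothetical function names and example values, assuming ideal-gas behaviour and isothermal conditions) estimates the enclosed mass from pressure and volume and wraps it in a simple proportional mass controller; it is not the specific controller investigated in this project.

```python
# Minimal sketch (hypothetical names/values): estimating the enclosed air mass of a
# soft pneumatic actuator via the ideal gas law, plus a simple proportional
# air-mass controller that commands a valve mass-flow rate.

R = 8.314        # universal gas constant [J/(mol K)]
M_AIR = 0.02897  # molar mass of air [kg/mol]

def air_mass(p_abs, volume, temp_k):
    """Enclosed air mass [kg] from absolute pressure [Pa], volume [m^3], temperature [K]."""
    return p_abs * volume * M_AIR / (R * temp_k)

def mass_controller(m_desired, m_estimated, k_p=5.0):
    """Proportional controller: commanded valve mass-flow rate [kg/s]."""
    return k_p * (m_desired - m_estimated)

# Example: squeezing the actuator reduces its volume and raises its pressure,
# but the estimated enclosed mass stays constant, so no corrective flow is commanded.
m_before = air_mass(p_abs=1.2e5, volume=1.0e-4, temp_k=293.0)
m_after = air_mass(p_abs=1.5e5, volume=0.8e-4, temp_k=293.0)
print(m_before, m_after, mass_controller(m_before, m_after))
```

In this toy example the pressure rises during the interaction while the estimated mass is unchanged, which is why controlling mass rather than pressure leaves the actuator's compliance intact.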