Efficient interaction with the environment requires knowing its affordances. In manipulation tasks, an affordance describes which parts of the environment are graspable and movable. However, estimating such affordances from RGB-D data without interaction is inherently ambiguous: different cues, e.g., appearance or geometry, yield different affordance estimates. Individual models therefore provide only uncertain measurements, which we can fuse to obtain robust estimates. To do so, we recursively estimate a belief over affordances for each of several existing affordance predictors separately and then fuse these beliefs.
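One way to realize this is a recursive Bayesian update per predictor, followed by a product-rule fusion of the resulting beliefs. The sketch below is a minimal illustration for a single binary affordance (graspable or not); the per-frame confidences and the conditional-independence assumption between the appearance and geometry predictors are assumptions for illustration, not the system's actual method.

```python
def update_belief(prior, likelihood):
    """Recursive Bayes update for a binary affordance belief.

    prior, likelihood: P(graspable) as floats in (0, 1).
    Returns the posterior P(graspable) after one observation.
    """
    num = prior * likelihood
    return num / (num + (1.0 - prior) * (1.0 - likelihood))

# Hypothetical per-frame confidences from two predictors that rely
# on different cues (appearance vs. geometry).
appearance_obs = [0.70, 0.80, 0.75]
geometry_obs = [0.55, 0.60, 0.65]

# Each predictor keeps its own recursive belief over time,
# starting from an uninformative prior.
belief_app, belief_geo = 0.5, 0.5
for a, g in zip(appearance_obs, geometry_obs):
    belief_app = update_belief(belief_app, a)
    belief_geo = update_belief(belief_geo, g)

# Fuse the two beliefs, assuming they are conditionally independent
# given the true affordance (product rule = one more Bayes update).
fused = update_belief(belief_app, belief_geo)
```

Because both predictors agree that the part is graspable, the fused belief is more confident than either individual belief; when the cues disagree, the fusion instead tempers the estimate.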
Humans navigate and interact with the 3D world using only 2D visual sensors, exploiting regularities in 3D space. Through gaze fixation and specific movements, humans extract relevant 3D properties of the world from 2D sensors that measure changes, somewhat like an event camera. In this project, find out how event cameras can help robots interact with the 3D world as effortlessly as humans do.
This thesis aims to develop a system that allows robots to acquire manipulation skills directly from human demonstration videos. The novelty of this system lies in actively commanding the robot to perform exploratory actions and gather additional sensory information, rather than relying solely on passively observed information from demonstrations.
Air-mass control for soft pneumatic actuators is the appropriate actuation scheme for preserving the intrinsic compliance of the system during control. The enclosed air mass in a soft system is independent of shape changes during interaction with the environment. In this work, we investigate different data-driven techniques to increase the accuracy of a given air-mass controller.
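One simple data-driven technique in this spirit is to learn a residual correction on top of the baseline controller from logged data. The sketch below fits a linear model of the systematic tracking error and pre-compensates the commanded air mass; the logged values and the linear error model are hypothetical assumptions for illustration, not measurements or the method used in this work.

```python
import numpy as np

# Hypothetical logged data from a baseline air-mass controller:
# commanded air mass vs. the mass actually enclosed (e.g., in grams).
commanded = np.array([0.5, 1.0, 1.5, 2.0, 2.5])
measured = np.array([0.46, 0.93, 1.42, 1.88, 2.35])

# Fit a linear residual model  error = a * commanded + b
# via least squares, so commands can be pre-compensated.
error = commanded - measured
A = np.vstack([commanded, np.ones_like(commanded)]).T
a, b = np.linalg.lstsq(A, error, rcond=None)[0]

def compensated_command(target_mass):
    """Feedforward correction: request extra air mass to offset the
    baseline controller's systematic undershoot."""
    return target_mass + a * target_mass + b
```

More expressive regressors (e.g., Gaussian processes or small neural networks) can replace the linear model when the error depends nonlinearly on state, which is the kind of trade-off this work investigates.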