Robotics and Biology Laboratory

Grasping using Visual Feedback


Even though grasping objects in unstructured environments is at the heart of robotics research, current state-of-the-art approaches do not yield satisfactory results in real-world settings. Most algorithms assume perfect knowledge of the nature and pose of both the robot and the object, a set of assumptions that is too strict to be realistic.

Based on the "mitten thought experiment", the ongoing grasping project in our lab challenges the traditional separation of the grasping problem into mechanism design, perception, manipulation, planning, and control. The idea of the experiment is straightforward: a blindfolded subject wearing a thick mitten can still successfully grasp a large variety of objects by simply closing the hand, provided that a second experimenter has positioned the object reasonably well relative to the hand. Thus, in order to implement reliable grasping in robots, one needs to provide:

  • an appropriate perceptual strategy (taking the role of the experimenter) to interactively perceive the object and position the robot's hand,
  • and a compliance-based control strategy, playing the role of the mitten hand.
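The compliance-based half of the experiment can be illustrated with a minimal sketch: each finger closes until it senses contact or reaches its joint limit, without any model of the object. The function name, the joint-angle representation, and the `contact` callback are hypothetical placeholders, not the lab's actual controller:

```python
def close_mitten_hand(finger_angles, contact, step=0.05, limit=1.57):
    """Compliantly close a hand: advance each finger until it reports
    contact or hits its joint limit (angles in radians, hypothetical)."""
    while any(a < limit and not contact(i) for i, a in enumerate(finger_angles)):
        finger_angles = [
            a if contact(i) or a >= limit else min(a + step, limit)
            for i, a in enumerate(finger_angles)
        ]
    return finger_angles
```

Because each finger stops independently on contact, the hand conforms to the object's shape, which is exactly why the mitten wearer needs no vision once the hand is well positioned.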

Expected Outcome

We hope to demonstrate that the grasping problem can indeed be solved with the approach described above, using only visual feedback for active exploration. Such a proof of concept would provide insight into, and justification for, further pursuing this new view of how to grasp objects.

Description of Work

This thesis aims at developing the perceptual strategy of the experimenter: interactively exploring and perceiving the object, deciding whether it is graspable, selecting the most promising pregrasp configuration of the robot hand, and positioning the hand with respect to the object. We want to show, in simulation and in real-world experiments, that this strategy yields reliable grasping performance for a variety of objects.
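The steps above can be sketched as a minimal pipeline. Everything here is a hypothetical placeholder to make the structure concrete: the bounding-box object model, the aperture threshold, and the candidate-scoring scheme are illustrative assumptions, not the method to be developed in the thesis:

```python
from dataclasses import dataclass
from typing import List, Optional


@dataclass
class Pregrasp:
    position: tuple     # hand position relative to the object (hypothetical frame)
    orientation: tuple  # hand orientation as roll/pitch/yaw (radians)
    score: float        # heuristic "promise" of this configuration


def perceive_object(observations: List[dict]) -> dict:
    # Interactive perception: fuse a sequence of visual observations
    # into a coarse object estimate (here: per-axis bounding-box extents).
    extents = [max(o["extent"][i] for o in observations) for i in range(3)]
    return {"extents": tuple(extents)}


def is_graspable(obj: dict, max_aperture: float = 0.10) -> bool:
    # Decide graspability: the smallest extent must fit the hand aperture
    # (0.10 m is an assumed placeholder value).
    return min(obj["extents"]) <= max_aperture


def select_pregrasp(obj: dict, candidates: List[Pregrasp]) -> Optional[Pregrasp]:
    # Select the most promising pregrasp configuration by score;
    # the chosen pregrasp would then be used to position the hand.
    if not is_graspable(obj) or not candidates:
        return None
    return max(candidates, key=lambda c: c.score)
```

A run of the pipeline would first call `perceive_object` on accumulated views, gate on `is_graspable`, and hand the winning `Pregrasp` to the positioning controller; the thesis would replace each placeholder with a learned or engineered component.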