The autonomous execution of mobile manipulation tasks in natural environments requires complex motion capabilities. The motion must satisfy various constraints imposed by the task and the environment. Additionally, natural, real-world environments are subject to unpredictable change. Therefore, sensory feedback needs to be integrated continuously to ensure that the motion remains valid. The feedback requirements imposed by mobile manipulation tasks cannot be met with conventional planning techniques: the high dimensionality of the problem and the large amount of uncertainty about the state of the world make it especially challenging. These challenges are addressed in this thesis.
The traditional approach to robotic motion planning is based on a sense, plan, execute cycle.
Based on the current sensor information, a planner determines a complete motion trajectory, which is then executed as precisely as possible by a controller. The approach presented in this thesis hinges on the insight that this distribution of responsibilities is suboptimal for the problem at hand. The boundary between planning and control is shifted, promoting control to an integral part of the motion generation process. Based on this idea, a motion generation framework is implemented that generates task-consistent motion and meets the feedback requirements of mobile manipulation tasks. In a second step, the boundary between planning and control is shifted further: through a close integration of planning, sensing, and control, the responsibility for making concrete path choices moves from planning to execution. These path choices can be made efficiently with local information, while being guided by the global planner based on a model of uncertainty.
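The contrast between the two architectures can be sketched in a toy one-dimensional setting. The example below is purely illustrative and not part of the thesis framework: all names, the straight-line planner, and the proportional feedback law are assumptions chosen for clarity. A sense-plan-execute loop senses once and follows its precomputed trajectory verbatim, so a mid-execution change in the environment goes unnoticed; an executor that re-senses at every control step absorbs the same change.

```python
# Illustrative 1-D example (hypothetical names and dynamics) contrasting
# a sense-plan-execute cycle with continuously integrated sensory feedback.

def sense(world):
    """Read the current target position (the only sensed quantity here)."""
    return world["target"]

def plan(start, goal, steps=20):
    """Plan a complete trajectory from start to goal (linear interpolation)."""
    return [start + (goal - start) * (i + 1) / steps for i in range(steps)]

def execute_open_loop(world, start):
    """Sense once, plan once, then track the precomputed trajectory exactly."""
    trajectory = plan(start, sense(world))
    pos = start
    for i, waypoint in enumerate(trajectory):
        pos = waypoint          # follow the fixed plan, ignoring new sensor data
        if i == 10:             # the environment changes mid-execution
            world["target"] += 2.0
    return pos

def execute_closed_loop(world, start, steps=20, gain=0.3):
    """Re-sense at every step: move toward the *current* target position."""
    pos = start
    for i in range(steps):
        pos += gain * (sense(world) - pos)  # feedback law, no fixed trajectory
        if i == 10:             # same mid-execution change as above
            world["target"] += 2.0
    return pos

# The target starts at 5.0 and jumps to 7.0 during execution.
open_final = execute_open_loop({"target": 5.0}, 0.0)      # ends at 5.0, misses by 2.0
closed_final = execute_closed_loop({"target": 5.0}, 0.0)  # converges near 7.0
print(open_final, closed_final)
```

The open-loop executor ends exactly where its stale plan pointed, while the feedback executor ends close to the moved target. This is the intuition behind shifting the planning/control boundary: decisions that depend on current sensor data are cheaper and more robust to make at execution time.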