Proteins are involved in almost all functions in our cells due to their ability to combine conformational motion with chemical specificity. Hence, information about the motions of a protein provides insights into its function. This thesis proposes a novel elastic network model of learned maintained contacts, lmcENM. It expands the range of motions that can be captured by such simplified models by leveraging additional information about a protein’s structure.
This thesis contributes to algorithmic approaches for the motion generation problem for mobile manipulators. This problem is unsolved in unstructured environments, where the robot does not have access to precise models but must infer the state of the world with its sensors. In this thesis we not only explore efficient algorithms for motion planning in free space, but also tackle the problem of planning actions that make contact with the environment.
Grasping is a crucial skill for any autonomous system that needs to alter the physical world. The complexity of robot grasping stems from the fact that any solution comprises various components: hand design, control, perception, and planning all affect the success of a grasp. Apart from picking solutions in well-defined industrial scenarios, general grasping in unstructured environments remains an open problem.
Intelligent robots must be able to learn; they must be able to adapt their behavior based on experience. But generalization from past experience is only possible based on assumptions or prior knowledge (priors for short) about how the world works.
In this thesis, Rico Jonschkowski analyzes different ways in which prior knowledge can be employed to enable robotic perception. The priors used can range from algorithmic priors to loss functions.
The thesis proposes a general approach to interactive perception to address the challenges of robotic manipulation of kinematic objects in unstructured environments, particularly articulated objects. The approach leverages the relationship between robot actions and changes in the sensor stream, the temporal structure of physical processes, task-specific priors, and dependencies between perceptual subtasks. The approach is instantiated in several robot perceptual systems to extract kinematic, geometric, and dynamic properties of articulated objects using only RGB-D or a combination of RGB-D and proprioceptive signals. The systems are evaluated in challenging environmental and task conditions, and are complemented with methods to monitor, control, and steer robot interaction based on perceived information. A novel method to generate and select informative actions for interactive perception is also proposed and evaluated.
This thesis focuses on the design and construction of robotic hands and grippers for grasping by autonomous robots. The author proposes a new approach that reconsiders the basic motivation and goals for grasping and advocates for a shift towards a Soft Manipulation paradigm. The study investigates the use of pneumatic soft hands, which enable safe collision with objects, maintain contact under disturbance, and provide many places of contact for a robust grasp. The thesis develops a comprehensive set of tools for rapidly prototyping pneumatic soft hands, including a versatile and easy-to-prototype actuator design named PneuFlex. The study also proposes and validates a fast and stable dynamic simulation model for the simulation of pneumatic soft hands. Experiments with two artifacts show the shape adaptability, grasp dexterity, and suitability of soft hands for implementing grasping strategies that exploit environment constraints for robust execution. Overall, the thesis contributes the groundwork for further research on Soft Manipulation, with a focus on hand hardware, control, and grasping strategies.
Reinforcement learning is a computational framework that enables machines to learn from trial-and-error interaction with the environment. In recent years, reinforcement learning has been successfully applied to a wide variety of problem domains, including robotics. However, the success of the reinforcement learning applications in robotics relies on a variety of assumptions, such as the availability of large amounts of training data, highly accurate models of the robot and the environment as well as prior knowledge about the task.
This thesis discusses the challenges of computational protein structure prediction, which stem from the high dimensionality and vast size of the protein conformational space. The authors propose leveraging three novel sources of information to advance protein structure prediction: physicochemical information, experimental data from high-density cross-linking/mass spectrometry (CLMS) experiments, and corroborating information. They demonstrate the effectiveness of these methods in extensive ab initio structure prediction experiments, achieving state-of-the-art performance in the Critical Assessment of Protein Structure Prediction experiment. Using their CLMS-based hybrid method, they reconstruct the domain structures of human serum albumin in solution and in its native environment, human blood serum, which represents a disruptive step towards a mass spectrometry-driven, ab initio structure determination method.
The key features of this system are a high degree of immersion into the computer-generated virtual environment and a large working volume. The high degree of immersion will be achieved by multimodal human-exoskeleton interaction based on haptic effects, audio, and three-dimensional visualization. The large working volume will be achieved by a lightweight wearable construction that can be carried on the back of the user.
This thesis develops robotic skills for manipulating novel articulated objects. The degrees of freedom of an articulated object describe the relationship among its rigid bodies, and are often relevant to the object's intended function. Examples of everyday articulated objects include scissors, pliers, doors, door handles, books, and drawers. Autonomous manipulation of articulated objects is therefore a prerequisite for many robotic applications in our everyday environments.
The most significant impediment to protein structure prediction is the inadequacy of conformation space search. Conformation space is too large and the energy landscape too rugged for existing search methods to consistently find near-optimal minima. Conformation space search methods thus have to focus exploration on a small fraction of the search space. To best utilize domain-specific information, search needs to be customized for each domain. The first contribution of this thesis customizes search for protein structure prediction, resulting in significantly more accurate protein structure predictions.
Balancing exploration and exploitation is key to computationally efficient motion planning. The Exploring/Exploiting Tree (EET) planner presents a solution that leverages the structure present in real-world planning problems to avoid exhaustive search of high-dimensional configuration spaces. The EET planner uses acquired workspace information for effective exploitation and shifts towards exploration in challenging regions.
This thesis proposes a new utility-guided framework for motion planning that can reliably compute collision-free motions with the efficiency required for real-world planning. The utility-guided approach begins with the observation that there is regularity in the space of possible motions available to a robot. Further, certain motions are more crucial than others for computing collision-free paths. Together, these observations reveal structure in the robot’s space of possible movements.
This study examines real-world sensor fusion methods for robots, involving RGB-D camera, proprioceptive, and force sensor data. It compares diverse fusion architectures, focusing on the interrelations between sensors, and favors a potentially more resilient feedforward setup that does not compromise accuracy.
It is very useful if robots know the pose of objects not only when they see them lying on a table, but also while these objects are grasped. While objects are grasped, however, their pose often cannot easily be perceived visually, either because the hand itself obstructs the view or because the task requires visual attention elsewhere. We therefore suggest estimating the in-hand pose of objects using acoustic sensing, a novel sensing technique that enables contact estimation.
Using DeepLab, a learning-based algorithm for image segmentation, the object of interest is first detected in each frame of the footage. Combining the segmentation output with a particle filter yields a robust algorithm for tracking the object in the image frame. With a Kalman filter performing sensor fusion on the drone's IMU and GPS data, the tracking information in the image frame is used to reconstruct a global trajectory of the object.
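To illustrate the Kalman-filter fusion step, here is a minimal sketch of a 1D constant-acceleration filter that fuses IMU acceleration (as control input) with GPS position measurements. All matrices, noise values, and names are illustrative assumptions, not the parameters actually used in this work.

```python
import numpy as np

def kalman_step(x, P, accel, z_gps, dt, q=0.1, r=2.0):
    """One predict/update cycle of a 1D Kalman filter.

    x: state [position, velocity], P: 2x2 covariance,
    accel: IMU acceleration (control input), z_gps: GPS position measurement.
    """
    F = np.array([[1.0, dt], [0.0, 1.0]])   # state transition (constant velocity)
    B = np.array([0.5 * dt**2, dt])         # control input: acceleration
    H = np.array([[1.0, 0.0]])              # GPS observes position only
    Q = q * np.eye(2)                       # process noise (assumed)
    R = np.array([[r]])                     # GPS measurement noise (assumed)

    # Predict using the IMU acceleration.
    x = F @ x + B * accel
    P = F @ P @ F.T + Q

    # Update with the GPS position measurement.
    y = z_gps - H @ x                       # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)          # Kalman gain
    x = x + (K @ y).ravel()
    P = (np.eye(2) - K @ H) @ P
    return x, P
```

Starting from an uninformed state, repeated updates let the filter recover both position and (unobserved) velocity from position-only measurements.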
The advantages of wrist movements for hand manipulation have received little attention in robotics. Most approaches use only the capabilities of the fingers of a hand. Humans constantly move their wrist to take advantage of gravity or inertial forces to support the desired manipulation. In this work, we explore the role of external resources (gravity, inertia) in the context of exploiting constraints for hand manipulation.
Using an off-the-shelf computer vision tool, a multi-camera setup tracks the user's hand and wrist postures to estimate the 3D hand pose. The human joint angles are mapped to the RBO Hand 3, both in simulation and on the robotic hand. The quality of the setup is evaluated by performing complex in-hand manipulations.
The goal of this master's thesis is to analyse how different materials and morphologies can change the grasping and manipulation behavior of soft robot hands. In this thesis you will build robot fingers from different kinds of soft materials and will experiment with different hand morphologies. During the course of this thesis we hope to understand how we can build robot hands that are not only more dexterous, but also more robust in their behavior.
How can robots reliably estimate the state of mechanical objects around them? While visual estimation offers a way to precisely estimate the state of mechanisms such as drawers or doors, visual estimation also has its shortcomings. Occlusions or bad lighting conditions make it challenging or even impossible to tackle this problem just using vision. In this thesis we explore how audio can be used as a sensor modality that augments or even replaces visual estimation in settings that are challenging to vision.
This thesis presents a method for estimating objectness in a visual scene by fusing information from motion and appearance. Two interconnected recursive estimators estimate objectness in a way tailored to kinematic structure estimation. The method shows improved objectness estimation and improved estimation of kinematic joints. Further analysis provides insight into the connection between objectness and kinematic joints, as well as into interconnected recursive estimation.
The thesis aims to remotely control a soft, pneumatically operated robot hand using a data glove to perform in-hand manipulation. The work involves creating a mapping between the data glove and the RBO-Hand 3 using supervised learning algorithms based on data recorded with the data glove. The learned mapping will be used to teleoperate the RBO-Hand 3, and the resulting teleoperation tool will be used to perform experiments on in-hand manipulation with the RBO-Hand 3. This work contributes to the goal of designing robot hands that resemble the human hand and can adapt their capabilities, particularly for in-hand manipulation.
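A minimal version of such a glove-to-hand mapping can be learned by least squares; the sketch below uses synthetic data and hypothetical dimensions (15 glove joint angles, 7 actuator pressures), so it only illustrates the supervised-learning idea, not the actual mapping used for the RBO Hand 3.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical recorded data: 200 samples of 15 glove joint angles,
# paired with 7 actuator pressures of the robot hand.
glove_angles = rng.uniform(0.0, 1.5, size=(200, 15))
true_map = rng.normal(size=(15, 7))       # unknown ground-truth relation
pressures = glove_angles @ true_map

# Fit a linear mapping by least squares (the simplest supervised learner).
W, *_ = np.linalg.lstsq(glove_angles, pressures, rcond=None)

def glove_to_hand(angles):
    """Map a glove reading to actuator pressures for teleoperation."""
    return angles @ W
```

In practice one would replace the linear model with a more expressive learner and record real demonstration data, but the train-then-teleoperate structure stays the same.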
In collaboration with colleagues from Vienna, researchers are investigating how cockatoos can solve multi-step kinematic puzzles by building models of their behavior, with the goal of improving robots' abilities to explore their environments. The researchers will search for models in the literature and compare them to real bird data to develop a plausible set of hypotheses that could explain the behavior. The thesis developed a taxonomy of models to understand the landscape of potential models in this domain. This work may provide insights into strategies for robots to explore and understand previously unseen kinematic structures in their environment.
When we build soft robotic grippers, we take inspiration from the compliance and softness of the human hand. But what makes the human hand one of the most powerful tools is its sense of touch. Since the sensors established in rigid robotics are not applicable to soft actuators, we need to look at new materials and new fabrication methods. In this paper, we present a way to introduce tactile sensors into a soft actuator without limiting its dexterity, using the RBO Hand 3 actuator as an example.
Robots need to be able to do more than grasp objects and manipulate them on a coarse spatial scale. We also want robots to learn fine-grained in-hand manipulation skills, such as rotating and wiggling a pen in-hand. In this thesis we work towards that goal: we investigate how a soft robotic hand can manipulate a grasped object more reliably when the object's position is taken into account by a closed-loop controller.
We extend the idea of robotic priors to work on non-Markovian observation spaces. For this, we train a recurrent neural network on trajectories, such that the network learns to encode past information in its hidden state.
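The core mechanism, encoding a trajectory's history in a hidden state, can be sketched with a tiny untrained recurrent cell; dimensions and weights here are arbitrary assumptions, and in the thesis the weights would of course be trained with the robotic-priors losses.

```python
import numpy as np

rng = np.random.default_rng(0)
H, O = 8, 3                              # hidden and observation dimensions (assumed)
W = rng.normal(scale=0.5, size=(H, H))   # recurrent weights
U = rng.normal(scale=0.5, size=(H, O))   # input weights

def encode(observations):
    """Fold a trajectory of observations into a single hidden state."""
    h = np.zeros(H)
    for o in observations:
        h = np.tanh(W @ h + U @ o)       # hidden state summarizes the past
    return h

# Two trajectories that end in the same observation but have different
# histories produce different hidden states, so downstream predictions
# can depend on the past even though the current observation is ambiguous.
last = np.array([1.0, 0.0, 0.0])
h1 = encode([np.array([0.0, 1.0, 0.0]), last])
h2 = encode([np.array([0.0, 0.0, 1.0]), last])
```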
Classical robotic grasping is a sequential process of approaching an object, making contact, and applying forces at desired points for a stable grasp. Humans, on the other hand, can grasp robustly even without deciding on hand placement in advance, and they use wrist movements concurrently with finger closure. This thesis explores wrist-hand coordination for robust robotic grasping.
How can a robot explore complex kinematic chains? How can it learn about the kinematic dependencies in such a chain? In this work, we developed and tested rule-based heuristics as well as more sophisticated planning methods to explore and manipulate complex kinematic mechanisms in simulation. We evaluated their performance in experiments with different, automatically generated lockboxes.
Both experimental and computational methods have limitations in determining protein structures. Comparative modeling is successful when utilizing sequence similarity, but when templates are not available, ab initio modeling can be used. However, this method suffers from the vastness of the search space. To address this, an algorithm combining ab initio and comparative modeling was developed to retrieve templates independent of sequence similarity. This method generates ab initio decoys for all targets and a set of templates and compares them using various metrics. Testing the method on 14 targets showed that the presented method succeeds mainly due to similarities of the decoys to the native structures, with evidence for similar energy landscapes also supporting the approach.
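One of the simplest metrics for comparing decoys to templates or native structures is the root-mean-square deviation (RMSD) of corresponding atom coordinates. A minimal sketch, assuming the two structures are already superimposed:

```python
import numpy as np

def rmsd(a, b):
    """Root-mean-square deviation between two aligned N x 3 coordinate sets."""
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    return float(np.sqrt(np.mean(np.sum((a - b) ** 2, axis=1))))
```

In practice such comparisons first require an optimal superposition (e.g. the Kabsch algorithm) and are complemented by alignment-based scores; the sketch only shows the distance computation itself.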
Most models in contact dynamics show some unrealistic behavior due to assumptions that were made for the sake of computational convenience. Unfortunately, there is a lack of experimental work to validate these assumptions and to evaluate how realistic these contact modeling approaches are, which is the purpose of this thesis.
This work proposes objectives defined by an individual's interaction with its environment, optimizing curiosity, novelty, and evolvability to drive behavioral diversity. Modeling the objectives with entropy provides a unifying framework and shows that novelty can be estimated efficiently. It further shows how evolvability can be estimated from discarded individuals, adding a new level of adaptation to evolutionary algorithms.
A novel method for efficient object search in realistic environments is presented. Object search is formalized as a probabilistic inference problem over the spatial relations of possible locations. Five physical-world priors are identified and incorporated into a probabilistic graphical model. This model combines the available information into a consistent probability distribution, enabling knowledge propagation for optimal search results. The approach is demonstrated with a simulated searching agent that uses noisy web knowledge.
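The probabilistic core of such a search can be sketched as a Bayesian update over candidate locations: a prior from (noisy) co-occurrence knowledge is combined with the likelihood of an observed spatial relation. Rooms, numbers, and the single-relation observation below are purely illustrative assumptions.

```python
# Hypothetical prior over rooms, e.g. from web co-occurrence statistics.
prior = {"kitchen": 0.5, "office": 0.3, "bathroom": 0.2}

# Likelihood of the observation "the object is near a desk",
# given that the object is in each room (illustrative values).
likelihood = {"kitchen": 0.1, "office": 0.8, "bathroom": 0.1}

def posterior(prior, likelihood):
    """Combine prior and likelihood into a normalized posterior (Bayes' rule)."""
    unnorm = {loc: prior[loc] * likelihood[loc] for loc in prior}
    z = sum(unnorm.values())
    return {loc: p / z for loc, p in unnorm.items()}

post = posterior(prior, likelihood)
```

A graphical model generalizes this single update by propagating such evidence through many interdependent locations and relations at once.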
Interactive Perception exploits a robot's capability to interact with the environment in order to reveal hidden properties, such as the kinematic structure of articulated objects. However, when a robot faces a new environment, it needs to decide how to interact so as to maximize the information gain from its sensor data, and to use compliant controllers that allow the articulation to guide the motion.
The flexibility and compliance of soft actuators offer several advantages over traditional, rigid mechanisms. They are inherently safe and light, robust to impact and collision, and can be designed and built quickly at low cost. There are situations, however, in which softness and compliance become a disadvantage: softness can limit the amount of force an actuator can exert, for example when lifting a heavy object or pressing a switch. To alleviate this limitation, we propose soft actuators capable of changing their stiffness by employing jamming.
The thesis presents an incremental method for motion generation in environments with unpredictable and initially unknown obstacles. The method locally augments and adapts global motion plans in response to changes in the environment, addressing three sub-problems of motion generation with three algorithmic components. The first component reactively adapts plans in response to small, continuous changes, while the second augments the plan locally in response to connectivity changes. The third extracts a global, goal-directed motion from the representation maintained by the first two components. The proposed method is evaluated in a real-world mobile manipulation experiment, where a robot executes a whole-body motion task in an initially unknown environment, while incrementally maintaining a plan using only on-board sensors.
At the core of robotics research lies the challenge of grasping objects in unstructured environments, where current state-of-the-art approaches still fall short. Our lab's project departs from tradition with a solution inspired by the "mitten thought experiment". Our strategy combines interactive exploration and perception with a compliance-based control approach, resulting in reliable and successful grasping of a wide range of objects.
A new approach to protein structure prediction is presented using Building Blocks, which are structurally contiguous motifs. Two algorithms, the foldtree and constraint approach, are introduced to improve prediction using these Building Blocks. Results showed significant improvement over traditional methods, with the constraint approach performing better but having longer run times. This study lays the foundation for future research to solve the protein structure prediction problem.
Autonomous execution of mobile manipulation tasks in real-world environments is challenging due to the need for complex motion capabilities, task and environment constraints, and unpredictable changes. Conventional planning techniques are not enough to meet the sensory feedback requirements of such tasks. This thesis aims to address these challenges by exploring new techniques for mobile manipulation.
In this master's project, we took on the challenge of identifying relationships between a target sequence and other sets of sequence fragments. By combining simple alignment methods with more advanced techniques such as sequence profiles and secondary structure prediction, we built a framework for extracting related sets of sequence fragments for a given target sequence.
Grasping unknown objects is a crucial aspect in robotics. The human hand's compliance and observation of humans grasping objects suggest that finding the complete hand configuration can be reduced to finding a pre-shape of the hand. The proposed approach uses a 3D depth sensor to extract shape primitives and estimate the object's pose. A simple heuristic finds the best pre-grasp for the object. Experiments have shown that this approach can help a robot grasp a range of objects.
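A heuristic of the kind described, mapping a detected shape primitive to a hand pre-shape, might look like the following sketch. The rules, primitive names, and thresholds are hypothetical; the thesis's actual heuristic may differ.

```python
def pregrasp_for_primitive(primitive, width_m):
    """Pick a hand pre-shape from a detected shape primitive.

    primitive: one of "sphere", "cylinder", "box" (hypothetical categories)
    width_m:   characteristic width of the primitive in meters
    """
    if primitive == "sphere":
        return "spherical"                 # wrap all fingers around the object
    if primitive == "cylinder" and width_m < 0.05:
        return "pinch"                     # thin cylinders: fingertip grasp
    if primitive == "cylinder":
        return "cylindrical"               # wrap grasp along the axis
    if primitive == "box" and width_m < 0.03:
        return "pinch"                     # flat/thin boxes: fingertip grasp
    return "power"                         # default: enveloping power grasp
```

Given the object's estimated pose, the chosen pre-shape is then aligned with the primitive's axes before closing the hand.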
Our study introduces a novel library of "building blocks" for improved protein modeling. By leveraging the dependencies between fragments, our proposed scoring scheme outperforms traditional methods, resulting in 61 targets being covered with 100% near-native building block matches in a recent evaluation. Our innovative approach combines different features and machine learning algorithms for optimal results.
This work aims to investigate coupling methods for a stable, transparent, and plausible interaction with virtual objects having physical dynamics using a wearable haptic interface. The research examines different interaction modes, including pushing, carrying, and throwing, and investigates the possibility of making the virtual object's physical properties, such as their mass and inertia, haptically sensible. The study explores the haptic rendering of rigid body, cloth, and soft body physics using the Nvidia PhysX SDK as the simulation engine. The developed interface between physical simulation and haptic rendering is to be integrated into the current software framework for the wearable haptic device, and the results will be evaluated qualitatively.
This thesis presents a new approach to the protein loop closure problem in protein structure prediction. The method, inspired by robotics, uses a kinematic chain representation and a motion planning technique to improve conformational sampling. The results of this proof-of-concept study may contribute to more efficient and accurate predictions in the field, and provide a foundation for future advancements in the area.
Humans interact with objects in the 3D world robustly without complicated 3D sensors such as lidars; instead, they rely on the 2D sensors in their eyes. Compared (rather naively) to widely available camera sensors, the human retina has vastly inferior capabilities in terms of resolution, refresh rate, and so on. How, then, can humans interact with the 3D world so robustly?
Air mass control for soft pneumatic actuators is the proper actuation scheme to avoid compromising the intrinsic compliance of the system during control. The enclosed air-mass in a soft system is independent of shape changes during interaction with the environment. In this work, we investigate different airflow sensors to increase the accuracy of our current data-driven approach to air mass control.
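The reason the enclosed air mass is shape-independent follows from the ideal gas law: for a closed chamber at constant temperature, the mass depends only on the product of pressure and volume, m = pVM/(RT). A minimal sketch (constants are standard; the temperature default is an assumption):

```python
R = 8.314          # J/(mol*K), universal gas constant
M_AIR = 0.028965   # kg/mol, molar mass of dry air

def enclosed_air_mass(pressure_pa, volume_m3, temp_k=293.15):
    """Air mass in a closed actuator chamber via the ideal gas law."""
    mols = pressure_pa * volume_m3 / (R * temp_k)   # n = pV / (RT)
    return mols * M_AIR                             # m = n * M
```

If the environment deforms the actuator so that its volume halves while the pressure doubles, the computed mass is unchanged, which is exactly why air mass is a more robust control variable than pressure alone.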
Protein motion is crucial for protein function, and predicting it is important for drug design. This thesis aims to improve protein motion prediction by applying deep neural networks to the prediction of contacts that break during structural transitions. In Elastic Network Models, certain residue pairs that are in contact in one conformation become separated in another, and knowing which contacts break is important for predicting protein motion. By incorporating this breaking-contact information, prediction accuracy can be improved; state-of-the-art machine learning techniques will be utilized for this purpose.
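The notion of a breaking contact can be made concrete with a distance-cutoff contact map, as commonly used in elastic network models: a contact breaks if a residue pair is within the cutoff in the start conformation but not in the end conformation. The cutoff value and toy coordinates below are illustrative assumptions.

```python
import numpy as np

def contacts(coords, cutoff=8.0):
    """Residue pairs within a distance cutoff (a standard ENM contact map)."""
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    i, j = np.where((d < cutoff) & (d > 0))
    return {(int(a), int(b)) for a, b in zip(i, j) if a < b}

def breaking_contacts(start, end, cutoff=8.0):
    """Contacts present in the start conformation but lost in the end one."""
    return contacts(start, cutoff) - contacts(end, cutoff)
```

A learned model would then predict this set of breaking pairs from the start structure alone, without access to the end conformation.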
The behavior of a soft robot is determined not only by its controller, but also by the interaction of its physical form with the environment. This morphological computation is a core feature of compliant robots. Much research has been done in this field, but using morphological computation as a design metric is still a very new concept. The goal of this thesis is to apply a recently published metric in a real-world design loop.
Knowing which ligands can reach a protein's active site is crucial in drug design. Protein-ligand docking is computationally expensive due to high dimensionality of the search space. This work uses partitioning of degrees of freedom and decoupled motion calculations to solve protein-ligand disassembly problems. The Exploring/Exploiting Tree (EET) algorithm is extended to handle internal mobility of ligands and relevant side chains, enabling solutions to interactions where side chains hinder the ligand from exiting the protein. This method makes problems with complex ligands manageable and improves the efficiency of handling internal mobility of ligands.
This thesis aims to enable shape sensing of the PneuFlex actuator used in soft robot hands, which has complex deformations due to its soft material. By identifying complex shapes as predictable combinations of simple deformations through machine learning and strain sensor readings, a reduced sensor layout is created to accurately predict deformations and provide insight into the actuator's shape. This research will enable a better understanding of the soft robot hand's behavior during grasping tasks.
The development of soft hands has led to the need for sensing technology for soft continuum actuators. However, commercially available sensors that can withstand high levels of deformation are not yet available. This thesis evaluates three potential sensor technologies for their suitability in soft hands, focusing on their robustness, ease of use, long term stability, and responsiveness in relation to the challenges of continuously deforming actuators.
This work focuses on improving the speed and accuracy of robotic grasping through Image-Based Visual Servoing (IBVS). Three technical aspects are addressed: accuracy is increased by tracking kinematic chains, and speed by GPU processing and by strategically using encoder readings to focus on the important parts of the input image. The most important parameter influencing both the accuracy and the speed of pose estimation is the number of particles in the particle filter used for visual data processing.
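The accuracy/speed tradeoff governed by the particle count can be illustrated with a minimal 1D particle filter; the motion and sensor models below are simple Gaussian assumptions, not the models used in this work.

```python
import numpy as np

def particle_filter_step(particles, weights, motion, measurement, rng,
                         noise=0.1, meas_std=0.5):
    """One predict/weight/resample cycle of a 1D particle filter."""
    # Predict: propagate each particle through the motion model plus noise.
    particles = particles + motion + rng.normal(0.0, noise, size=particles.shape)
    # Weight: likelihood of the measurement under a Gaussian sensor model.
    weights = np.exp(-0.5 * ((measurement - particles) / meas_std) ** 2)
    weights = weights / weights.sum()
    # Resample: draw particles in proportion to their weights.
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[idx], np.full(len(particles), 1.0 / len(particles))
```

More particles cover the state space more densely (better accuracy) but cost proportionally more computation per frame, which is exactly the parameter this work tunes.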
This guide aims to encourage practical work with predictive state representations (PSRs) in robotics. It provides theoretical background and practical instructions, and identifies areas for improvement. The re-implemented algorithm learns a PSR of a simulated mobile robot, and experiments validate the accuracy of the learned PSRs. Fine-tuning was found to be crucial but challenging; the results provide guidance for future work and highlight problems to be addressed before PSRs can be applied to complex real-world domains.
To extend the range of applications of robots in unstructured environments, it is necessary to develop new techniques for environment perception and interpretation. These methods must give machines the capability to extract, with the aid of their sensors, the information they need to fulfill their tasks.
Our solution enables the extraction of kinematic structures from visual sensory input in dynamic environments. Our algorithm classifies relationships within a moving 3D point cloud to determine joint types, offering a reliable and automated process. We validate the stability and effectiveness of this method through real-world experiments, making a contribution to the field of robot technology.
Vision is an important sensory modality for the successful execution of manipulation tasks. This is true for both humans and robots. In this work, we investigate how visual servoing can be used to continuously learn about the properties of objects and their position in the scene. When such information is available, objects in the environment can be approached and manipulated with little prior knowledge.