The RBO Hand3 (RH3) Compilation is your guide to building the state-of-the-art RH3 developed at TU Berlin's Robotics and Biology Lab. From 3D printed pieces to silicone molded fingers and laser-cut components, we'll break down the process into manageable pieces to simplify the build. Watch our accompanying video tutorial on casting molds and assembling connector plates, and be on your way to creating your own RH3.
With just a microphone and speaker, experience the power of acoustic sensing. From measuring contact on soft actuators to recognizing the type of shoe you're wearing, this innovative technology allows you to detect physical changes in objects by listening to the sounds they make. Join us in exploring the endless possibilities of acoustic sensing as we develop this cutting-edge technology. Get started now with our simple scripts and tutorials, and unleash your creativity!
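The emit-and-listen idea can be sketched in a few lines. The snippet below is an illustrative simulation only (function names, filter values, and the damping model are our assumptions, not the lab's actual code): play a known sine sweep through the speaker, record the object's response with the microphone, and compare coarse spectral signatures to detect a physical change such as contact.

```python
import numpy as np

def chirp(f0=100.0, f1=8000.0, duration=0.5, fs=44100):
    """Linear sine sweep that excites the object across a band of frequencies."""
    t = np.linspace(0.0, duration, int(fs * duration), endpoint=False)
    return np.sin(2 * np.pi * (f0 * t + (f1 - f0) * t**2 / (2 * duration)))

def spectral_signature(signal, n_bins=64):
    """Coarse magnitude spectrum used as an acoustic fingerprint."""
    mag = np.abs(np.fft.rfft(signal))
    # Average into a fixed number of bins so signatures are comparable.
    return np.array([chunk.mean() for chunk in np.array_split(mag, n_bins)])

def contact_changed(baseline_sig, current_sig, threshold=0.2):
    """Flag a change when the normalized signature distance exceeds a threshold."""
    dist = np.linalg.norm(current_sig - baseline_sig) / np.linalg.norm(baseline_sig)
    return dist > threshold

# In a real setup the sweep is played through the speaker and the response is
# recorded with the microphone. Here we simulate both: contact damps the
# later (higher-frequency) part of the sweep's response.
excitation = chirp()
free_response = excitation + 0.01 * np.random.randn(len(excitation))
damped = excitation.copy()
damped[len(damped) // 2:] *= 0.3   # crude stand-in for contact damping

baseline = spectral_signature(free_response)
print(contact_changed(baseline, spectral_signature(free_response)))  # False
print(contact_changed(baseline, spectral_signature(damped)))         # True
```

In practice the recording step would use an audio I/O library and the classifier would be learned from examples rather than thresholded by hand; the sketch only shows why a speaker and microphone suffice as the sensor.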
The PneuFlex actuator is a fiber-reinforced, pneumatic continuum actuator made almost entirely of soft materials. In this tutorial, we show you how to build them yourself.
You will get a shopping list with all required components and tools. The tutorial also provides CAD models in STL format for the needed molds.
You can operate the PneuFlex actuators at pressures of 50-400 kPa (0.5-4 bar).
To control the inflation of the soft hand's pneumatic actuators, we developed a custom controller board, which we call the "PneumaticBox". Here we provide an overview of the system, describe the hardware components, and link to our software stack.
Our Online Interactive Perception system extracts patterns of motion at different levels (point feature motion, rigid body motion, kinematic structure motion) and infers the kinematic structure and state of the interacted articulated objects. Optionally, it can reconstruct the shape of the moving parts and use it to improve tracking.
Instead of relying on human-defined perception (a mapping from observations to the current state) for a specific task, robots must be able to autonomously learn which patterns in their sensory input are important. We think that they can learn this by interacting with the world: performing actions, observing how the sensory input changes, and noting which situations are rewarding. Here we provide the code related to our work on learning state representations with robotic priors.
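One of the robotic priors is temporal coherence: physically plausible states change gradually over time, so large jumps in the learned state between consecutive steps are penalized. The sketch below is a simplified numpy illustration of that single prior as a loss term (it is not the released code, which combines several priors during training):

```python
import numpy as np

def temporal_coherence_loss(states):
    """Temporal coherence prior: penalize the expected squared change of the
    learned state between consecutive time steps. `states` is (T, state_dim)."""
    deltas = np.diff(states, axis=0)    # s_{t+1} - s_t for every step
    return float(np.mean(np.sum(deltas**2, axis=1)))

# A smooth trajectory incurs a much lower loss than an erratic one,
# so minimizing this term favors representations that evolve gradually.
t = np.linspace(0.0, 1.0, 50)
smooth = np.stack([t, t**2], axis=1)              # slowly varying 2-D state
erratic = np.random.default_rng(0).normal(size=(50, 2))  # arbitrary jumps
print(temporal_coherence_loss(smooth) < temporal_coherence_loss(erratic))  # True
```

During representation learning, terms like this are evaluated on the states produced by the encoder and minimized jointly with the other priors, steering the learned mapping without any hand-designed perception.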
Our RBO team won the 2015 Amazon Picking Challenge using a novel method of probabilistic multi-class segmentation for object perception in warehouse automation. The code and data for our winning solution are now available on GitHub, along with a detailed accompanying paper. This resource provides valuable insights and a starting point for further research and development in this field.
concarne is a lightweight Python framework for learning with side information (also known as privileged information). Side information is data that comes from neither the input space nor the output space of the function to be learned, but that contains information useful for learning it. The package builds on Theano and Lasagne, so you can use your own neural network architectures and easily combine them with the side information learning task.
The RBO dataset of articulated objects and interactions is a collection of 358 RGB-D video sequences (67:18 minutes) of humans manipulating 14 articulated objects under varying conditions (light, perspective, background, interaction). All sequences are annotated with ground truth of the poses of the rigid parts and the kinematic state of the articulated object (joint states) obtained with a motion capture system. We also provide complete kinematic models of these objects (kinematic structure and three-dimensional textured shape models). In 78 sequences the contact wrenches during the manipulation are also provided.
RBO Aleph is a pipeline for protein structure prediction that utilizes an ab initio approach. It was included among the methods evaluated at CASP11. The web interface for RBO Aleph is now publicly accessible, providing a resource for researchers and practitioners to use and build upon in their studies. This pipeline offers insights into the prediction of protein structures and contributes to the ongoing advancement of the field.
Our approach, EPC-map (using Evolutionary and Physicochemical information to predict Contact maps), integrates decoy-based contact prediction with machine learning and evolutionary information from multiple-sequence alignments. We obtain evolutionary information from the alignments and physicochemical information from predicted ab initio protein structures. These structures represent low-energy states in an energy landscape and thus capture the physicochemical information encoded in the energy function.