Robotics and Biology Laboratory

Aravind Battaje

Office MAR 5-1
Room MAR 5.065
Office Hours: Only by appointment

Research Interests

How can biological creatures maneuver and interact in the real world so adeptly, while the synthetic creatures we build are so clumsy outside of structured environments? One of the key issues is our basic understanding of the boundaries (if any) between sensorimotor processes and the external world. In my work, I question some of these boundaries. For example, I have found that fixating (looking at one object while moving) is highly useful in tasks that involve searching for and fetching objects. Here, perception and action (as traditionally defined) are tightly coupled.

Understanding these boundaries is only part of the picture; we also need to express them as behaviors and to have a computational framework that processes information coherently over time. To address this, I am also interested in the principles that underlie robust information processing for perception-action streams. Currently, we study this by analyzing visual phenomena in humans while building computational models of them. By quickly iterating between analysis and synthesis of these models, we work out the characteristics of a robust information processing system for perception and action.

Short CV

  • Oct 2019 - present
    PhD Student, TU Berlin

  • Aug 2017 - May 2019
    Master of Science, Georgia Institute of Technology

  • Sep 2013 - Jul 2017
    Senior Engineer (Computer Vision), Robert Bosch

  • Sep 2009 - May 2013
    Bachelor of Engineering, Siddaganga Institute of Technology

Personal website: https://oxidification.com/

Project


Capabilities and consequences of recursive, hierarchical information processing in visual systems

Robotic vision benefits from insights about human visual perception. But how about the other way around? Could robot visual perception help understand human visual perception better? Using a hierarchical functional architecture for synthetic perceptual systems, we study human performance and derive principles of robust information processing in perceptual systems. With this, we simultaneously advance our understanding of human vision and incorporate the underlying principles in robot vision.

Supervised Theses

Reducing Uncertainty in Kinematic Model Estimation by Fusing Audition and Vision

Amelie Froessl, June 2021

How can robots reliably estimate the state of mechanical objects around them? Vision offers a way to precisely estimate the state of mechanisms such as drawers or doors, but it also has its shortcomings: occlusions or bad lighting conditions can make the problem difficult or even impossible to solve with vision alone. In this thesis we explore how audio can be used as a sensor modality that augments or even replaces visual estimation in settings that are challenging for vision.
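The core idea, audio augmenting or replacing vision, can be sketched with inverse-variance weighted fusion, a standard way to combine two independent noisy measurements. This is an illustrative toy, not the thesis's actual method; all names and numbers are made-up assumptions:

```python
def fuse(z_vision, var_vision, z_audio, var_audio):
    """Inverse-variance weighted fusion of two independent measurements
    of the same quantity (e.g. a drawer's opening distance)."""
    w_v, w_a = 1.0 / var_vision, 1.0 / var_audio
    x = (w_v * z_vision + w_a * z_audio) / (w_v + w_a)
    return x, 1.0 / (w_v + w_a)   # fused estimate and its variance

# Good lighting: vision is precise, fusion stays close to the visual reading.
x, var = fuse(z_vision=0.30, var_vision=0.01, z_audio=0.40, var_audio=0.04)
print(round(x, 2))   # 0.32, dominated by vision

# Occlusion: visual uncertainty blows up, audio takes over.
x, var = fuse(z_vision=0.30, var_vision=10.0, z_audio=0.40, var_audio=0.04)
print(round(x, 2))   # 0.4, dominated by audio
```

The fused variance is always smaller than either input variance, which is why a second modality helps even when vision works; when vision fails, its large variance simply drives its weight toward zero.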

Estimating Objectness From Motion and Appearance

Vito Mengers, July 2021

This thesis presents a method for estimating objectness in a visual scene by fusing information from motion and appearance. Two interconnected recursive estimators compute objectness in a way tailored to kinematic structure estimation. The method improves both the objectness estimates and the estimated kinematic joints. Further analysis provides insight into the connection between objectness and kinematic joints, as well as into interconnected recursive estimation.
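The idea of interconnected recursive estimators can be illustrated with a minimal sketch: two scalar Kalman-style filters, one per cue, that each also consume the other's current estimate as a soft measurement. The scalar state, noise values, and coupling scheme here are illustrative assumptions, not the thesis's actual formulation:

```python
import numpy as np

class Recursive1D:
    """Minimal 1D recursive (Kalman-style) estimator with a random-walk model."""
    def __init__(self, x0, p0, q, r):
        self.x, self.p = x0, p0   # state estimate and its variance
        self.q, self.r = q, r     # process and default measurement noise

    def update(self, z, r=None):
        r = self.r if r is None else r
        self.p += self.q                   # predict: variance grows
        k = self.p / (self.p + r)          # Kalman gain
        self.x += k * (z - self.x)         # correct with measurement z
        self.p *= (1.0 - k)
        return self.x

# Two estimators; each fuses its own cue plus the other's estimate.
motion = Recursive1D(x0=0.5, p0=1.0, q=0.01, r=0.2)
appearance = Recursive1D(x0=0.5, p0=1.0, q=0.01, r=0.2)

rng = np.random.default_rng(0)
true_objectness = 0.9
for _ in range(50):
    motion.update(true_objectness + rng.normal(0, 0.3))      # noisy motion cue
    appearance.update(true_objectness + rng.normal(0, 0.3))  # noisy appearance cue
    # cross-coupling: treat the other's estimate as a weak extra measurement
    motion.update(appearance.x, r=1.0)
    appearance.update(motion.x, r=1.0)

print(motion.x, appearance.x)   # both settle near 0.9
```

The cross-updates pull the two estimates toward agreement, so evidence from one cue propagates into the other's filter over time.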


Easy grasping with a fixated robot

Amon Benson

Humans interact with objects in the 3D world robustly without complicated 3D sensors like lidars; instead, they rely on the 2D sensors in their eyes. Compared (rather naively) to widely available camera sensors, the human retina has vastly diminished capabilities, such as lower resolution and refresh rate. How, then, can humans interact with the 3D world so robustly?


Distance estimation using fixation and event camera

Juan Antonio Gómez Daza

Humans navigate and interact with the 3D world using only 2D eye sensors, exploiting regularities in 3D space. By using gaze fixation and specific movements, humans extract relevant 3D properties of the world through 2D sensors that measure changes, somewhat like an event camera. This project investigates how event cameras can help robots interact with the 3D world as effortlessly as humans do.
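Why fixation exposes depth can be sketched with toy pinhole geometry: if the camera translates while rotating to keep one point centered, the residual image motion (parallax) of any other point encodes its depth relative to the fixated point. This sketch assumes a known fixation depth and translation, and simulates the image shift directly that an event camera would report as brightness-change events; it is an illustration, not the project's implementation:

```python
import numpy as np

f = 1.0          # focal length (normalized pinhole camera)
z_fix = 2.0      # depth of the fixated point
tx = 0.01        # small lateral camera translation between frames

def project_after_fixation(point, tx, z_fix):
    """Image x-coordinate of a 3D point after translating the camera by tx
    and rotating it to keep the fixation point (0, 0, z_fix) centered."""
    theta = np.arctan2(tx, z_fix)            # re-fixation rotation about y
    x, z = point[0] - tx, point[2]
    xr = np.cos(theta) * x + np.sin(theta) * z
    zr = -np.sin(theta) * x + np.cos(theta) * z
    return f * xr / zr

scene_point = np.array([0.3, 0.0, 4.0])       # true depth: 4.0
x0 = f * scene_point[0] / scene_point[2]      # image x before the move
x1 = project_after_fixation(scene_point, tx, z_fix)
dx = x1 - x0                                  # parallax relative to fixation

# small-angle model: dx ≈ f * tx * (1/z_fix - 1/Z); solve for Z
z_est = 1.0 / (1.0 / z_fix - dx / (f * tx))
print(z_est)   # close to the true depth of 4.0
```

Points at the fixation depth produce no image change at all, points nearer or farther drift in opposite directions, which is exactly the kind of sparse change signal an event camera measures.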

Publications

2023

Battaje, Aravind; Brock, Oliver; Rolfs, Martin
An interactive motion perception tool for kindergarteners (and vision scientists)
i-Perception, 14(2):20416695231159182
March 2023
ISSN: 2041-6695

Mengers, Vito; Battaje, Aravind; Baum, Manuel; Brock, Oliver
Combining Motion and Appearance for Robust Probabilistic Object Segmentation in Real Time
2023 IEEE International Conference on Robotics and Automation (ICRA), Pages 683-689
IEEE
2023

Baum, Manuel; Froessl, Amelie; Battaje, Aravind; Brock, Oliver
Estimating the Motion of Drawers From Sound
2023 IEEE International Conference on Robotics and Automation (ICRA)
IEEE
2023

2022

Battaje, Aravind; Brock, Oliver
One Object at a Time: Accurate and Robust Structure From Motion for Robots
Proceedings of IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)
2022

2021

Battaje, Aravind; Brock, Oliver
Interconnected Recursive Filters in Artificial and Biological Vision
Proceedings of the DGR Days, Page 32
2021