Making robots learn to perceive and act with understanding
At IIS, we enable autonomous robots to perceive and act flexibly and robustly in unstructured environments, leveraging machine learning to build perceptual, motor, and reasoning skills.
Our research addresses complete perception-action loops, from computer vision to grasping and manipulation. Much of our work uses machine learning to enable robots to synthesize and improve complex, robust sensorimotor behavior through experience. Related areas of interest include human-robot interaction, image and video analysis, and visual neuroscience.
Check out our thesis topics for Bachelor's and Master's students.