Making robots learn to perceive and act with understanding
At IIS, we enable autonomous robots to perceive and act flexibly and robustly in unstructured environments, leveraging machine learning methods to build perceptual, motor, and reasoning skills.
We seek to answer the question: How can we enable robots to acquire the knowledge and understanding they require to interact sensibly with unstructured environments?
Our research addresses complete perception-action loops, from computer vision to grasping and manipulation, using both reactive algorithms and cognitive models. Much of our work uses machine learning to enable robots to synthesize and improve complex, robust sensorimotor behavior through experience. Related areas of interest include human-robot interaction, image and video analysis, and visual neuroscience.
See our thesis topics for Bachelor's and Master's students.