Making robots learn to perceive and act with understanding
At IIS we enable autonomous robots to perceive and act flexibly and robustly in unstructured environments, leveraging machine learning methods to build perceptual, motor and reasoning skills.
We seek to answer the question: How can we enable robots to acquire the knowledge and understanding they require to interact sensibly with unstructured environments?
Our research addresses complete perception-action loops, from computer vision to grasping and manipulation, using reactive algorithms and/or cognitive models. Much of our work uses machine learning to enable robots to synthesize and improve complex and robust sensorimotor behavior with experience. Related areas of interest include human-robot interaction, image and video analysis, and visual neuroscience.
2020-11-20 | Simon Haller-Seeber and Patrick Lamprecht present the show Explainable AI: A sneak peek into the Black-Box at the Science Slam, online. |
2020-06-22 | Justus Piater gives an invited talk Conditional Neural Movement Primitives at GdR ISIS Réunion Apprentissage et Robotique, online. [Abstract]Conditional Neural Movement Primitives (CNMP) constitute a novel framework for robot programming by demonstration based on Conditional Neural Processes (CNP). Like Bayesian methods such as Gaussian Processes (GP), CNP learn how target distributions depend on data, and can be conditioned on specific data points to infer new target distributions at test time. Unlike GP, which are expensive to train and scale poorly to high dimensions, CNP are neural networks and are trained by gradient descent. CNMP leverage CNP to represent motion trajectories that can be conditioned, at test time, on task parameters such as goal locations, via points, and/or force readings. Moreover, CNMP are conditioned on sensor readings during execution, resulting in robust, reactive behavior. This talk will present an overview of how CNMP work and how they can be used in various robot applications. |
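The encode-aggregate-decode idea behind CNP described in the abstract above can be sketched as follows. This is a minimal illustrative sketch only, not the CNMP implementation: all network sizes are assumptions, and the weights are random and untrained, whereas a real CNP/CNMP learns them by gradient descent.

```python
import numpy as np

# Minimal sketch of a Conditional Neural Process (CNP) forward pass.
# Illustrative assumptions throughout: layer sizes are arbitrary and the
# weights are random/untrained; a real CNP is trained by gradient descent.

rng = np.random.default_rng(0)

D_IN, D_OUT, D_R, D_H = 1, 1, 8, 16  # input, output, representation, hidden

# Random (untrained) weights for a one-hidden-layer encoder and decoder.
W1e = rng.normal(size=(D_IN + D_OUT, D_H)); b1e = np.zeros(D_H)
W2e = rng.normal(size=(D_H, D_R));          b2e = np.zeros(D_R)
W1d = rng.normal(size=(D_R + D_IN, D_H));   b1d = np.zeros(D_H)
W2d = rng.normal(size=(D_H, 2 * D_OUT));    b2d = np.zeros(2 * D_OUT)

def relu(x):
    return np.maximum(x, 0.0)

def encode(x_ctx, y_ctx):
    """Embed each (x, y) context pair, then aggregate by the mean.
    Mean-pooling makes the representation invariant to context order/size."""
    h = relu(np.concatenate([x_ctx, y_ctx], axis=1) @ W1e + b1e)
    r_i = h @ W2e + b2e
    return r_i.mean(axis=0)          # one fixed-size representation r

def decode(r, x_tgt):
    """Predict a mean and spread at each target input, conditioned on r."""
    r_rep = np.tile(r, (x_tgt.shape[0], 1))
    h = relu(np.concatenate([r_rep, x_tgt], axis=1) @ W1d + b1d)
    out = h @ W2d + b2d
    mu, log_sigma = out[:, :D_OUT], out[:, D_OUT:]
    return mu, np.exp(log_sigma)     # predictive mean and std. dev.

# Condition on three observed trajectory points (e.g. time -> position) ...
x_ctx = np.array([[0.0], [0.5], [1.0]])
y_ctx = np.sin(x_ctx)
r = encode(x_ctx, y_ctx)

# ... then query the predictive distribution at new time steps.
x_tgt = np.linspace(0.0, 1.0, 5).reshape(-1, 1)
mu, sigma = decode(r, x_tgt)
print(mu.shape, sigma.shape)   # (5, 1) (5, 1)
```

The key point the abstract makes is visible here: conditioning on new data points at test time costs only a forward pass through `encode` and `decode`, rather than the expensive inference a GP would require.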
2020-06-03 | Justus Piater appears in the media: Wie der Roboter denken lernt (in German). |
2020-01-29 | Justus Piater gives an invited talk Digital Science at the lecture series „Primers for Predocs – Strategien für eine erfolgreiche Promotion“, Universität Heidelberg. [Abstract]Massive availability of data and computing power is promoting data-driven methods in all areas of science and technology. I will describe how the University of Innsbruck supports this via its new Digital Science Center, and will give a flavor of machine learning for data analysis. |
2020-01-20 | Joanna Chimiak-Opoka, Carina König, and Justus Piater appear in the media: Ergänzung Digital Science erfolgreich gestartet – UIBK Newsroom (in German). |
2020-01-03 | Justus Piater gives an invited talk Künstliche Intelligenz: Grundlagen, Erfolge, Herausforderungen at 47. Tagung des Innsbrucker Kreises von MoraltheologInnen und SozialethikerInnen, Innsbruck. |
2019-12-19 | Justus Piater appears in the media: TV interview by ORF 2 Tirol Heute RedHaus (in German). |
2019-12-12 | Justus Piater gives an invited lecture Too Smart to Be Trusted – Do I Even Want to Understand My Robot? at the TrustRobots lecture series Trust in Robots, TU Vienna. |
2019-12-05 | IIS guest Heiko Neumann, University of Ulm, gives an invited colloquium Biologically inspired visual-auditory processing – from brain-like computation to neuromorphic algorithms at the IFI Lunchtime Seminar. [Abstract]A fundamental task of sensory processing is to detect and integrate feature items, grouping them into perceptual units and segregating them from other objects and the background. A framework is discussed which explains how perceptual grouping at early as well as higher-level cognitive stages may be implemented in cortex. Different grouping mechanisms are implemented which are attuned to basic features and feature combinations and mainly evaluated along the forward sweep of stimulus processing. However, due to limitations of local feature detection mechanisms and inherent ambiguities, top-down feedback is required to deliver contextual information that helps disambiguate initial measurements. Feedback of contextual information is demonstrated to improve object recognition performance, stabilize learning of object categories, and integrate multi-sensory representations. The canonical principles of neural computation define a set of core operations to implement the above-mentioned mechanisms of perceptual and cognitive inference. These operations can be mapped, in a simplified form, onto neuromorphic platforms to emulate brain-like computation. It is demonstrated that an architecture composed of canonical circuit mechanisms can be mapped onto neuromorphic chip technology, facilitating low-energy non-von-Neumann computation. |
2019-11-26 | IIS guest Tamim Asfour, Karlsruhe Institute of Technology, gives an invited keynote Engineering Humanoids with Motion Intelligence at inday students. [Abstract]Humanoid robotics plays a central role in robotics research as well as in understanding intelligence. Engineering humanoid robots that are able to learn from humans and sensorimotor experience, to predict the consequences of actions and exploit the interaction with the world to extend their cognitive horizon remains a research grand challenge. Currently, we are experiencing AI systems with superhuman performance in games, image and speech processing. However, the generation of robot behaviors with human-like motion intelligence and performance has yet to be achieved. In this talk, I will present recent progress towards engineering 24/7 humanoid robots that link perception and action to generate intelligent behavior. I will show the ARMAR humanoid robots performing complex grasping and manipulation tasks in kitchen and industrial environments, learning actions from human observation and experience as well as reasoning about object-action relations. |
University of Innsbruck
Department of Computer Science
Technikerstr. 21a
6020 Innsbruck
Austria
How to find us: See the directions.
Legal Notice: See the Imprint and Privacy Notice.