Intelligent and Interactive Systems

===== Current Projects =====
  
**ELSA** - Effective Learning of Social Affordances for Human-Robot Interaction (ANR/FWF AAPG, 2022-2026): Affordances are action opportunities directly perceived by an agent to interact with its environment. The concept is gaining interest in robotics, where it offers a rich description of objects and the environment, focusing on potential interactions rather than sole physical properties. In this project, we extend this notion to social affordances. The goal is for robots to autonomously learn not only the physical effects of interactive actions with humans, but also the human reactions they produce (emotion, speech, movement). For instance, pointing and gazing in the same direction makes humans orient towards the pointed direction, while pointing and looking at the finger makes humans look at the finger. Likewise, scratching the robot's chin makes some, but not all, humans smile.
The project will investigate how learning human-general and human-specific social affordances can enrich a robot's action repertoire for human-aware task planning and efficient human-robot interaction.

<html>
<div style="clear:both"><br></div>
</html>

**SEAROCO** - Seamless Levels of Abstraction for Robot Cognition (Austrian Science Fund (FWF) - Lise Meitner Project, 2019-2023): The project seeks to develop a robotic cognitive architecture that overcomes the difficulties found when integrating different levels of abstraction (e.g. AI and robotic techniques) for task planning and execution in unstructured scenarios. The backbone of the project is a unified approach that permits searching for feasible solutions for new tasks at all levels of abstraction simultaneously, where symbolic descriptions are no longer disentangled from the physical aspects they represent.
  
<html>
<div style="clear:both"><br></div>
</html>
  
**OLIVER** - Open-Ended Learning for Interactive Robots (EUREGIO IPN, 2019-2022): We would like to be able to teach robots to perform a great variety of tasks, including collaborative tasks and tasks not specifically foreseen by their designers. Thus, the space of potentially important aspects of perception and action is by necessity extremely large, since every aspect may become important at some point in time. Conventional machine learning methods cannot be directly applied in such unconstrained circumstances, as the training demands increase with the sizes of the input and output spaces.
Thus, a central problem for the robot is to understand which aspects of a demonstrated action are crucial. Such understanding allows a robot to perform robustly even if the scenario and context change, to adapt its strategy, and to judge its success. Moreover, it allows the robot to infer the human intent and the task progress with respect to the goal, enabling it to share the task with humans and to offer or ask for help, resulting in natural human-robot cooperative behavior.
  
<html>
<div style="clear:both"><br></div>
</html>
  
===== Completed Projects (Selection) =====
{{:research:imagine-transparent.png?nolink&200 ||}}[[https://www.imagine-h2020.eu|IMAGINE - Robots Understanding Their Actions by Imagining Their Effects]] (EU H2020, 2017-2021): seeks to enable robots to understand the structure of their environment and how it is affected by their actions. “Understanding” here means the ability of the robot (a) to determine the applicability of an action, along with the parameters needed to achieve the desired effect, and (b) to discern to what extent an action succeeded, to infer possible causes of failure, and to generate recovery actions.

<html>
<div style="clear:both"><br></div>
</html>
  
{{:research:flexrop-logo.png?nolink&200 ||}}[[https://www.profactor.at/en/research/industrial-assistive-systems/roboticassistance/projects/flexrop/|FlexRoP - Flexible, assistive robot for customized production]] (FFG (Austria) ICT of the Future, 2016-2019): Production of mass-customized products is not easy to automate, since objects and object positions remain more uncertain than in mass-production scenarios. Handling this uncertainty motivates the use of advanced sensor-based control strategies, which dramatically increases the complexity of robot applications. A possible solution to this conflict is the concept of task-level or skill-based programming for modern robot systems.
Such systems can be applied without a safety fence, are easier to program, and are more readily adapted into capable robot assistants. The project will implement a skill-based programming framework, apply it to selected industrial demo scenarios, and evaluate the research results. The main focus of the project is the application of methods that acquire process information by monitoring workers, thus making the robot assistants self-learning.
<html>
<div style="clear:both"><br></div>
</html>
  
{{:research:pacman_logo2.png?nolink&110 |PaCMan}} [[http://www.pacman-project.eu/|PaCMan]] - Probabilistic and Compositional Representations for Object Manipulation (EU FP7-ICT-STREP, 2013-2016) advances methods for object perception, representation, and manipulation so that a robot can robustly manipulate objects even when those objects are unfamiliar and its perception and action are unreliable. The proposal is founded on two assumptions. The first is that the representation of the object's shape in particular, and of its other properties in general, benefits from being compositional (or, very loosely, hierarchical and part-based). The second is that manipulation planning and execution benefit from explicitly reasoning about uncertainty in object pose, shape, and so on, and about how it changes under the robot's actions; the robot should plan actions that not only achieve the task but also gather information to make task achievement more reliable.
<html>
<div style="clear:both"><br></div>
</html>
research/projects.txt · Last modified: 2024/02/19 12:24 by Antonio Rodriguez-Sanchez