===== Current Projects =====

[[https://doi.org/10.55776/P36965|{{:research:doi.svg?13|}}]] **PURSUIT** - Purposeful Signal-symbol Relations for Manipulation Planning (Austrian Science Fund (FWF) Principal Investigator Project, 2023-2026): Artificial intelligence (AI) task planning approaches permit projecting robotic applications outside industrial settings by automatically generating the required instructions for task execution. However, the abstract representation used by AI planning methods makes it complicated to encode physical constraints that are critical to successfully executing a task: Which specific movements are necessary to remove a cup from a shelf without collisions? At which precise point should a bottle be grasped for stable pouring afterwards? These physical constraints are normally evaluated outside AI planning using computationally expensive trial-and-error strategies. PURSUIT focuses on a new task and motion planning (TAMP) approach where the evaluation of physical constraints for task execution starts at the perception stage and propagates through planning and execution using a single heuristic search. The approach is based on a common signal-symbol representation that encodes physical constraints in terms of the "purpose" of object relations in the context of a task: Is the hand-bottle relation adequate for picking up the bottle for stable pouring? Our TAMP approach aims to quickly render task plans that are physically feasible, avoiding the intensive computations of trial-and-error approaches.
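As a rough illustration of this idea, the following minimal sketch (our own toy example under stated assumptions, not the PURSUIT implementation) runs a heuristic search over symbolic states in which every candidate relation must pass a cheap geometric "purpose" test, so physically infeasible branches are pruned inside the search itself. All state names, actions, and numbers are hypothetical.

<code python>
# Minimal sketch, not the PURSUIT implementation: A* over symbolic states in
# which each candidate relation is grounded by a cheap geometric "purpose"
# test, so infeasible branches are pruned during planning, not after it.
import heapq

def purpose_ok(relation, grasp_height, bottle_height=0.25):
    """Toy grounding test: a hand-bottle relation serves the purpose
    'pick for stable pouring' only if the grasp sits high enough."""
    if relation == ("hand", "bottle"):
        return grasp_height > 0.5 * bottle_height
    return True

def plan(start, goal, actions, heuristic):
    """Heuristic (A*) search; successors failing their purpose test are skipped."""
    frontier = [(heuristic(start), start, [])]
    visited = set()
    while frontier:
        _, state, path = heapq.heappop(frontier)
        if state == goal:
            return path
        if state in visited:
            continue
        visited.add(state)
        for name, relation, grasp_height, result in actions(state):
            if not purpose_ok(relation, grasp_height):  # physical check inside the search
                continue
            cost = len(path) + 1 + heuristic(result)
            heapq.heappush(frontier, (cost, result, path + [name]))
    return None

def actions(state):
    """Toy action model: two candidate grasps, then pouring."""
    if state == "bottle-on-table":
        yield ("grasp-low", ("hand", "bottle"), 0.05, "bottle-in-hand")
        yield ("grasp-high", ("hand", "bottle"), 0.20, "bottle-in-hand")
    elif state == "bottle-in-hand":
        yield ("pour", ("bottle", "cup"), 0.20, "cup-filled")

print(plan("bottle-on-table", "cup-filled", actions, heuristic=lambda s: 0))
# -> ['grasp-high', 'pour']: the low grasp was pruned by its purpose test
</code>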
  
<html>
<div style="clear:both"><br></div>
</html>
  
[[https://doi.org/10.55776/I5755|{{:research:doi.svg?13|}}]] **ELSA** - Effective Learning of Social Affordances for Human-Robot Interaction (ANR/FWF AAPG, 2022-2026): Affordances are action opportunities directly perceived by an agent to interact with its environment. The concept is gaining interest in robotics, where it offers a rich description of objects and the environment that focuses on potential interactions rather than on physical properties alone. In this project, we extend this notion to social affordances. The goal is for robots to autonomously learn not only the physical effects of interactive actions with humans, but also the human reactions they produce (emotion, speech, movement). For instance, pointing and gazing in the same direction makes humans orient towards the pointed direction, while pointing and looking at the finger makes humans look at the finger; scratching the robot's chin makes some, but not all, humans smile. The project will investigate how learning human-general and human-specific social affordances can enrich a robot's action repertoire for human-aware task planning and efficient human-robot interaction.
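To make the learning target concrete, here is a minimal sketch of one plausible formulation (our illustration, not the ELSA method): the robot keeps per-person and pooled reaction statistics for each interactive action, and falls back on the human-general statistics for unfamiliar partners. The class, names, and the min_obs threshold are all assumptions.

<code python>
# Illustrative sketch, not the ELSA method: estimate P(reaction | action)
# per person and pooled over everyone, with a human-general fallback.
from collections import Counter, defaultdict

class SocialAffordanceModel:
    def __init__(self):
        self.per_human = defaultdict(Counter)  # (human, action) -> reaction counts
        self.general = defaultdict(Counter)    # action -> reaction counts

    def observe(self, human, action, reaction):
        self.per_human[(human, action)][reaction] += 1
        self.general[action][reaction] += 1

    def expected_reaction(self, action, human=None, min_obs=3):
        """Prefer human-specific statistics once enough interactions are seen."""
        counts = self.per_human.get((human, action), Counter())
        if sum(counts.values()) < min_obs:
            counts = self.general[action]  # human-general fallback
        return counts.most_common(1)[0][0] if counts else None

model = SocialAffordanceModel()
for _ in range(3):
    model.observe("anna", "scratch-chin", "smile")
model.observe("ben", "scratch-chin", "no-reaction")
print(model.expected_reaction("scratch-chin", human="anna"))  # 'smile' (specific)
print(model.expected_reaction("scratch-chin", human="ben"))   # 'smile' (general fallback)
</code>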
  
<html>
<div style="clear:both"><br></div>
</html>
  
**[[https://innalp.at|INNALP Education Hub]]** ([[https://projekte.ffg.at/projekt/4119035|FFG 4119035, 2021-2024]]) creates innovative, inclusive, and sustainable teaching and learning projects in the heart of the Alps, systematically testing and scientifically tailoring educational innovations for lasting integration into the education system. The INNALP Education Hub so far includes 18 innovation projects, assigned to three underlying innovation fields: "DigiTech Space," "Media, Inclusion & AI Space," and "Green Space." The areas researched in the project range from digitization and robotics to inclusive artificial intelligence and environmental education.

One of the innovation projects is the Software Testing AI Robotic (STAIR) Lab. The [[https://stair-lab.uibk.ac.at|STAIR Lab]] provides learning materials, workshops, and a simulation environment for minibots. The efforts of the STAIR Learning Lab are dedicated to establishing robotics, artificial intelligence (AI), and software testing in schools.
  
<html>
<div style="clear:both"><br></div>
</html>

**DESDET** (Disinfection Detective, [[https://www.standort-tirol.at/unternehmen/foerderungen/gefoerderte-k-regio-projekte|K-Regio]]) aims to develop a procedure that proves the disinfection effect, and consequently the correct application (e.g. compliance with the exposure time specified in the EN test), of chemical and/or physical disinfection methods in real time and without increased technical effort. Based on such a procedure, it should be possible, for example, for quality-assurance employees or users to check the effect of the disinfection steps on site in just a few minutes. Furthermore, the aim is to replace the current gold standard for quality control of disinfection processes (phase 2, stage 2 tests; EN 16615, EN 16616) with the optical and AI methods to be developed.

<html>
<div style="clear:both"><br></div>
</html>
  
  
===== Completed Projects (Selection) =====

[[https://doi.org/10.55776/M2659|{{:research:doi.svg?13|}}]] **SEAROCO** - Seamless Levels of Abstraction for Robot Cognition (Austrian Science Fund (FWF) - Lise Meitner Project, 2019-2023): The project seeks to develop a robotic cognitive architecture that overcomes the difficulties encountered when integrating different levels of abstraction (e.g. AI and robotic techniques) for task planning and execution in unstructured scenarios. The backbone of the project is a unified approach that permits searching for feasible solutions for the execution of new tasks at all levels of abstraction simultaneously, where symbolic descriptions are no longer disentangled from the physical aspects they represent.

<html>
<div style="clear:both"><br></div>
</html>

**OLIVER** - Open-Ended Learning for Interactive Robots (EUREGIO IPN, 2019-2022): We would like to be able to teach robots to perform a great variety of tasks, including collaborative tasks and tasks not specifically foreseen by their designers. Thus, the space of potentially important aspects of perception and action is by necessity extremely large, since every aspect may become important at some point in time. Conventional machine learning methods cannot be directly applied in such unconstrained circumstances, as their training demands grow with the sizes of the input and output spaces. A central problem for the robot is therefore to understand which aspects of a demonstrated action are crucial. Such understanding allows a robot to perform robustly even if the scenario and context change, to adapt its strategy, and to judge its own success. Moreover, it allows the robot to infer the human's intent and the task progress with respect to the goal, enabling it to share the task with humans and to offer or ask for help, resulting in natural human-robot cooperative behavior.
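One classic way to operationalize "which aspects of a demonstration are crucial" is to compare several demonstrations and keep the features the teacher reproduces consistently. The sketch below is our own illustration of that heuristic (not OLIVER's actual learning method); the feature names and the variance threshold are hypothetical.

<code python>
# Illustrative sketch, not the OLIVER method: features with low variance
# across demonstrations are treated as crucial (reproduced on purpose),
# while high-variance features are treated as incidental.
import statistics

def crucial_aspects(demonstrations, threshold=0.05):
    """Return features whose variance across demonstrations is below threshold."""
    features = demonstrations[0].keys()
    return [f for f in features
            if statistics.pvariance([d[f] for d in demonstrations]) < threshold]

# Three pouring demonstrations: the tilt angle is consistent, the start
# position is not (all values normalized to [0, 1]).
demos = [
    {"tilt_norm": 0.82, "start_x_norm": 0.10},
    {"tilt_norm": 0.80, "start_x_norm": 0.55},
    {"tilt_norm": 0.84, "start_x_norm": 0.90},
]
print(crucial_aspects(demos))  # -> ['tilt_norm']
</code>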

<html>
<div style="clear:both" id="OLIVER"><br></div>
</html>

**CADS** (FFG, [[https://www.lo-la.info/cads-update/|Camera Avalanche Detection System]]) proposes a novel approach to automating avalanche detection via analysis of webcam streams with deep learning models. To assess the viability of this approach, we trained convolutional neural networks on a publicly released dataset of 4090 mountain photographs and achieved avalanche-detection F1 scores of 92.9% per image and 64.0% per avalanche. Notably, our models do not require a digital elevation model, enabling straightforward integration with existing webcams in new geographic regions. The project concluded with findings from an initial case study conducted in the Austrian Alps and our vision for operational applications of the trained models.
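For illustration, here is a minimal sketch of the kind of classifier described above: fine-tuning an ImageNet-pretrained CNN as a binary avalanche detector on webcam frames. This is a hypothetical reconstruction, not the CADS training code; the architecture choice, preprocessing, and hyperparameters are all assumptions.

<code python>
# Hypothetical sketch, not the CADS code: fine-tune a pretrained CNN as a
# binary avalanche/no-avalanche classifier on preprocessed webcam frames.
import torch
import torch.nn as nn
from torchvision import models, transforms

def build_model():
    model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
    model.fc = nn.Linear(model.fc.in_features, 1)  # single avalanche logit
    return model

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],  # ImageNet statistics
                         std=[0.229, 0.224, 0.225]),
])

def train_step(model, optimizer, images, labels):
    """One binary cross-entropy update; labels are 0/1 per image."""
    optimizer.zero_grad()
    logits = model(images).squeeze(1)
    loss = nn.functional.binary_cross_entropy_with_logits(logits, labels.float())
    loss.backward()
    optimizer.step()
    return loss.item()

model = build_model()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
images = torch.randn(4, 3, 224, 224)   # stand-in for preprocessed webcam frames
labels = torch.tensor([0, 1, 0, 1])
print(train_step(model, optimizer, images, labels))
</code>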

<html>
<div style="clear:both" id="CADS"><br></div>
</html>
  
{{:research:imagine-transparent.png?nolink&200 ||}}[[https://www.imagine-h2020.eu|IMAGINE - Robots Understanding Their Actions by Imagining Their Effects]] (EU H2020, 2017-2021): seeks to enable robots to understand the structure of their environment and how it is affected by their actions. "Understanding" here means the ability of the robot (a) to determine the applicability of an action, along with the parameters needed to achieve the desired effect, and (b) to discern to what extent an action succeeded, to infer possible causes of failure, and to generate recovery actions.
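The "imagining" loop can be summarized in a few lines: a forward model predicts an action's effect, the action is executed only if the predicted effect satisfies the goal, and a mismatch between the imagined and observed effect triggers recovery. The sketch below is our own toy illustration, not the IMAGINE system; the state variables, action, and recovery step are hypothetical.

<code python>
# Toy illustration of 'imagine before acting', not the IMAGINE system.
def forward_model(state, action):
    """Hypothetical learned model predicting an action's effect."""
    if action == "unscrew" and state.get("screw_visible"):
        return {**state, "screw_removed": True}
    return dict(state)  # no predicted effect

def applicable(state, action, goal):
    """An action is applicable if its imagined effect satisfies the goal."""
    predicted = forward_model(state, action)
    return all(predicted.get(k) == v for k, v in goal.items())

def verify_or_recover(observed, goal):
    """Compare the observed outcome against the goal; recover on mismatch."""
    if all(observed.get(k) == v for k, v in goal.items()):
        return "success"
    return "failure detected -> recovery action (e.g. regrasp and retry)"

goal = {"screw_removed": True}
state = {"screw_visible": True, "screw_removed": False}
print(applicable(state, "unscrew", goal))                   # True -> execute
print(verify_or_recover({"screw_removed": False}, goal))    # mismatch -> recovery
</code>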
<html>
<div style="clear:both"><br></div>
</html>
  
{{:research:3rdhand.png?nolink&110 |3rdHand}}
[[https://cordis.europa.eu/project/id/610878|3rdHand]] (EU FP7-ICT-STREP, 2013-2017) develops a semi-autonomous robot assistant that acts as a third hand of a human worker. It will be straightforward to instruct, even by an untrained worker, will allow for efficient knowledge transfer between tasks, and will enable effective collaboration between a human worker and a robot third hand. The main contributions of this project are the scientific principles of semi-autonomous human-robot collaboration and a new semi-autonomous robotic system that is able to (i) learn cooperative tasks from demonstration, (ii) learn from instruction, and (iii) transfer knowledge between tasks and environments.
<html>
<div style="clear:both"><br></div>
</html>