Intelligent and Interactive Systems


===== Current Projects =====
  
[[https://doi.org/10.55776/P36965|{{:research:doi.svg?13|}}]] **PURSUIT** - Purposeful Signal-symbol Relations for Manipulation Planning (Austrian Science Fund (FWF), Principal Investigator Project, 2023-2026): Artificial intelligence (AI) task-planning approaches permit projecting robotic applications outside industrial settings by automatically generating the instructions required for task execution. However, the abstract representations used by AI planning methods make it difficult to encode physical constraints that are critical to successfully executing a task: What specific movements are necessary to remove a cup from a shelf without collisions? At which precise point should a bottle be grasped for stable pouring afterwards? These physical constraints are normally evaluated outside AI planning using computationally expensive trial-and-error strategies. PURSUIT focuses on a new task and motion planning (TAMP) approach in which the evaluation of physical constraints for task execution starts at the perception stage and propagates through planning and execution using a single heuristic search. The approach is based on a common signal-symbol representation that encodes physical constraints in terms of the “purpose” of object relations in the context of a task: Is the hand-bottle relation adequate for picking up the bottle for stable pouring? Our TAMP approach aims to quickly render task plans that are physically feasible, avoiding the intensive computations of trial-and-error approaches.
  
<html>
<div style="clear:both"><br></div>
</html>
  
**[[https://innalp.at|INNALP Education Hub]]** ([[https://projekte.ffg.at/projekt/4119035|FFG 4119035, 2021-2024]]) creates innovative, inclusive, and sustainable teaching and learning projects in the heart of the Alps, systematically testing and scientifically refining educational innovations for lasting integration into the education system. The INNALP Education Hub includes (so far) 18 innovation projects, assigned to three underlying innovation fields: "DigiTech Space," "Media, Inclusion & AI Space," and "Green Space." The project's research areas range from digitization and robotics to inclusive artificial intelligence and environmental education.
One of the innovation projects is the Software Testing AI Robotic (STAIR) Lab. The [[https://stair-lab.uibk.ac.at|STAIR Lab]] provides learning materials, workshops, and a simulation environment for minibots. The STAIR Learning Lab is dedicated to establishing robotics, artificial intelligence (AI), and software testing in schools.
  
<html>
<div style="clear:both"><br></div>
</html>

**DESDET** (Desinfection Detective, [[https://www.standort-tirol.at/unternehmen/foerderungen/gefoerderte-k-regio-projekte|K-Regio]]) aims to develop a procedure that proves the disinfection effect, and consequently the correct application (e.g., compliance with the exposure time specified in the EN test), of chemical and/or physical disinfection methods in real time and without increased technical effort. Based on such a procedure, quality assurance employees or users should be able to check the effect of disinfection steps on site in just a few minutes. Furthermore, the aim is to replace the current gold standard for quality control of disinfection processes (phase 2, stage 2 tests; EN 16615, EN 16616) with the optical and AI methods to be developed.

<html>
<div style="clear:both"><br></div>
</html>

  
  
===== Completed Projects (Selection) =====

[[https://doi.org/10.55776/M2659|{{:research:doi.svg?13|}}]] **SEAROCO** - Seamless Levels of Abstraction for Robot Cognition (Austrian Science Fund (FWF) - Lise Meitner Project, 2019-2023): The project seeks to develop a robotic cognitive architecture that overcomes the difficulties encountered when integrating different levels of abstraction (e.g., AI and robotic techniques) for task planning and execution in unstructured scenarios. The backbone of the project is a unified approach that permits searching for feasible solutions for new task executions at all levels of abstraction simultaneously, where symbolic descriptions are no longer disentangled from the physical aspects they represent.

<html>
<div style="clear:both"><br></div>
</html>

**OLIVER** - Open-Ended Learning for Interactive Robots (EUREGIO IPN, 2019-2022): We would like to be able to teach robots to perform a great variety of tasks, including collaborative tasks and tasks not specifically foreseen by their designers. The space of potentially important aspects of perception and action is therefore by necessity extremely large, since every aspect may become important at some point in time. Conventional machine learning methods cannot be directly applied in such unconstrained circumstances, as the training demands increase with the sizes of the input and output spaces.
A central problem for the robot is thus to understand which aspects of a demonstrated action are crucial. Such understanding allows a robot to perform robustly even if the scenario and context change, to adapt its strategy, and to judge its success. Moreover, it allows the robot to infer the human's intent and the task progress with respect to the goal, enabling it to share the task with humans and to offer or ask for help, resulting in natural human-robot cooperative behavior.

<html>
<div style="clear:both" id="OLIVER"><br></div>
</html>

**CADS** (FFG, [[https://www.lo-la.info/cads-update/|Camera Avalanche Detection System]]) proposes a novel approach to automating avalanche detection via analysis of webcam streams with deep learning models. To assess the viability of this approach, we trained convolutional neural networks on a publicly released dataset of 4090 mountain photographs and achieved avalanche-detection F1 scores of 92.9% per image and 64.0% per avalanche. Notably, our models do not require a digital elevation model, enabling straightforward integration with existing webcams in new geographic regions. The paper concludes with findings from an initial case study conducted in the Austrian Alps and our vision for operational applications of the trained models.

<html>
<div style="clear:both"><br></div>
</html>

  
{{:research:imagine-transparent.png?nolink&200 ||}}[[https://www.imagine-h2020.eu|IMAGINE - Robots Understanding Their Actions by Imagining Their Effects]] (EU H2020, 2017-2021) seeks to enable robots to understand the structure of their environment and how it is affected by their actions. “Understanding” here means the ability of the robot (a) to determine the applicability of an action along with parameters to achieve the desired effect, and (b) to discern to what extent an action succeeded, and to infer possible causes of failure and generate recovery actions.
research/projects.1706179906.txt.gz · Last modified: 2024/01/25 11:51 by Justus Piater