Intelligent and Interactive Systems

research:projects [2024/02/19 12:16] Antonio Rodriguez-Sanchez [Current Projects]
research:projects [2026/03/16 21:10] (current) Justus Piater
===== Current Projects =====

**CEIS** - Comparative Ecological Innovation Styles (FWF [[https://excellentaustria.fwf.ac.at/|Emerging Field]], 2026-2031; [[https://www.fwf.ac.at/aktuelles/detail/emerging-fields-35-millionen-euro-fuer-neue-forschungsfelder-in-oesterreich|FWF]] and [[https://www.uibk.ac.at/de/newsroom/2026/millionenforderung-roboter-lernen-von-der-natur/|UIBK]] news items): Where does inventiveness come from? It is not limited to humans but can also be found across different animals. The project “Comparative Ecological Innovation Styles” investigates how different body structures, ecological niches, and cognitive abilities shape the emergence of novel behaviors. Rather than comparing only successful problem-solving outcomes, the team analyses detailed learning and developmental trajectories in some of the most innovative animals, including parrots, corvids, and great apes. This yields a process-based understanding of how innovation arises, of the role played by motor abilities and environmental conditions, and of why creative strategies differ across species. These insights will not only expand our understanding of animal intelligence but also contribute to the development of robotic systems that act more flexibly and adaptively.
  
<html>
<div style="clear:both"><br></div>
</html>
  
**Abstractron** - Conceptual Abstraction in Humans and Robots (Research Südtirol/Alto Adige, 2024-2027): This project seeks to develop a proof-of-concept theoretical framework and implementation for learning conceptual abstractions by robots via autonomous sensorimotor interaction with objects and by observing and interacting with humans. These abstractions will allow the robot to reason about and solve tasks irrespective of their concrete, sensory manifestation, to transfer skills to novel tasks, and to communicate with humans on the basis of shared conceptualizations.
Inspired by cognitive science, the key innovation and enabling technology is to build on a logical formalisation of such interactions based on image schemas (simple yet abstract notions, such as containment and support, that humans learn in early childhood for conceptual and metaphoric thinking) and on affordances (actions an object offers an actor, such as putting the cake on the plate). The specific core aims of the project are therefore fourfold: (1) to define the basic ontological structure and terminology for experiential learning and abstraction, and to extend and modify formal logical approaches to image schemas and affordances to enable robotics-specific representation and reasoning capabilities; (2) to extract higher-level conceptual descriptions from observed human-robot interaction data to support algorithms for the automatic recognition of actions and plans with automatic labelling, and algorithms for the orchestration of actions with information regarding the capabilities of the involved agents; (3) to develop a workflow and layered architecture for extracting higher-level conceptual descriptions from sensory data and robotic actions that can be linked up with automatically learned as well as humanly curated formalisations of image schemas; (4) to provide a detailed validation of the approach through a carefully designed simple robotic world involving interaction with objects and humans, in which transfer learning and the acquisition of conceptual abstractions can be systematically verified.
  
<html>
<div style="clear:both"><br></div>
</html>
  
[[https://doi.org/10.55776/P36965|{{:research:doi.svg?13|}}]] **PURSUIT** - Purposeful Signal-symbol Relations for Manipulation Planning (Austrian Science Fund (FWF), Principal Investigator Project, 2023-2027): Artificial intelligence (AI) task planning approaches permit projecting robotic applications outside industrial settings by automatically generating the required instructions for task execution. However, the abstract representation used by AI planning methods makes it complicated to encode physical constraints that are critical to successfully executing a task: What specific movements are necessary to remove a cup from a shelf without collisions? At which precise point should a bottle be grasped for stable pouring afterwards?
These physical constraints are normally evaluated outside AI planning using computationally expensive trial-and-error strategies. PURSUIT focuses on a new task and motion planning (TAMP) approach where the evaluation of physical constraints for task execution starts at the perception stage and propagates through planning and execution using a single heuristic search. The approach is based on a common signal-symbol representation that encodes physical constraints in terms of the “purpose” of object relations in the context of a task: Is the hand-bottle relation adequate for picking up the bottle for stable pouring? Our TAMP approach aims to quickly render task plans that are physically feasible, avoiding the intensive computations of trial-and-error approaches.
  
<html>
<div style="clear:both"><br></div>
</html>
  
[[https://doi.org/10.55776/I5755|{{:research:doi.svg?13|}}]] **ELSA** - Effective Learning of Social Affordances for Human-Robot Interaction (ANR/FWF AAPG, 2022-2026): Affordances are action opportunities directly perceived by an agent to interact with its environment. The concept is gaining interest in robotics, where it offers a rich description of objects and the environment, focusing on potential interactions rather than physical properties alone. In this project, we extend this notion to social affordances. The goal is for robots to autonomously learn not only the physical effects of interactive actions with humans, but also the human reactions they produce (emotion, speech, movement). For instance, pointing and gazing in the same direction makes humans orient towards the pointed direction, while pointing and looking at the finger makes humans look at the finger. Likewise, scratching the robot's chin makes some but not all humans smile. The project will investigate how learning human-general and human-specific social affordances can enrich a robot's action repertoire for human-aware task planning and efficient human-robot interaction.
  
<html>
<div style="clear:both"><br></div>
</html>
  
  
===== Completed Projects (Selection) =====

**[[https://innalp.at|INNALP Education Hub]]** ([[https://projekte.ffg.at/projekt/4119035|FFG 4119035, 2021-2025]]) - creates innovative, inclusive, and sustainable teaching and learning projects in the heart of the Alps, systematically testing and scientifically tailoring educational innovations for lasting integration into the education system. The INNALP Education Hub includes (so far) 18 innovation projects, assigned to the three underlying innovation fields: "DigiTech Space," "Media, Inclusion & AI Space," and "Green Space." The areas researched range from digitization and robotics to inclusive artificial intelligence and environmental education.
One of the innovation projects is the Software Testing AI Robotic (STAIR) Lab. The [[https://stair-lab.uibk.ac.at|STAIR Lab]] provides learning materials, workshops, and a simulation environment for minibots. The STAIR Learning Lab is dedicated to establishing robotics, artificial intelligence (AI), and software testing in schools.

<html>
<div style="clear:both"><br></div>
</html>

**DESDET** (Disinfection Detective, [[https://www.standort-tirol.at/unternehmen/foerderungen/gefoerderte-k-regio-projekte|K-Regio]]) aims to develop a procedure that proves the disinfection effect, and consequently the correct application (e.g. compliance with the exposure time specified in the EN test), of chemical and/or physical disinfection methods in real time and without increased technical effort. Based on such a procedure, it should be possible, for example, for quality assurance employees or users to check the effect of disinfection steps on site in just a few minutes. Furthermore, the aim is to replace the current gold standard for quality control of disinfection processes (phase 2, stage 2 tests; EN 16615, EN 16616) with the optical and AI methods to be developed.

<html>
<div style="clear:both"><br></div>
</html>
  
[[https://doi.org/10.55776/M2659|{{:research:doi.svg?13|}}]] **SEAROCO** - Seamless Levels of Abstraction for Robot Cognition (Austrian Science Fund (FWF) - Lise Meitner Project, 2019-2023): The project seeks to develop a robotic cognitive architecture that overcomes the difficulties encountered when integrating different levels of abstraction (e.g. AI and robotic techniques) for task planning and execution in unstructured scenarios. The backbone of the project is a unified approach that permits searching for feasible solutions for the execution of new tasks at all levels of abstraction simultaneously, where symbolic descriptions are no longer disentangled from the physical aspects they represent.
<html>
<div style="clear:both" id="OLIVER"><br></div>
</html>

**CADS** (FFG, [[https://www.lo-la.info/cads-update/|Camera Avalanche Detection System]]) proposes a novel approach to automating avalanche detection via analysis of webcam streams with deep learning models. To assess the viability of this approach, we trained convolutional neural networks on a publicly released dataset of 4090 mountain photographs and achieved avalanche detection F1 scores of 92.9% per image and 64.0% per avalanche. Notably, our models do not require a digital elevation model, enabling straightforward integration with existing webcams in new geographic regions. The project's paper concludes with findings from an initial case study conducted in the Austrian Alps and our vision for operational applications of the trained models.

<html>
<div style="clear:both" id="OLIVER"><br></div>
</html>

  
{{:research:imagine-transparent.png?nolink&200 ||}}[[https://www.imagine-h2020.eu|IMAGINE - Robots Understanding Their Actions by Imagining Their Effects]] (EU H2020, 2017-2021): seeks to enable robots to understand the structure of their environment and how it is affected by their actions. “Understanding” here means the ability of the robot (a) to determine the applicability of an action along with parameters to achieve the desired effect, and (b) to discern to what extent an action succeeded, and to infer possible causes of failure and generate recovery actions.
research/projects.1708341383.txt.gz · Last modified: 2024/02/19 12:16 by Antonio Rodriguez-Sanchez