===== Current Projects =====
+ | |||
+ | **Abstractron** - Conceptual Abstraction in Humans and Robots (Research Südtirol/Alto Adige, 2024-2027): This project seeks to develop a proof-of-concept theoretical framework and implementation for learning conceptual abstractions by robots via autonomous sensorimotor interaction with objects and by observing and interacting with humans. These abstractions will allow the robot to reason about and solve tasks irrespective of their concrete, sensory manifestation, to transfer skills to novel tasks, and to communicate with humans on the basis of shared conceptualizations. Inspired by cognitive science, the key innovation and enabling techology is to build on a logical formalisation of such interactions based on image schemas (simple yet abstract notions such as containment and support humans learn in early childhood for conceptual and metaphoric thinking) and on affordances (actions an object offers an actor such as putting the cake on the plate). The specific core aims of the project are therefore fourfold: (1) To define the basic ontological structure and terminology for experiential learning and abstraction, and to extend and modify formal logical approaches to image schemas and affordances to enable robotics-specific representation and reasoning capabilities; (2) To extract higher-level conceptual descriptions from observed human-robot interaction data to support algorithms for the automatic recognition of actions & plans with automatic labelling and algorithms for orchestration of actions with information regarding the capabilities of the involved agents; (3) To develop a workflow and layered architecture to extract higher level conceptual descriptions from sensory data and robotic actions that can be linked up with automatically learned as well as humanly curated formalisations of image schemas; (4) To provide a detailed validation of the approach through a carefully designed simple robotic world that foresees the interaction with objects and humans in which transfer learning and acquisition of conceptual abstractions can be systematically verified. | ||
+ | |||
+ | <html> | ||
+ | <div style="clear:both"><br></div> | ||
+ | </html> | ||
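
To make the notions of image schemas and affordances more concrete, here is a minimal, purely illustrative Python sketch (not project code; all class, relation, and object names are assumptions) of how such abstractions could be encoded as symbolic relations and transferred to novel objects:

<code python>
# Illustrative sketch only: a hypothetical encoding of image-schema and
# affordance facts as symbolic relations that a robot might ground from
# sensorimotor data. Names and structure are assumptions, not project code.
from dataclasses import dataclass


@dataclass(frozen=True)
class Relation:
    """A grounded relation between two entities, e.g. SUPPORT(cake, plate)."""
    schema: str   # image schema, e.g. "SUPPORT", "CONTAINMENT"
    figure: str   # object in the schema's focal role
    ground: str   # reference object


@dataclass(frozen=True)
class Affordance:
    """An action an object offers an actor, with the abstract relation it yields."""
    action: str       # e.g. "place_on"
    obj: str          # object acted upon
    target: str       # object acted towards
    result: Relation  # abstract effect, independent of sensory detail


# Toy knowledge fragment: placing the cake on the plate yields SUPPORT.
place_cake = Affordance(
    action="place_on", obj="cake", target="plate",
    result=Relation(schema="SUPPORT", figure="cake", ground="plate"),
)


def transfer(a: Affordance, new_obj: str, new_target: str) -> Affordance:
    """Transfer a learned affordance to novel objects at the abstract level."""
    return Affordance(a.action, new_obj, new_target,
                      Relation(a.result.schema, new_obj, new_target))


print(transfer(place_cake, "book", "shelf"))
</code>
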
[[https://doi.org/10.55776/P36965|{{:research:doi.svg?13|}}]] **PURSUIT** - Purposeful Signal-symbol Relations for Manipulation Planning (Austrian Science Fund (FWF), Principal Investigator Project, 2023-2026): Artificial intelligence (AI) task planning approaches permit deploying robotic applications outside industrial settings by automatically generating the required instructions for task execution. However, the abstract representation used by AI planning methods makes it complicated to encode physical constraints that are critical to successfully executing a task: What specific movements are necessary to remove a cup from a shelf without collisions? At which precise point should a bottle be grasped for stable pouring afterwards? These physical constraints are normally evaluated outside AI planning using computationally expensive trial-and-error strategies. PURSUIT focuses on a new task and motion planning (TAMP) approach where the evaluation of physical constraints for task execution starts at the perception stage and propagates through planning and execution using a single heuristic search. The approach is based on a common signal-symbol representation that encodes physical constraints in terms of the “purpose” of object relations in the context of a task: Is the hand-bottle relation adequate for picking up the bottle for stable pouring? Our TAMP approach aims to quickly render task plans that are physically feasible, avoiding the intensive computations of trial-and-error approaches.
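
As a rough, hypothetical illustration of the core idea, the sketch below shows a toy best-first search in which candidate object relations carry a "purpose" score for the task at hand, so that physically inadequate options are pruned before any expensive motion-level reasoning; the grasp names, scores, and threshold are invented for illustration and are not taken from the project:

<code python>
# Illustrative sketch only: a toy heuristic search over purpose-scored object
# relations. All names, scores, and the scoring scheme are assumptions.
import heapq

# Candidate hand-bottle grasps with a hypothetical purpose score in [0, 1]:
# how adequate the resulting relation is for a stable pour afterwards.
GRASPS = {
    "grasp_neck": 0.35,
    "grasp_body": 0.90,
    "grasp_cap":  0.10,
}


def plan_pick_and_pour(min_purpose: float = 0.5):
    """Greedy best-first search over (cost, state, plan), guided by purpose scores."""
    queue = [(0.0, "start", [])]  # cost accumulates 1 - purpose of used relations
    while queue:
        cost, state, plan = heapq.heappop(queue)
        if state == "poured":
            return plan
        if state == "start":
            for grasp, purpose in GRASPS.items():
                if purpose >= min_purpose:  # prune inadequate relations early
                    heapq.heappush(queue, (cost + 1.0 - purpose, "holding", plan + [grasp]))
        elif state == "holding":
            heapq.heappush(queue, (cost, "poured", plan + ["pour"]))
    return None


print(plan_pick_and_pour())  # e.g. ['grasp_body', 'pour']
</code>

The point of the toy example is that feasibility information enters the search as a heuristic cost from the start, rather than being checked by trial and error after a symbolic plan has already been produced.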

<html>
<div style="clear:both"><br></div>
</html>
**[[https://innalp.at|INNALP Education Hub]]** ([[https://projekte.ffg.at/projekt/4119035|FFG 4119035, 2021-2025]]) - creates innovative, inclusive, and sustainable teaching and learning projects in the heart of the Alps, systematically testing and scientifically tailoring educational innovations for lasting integration into the education system. The INNALP Education Hub currently comprises 18 innovation projects, assigned to three innovation fields: "DigiTech Space," "Media, Inclusion & AI Space," and "Green Space." The project's research areas range from digitization and robotics to inclusive artificial intelligence and environmental education.
One of these innovation projects is the Software Testing AI Robotic (STAIR) Lab. The [[https://stair-lab.uibk.ac.at|STAIR Lab]] provides learning materials, workshops, and a simulation environment for minibots. The STAIR Learning Lab's efforts are dedicated to establishing robotics, artificial intelligence (AI), and software testing in schools.