====== Externally-Funded, Collaborative Projects ======

===== Current Projects =====
**Abstractron** - Conceptual Abstraction in Humans and Robots (Research Südtirol/Alto Adige, 2024-2027): This project seeks to develop a proof-of-concept theoretical framework and implementation for learning conceptual abstractions by robots via autonomous sensorimotor interaction with objects and by observing and interacting with humans. These abstractions will allow the robot to reason about and solve tasks irrespective of their concrete, sensory manifestation, to transfer skills to novel tasks, and to communicate with humans on the basis of shared conceptualizations. Inspired by cognitive science, the key innovation and enabling technology is to build on a logical formalisation of such interactions based on image schemas (simple yet abstract notions, such as containment and support, that humans learn in early childhood and use for conceptual and metaphoric thinking) and on affordances (actions an object offers an actor, such as putting a cake on a plate). The specific core aims of the project are therefore fourfold: (1) To define the basic ontological structure and terminology for experiential learning and abstraction, and to extend and modify formal logical approaches to image schemas and affordances to enable robotics-specific representation and reasoning capabilities; (2) To extract higher-level conceptual descriptions from observed human-robot interaction data, supporting algorithms for the automatic recognition and labelling of actions and plans, and algorithms for the orchestration of actions using information about the capabilities of the involved agents; (3) To develop a workflow and layered architecture for extracting higher-level conceptual descriptions from sensory data and robotic actions that can be linked with automatically learned as well as human-curated formalisations of image schemas; (4) To provide a detailed validation of the approach in a carefully designed, simple robotic world involving interaction with objects and humans, in which transfer learning and the acquisition of conceptual abstractions can be systematically verified.
<html>
<div style="clear:both"><br></div>
</html>

[[https://doi.org/10.55776/P36965|{{:research:doi.svg?13|}}]] **PURSUIT** - Purposeful Signal-symbol Relations for Manipulation Planning (Austrian Science Fund (FWF), Principal Investigator Project, 2023-2026): Artificial intelligence (AI) task planning permits projecting robotic applications outside industrial settings by automatically generating the instructions required for task execution. However, the abstract representation used by AI planning methods makes it difficult to encode physical constraints that are critical to successfully executing a task: What specific movements are necessary to remove a cup from a shelf without collisions? At which precise point should a bottle be grasped for stable pouring afterwards? These physical constraints are normally evaluated outside AI planning using computationally expensive trial-and-error strategies. PURSUIT focuses on a new task and motion planning (TAMP) approach where the evaluation of physical constraints for task execution starts at the perception stage and propagates through planning and execution using a single heuristic search. The approach is based on a common signal-symbol representation that encodes physical constraints in terms of the “purpose” of object relations in the context of a task: Is the hand-bottle relation adequate for picking up the bottle for stable pouring? Our TAMP approach aims to quickly render task plans that are physically feasible, avoiding the intensive computations of trial-and-error approaches.
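
As a purely illustrative sketch of the underlying idea (not PURSUIT's actual implementation; all function, state and action names below are invented), checking physical feasibility inside a single heuristic search, rather than in a separate trial-and-error loop, could look like this:

<code python>
# Purely illustrative sketch, not PURSUIT's implementation: an A*-style search
# over symbolic states in which every successor is filtered by a (stubbed)
# geometric feasibility test, so physical constraints are evaluated inside the
# same heuristic search that builds the task plan. All names are hypothetical.
import heapq

def plan(start, goal_test, successors, heuristic, feasible):
    """A*-style search over (symbolic state, action trace) pairs."""
    frontier = [(heuristic(start), 0, 0, start, [])]  # (f, tie-break, g, state, trace)
    seen, tie = set(), 0
    while frontier:
        _, _, g, state, trace = heapq.heappop(frontier)
        if goal_test(state):
            return trace
        if state in seen:
            continue
        seen.add(state)
        for action, nxt, cost in successors(state):
            if not feasible(state, action, nxt):
                continue  # physical constraint violated: prune during the search
            tie += 1
            heapq.heappush(frontier,
                           (g + cost + heuristic(nxt), tie, g + cost, nxt, trace + [action]))
    return None

# Toy domain: fetch a bottle from the shelf and pour it into a cup.
def successors(state):
    if state == "bottle-on-shelf":
        yield ("pick(bottle)", "bottle-in-hand", 1)
    if state == "bottle-in-hand":
        yield ("place(bottle, table)", "bottle-on-table", 1)
        yield ("pour(bottle, cup)", "cup-filled", 1)

def feasible(state, action, nxt):
    # Stand-in for geometric checks (collision-free grasp, stable pour, ...).
    return action != "pour(bottle, cup)" or state == "bottle-in-hand"

print(plan("bottle-on-shelf", lambda s: s == "cup-filled",
           successors, lambda s: 0, feasible))
# -> ['pick(bottle)', 'pour(bottle, cup)']
</code>
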
<html>
<div style="clear:both"><br></div>
</html>
[[https://doi.org/10.55776/I5755|{{:research:doi.svg?13|}}]] **ELSA** - Effective Learning of Social Affordances for Human-Robot Interaction (ANR/FWF AAPG, 2022-2026): Affordances are action opportunities directly perceived by an agent to interact with its environment. The concept is gaining interest in robotics, where it offers a rich description of objects and the environment, focusing on potential interactions rather than on physical properties alone. In this project, we extend this notion to social affordances. The goal is for robots to autonomously learn not only the physical effects of interactive actions with humans, but also the humans' reactions they produce (emotion, speech, movement). For instance, pointing and gazing in the same direction makes humans orient towards the pointed direction, while pointing and looking at the finger makes humans look at the finger. Likewise, scratching the robot's chin makes some, but not all, humans smile. The project will investigate how learning human-general and human-specific social affordances can enrich a robot's action repertoire for human-aware task planning and efficient human-robot interaction.
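
A minimal, purely hypothetical sketch of this idea, estimating which reaction a robot action tends to elicit both across humans and for a specific person, might look as follows (all class, action and reaction names are invented for illustration):

<code python>
# Purely hypothetical sketch, not ELSA's actual model: a frequency-based
# estimate of social affordances, i.e. which human reaction a robot action
# tends to elicit, kept per individual and pooled across individuals.
from collections import Counter, defaultdict

class SocialAffordanceModel:
    def __init__(self):
        self.per_human = defaultdict(Counter)  # (action, human) -> reaction counts
        self.pooled = defaultdict(Counter)     # action -> reaction counts

    def observe(self, action, human, reaction):
        self.per_human[(action, human)][reaction] += 1
        self.pooled[action][reaction] += 1

    def expected_reaction(self, action, human=None):
        """Most likely reaction; falls back to the human-general estimate."""
        counts = self.per_human.get((action, human)) if human else None
        if not counts:
            counts = self.pooled.get(action)
        return counts.most_common(1)[0][0] if counts else None

model = SocialAffordanceModel()
model.observe("point-and-gaze", "anna", "orients-to-target")
model.observe("point-and-gaze", "bob", "orients-to-target")
model.observe("scratch-chin", "anna", "smiles")
model.observe("scratch-chin", "bob", "no-reaction")
print(model.expected_reaction("point-and-gaze"))       # human-general: orients-to-target
print(model.expected_reaction("scratch-chin", "bob"))  # human-specific: no-reaction
</code>
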
<html>
<div style="clear:both"><br></div>
</html>

**[[https://innalp.at|INNALP Education Hub]]** ([[https://projekte.ffg.at/projekt/4119035|FFG 4119035, 2021-2025]]) creates innovative, inclusive, and sustainable teaching and learning projects in the heart of the Alps, systematically testing educational innovations and scientifically tailoring them for lasting integration into the education system. The INNALP Education Hub currently comprises 18 innovation projects, assigned to the three underlying innovation fields "DigiTech Space," "Media, Inclusion & AI Space," and "Green Space." The areas researched in the project range from digitization and robotics to inclusive artificial intelligence and environmental education.
One of the innovation projects is the Software Testing AI Robotic (STAIR) Lab. The [[https://stair-lab.uibk.ac.at|STAIR Lab]] provides learning materials, workshops, and a simulation environment for minibots. Its efforts are dedicated to establishing robotics, artificial intelligence (AI), and software testing in schools.

<html>
<div style="clear:both"><br></div>
</html>

**DESDET** (Disinfection Detective, [[https://www.standort-tirol.at/unternehmen/foerderungen/gefoerderte-k-regio-projekte|K-Regio]]) aims to develop a procedure to verify the disinfection effect, and consequently the correct application (e.g., compliance with the exposure time specified in the EN test), of chemical and/or physical disinfection methods in real time and without increased technical effort. Based on such a new procedure, quality-assurance staff or end users should, for example, be able to check the effect of the disinfection steps on site within just a few minutes. Furthermore, the aim is to replace the current gold standard for quality control of disinfection processes (phase 2, stage 2 tests; EN 16615, EN 16616) with the optical and AI methods to be developed.

<html>
<div style="clear:both"><br></div>
</html>

===== Completed Projects (Selection) =====
[[https://doi.org/10.55776/M2659|{{:research:doi.svg?13|}}]] **SEAROCO** - Seamless Levels of Abstraction for Robot Cognition (Austrian Science Fund (FWF) - Lise Meitner Project, 2019-2023): The project seeks to develop a robotic cognitive architecture that overcomes the difficulties encountered when integrating different levels of abstraction (e.g., AI and robotic techniques) for task planning and execution in unstructured scenarios. The backbone of the project is a unified approach that permits searching for feasible solutions for the execution of new tasks at all levels of abstraction simultaneously, where symbolic descriptions are no longer detached from the physical aspects they represent.

<html>
<div style="clear:both"><br></div>
</html>

**OLIVER** - Open-Ended Learning for Interactive Robots (EUREGIO IPN, 2019-2022): We would like to be able to teach robots to perform a great variety of tasks, including collaborative tasks and tasks not specifically foreseen by their designers. Thus, the space of potentially important aspects of perception and action is by necessity extremely large, since every aspect may become important at some point in time. Conventional machine learning methods cannot be directly applied in such unconstrained circumstances, as the training demands increase with the sizes of the input and output spaces.
Thus, a central problem for the robot is to understand which aspects of a demonstrated action are crucial. Such understanding allows a robot to perform robustly even if the scenario and context change, to adapt its strategy, and to judge its success. Moreover, it allows the robot to infer the human's intent and the task progress with respect to the goal, enabling it to share the task with humans and to offer or ask for help, resulting in natural human-robot cooperative behavior.

<html>
<div style="clear:both" id="OLIVER"><br></div>
</html>

**CADS** (FFG, [[https://www.lo-la.info/cads-update/|Camera Avalanche Detection System]]) proposes a novel approach to automating avalanche detection via analysis of webcam streams with deep learning models. To assess the viability of this approach, we trained convolutional neural networks on a publicly released dataset of 4090 mountain photographs and achieved avalanche-detection F1 scores of 92.9% per image and 64.0% per avalanche. Notably, our models do not require a digital elevation model, enabling straightforward integration with existing webcams in new geographic regions. The associated publication concludes with findings from an initial case study conducted in the Austrian Alps and our vision for operational applications of the trained models.
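
For readers unfamiliar with the quoted metric, a minimal illustration of the per-image F1 computation (not the CADS code itself; the labels below are invented) is:

<code python>
# Generic illustration, not the CADS code: the per-image F1 score quoted above
# is the harmonic mean of precision and recall over binary image labels
# (1 = at least one avalanche visible in the webcam frame).
def f1_score(y_true, y_pred):
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

print(f1_score([1, 0, 1, 1, 0], [1, 0, 0, 1, 1]))  # 0.666...
</code>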

<html>
<div style="clear:both"><br></div>
</html>

{{:research:imagine-transparent.png?nolink&200 |}}[[https://www.imagine-h2020.eu|IMAGINE - Robots Understanding Their Actions by Imagining Their Effects]] (EU H2020, 2017-2021): The project seeks to enable robots to understand the structure of their environment and how it is affected by their actions. “Understanding” here means the ability of the robot (a) to determine the applicability of an action along with parameters to achieve the desired effect, and (b) to discern to what extent an action succeeded, to infer possible causes of failure, and to generate recovery actions.

<html>
<div style="clear:both"><br></div>
</html>

{{:research:flexrop-logo.png?nolink&200 |}}[[https://www.profactor.at/en/research/industrial-assistive-systems/roboticassistance/projects/flexrop/|FlexRoP - Flexible, assistive robot for customized production]] (FFG (Austria) ICT of the Future, 2016-2019): Production of mass-customized products is not easy to automate, since objects and object positions remain more uncertain than in mass-production scenarios. Handling this uncertainty motivates the application of advanced sensor-based control strategies, which dramatically increases the system complexity of robot applications. A possible solution to this conflict is the concept of task-level or skill-based programming. Robot systems built this way can be applied without a safety fence, are easier to program, and are more readily applicable and transformable into capable robot assistants. The project implements a skill-based programming framework, applies it to selected industrial demo scenarios, and evaluates the research results. The main focus of the project is the application of methods that acquire process information by monitoring the worker, thus making the robot assistants self-learning.

<html>
<div style="clear:both"><br></div>
</html>

{{:research:squirrel.png?nolink&200 |}}[[http://www.squirrel-project.eu/|SQUIRREL]] (EU FP7-ICT-STREP, 2014-2018): Clutter in an open world is a challenge for many aspects of robotic systems, especially for autonomous robots deployed in unstructured domestic settings, affecting navigation, manipulation, vision, human-robot interaction and planning. SQUIRREL addresses these issues by actively controlling clutter and incrementally learning to extend the robot's capabilities while doing so. We term this the B3 (bit by bit) approach, as the robot tackles clutter one bit at a time and also extends its knowledge continuously as new bits of information become available. SQUIRREL is inspired by a user-driven scenario that exhibits all the rich complexity required to convincingly drive research, but allows tractable solutions with high potential for exploitation. We propose a toy-cleaning scenario, where a robot learns to collect toys scattered in loose clumps or tangled heaps on the floor of a child's room, and to stow them in designated target locations.

<html>
<div style="clear:both"><br></div>
</html>

{{:research:3rdhand.png?nolink&110 |3rdHand}}
[[https://cordis.europa.eu/project/id/610878|3rdHand]] (EU FP7-ICT-STREP, 2013-2017) develops a semi-autonomous robot assistant that acts as a third hand of a human worker. It will be straightforward to instruct even for an untrained worker, allow for efficient knowledge transfer between tasks, and enable effective collaboration between a human worker and a robot third hand. The main contributions of this project will be the scientific principles of semi-autonomous human-robot collaboration and a new semi-autonomous robotic system that is able to (i) learn cooperative tasks from demonstration, (ii) learn from instruction, and (iii) transfer knowledge between tasks and environments.
<html>
<div style="clear:both"><br></div>
</html>

{{:research:pacman_logo2.png?nolink&110 |PaCMan}} [[http://www.pacman-project.eu/|PaCMan]] - Probabilistic and Compositional Representations for Object Manipulation (EU FP7-ICT-STREP, 2013-2016) advances methods for object perception, representation and manipulation so that a robot is able to robustly manipulate objects even when those objects are unfamiliar, and even though the robot has unreliable perception and action. The proposal is founded on two assumptions. The first is that the representation of the object's shape in particular, and of other properties in general, will benefit from being compositional (or, very loosely, hierarchical and part-based). The second is that manipulation planning and execution benefit from explicitly reasoning about uncertainty in object pose, shape, etc., and how it changes under the robot's actions; the robot should plan actions that not only achieve the task but also gather information to make task achievement more reliable.
<html>
<div style="clear:both"><br></div>
</html>

{{:research:xperience.png?nolink&110 |Xperience}} [[http://www.xperience.org/|Xperience]] (EU FP7-ICT-IP, 2011-2015) pursues two principal objectives. The first is to show that state-of-the-art enactive embodied cognition systems can be significantly enhanced by using structural bootstrapping, a concept taken from language learning. The second is to implement a complete robot system for automating introspective, predictive, and interactive understanding of actions and dynamic situations.

<html>
<div style="clear:both"><br></div>
</html>

{{:research:intellact.png?nolink&110 |IntellAct}} [[https://cordis.europa.eu/project/rcn/97727/factsheet/en|IntellAct]] (EU FP7-ICT-STREP, 2011-2014) addresses the problem of understanding and exploiting the meaning (semantics) of manipulations in terms of objects, actions and their consequences, in order to reproduce human actions with machines. This is required in particular for interaction between humans and robots, in which the robot has to understand the human action and then transfer it to its own embodiment.

<html>
<div style="clear:both"><br></div>
</html>

{{:research:learnbip.png?nolink&110 |LearnBiP}} [[http://www.echord.info/wikis/website/learnbip.html|LearnBiP]] (EU FP7-ICT ECHORD Experiment, 2011-2012) has two main aims. First, it utilizes the huge amount of data generated in industrial bin-picking to introduce grasp learning. Second, it evaluates the potential of the SCHUNK SDH-2 dexterous hand for application in industrial bin-picking.
<html>
<div style="clear:both"><br></div>
</html>
{{:research:signspeak.png?nolink&110 |SignSpeak}} [[http://www.signspeak.eu/|SignSpeak]] (EU FP7-ICT-STREP, 2009-2012) focused on scientific understanding and vision-based technological development for continuous sign language recognition and translation. The aim was to increase the linguistic understanding of sign languages and to create methods for transcribing sign language into text.
<html>
<div style="clear:both"><br></div>
</html>
{{:research:logo_trictrac.jpg?nolink&110 |TRICTRAC}} TRICTRAC (2003-2006), directed by J. Piater, aimed at the development of algorithms for real-time object tracking in one or more live video streams. It was a joint project between the [[http://www.intelsig.ulg.ac.be|Université de Liège]] and the [[http://www.tele.ucl.ac.be|Université Catholique de Louvain]], funded by the Walloon Region. Some results are summarized in a [[:research:trictrac-video|video]].