====== Externally-Funded, Collaborative Projects ======
  
===== Current Projects =====
  
[[https://doi.org/10.55776/P36965|{{:research:doi.svg?13|}}]] **PURSUIT** - Purposeful Signal-symbol Relations for Manipulation Planning (Austrian Science Fund (FWF) Principal Investigator Project, 2023-2026): Artificial intelligence (AI) task planning approaches permit projecting robotic applications outside industrial settings by automatically generating the instructions required for task execution. However, the abstract representation used by AI planning methods makes it complicated to encode physical constraints that are critical to successfully executing a task: What specific movements are necessary to remove a cup from a shelf without collisions? At which precise point should a bottle be grasped for stable pouring afterwards? These physical constraints are normally evaluated outside AI planning using computationally expensive trial-and-error strategies. PURSUIT focuses on a new task and motion planning (TAMP) approach where the evaluation of physical constraints for task execution starts at the perception stage and propagates through planning and execution using a single heuristic search. The approach is based on a common signal-symbol representation that encodes physical constraints in terms of the “purpose” of object relations in the context of a task: Is the hand-bottle relation adequate for picking up the bottle for stable pouring? Our TAMP approach aims to quickly render task plans that are physically feasible, avoiding the intensive computations of trial-and-error approaches.
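
To make the single-search idea concrete, here is a minimal, hypothetical sketch (ours, not the PURSUIT implementation; all function names are invented): a best-first search in which every candidate action must pass both its symbolic precondition and a geometric feasibility check, and the feasibility score feeds into the same search heuristic.

<code python>
# Illustrative combined symbolic/geometric search (not the PURSUIT code).
import heapq

def plan(start, goal_test, actions):
    """actions: iterable of (name, applicable, apply_fn, geom_feasibility).
    States are assumed hashable; geom_feasibility returns a score in [0, 1]."""
    frontier = [(0.0, 0, start, [])]      # (cost, tie-breaker, state, plan)
    tie, visited = 1, set()
    while frontier:
        cost, _, state, partial = heapq.heappop(frontier)
        if goal_test(state):
            return partial
        if state in visited:
            continue
        visited.add(state)
        for name, applicable, apply_fn, geom in actions:
            if not applicable(state):     # symbolic precondition
                continue
            feas = geom(state)            # physical feasibility of this grounding
            if feas <= 0.0:               # prune impossible groundings early,
                continue                  # instead of discovering them at execution
            # One heuristic mixes symbolic step cost and physical difficulty,
            # steering the search toward plans that are easy to execute.
            heapq.heappush(frontier, (cost + 1.0 + (1.0 - feas), tie,
                                      apply_fn(state), partial + [name]))
            tie += 1
    return None                           # no physically feasible plan found
</code>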
  
  
[[https://doi.org/10.55776/I5755|{{:research:doi.svg?13|}}]] **ELSA** - Effective Learning of Social Affordances for Human-Robot Interaction (ANR/FWF AAPG, 2022-2026): Affordances are action opportunities directly perceived by an agent to interact with its environment. The concept is gaining interest in robotics, where it offers a rich description of objects and the environment, focusing on potential interactions rather than physical properties alone. In this project, we extend this notion to social affordances. The goal is for robots to autonomously learn not only the physical effects of interactive actions with humans, but also the human reactions they produce (emotion, speech, movement). For instance, pointing and gazing in the same direction make humans orient towards the pointed direction, while pointing and looking at the finger make humans look at the finger. Moreover, scratching the robot's chin makes some, but not all, humans smile. The project will investigate how learning human-general and human-specific social affordances can enrich a robot's action repertoire for human-aware task planning and efficient human-robot interaction.
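
As a toy illustration of what a learned social affordance could look like (our assumption, not the ELSA system; all names are invented), a robot could store empirical distributions of observed human reactions per action, both pooled over all partners and per individual, falling back on the pooled model for unknown partners.

<code python>
# Illustrative social-affordance store (hypothetical, not the ELSA system).
from collections import Counter, defaultdict

class SocialAffordance:
    def __init__(self, action):
        self.action = action
        self.general = Counter()               # reactions pooled over all humans
        self.per_human = defaultdict(Counter)  # human id -> reaction counts

    def record(self, human, reaction):
        """Log one observed reaction (e.g. 'smile', 'gaze_shift')."""
        self.general[reaction] += 1
        self.per_human[human][reaction] += 1

    def predict(self, human=None):
        """Most likely reaction; human-specific if this person is known."""
        counts = self.per_human.get(human) or self.general
        return counts.most_common(1)[0][0] if counts else None

chin = SocialAffordance("scratch_chin")
chin.record("anna", "smile"); chin.record("ben", "no_reaction")
print(chin.predict("anna"), chin.predict())    # person-specific, then pooled
</code>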
  
  
**[[https://innalp.at|INNALP Education Hub]]** ([[https://projekte.ffg.at/projekt/4119035|FFG 4119035, 2021-2024]]) creates innovative, inclusive, and sustainable teaching and learning projects in the heart of the Alps, systematically testing educational innovations and scientifically tailoring them for lasting integration into the education system. The INNALP Education Hub comprises (so far) 18 innovation projects, assigned to the three underlying innovation fields: "DigiTech Space," "Media, Inclusion & AI Space," and "Green Space." The areas researched in the project range from digitization and robotics to inclusive artificial intelligence and environmental education.

One of the innovation projects is the Software Testing AI Robotic (STAIR) Lab. The [[https://stair-lab.uibk.ac.at|STAIR Lab]] provides learning materials, workshops, and a simulation environment for minibots. The efforts of the STAIR Learning Lab are dedicated to establishing robotics, artificial intelligence (AI), and software testing in schools.
**DESDET** (Disinfection Detective, [[https://www.standort-tirol.at/unternehmen/foerderungen/gefoerderte-k-regio-projekte|K-Regio]]) aims to develop a procedure that proves the disinfection effect, and consequently the correct application (e.g. compliance with the exposure time specified in the EN test), of chemical and/or physical disinfection methods in real time and without increased technical effort. Based on such a procedure, it should be possible, for example, for quality-assurance employees or users to check the effect of the disinfection steps on site in just a few minutes. Furthermore, the aim is to replace the current gold standard for quality control of disinfection processes (phase 2, stage 2 tests; EN 16615, EN 16616) with the optical and AI methods to be developed.
  
  
===== Completed Projects (Selection) =====
  
[[https://doi.org/10.55776/M2659|{{:research:doi.svg?13|}}]] **SEAROCO** - Seamless Levels of Abstraction for Robot Cognition (Austrian Science Fund (FWF) Lise Meitner Project, 2019-2023): The project seeks to develop a robotic cognitive architecture that overcomes the difficulties found when integrating different levels of abstraction (e.g. AI and robotic techniques) for task planning and execution in unstructured scenarios. The backbone of the project is a unified approach that permits searching for feasible solutions for the execution of new tasks at all levels of abstraction simultaneously, where symbolic descriptions are no longer disentangled from the physical aspects they represent.
**OLIVER** - Open-Ended Learning for Interactive Robots (EUREGIO IPN, 2019-2022): We would like to be able to teach robots to perform a great variety of tasks, including collaborative tasks and tasks not specifically foreseen by their designers. The space of potentially important aspects of perception and action is thus by necessity extremely large, since every aspect may become important at some point in time. Conventional machine learning methods cannot be directly applied in such unconstrained circumstances, as their training demands increase with the sizes of the input and output spaces.

A central problem for the robot is therefore to understand which aspects of a demonstrated action are crucial. Such understanding allows a robot to perform robustly even if the scenario and context change, to adapt its strategy, and to judge its success. Moreover, it allows the robot to infer human intent and task progress with respect to the goal, enabling it to share the task with humans and to offer or ask for help, resulting in natural human-robot cooperative behavior.
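
As a toy illustration of one possible reading of "crucial" (our sketch, not OLIVER's actual method): features that vary little across demonstrations of the same task can be flagged as task constraints, while high-variance features are treated as incidental.

<code python>
# Illustrative variance-based relevance test over demonstrations.
import numpy as np

def crucial_features(demos, names, rel_tol=0.1):
    """demos: (n_demos, n_features) array of per-demonstration feature values.
    A feature is 'crucial' if its spread is small relative to its magnitude."""
    demos = np.asarray(demos, dtype=float)
    spread = demos.std(axis=0)
    scale = np.abs(demos).mean(axis=0) + 1e-9   # avoid division by zero
    return [n for n, s, m in zip(names, spread, scale) if s / m < rel_tol]

demos = [[0.30, 0.82, 0.1],    # e.g. grasp height, cup tilt, approach speed
         [0.31, 0.15, 0.4],
         [0.29, 0.55, 0.7]]
print(crucial_features(demos, ["grasp_height", "cup_tilt", "speed"]))
# -> ['grasp_height']: nearly constant across demos, hence likely crucial
</code>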
**CADS** (FFG, [[https://www.lo-la.info/cads-update/|Camera Avalanche Detection System]]) proposes a novel approach to automating avalanche detection via analysis of webcam streams with deep-learning models. To assess the viability of this approach, we trained convolutional neural networks on a publicly released dataset of 4090 mountain photographs and achieved avalanche-detection F1 scores of 92.9% per image and 64.0% per avalanche. Notably, our models do not require a digital elevation model, enabling straightforward integration with existing webcams in new geographic regions. The project also reports findings from an initial case study conducted in the Austrian Alps and outlines a vision for operational applications of the trained models.
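
For concreteness, the F1 score is the harmonic mean of precision and recall; this tiny sketch with invented counts (not the CADS evaluation data) shows how a per-image value of 92.9% can arise.

<code python>
# F1 = harmonic mean of precision and recall (counts below are made up).
def f1_score(tp, fp, fn):
    precision = tp / (tp + fp)   # fraction of detections that are real
    recall = tp / (tp + fn)      # fraction of real avalanches detected
    return 2 * precision * recall / (precision + recall)

# e.g. per-image detection: 130 true positives, 10 false alarms, 10 misses
print(f"{f1_score(130, 10, 10):.1%}")   # -> 92.9%
</code>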
{{:research:imagine-transparent.png?nolink&200 ||}}[[https://www.imagine-h2020.eu|IMAGINE - Robots Understanding Their Actions by Imagining Their Effects]] (EU H2020, 2017-2021) seeks to enable robots to understand the structure of their environment and how it is affected by their actions. “Understanding” here means the ability of the robot (a) to determine the applicability of an action along with the parameters needed to achieve the desired effect, and (b) to discern to what extent an action succeeded, to infer possible causes of failure, and to generate recovery actions.
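
A schematic sketch of these two abilities (our own illustration with invented names, not the IMAGINE architecture): an action model that tests applicability, predicts effects, and maps mismatches between predicted and observed effects to recovery actions.

<code python>
# Illustrative action model (hypothetical structure, not the IMAGINE system).
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class ActionModel:
    name: str
    applicable: Callable[[dict], bool]   # (a) is the action applicable here?
    predict: Callable[[dict], dict]      # expected effects on the state
    recoveries: dict = field(default_factory=dict)  # failure cause -> recovery

    def execute(self, state, do, observe):
        if not self.applicable(state):
            return "not applicable"
        expected = self.predict(state)   # imagine the effects before acting
        do(self.name, state)             # execute on the robot
        observed = observe()             # perceive the actual outcome
        # (b) mismatches between expected and observed effects are candidate
        # failure causes; known causes map to recovery actions.
        causes = [k for k, v in expected.items() if observed.get(k) != v]
        if not causes:
            return "success"
        return self.recoveries.get(causes[0], "ask a human for help")
</code>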
{{:research:flexrop-logo.png?nolink&200 ||}}[[https://www.profactor.at/en/research/industrial-assistive-systems/roboticassistance/projects/flexrop/|FlexRoP - Flexible, assistive robot for customized production]] (FFG (Austria) ICT of the Future, 2016-2019): Production of mass-customized products is not easy to automate, since objects and object positions remain more uncertain than in mass-production scenarios. Handling this uncertainty motivates advanced sensor-based control strategies, which increase the complexity of robot applications dramatically. A possible solution to this conflict is the concept of task-level, or skill-based, programming. Such systems can be operated without a safety fence, are easier to program, and are more readily transformable into capable robot assistants. The project will implement a skill-based programming framework, apply it to selected industrial demo scenarios, and evaluate the research results. The main focus of the project is the application of methods that acquire process information by monitoring the worker, thus making the robot assistants self-learning.
{{:research:squirrel.png?nolink&200 |}}[[http://www.squirrel-project.eu/|SQUIRREL]] (EU FP7-ICT-STREP, 2014-2018): Clutter in an open world is a challenge for many aspects of robotic systems, especially for autonomous robots deployed in unstructured domestic settings, affecting navigation, manipulation, vision, human-robot interaction, and planning. SQUIRREL addresses these issues by actively controlling clutter and incrementally learning to extend the robot's capabilities while doing so. We term this the B3 (bit by bit) approach, as the robot tackles clutter one bit at a time and also extends its knowledge continuously as new bits of information become available. SQUIRREL is inspired by a user-driven scenario that exhibits all the rich complexity required to convincingly drive research, yet allows tractable solutions with high potential for exploitation: a toy-cleaning scenario in which a robot learns to collect toys scattered in loose clumps or tangled heaps on the floor of a child's room, and to stow them in designated target locations.
{{:research:3rdhand.png?nolink&110 |3rdHand}}
[[https://cordis.europa.eu/project/id/610878|3rdHand]] (EU FP7-ICT-STREP, 2013-2017) develops a semi-autonomous robot assistant that acts as a third hand of a human worker. It will be straightforward to instruct, even by an untrained worker, will allow for efficient knowledge transfer between tasks, and will enable effective collaboration between a human worker and a robot third hand. The main contributions of this project will be the scientific principles of semi-autonomous human-robot collaboration and a new semi-autonomous robotic system that is able to (i) learn cooperative tasks from demonstration, (ii) learn from instruction, and (iii) transfer knowledge between tasks and environments.
{{:research:pacman_logo2.png?nolink&110 |PaCMan}} [[http://www.pacman-project.eu/|PaCMan]] - Probabilistic and Compositional Representations for Object Manipulation (EU FP7-ICT-STREP, 2013-2016) advances methods for object perception, representation, and manipulation so that a robot is able to robustly manipulate objects even when those objects are unfamiliar and its perception and action are unreliable. The proposal is founded on two assumptions. The first is that the representation of the object's shape in particular, and of other properties in general, benefits from being compositional (or very loosely hierarchical and part-based). The second is that manipulation planning and execution benefit from explicit reasoning about uncertainty in object pose, shape, etc., and about how that uncertainty changes under the robot's actions; the robot should plan actions that not only achieve the task but also gather information to make task achievement more reliable.
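
A minimal sketch of the second assumption (invented numbers, not the PaCMan planner): represent the uncertain object pose as a particle set and score each action by its expected success over that belief, so that an information-gathering action can beat a direct grasp when the pose is too uncertain.

<code python>
# Illustrative action selection under pose uncertainty (made-up numbers).
import random

def expected_success(success_prob, particles):
    """Average an action's success probability over the pose belief."""
    return sum(success_prob(p) for p in particles) / len(particles)

# Belief over a 1-D object position: particles spread +/-2 cm around 10 cm.
belief = [10.0 + random.uniform(-2.0, 2.0) for _ in range(1000)]

def grasp(pose):             # direct grasp: succeeds only if the pose is accurate
    return 1.0 if abs(pose - 10.0) < 0.5 else 0.0

def look_then_grasp(pose):   # sensing first shrinks the belief, then grasp
    return 0.9

for name, act in [("grasp now", grasp), ("look, then grasp", look_then_grasp)]:
    print(f"{name}: {expected_success(act, belief):.2f}")
# With this much uncertainty the sensing action wins (~0.25 vs 0.90).
</code>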
  
  
{{:research:intellact.png?nolink&110 |IntellAct}} [[https://cordis.europa.eu/project/rcn/97727/factsheet/en|IntellAct]] (EU FP7-ICT-STREP, 2011-2014) addresses the problem of understanding and exploiting the meaning (semantics) of manipulations in terms of objects, actions, and their consequences, with the aim of reproducing human actions with machines. This is in particular required for interaction between humans and robots, in which the robot has to understand the human action and then transfer it to its own embodiment.
  
  
{{:research:learnbip.png?nolink&110 |LearnBiP}} [[http://www.echord.info/wikis/website/learnbip.html|LearnBiP]] (EU FP7-ICT ECHORD Experiment, 2011-2012) has two main aims. First, it utilizes the huge amount of data generated in industrial bin-picking to introduce grasp learning. Second, it evaluates the potential of the SCHUNK dexterous hand SDH-2 for application in industrial bin-picking.
  
  
{{:research:signspeak.png?nolink&110 |SignSpeak}} [[http://www.signspeak.eu/|SignSpeak]] (EU FP7-ICT-STREP, 2009-2012) focused on scientific understanding and vision-based technological development for continuous sign language recognition and translation. The aim was to increase the linguistic understanding of sign languages and to create methods for transcribing sign language into text.
  
  
{{:research:logo_trictrac.jpg?nolink&110 |TRICTRAC}} TRICTRAC (2003-2006), directed by J. Piater, aimed at the development of algorithms for real-time object tracking in one or more live video streams. It was a joint project between the [[http://www.intelsig.ulg.ac.be|Université de Liège]] and the [[http://www.tele.ucl.ac.be|Université Catholique de Louvain]], funded by the Walloon Region. Some results are summarized in a [[:research:trictrac-video|video]].
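
As a generic illustration of real-time object tracking in a video stream (classic OpenCV colour-histogram mean-shift, not the TRICTRAC algorithm itself, which is not detailed here; the file name is a placeholder):

<code python>
# Classic mean-shift tracking on a hue histogram with OpenCV.
import cv2

cap = cv2.VideoCapture("stream.avi")            # any video source
ok, frame = cap.read()
x, y, w, h = 300, 200, 100, 80                  # initial box around the object
roi = cv2.cvtColor(frame[y:y+h, x:x+w], cv2.COLOR_BGR2HSV)
hist = cv2.calcHist([roi], [0], None, [180], [0, 180])   # hue histogram of target
cv2.normalize(hist, hist, 0, 255, cv2.NORM_MINMAX)
term = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    back = cv2.calcBackProject([hsv], [0], hist, [0, 180], 1)
    _, (x, y, w, h) = cv2.meanShift(back, (x, y, w, h), term)  # shift the window
    cv2.rectangle(frame, (x, y), (x+w, y+h), (0, 255, 0), 2)
    cv2.imshow("track", frame)
    if cv2.waitKey(30) == 27:                   # Esc quits
        break
</code>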
  
  