Intelligent and Interactive Systems

===== Current Projects =====
  
**SEAMLESS LEVELS OF ABSTRACTION FOR ROBOT COGNITION** (Austrian Science Fund (FWF) - Lise Meitner Project, 2019-2021): The project seeks to develop a robotic cognitive architecture that overcomes the difficulties encountered when integrating different levels of abstraction (e.g. AI and robotic techniques) for task planning and execution in unstructured scenarios. The backbone of the project is a unified approach that permits searching for feasible solutions for the execution of new tasks at all levels of abstraction simultaneously, where symbolic descriptions are no longer disentangled from the physical aspects they represent.

**OLIVER** - Open-Ended Learning for Interactive Robots (EUREGIO IPN, 2019-2022): We would like to be able to teach robots to perform a great variety of tasks, including collaborative tasks and tasks not specifically foreseen by their designers. The space of potentially important aspects of perception and action is therefore by necessity extremely large, since every aspect may become important at some point in time. Conventional machine learning methods cannot be directly applied in such unconstrained circumstances, as their training demands grow with the sizes of the input and output spaces.
Thus, a central problem for the robot is to understand which aspects of a demonstrated action are crucial. Such understanding allows a robot to perform robustly even if the scenario and context change, to adapt its strategy, and to judge its success. Moreover, it allows the robot to infer the human's intent and the task progress with respect to the goal, enabling it to share the task with humans and to offer or ask for help, resulting in natural human-robot cooperative behavior.
  
<html>
<div style="clear:both"><br></div>
</html>
  
{{:research:imagine-transparent.png?nolink&200 ||}}[[https://www.imagine-h2020.eu|IMAGINE - Robots Understanding Their Actions by Imagining Their Effects]] (EU H2020, 2017-2021): seeks to enable robots to understand the structure of their environment and how it is affected by their actions. “Understanding” here means the ability of the robot (a) to determine the applicability of an action, along with the parameters needed to achieve the desired effect, and (b) to discern to what extent an action succeeded, to infer possible causes of failure, and to generate recovery actions.
  
<html>
<div style="clear:both"><br></div>
</html>
  
===== Completed Projects (Selection) =====

{{:research:flexrop-logo.png?nolink&200 ||}}[[https://www.profactor.at/en/research/industrial-assistive-systems/roboticassistance/projects/flexrop/|FlexRoP - Flexible, assistive robot for customized production]] (FFG (Austria) ICT of the Future, 2016-2019): Production of mass-customized products is not easy to automate, since objects and object positions remain more uncertain than in mass-production scenarios. Handling this uncertainty motivates the application of advanced sensor-based control strategies, which increases the complexity of robot applications dramatically. A possible solution to this conflict is the concept of task-level or skill-based programming: such systems can be applied without safety fences, are easier to program, and are more readily transformable into capable robot assistants. The project will implement a skill-based programming framework, apply it to selected industrial demo scenarios, and evaluate the research results. The main focus of the project is the application of methods that acquire process information by monitoring the worker and thus make the robot assistants self-learning.
  
<html>
<div style="clear:both"><br></div>
</html>
  
{{:research:squirrel.png?nolink&200 |}}[[http://www.squirrel-project.eu/|SQUIRREL]] (EU FP7-ICT-STREP, 2014-2018): Clutter in an open world is a challenge for many aspects of robotic systems, especially for autonomous robots deployed in unstructured domestic settings, affecting navigation, manipulation, vision, human-robot interaction and planning. SQUIRREL addresses these issues by actively controlling clutter and incrementally learning to extend the robot's capabilities while doing so. We term this the B3 (bit by bit) approach, as the robot tackles clutter one bit at a time and also extends its knowledge continuously as new bits of information become available. SQUIRREL is inspired by a user-driven scenario that exhibits all the rich complexity required to convincingly drive research, but allows tractable solutions with high potential for exploitation. We propose a toy-cleaning scenario, where a robot learns to collect toys scattered in loose clumps or tangled heaps on the floor in a child's room, and to stow them in designated target locations.
  
<html>
<div style="clear:both"><br></div>
</html>
  
{{:research:3rdhand.png?nolink&110 |3rdHand}}
[[https://cordis.europa.eu/project/rcn/110160/factsheet/en|3rdHand]] (EU FP7-ICT-STREP, 2013-2017) develops a semi-autonomous robot assistant that acts as a third hand of a human worker. It will be straightforward to instruct, even for an untrained worker, will allow for efficient knowledge transfer between tasks, and will enable effective collaboration between a human worker and a robot third hand. The main contributions of this project will be the scientific principles of semi-autonomous human-robot collaboration and a new semi-autonomous robotic system that is able to (i) learn cooperative tasks from demonstration, (ii) learn from instruction, and (iii) transfer knowledge between tasks and environments.
<html>
<div style="clear:both"><br></div>
</html>
  
  
{{:research:intellact.png?nolink&110 |IntellAct}} [[https://cordis.europa.eu/project/rcn/97727/factsheet/en|IntellAct]] (EU FP7-ICT-STREP, 2011-2014) addresses the problem of understanding and exploiting the meaning (semantics) of manipulations in terms of objects, actions and their consequences, in order to reproduce human actions with machines. This is required in particular for interaction between humans and robots, in which the robot has to understand the human action and then transfer it to its own embodiment.
  
<html>
<div style="clear:both"><br></div>
</html>
  
{{:research:learnbip.png?nolink&110 |LearnBiP}} [[http://www.echord.info/wikis/website/learnbip.html|LearnBiP]] (EU FP7-ICT ECHORD Experiment, 2011-2012) has two main aims. First, it utilizes the huge amount of data generated in industrial bin picking to introduce grasp learning. Second, it evaluates the potential of the SCHUNK dexterous hand SDH-2 for application in industrial bin picking.
  
<html>
<div style="clear:both"><br></div>
</html>
  
{{:research:signspeak.png?nolink&110 |SignSpeak}} [[http://www.signspeak.eu/|SignSpeak]] (EU FP7-ICT-STREP, 2009-2012) focused on scientific understanding and vision-based technological development for continuous sign language recognition and translation. The aim was to increase the linguistic understanding of sign languages and to create methods for transcribing sign language into text.
  
<html>
<div style="clear:both"><br></div>
</html>
  
{{:research:logo_trictrac.jpg?nolink&110 |TRICTRAC}} TRICTRAC (2003-2006), directed by J. Piater, aimed at the development of algorithms for real-time object tracking in one or more live video streams. It was a joint project between the [[http://www.intelsig.ulg.ac.be|Université de Liège]] and the [[http://www.tele.ucl.ac.be|Université Catholique de Louvain]], funded by the Walloon Region. Some results are summarized in a [[:research:trictrac-video|video]].
  
  
research/projects.txt · Last modified: 2024/02/19 12:24 by Antonio Rodriguez-Sanchez