Intelligent and Interactive Systems

  
{{:research:3rdhand.png?nolink&110 |3rdHand}}
[[https://cordis.europa.eu/project/rcn/110160/factsheet/en|3rdHand]] (EU FP7-ICT-STREP, 2013-2017) develops a semi-autonomous robot assistant that acts as a third hand of a human worker. It will be straightforward to instruct even for an untrained worker, will allow efficient knowledge transfer between tasks, and will enable effective collaboration between a human worker and a robotic third hand. The main contributions of this project are the scientific principles of semi-autonomous human-robot collaboration and a new semi-autonomous robotic system that is able to (i) learn cooperative tasks from demonstration, (ii) learn from instruction, and (iii) transfer knowledge between tasks and environments.
<html>
<div style="clear:both"><br></div>
</html>
  
{{:research:pacman_logo2.png?nolink&110 |PaCMan}} [[http://www.pacman-project.eu/|PaCMan]] (EU FP7-ICT-STREP, 2013-2016) advances methods for object perception, representation and manipulation so that a robot can robustly manipulate objects even when those objects are unfamiliar and its perception and action are unreliable. The project is founded on two assumptions. The first is that the representation of an object's shape in particular, and of its other properties in general, benefits from being compositional (loosely hierarchical and part-based). The second is that manipulation planning and execution benefit from explicitly reasoning about uncertainty in object pose, shape and other properties, and about how that uncertainty changes under the robot's actions; the robot should plan actions that not only achieve the task but also gather information that makes task achievement more reliable.
<html>
<div style="clear:both"><br></div>
</html>
  
  
{{:research:intellact.png?nolink&110 |IntellAct}} [[https://cordis.europa.eu/project/rcn/97727/factsheet/en|IntellAct]] (EU FP7-ICT-STREP, 2011-2014) addresses the problem of understanding and exploiting the meaning (semantics) of manipulations in terms of objects, actions and their consequences, with the goal of reproducing human actions with machines. This is required in particular for human-robot interaction, in which the robot has to understand the human action and then transfer it to its own embodiment.
  
<html>
<div style="clear:both"><br></div>
</html>
  
{{:research:learnbip.png?nolink&110 |LearnBiP}} [[http://www.echord.info/wikis/website/learnbip.html|LearnBiP]] (EU FP7-ICT ECHORD Experiment, 2011-2012) has two main aims. First, it utilizes the huge amount of data generated in industrial bin-picking to introduce grasp learning. Second, it evaluates the potential of the SCHUNK Dexterous Hand SDH-2 for application in industrial bin-picking.
  
<html>
<div style="clear:both"><br></div>
</html>
  
{{:research:signspeak.png?nolink&110 |SignSpeak}} [[http://www.signspeak.eu/|SignSpeak]] (EU FP7-ICT-STREP, 2009-2012) focused on scientific understanding and vision-based technological development for continuous sign language recognition and translation. The aim was to increase the linguistic understanding of sign languages and to create methods for transcribing sign language into text.
  
<html>
<div style="clear:both"><br></div>
</html>
  
{{:research:logo_trictrac.jpg?nolink&110 |TRICTRAC}} TRICTRAC (2003-2006), directed by J. Piater, aimed to develop algorithms for real-time object tracking in one or more live video streams. It was a joint project between the [[http://www.intelsig.ulg.ac.be|Université de Liège]] and the [[http://www.tele.ucl.ac.be|Université Catholique de Louvain]], funded by the Walloon Region. Some results are summarized in a [[:research:trictrac-video|video]].
  
  
research/projects.txt · Last modified: 2024/02/19 12:24 by Antonio Rodriguez-Sanchez