Intelligent and Interactive Systems

research

The following list highlights some examples of our current and past work. Follow the links for more information.
  
{{ :research:visualcortex.png?nolink&260|Areas of the visual cortex}} **[[https://iis.uibk.ac.at/public/antonio/Research.html|Computational neuroscience]]** - Because vision tasks (object recognition, motion and stereo analysis) are so hard for computers, many scientists and engineers have turned to neurophysiology for insight into how the human visual system solves them with astonishing efficiency and accuracy. Recent approaches to object recognition are mainly driven by the "edge doctrine" of visual processing pioneered by Hubel and Wiesel's work (which led to their 1981 Nobel Prize in Medicine). Edges - as detected by the responses of simple and complex cells - provide important information about the presence of shapes in visual scenes. We consider their detection only a first step toward generating an interpretation of images. Our work focuses on intermediate-level processing areas, which operate on the initial simple- and complex-cell outputs to form neural representations of scene content that allow robust shape inference and object recognition. \\ See our PLOS ONE publications [[http://dx.doi.org/10.1371/journal.pone.0042058|Rodriguez-Sanchez and Tsotsos (2012)]] and [[http://dx.doi.org/10.1371/journal.pone.0098424|Azzopardi, Rodriguez-Sanchez, Piater and Petkov (2014)]], as well as our [[http://dx.doi.org/10.1109/TPAMI.2012.272|summary PAMI paper]].
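The simple-cell responses mentioned above are commonly modeled with Gabor filters. The sketch below (plain NumPy; filter size, wavelength, and sigma are illustrative values of our choosing, not parameters from our models) shows how such a model cell responds to an oriented edge but not to uniform illumination:

```python
import numpy as np

def gabor_kernel(size=21, wavelength=6.0, theta=0.0, sigma=3.0):
    """Odd-symmetric Gabor kernel: a standard model of a V1 simple cell's
    receptive field, tuned to luminance changes at orientation `theta`."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    # Rotate coordinates into the cell's preferred orientation.
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr ** 2 + yr ** 2) / (2 * sigma ** 2))
    carrier = np.sin(2 * np.pi * xr / wavelength)
    return envelope * carrier

def simple_cell_response(image, theta=0.0):
    """Half-wave rectified response of one model cell centered on the image."""
    k = gabor_kernel(theta=theta)
    return max(0.0, float(np.sum(image * k)))

# A vertical step edge (luminance changes along x) drives the cell;
# a uniform image does not, because the odd-symmetric kernel sums to zero.
edge = np.zeros((21, 21))
edge[:, 11:] = 1.0
```

Intermediate-level models then pool and combine many such responses across position and orientation; this snippet covers only the very first stage.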
  
<html><div style="clear:both"></div><br></html>
<html><div style="clear:both"></div><br></html>
  
<html>
  <div style="border:0;float:right;margin:0 0 0 1em">
     <video width="270" height="180" controls preload="metadata">
       <source src="/public/videos/symbol-formation.ogg" type='video/ogg;codecs="theora, vorbis"'>
       <applet code="com.fluendo.player.Cortado.class" archive="/public/cortado.jar" width="280" height="170">
        <param name="autoPlay" value="false" />
        <param name="url" value="/public/videos/symbol-formation.ogg"/>
       </applet>
     </video> </div>
</html>
**[[https://iis.uibk.ac.at/public/emre/research.html|From Continuous Manipulative Exploration to Symbolic Planning]]** - This work aims at the bottom-up, autonomous development of symbolic planning operators from the continuous interaction experience of a manipulator robot that explores the environment using its action repertoire. In the first stage, the robot explores the environment by executing actions on single objects, forms effect and object categories, and gains the ability to predict object/effect categories from the visual properties of objects by learning the nonlinear and complex relations among them. In the next stage, through further interactions involving stacking actions on pairs of objects, the system learns logical high-level rules that return a stacking-effect category given the categories of the involved objects and the discrete relations between them. These categories and rules are then encoded in PDDL format, enabling symbolic planning. In the third stage, the robot progressively updates the previously learned concepts and rules in order to better deal with novel situations that appear during multi-step plan executions. This way, categories of novel objects can be inferred, or new categories can be formed, based on previously learned rules. Our system further learns probabilistic rules that predict the action effects and the next object states. After learning, the robot was able to build stable towers in the real world, exhibiting interesting reasoning capabilities such as stacking larger objects before smaller ones and predicting that cups remain insertable even with other objects inside. ([[https://iis.uibk.ac.at/public/emre/papers/ICRA2015.pdf|ICRA2015.pdf]], [[https://iis.uibk.ac.at/public/emre/papers/humanoids.pdf|humanoids.pdf]]).
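To make the PDDL-encoding step concrete, here is a minimal sketch of how one learned stacking rule could be written out as a PDDL action. The category and predicate names (`cup`, `box`, `inserted`, `clear`) are illustrative placeholders, not the symbols learned by our system:

```python
def stack_rule_to_pddl(above_cat, below_cat, effect_cat):
    """Render one learned rule -- 'stacking an `above_cat` object on a
    `below_cat` object yields `effect_cat`' -- as a PDDL action string.
    All names here are hypothetical examples."""
    return "\n".join([
        f"(:action stack-{above_cat}-on-{below_cat}",
        "  :parameters (?above ?below)",
        f"  :precondition (and ({above_cat} ?above) ({below_cat} ?below)",
        "                     (clear ?below))",
        f"  :effect (and (on ?above ?below) ({effect_cat} ?above ?below)",
        "               (not (clear ?below))))",
    ])

rule = stack_rule_to_pddl("cup", "box", "inserted")
```

Once every learned rule is emitted this way into a domain file, any off-the-shelf PDDL planner can chain the operators into multi-step plans.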

<html><div style="clear:both"></div><br></html>

<html>
  <div style="border:0;float:right;margin:0 0 0 1em">
     <video width="270" height="180" controls preload="metadata">
       <source src="/public/videos/bootstrapping.ogg" type='video/ogg;codecs="theora, vorbis"'>
       <applet code="com.fluendo.player.Cortado.class" archive="/public/cortado.jar" width="280" height="170">
        <param name="autoPlay" value="false" />
        <param name="url" value="/public/videos/bootstrapping.ogg"/>
       </applet>
     </video> </div>
</html>
**[[https://iis.uibk.ac.at/public/emre/research.html|Bootstrapped learning and Emergent Structuring of interdependent single and multi-object affordances]]** - Inspired by infant development, we propose a learning system for a developmental robot that benefits from bootstrapping: previously learned simple structures (affordances), which encode the robot's interaction dynamics with the world, are used in learning complex affordances ([[https://iis.uibk.ac.at/public/emre/papers/ICDL2014-Bootstrapping.pdf|ICDL2014-Bootstrapping.pdf]]). To discover the developmental order of different affordances, we use an Intrinsic Motivation approach that guides the robot to explore the actions that maximize its learning progress. During this learning, the robot also discovers structure by learning and using the most distinctive object features for predicting affordances. The results show that the hierarchical structure and the developmental order emerge from learning dynamics guided by Intrinsic Motivation mechanisms and the distinctive-feature selection approach ([[https://iis.uibk.ac.at/public/emre/papers/ICDL2014-EmergentStructuring.pdf|ICDL2014-EmergentStructuring.pdf]]).
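The intrinsic-motivation idea of preferring actions with the highest learning progress can be sketched in a few lines. This is a toy heuristic of our own for illustration (windowed drop in prediction error), not the exact mechanism used in the papers:

```python
from collections import deque

class LearningProgressChooser:
    """Toy intrinsic-motivation scheme: prefer the action whose recent
    prediction errors are dropping fastest (highest learning progress)."""

    def __init__(self, actions, window=10):
        self.errors = {a: deque(maxlen=window) for a in actions}

    def record(self, action, prediction_error):
        self.errors[action].append(prediction_error)

    def progress(self, action):
        e = list(self.errors[action])
        if len(e) < 2:
            return float("inf")  # unexplored actions look maximally promising
        half = len(e) // 2
        older = sum(e[:half]) / half
        newer = sum(e[half:]) / (len(e) - half)
        return older - newer  # drop in mean error = learning progress

    def choose(self):
        return max(self.errors, key=self.progress)

# "push" is still being learned (errors falling); "stack" has plateaued.
chooser = LearningProgressChooser(["push", "stack"])
for err in [1.0, 0.9, 0.8, 0.2, 0.1, 0.05]:
    chooser.record("push", err)
for err in [0.5] * 6:
    chooser.record("stack", err)
```

An action whose error has plateaued yields no progress and is abandoned in favor of actions that are still teaching the robot something, which is what produces the developmental ordering described above.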

<html><div style="clear:both"></div><br></html>
  
{{ :research:teney-2013-crv4.jpg?nolink&240|Probabilistic models of object appearance}} **[[:research:appearance-models|Probabilistic models of appearance]] for object recognition and pose estimation in 2D images** - We developed methods to represent the appearance of objects, and associated inference methods to identify them in images of cluttered scenes. The goal is to leverage, to the maximum, the information conveyed by 2D images alone, without resorting to stereo or other 3D sensing techniques. We are also interested in recovering the precise pose (3D orientation) of objects, so as to ultimately use such information in the context of robotic interaction and grasping.
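As a minimal illustration of the probabilistic flavor of this approach (a single orientation feature and toy numbers, far simpler than the actual appearance models), discrete pose hypotheses can be scored by a likelihood over an observed 2D feature and normalized into a posterior:

```python
import math

def pose_posterior(observed_deg, templates, sigma=10.0):
    """Posterior over discrete pose hypotheses given one observed edge
    orientation (degrees). `templates` maps a pose label to its expected
    orientation; a wrapped Gaussian-like likelihood is assumed here."""
    def likelihood(expected):
        d = (observed_deg - expected + 180.0) % 360.0 - 180.0  # wrap to [-180, 180)
        return math.exp(-d * d / (2.0 * sigma ** 2))

    scores = {pose: likelihood(e) for pose, e in templates.items()}
    z = sum(scores.values())
    return {pose: s / z for pose, s in scores.items()}

# Hypothetical templates: a "frontal" view predicts a 0-degree edge,
# a "profile" view predicts 90 degrees; we observe roughly 5 degrees.
posterior = pose_posterior(5.0, {"frontal": 0.0, "profile": 90.0})
```

The real models combine many such features (edge positions, orientations, and appearance statistics) into a joint likelihood, but the inference pattern - score each pose hypothesis, then normalize - is the same.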
research.1406650933.txt.gz · Last modified: 2018/09/03 14:57 (external edit)