Intelligent and Interactive Systems

research

  <div style="border:0;float:right;margin:0 0 0 1em">
     <video width="270" height="180" controls preload="metadata">
       <source src="/public/videos/symbol-formation.ogg" type='video/ogg;codecs="theora, vorbis"'>
        <applet code="com.fluendo.player.Cortado.class" archive="/public/cortado.jar" width="280" height="170">
         <param name="autoPlay" value="false" />
         <param name="url" value="/public/videos/symbol-formation.ogg"/>
        </applet>
     </video> </div>
</html>
**[[https://iis.uibk.ac.at/public/emre/research.html|From Continuous Manipulative Exploration to Symbolic Planning]]** - This work aims for bottom-up and autonomous development of symbolic planning operators from the continuous interaction experience of a manipulator robot that explores the environment using its action repertoire. In the first stage, the robot explores the environment by executing actions on single objects, forms effect and object categories, and gains the ability to predict the object/effect categories from the visual properties of the objects by learning the nonlinear and complex relations among them. In the next stage, with further interactions that involve stacking actions on pairs of objects, the system learns logical high-level rules that return a stacking-effect category given the categories of the involved objects and the discrete relations between them. These categories and rules are then encoded in PDDL format, enabling symbolic planning. In the third stage, the robot progressively updates the previously learned concepts and rules in order to better deal with novel situations that appear during multi-step plan executions. This way, categories of novel objects can be inferred, or new categories can be formed, based on previously learned rules. Our system further learns probabilistic rules that predict the action effects and the next object states. After learning, the robot was able to build stable towers in the real world, exhibiting some interesting reasoning capabilities such as stacking larger objects before smaller ones, and predicting that cups remain insertable even with other objects inside. ([[https://iis.uibk.ac.at/public/emre/papers/ICRA2015.pdf|ICRA2015.pdf]], [[https://iis.uibk.ac.at/public/emre/papers/humanoids.pdf|humanoids.pdf]])
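The encoding step above, turning a learned rule (object categories plus a predicted stacking-effect category) into a PDDL operator, can be sketched as follows. This is a minimal illustration, not the authors' implementation; the action name and the predicates (`cube`, `stacked`, `clear`, `holding`) are hypothetical placeholders standing in for the learned categories and relations.

```python
def rule_to_pddl(name, below_cat, above_cat, effect_cat):
    """Render one learned stacking rule as a PDDL action string.

    A rule maps the categories of the two involved objects
    (below_cat, above_cat) to a discrete stacking-effect
    category (effect_cat), e.g. 'stacked' vs 'tumbled'.
    """
    return "\n".join([
        f"(:action {name}",
        "  :parameters (?below ?above)",
        # Preconditions: both objects match their learned categories,
        # the base is clear, and the robot holds the top object.
        f"  :precondition (and ({below_cat} ?below) ({above_cat} ?above)",
        "                     (clear ?below) (holding ?above))",
        # Effect: the predicted stacking-effect category holds.
        f"  :effect (and ({effect_cat} ?above ?below)",
        "               (not (holding ?above)))",
        ")",
    ])

print(rule_to_pddl("stack-cube-on-cube", "cube", "cube", "stacked"))
```

A planner given a set of such operators, plus the inferred categories of the objects in the scene, can then search for a multi-step stacking plan symbolically.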
  
 <html><div style="clear:both"></div><br></html>
        <applet code="com.fluendo.player.Cortado.class" archive="/public/cortado.jar" width="280" height="170">
         <param name="autoPlay" value="false" />
         <param name="url" value="/public/videos/bootstrapping.ogg"/>
        </applet>
     </video> </div>
research.1465192900.txt.gz · Last modified: 2016/06/06 08:01 by Emre Ugur