====== Interactive, Visuomotor Learning of Grasp Models ======
We are working on modeling and learning object grasp affordances, i.e. relative object-gripper poses that yield stable grasps. These affordances are represented probabilistically with grasp densities ([[@/publications#Detry-2011-PJBR|Detry et al. 2011]]), which are continuous density functions defined on the space of 6D gripper poses (3D position and 3D orientation).
{{:research:6dof_grasp_density.png?300|Projection of a 6DOF grasp density onto a 2D image. Grasp success likelihood is proportional to the intensity of the green mask.}}
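To make the notion of a grasp density concrete, here is a toy sketch (not the authors' implementation): the density is a kernel density estimate built from previously observed grasp poses. Real grasp densities live on SE(3); this sketch simplifies poses to 3D positions with an isotropic Gaussian kernel, and the `GraspDensity` class name and bandwidth value are illustrative assumptions.

```python
import numpy as np

class GraspDensity:
    """Toy grasp density: a weighted Gaussian KDE over grasp poses.
    Poses are simplified to 3D positions; a real grasp density is
    defined over full 6D gripper poses (position + orientation)."""

    def __init__(self, grasp_positions, weights=None, bandwidth=0.02):
        self.samples = np.asarray(grasp_positions, dtype=float)  # (N, 3)
        n = len(self.samples)
        self.weights = (np.full(n, 1.0 / n) if weights is None
                        else np.asarray(weights, dtype=float) / np.sum(weights))
        self.h = bandwidth  # kernel bandwidth (meters, illustrative)

    def pdf(self, pose):
        """Evaluate the density at a query pose (grasp success likelihood)."""
        d2 = np.sum((self.samples - np.asarray(pose, dtype=float)) ** 2, axis=1)
        k = np.exp(-0.5 * d2 / self.h ** 2) / ((2 * np.pi) ** 1.5 * self.h ** 3)
        return float(np.dot(self.weights, k))

    def best_grasp(self, candidates):
        """Among candidate poses (e.g. the kinematically reachable ones),
        pick the pose of maximum density."""
        candidates = np.asarray(candidates, dtype=float)
        scores = [self.pdf(c) for c in candidates]
        return candidates[int(np.argmax(scores))]
```

Restricting `best_grasp` to reachable candidates mirrors how reaching constraints are combined with the density to select the maximum-likelihood achievable grasp.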
Grasp densities are linked to visual stimuli through registration with a visual model of the object they characterize, which allows the robot to grasp objects lying in arbitrary poses: to grasp an object, the object's visual model is first aligned to the object's current pose; the aligned grasp density is then combined with reaching constraints to select the maximum-likelihood achievable grasp. Grasp densities are learned and refined through exploration: grasps sampled randomly from a density are executed, and an importance-sampling algorithm learns a refined density from the outcomes of these experiences. Initial grasp densities are computed from the visual model of the object.
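The exploration loop above can be sketched as follows. This is a hedged simplification, not the published algorithm: the density is reduced to a weighted Gaussian mixture over low-dimensional poses, and `try_grasp` is a hypothetical stand-in for executing a grasp on the robot and observing whether it was stable.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_grasps(means, weights, bandwidth, n):
    """Draw n grasp poses from a weighted Gaussian-mixture density."""
    idx = rng.choice(len(means), size=n, p=weights)
    return means[idx] + rng.normal(scale=bandwidth, size=(n, means.shape[1]))

def refine(means, weights, bandwidth, try_grasp, n_trials=100):
    """One refinement round: sample grasps from the current density,
    execute them, and build a new density from the successful outcomes
    (importance weights proportional to observed success)."""
    grasps = sample_grasps(means, weights, bandwidth, n_trials)
    outcomes = np.array([try_grasp(g) for g in grasps], dtype=float)  # 1 = stable
    if outcomes.sum() == 0:
        return means, weights  # no successes: keep the previous density
    new_weights = outcomes / outcomes.sum()
    keep = new_weights > 0
    return grasps[keep], new_weights[keep]
```

Each round concentrates probability mass on poses that actually produced stable grasps, which is the sense in which the density is "refined from the outcomes of these experiences."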
Combining the visual model described above with the grasp-densities framework yields a largely autonomous visuomotor learning platform. In a recent experiment ([[@/publications#Detry-2010-ICRA|Detry et al. 2010]]), this platform was used to learn and refine grasp densities. The experiment demonstrated that the platform allows a robot to become increasingly efficient at inferring grasp parameters from visual evidence. It also yielded conclusive results in practical scenarios where the robot must repeatedly grasp an object lying in an arbitrary pose; each pose imposes a specific reaching constraint, and thus forces the robot to use the entire grasp density to select the most promising achievable grasp. This work led to publications in the fields of robotics ([[@/publications#Detry-2010-ICRA|Detry et al. 2010]]) and developmental learning ([[@/publications#Detry-2009-ICDL|Detry et al. 2009]]).
<video width="560" height="340" controls preload="metadata">
   <source src="/public/research/grasp_densities.ogv" type='video/ogg; codecs="theora, vorbis"'>
   <applet code="com.fluendo.player.Cortado.class" archive="/public/cortado.jar" width="560" height="340">
      <param name="autoPlay" value="false" />
      <param name="url" value="/public/research/grasp_densities.ogv"/>
   </applet>
</video>
==== Acknowledgments ====
This work is supported by the Belgian National Fund for Scientific Research (FNRS) and the EU Cognitive Systems project PACO-PLUS (IST-FP6-IP-027657).
research/grasp-densities.txt · Last modified: 2018/09/03 19:35 (external edit)