Interactive, Visuomotor Learning of Grasp Models

We are interested in probabilistic representations and learning methods that allow a robotic agent to infer useful behaviors from perceptual observations. We develop probabilistic models of 3D visual and haptic object properties, along with means of learning these models autonomously through exploration: a robotic agent physically experiences the correlation between successful grasps and local visual appearance by “playing” with an object. Over time, it becomes increasingly efficient at inferring grasp parameters from visual evidence. Our visuomotor object model relies on

  • a grasp model representing the grasp success likelihood of relative hand-object configurations, and
  • a 3D model of visual object structure, which aligns the grasp model to arbitrary object poses (3D positions and orientations).

These models are discussed below.
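
As a rough schematic of how these two parts fit together, the sketch below bundles a set of 3D visual features (used for pose estimation) with a grasp density expressed in the object's own reference frame, and shows how an estimated object pose carries the density into the world frame. The field types and the align_grasp_density helper are illustrative placeholders, not the actual representations used in our system.

```python
import numpy as np
from dataclasses import dataclass

@dataclass
class VisuomotorObjectModel:
    """Sketch of the two-part object model: visual structure used for pose
    alignment, plus a grasp density expressed in the object's own frame.
    Field types are illustrative placeholders, not the actual representations."""
    visual_features: np.ndarray   # (M, 3) 3D feature points used for pose estimation
    grasp_positions: np.ndarray   # (N, 3) gripper positions, object frame
    grasp_rotations: np.ndarray   # (N, 3, 3) gripper orientations, object frame
    grasp_weights: np.ndarray     # (N,) kernel weights of the grasp density

    def align_grasp_density(self, R_obj, t_obj):
        """Carry the grasp density into the world frame, given the object pose
        (rotation R_obj, translation t_obj) recovered by visual alignment."""
        world_pos = self.grasp_positions @ R_obj.T + t_obj
        world_rot = np.einsum('ij,njk->nik', R_obj, self.grasp_rotations)
        return world_pos, world_rot, self.grasp_weights
```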

We work on modeling and learning object grasp affordances, i.e. relative object-gripper poses that yield stable grasps. These affordances are represented probabilistically with grasp densities (Detry et al. 2011): continuous probability density functions defined on the space of 6D gripper poses (3D position and orientation).
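
As an illustration of what such a density looks like computationally, the sketch below represents it as a weighted kernel mixture over poses given as 3D position plus unit quaternion. The Gaussian position kernel, the antipodally symmetric quaternion kernel, and the bandwidth values are simplified stand-ins for the kernels actually used in (Detry et al. 2011).

```python
import numpy as np

class GraspDensity:
    """Kernel-based density over 6D gripper poses (3D position + unit quaternion)."""

    def __init__(self, positions, quaternions, weights, pos_bw=0.02, ori_bw=0.1):
        self.p = np.asarray(positions, float)     # (N, 3) kernel centres, metres
        self.q = np.asarray(quaternions, float)   # (N, 4) unit quaternions
        self.w = np.asarray(weights, float)
        self.w = self.w / self.w.sum()            # normalised kernel weights
        self.pos_bw = pos_bw                      # position bandwidth (m)
        self.ori_bw = ori_bw                      # orientation bandwidth

    def pdf(self, position, quaternion):
        """Unnormalised likelihood of a single gripper pose."""
        d2 = np.sum((self.p - position) ** 2, axis=1)
        pos_k = np.exp(-0.5 * d2 / self.pos_bw ** 2)
        dot = np.abs(self.q @ quaternion)         # |dot|: q and -q are the same rotation
        ori_k = np.exp((dot - 1.0) / self.ori_bw)
        return float(np.sum(self.w * pos_k * ori_k))

    def sample(self, n, rng=None):
        """Draw n approximate pose samples by perturbing random kernel centres."""
        rng = rng or np.random.default_rng()
        idx = rng.choice(len(self.w), size=n, p=self.w)
        pos = self.p[idx] + rng.normal(scale=self.pos_bw, size=(n, 3))
        quat = self.q[idx] + rng.normal(scale=self.ori_bw, size=(n, 4))
        quat /= np.linalg.norm(quat, axis=1, keepdims=True)  # back onto the unit sphere
        return pos, quat
```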

Projection of a 6DOF grasp density on a 2D image. Grasp success likelihood is proportional to the intensity of the green mask.

Grasp densities are linked to visual stimuli through registration with a visual model of the object they characterize, which allows the robot to grasp objects lying in arbitrary poses: to grasp an object, the object's model is first visually aligned to the object's current pose; the aligned grasp density is then combined with reaching constraints to select the maximum-likelihood achievable grasp. Grasp densities are learned and refined through exploration: grasps sampled randomly from a density are executed, and an importance-sampling algorithm learns a refined density from the outcomes of these experiences. Initial grasp densities are computed from the visual model of the object.
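
The selection and refinement steps can be sketched as follows, assuming the density has already been aligned to the observed object pose and is represented as a discrete set of weighted candidate poses. The is_reachable and execute_grasp hooks are hypothetical stand-ins for the robot's planning and execution modules, and the reweighting shown here is a simplified surrogate for the importance-sampling learner of (Detry et al. 2011), which refits a continuous density rather than reweighting a fixed set of kernels.

```python
import numpy as np

# Hypothetical robot-side hooks, not part of the grasp-density model itself:
#   is_reachable(pose) -> bool   encodes the current reaching constraint
#   execute_grasp(pose) -> bool  True if the physical grasp succeeded

def select_grasp(poses, weights, is_reachable):
    """Maximum-likelihood achievable grasp: the highest-weight candidate pose
    (density already aligned to the object) that satisfies the reaching constraint."""
    for i in np.argsort(weights)[::-1]:
        if is_reachable(poses[i]):
            return poses[i]
    return None   # no achievable grasp for this object pose

def refine_density(poses, weights, execute_grasp, n_trials=30, rng=None):
    """One round of exploratory learning as a simplified importance-sampling
    update: draw grasps from the current density, execute them, and reweight
    each candidate by its (smoothed) empirical success rate."""
    rng = rng or np.random.default_rng()
    successes = np.zeros(len(poses))
    attempts = np.zeros(len(poses))
    for _ in range(n_trials):
        i = rng.choice(len(poses), p=weights)      # sample a grasp from the density
        attempts[i] += 1
        successes[i] += float(execute_grasp(poses[i]))
    rate = (successes + 1.0) / (attempts + 2.0)    # Laplace-smoothed success rate
    new_weights = weights * rate
    return new_weights / new_weights.sum()
```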

Combining the visual model described above with the grasp-densities framework yields a largely autonomous visuomotor learning platform. In a recent experiment (Detry et al. 2010), this platform was used to learn and refine grasp densities. The experiment demonstrated that the platform allows a robot to become increasingly efficient at inferring grasp parameters from visual evidence. It also yielded conclusive results in a practical scenario in which the robot repeatedly grasps an object lying in an arbitrary pose; each pose imposes a specific reaching constraint and thus forces the robot to exploit the entire grasp density to select the most promising achievable grasp. This work led to publications in the fields of robotics (Detry et al. 2010) and developmental learning (Detry et al. 2009).


Acknowledgments

This work is supported by the Belgian National Fund for Scientific Research (FNRS) and the EU Cognitive Systems project PACO-PLUS (IST-FP6-IP-027657).
