Computational Models of Affordances in Robotics

Abstract: J. J. Gibson’s concept of affordance, one of the central pillars of ecological psychology, is a truly remarkable idea that provides a concise theory of animal perception predicated on environmental interaction. It is thus not surprising that this idea has also found its way into robotics research as one of the underlying theories for action perception. The success of the theory in this regard has meant that existing research is both abundant and diffuse by virtue of the pursuit of multiple different paths and techniques with the common goal of enabling robots to learn, perceive, and act upon affordances. Up until now, there has existed no systematic investigation of existing work in this field. Motivated by this circumstance, in this article, we begin by defining a taxonomy for computational models of affordances rooted in a comprehensive analysis of the most prominent theoretical ideas of import in the field. Subsequently, after performing a systematic literature review, we provide a classification of existing research within our proposed taxonomy. Finally, by both quantitatively and qualitatively assessing the data resulting from the classification process, we highlight gaps in the research terrain and outline open questions for the investigation of affordances in robotics that we believe will help inform future work, prioritize research goals, and potentially advance the field toward greater robot autonomy.

Original publication: Sage (+errata)

Author's Copy: Link

Download the data as MS Excel file (50 KB): Download»
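Once downloaded, the classification data can be queried programmatically. Below is a minimal, stdlib-only sketch of such a query; the column names and the three sample rows are copied from the table on this page, standing in for the full dataset (loading the actual Excel file, e.g. after exporting it to CSV with `csv.DictReader`, is omitted):

```python
# Tiny inline sample of the survey's classification table (three rows).
# In practice these dicts would be read from the downloaded data file.
entries = [
    {"authors": "Akgun et al.", "year": 2009, "acquisition": "exploration",
     "training": "unsupervised", "evaluation": "real robot"},
    {"authors": "Chen et al.", "year": 2015, "acquisition": "ground truth",
     "training": "supervised", "evaluation": "simulation"},
    {"authors": "Detry et al.", "year": 2011, "acquisition": "exploration",
     "training": "self-supervised", "evaluation": "real robot"},
]

# Example query: studies that acquire affordances through exploration
# and are evaluated on a real robot.
hits = [e for e in entries
        if e["acquisition"] == "exploration" and e["evaluation"] == "real robot"]
print([(e["authors"], e["year"]) for e in hits])
```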


Title | Authors | Year | Perception Perspective | Level | Order | Temporality | Selective Attention | Abstraction | Competitive | Chaining | Acquisition | Prediction | Generalization | Exploitation | Learning Kind | Affordance | Abstraction | Brain Areas | Method | Features | Training | Evaluation | Data added
A model-based approach to finding substitute tools in 3D vision data | Abelha et al. | 2016 | agent | local | 0th | stable | no | micro | yes | no | ground truth | Optimization | yes | action selection | offline | tool-use | mathematical | | geometric part fitting | point clouds, superquadrics | supervised | benchmark | original study
Unsupervised learning of affordance relations on a humanoid robot | Akgun et al. | 2009 | agent | meso | 1st | stable | no | micro | no | no | exploration | Classification | yes | action selection | offline | rollability | mathematical | not specified | SOM, SVM | shape, size | unsupervised | real robot | original study
Supervised learning of hidden and non-hidden 0-order affordances and detection in real scenes | Aldoma et al. | 2012 | agent | global | 1st | stable | no | micro | no | no | ground truth | Classification | yes | not specified | offline | general | mathematical | not specified | SVM, Boost, RF | SEE, SHOT, NDS, SI, PFH | supervised | benchmark | original study
From Human Instructions to Robot Actions: Formulation of Goals, Affordances and Probabilistic Planning | Antunes et al. | 2016 | agent | global | 2nd | stable+variable | no | micro+macro | yes | yes | exploration | Inference | yes | planning | online | pulling, dragging, grasping | mathematical | | BN, PRAXICON Semantic Network, PRADA Planner | 2D geom. feat., 2D tracked object displacement | not specified | real robot | original study
On Exploiting Haptic Cues for Self-Supervised Learning of Depth-Based Robot Navigation Affordances | Baleia et al. | 2015 | agent | global | 1st | stable | no | micro | no | no | exploration | Inference | yes | planning | online | traversability | mathematical | | histograms, clustering, similarity metrics | depth, haptic | self-supervised | real robot | original study
Self-supervised learning of depth-based navigation affordances from haptic cues | Baleia et al. | 2014 | agent | global | 1st | stable | no | micro | no | no | exploration | Inference | yes | planning | online | traversability | mathematical | | histograms, clustering, similarity metrics | depth, haptic | self-supervised | real robot | original study
Learning grasping affordance using probabilistic and ontological approaches | Barck-Holst et al. | 2009 | agent | meso | 1st | stable | no | micro | no | no | exploration | Inference | yes | planning | online | grasping | mathematical | | voting function, ontological rule-engine | shape, size, grasp region, force | self-supervised | simulation | original study
Grasp affordances from multi-fingered tactile exploration using dynamic potential fields | Bierbaum et al. | 2009 | agent | local | 1st | stable | no | micro | no | no | exploration | Regression | yes | planning | offline | grasping | mathematical | | Potential Fields | planar faces of object | self-supervised | simulation | original study
Behavioural plasticity in evolving robots | Carvalho and Nolfi | 2016 | agent | global | 1st | stable | no | micro | no | no | exploration | Regression | yes | action selection | online | traversability | mathematical | | NN | depth, haptic | self-supervised | simulation | original study
Using Object Affordances to Improve Object Recognition | Castellini et al. | 2011 | agent | meso | 1st | stable | no | micro | no | no | demonstration | Regression | yes | action selection | offline | grasping | mathematical | | MLP, SVM, k-means, histograms | SIFT BoW, contact joints | supervised | benchmark | original study
A Probabilistic Concept Web on a Humanoid Robot | Celikkanat et al. | 2015 | agent | meso | 1st | stable | no | micro | no | no | exploration | Optimization | no | language | offline | pushing, grasping, throwing, shaking | mathematical | | MRF, Loopy belief propagation | depth, haptic, proprioceptive and audio | semi-supervised | real robot | original study
Determining proper grasp configurations for handovers through observation of object movement patterns and inter-object interactions during usage | Chan et al. | 2014 | agent | meso | 1st | stable | no | micro | yes | no | demonstration | Optimization | yes | action selection | online | grasping | mathematical | | k-means, nearest neighbor | pose, action-object relation | unsupervised | real robot | original study
A Bio-Inspired Robot with Visual Perception of Affordances | Chang | 2015 | agent | meso | 1st | stable | no | micro | no | no | ground truth | Classification | yes | action selection | offline | cutting, painting | neural | AL, MB | ANN | edges, TSSC | supervised | real robot | original study
DeepDriving: Learning Affordance for Direct Perception in Autonomous Driving | Chen et al. | 2015 | agent | global | 1st | stable+variable | no | micro | no | no | ground truth | Regression | yes | planning | offline | traversability | mathematical | | CNN | RGB images, motor controls | supervised | simulation | original study
Learning haptic affordances from demonstration and human-guided exploration | Chu et al. | 2016 | agent | meso | 1st | stable | no | micro | no | no | demonstration+exploration | Classification | yes | action selection | offline | openable, scoopable | mathematical | | HMM | forces and torques | supervised | real robot | original study
Learning Object Affordances by Leveraging the Combination of Human-Guidance and Self-Exploration | Chu et al. | 2016 | observer | meso | 1st | stable | no | micro | no | no | demonstration+exploration | Optimization | no | action selection | offline | pushing, opening, turning | mathematical | | HMM | color, size, pose, force torque, robot arm pose | self-supervised | real robot | original study
Learning Affordances of Consummatory Behaviors: Motivation-Driven Adaptive Perception | Cos et al. | 2010 | agent | meso | 1st | stable | no | micro | no | no | exploration | Classification | yes | planning | online | general | mathematical | | GWRN+RL | illumination | supervised | simulation | original study
Training Agents with Interactive Reinforcement Learning and Contextual Affordances | Cruz et al. | 2016 | agent | meso | 1st | stable+variable | no | micro | no | no | ground truth | Classification | no | action selection | offline | manipulation, locomotion | mathematical | | DMLP | agent state, action, object | unsupervised | simulation | original study
Interactive reinforcement learning through speech guidance in a domestic scenario | Cruz et al. | 2015 | agent | meso | 1st | stable | no | micro | no | no | ground truth | Classification | no | action selection | offline | graspable, dropable, moveable, cleanable | mathematical | | DMLP | robot state, intended action, object information | supervised | simulation | original study
A Cognitive Control Architecture for the Perception–Action Cycle in Robots and Agents | Cutsuridis and Taylor | 2013 | agent | meso | 1st | stable | yes | micro | yes | no | exploration | Inference | yes | action selection | online | grasping | neural | AIP, DVv | ANN | shape | unsupervised | real robot | original study
Learning Affordances for Categorizing Objects and Their Properties | Dag et al. | 2010 | observer | meso | 1st | stable | no | micro | no | no | demonstration | Classification | no | single-/multi-step prediction | offline | manipulation | mathematical | | SVM, k-means, spectral clustering | 3D position, orientation, shape, size | unsupervised | benchmark | original study
Semantic grasping: planning task-specific stable robotic grasps | Dang and Allen | 2014 | agent | local | 1st | stable | no | micro | yes | no | exploration | Optimization | yes | action selection | offline | grasping | mathematical | | Nearest Neighbor | grasp, shape context | supervised | real robot | original study
Denoising Auto-encoders for Learning of Objects and Tools Affordances in Continuous Space | Dehban et al. | 2016 | agent | meso | 1st | stable | no | micro | no | no | exploration | Inference | yes | action selection | online | pulling, dragging | mathematical | | Denoising auto-encoders | 2D shape, object displacement | unsupervised | simulation+real robot | original study
Predicting Functional Regions on Objects | Desai and Ramanan | 2013 | agent | global | 0th | stable | no | micro | no | no | ground truth | Optimization | yes | not specified | offline | grasping, support | mathematical | | Deformable Part Models | HOG | supervised | benchmark | original study
Learning object-specific grasp affordance densities | Detry et al. | 2009 | agent | meso | 1st | stable | no | micro | no | no | exploration | Optimization | no | planning | online | grasping | mathematical | not specified | KDE, Sampling | ECV | self-supervised | real robot | original study
Learning grasp affordance densities | Detry et al. | 2011 | agent | meso | 1st | stable | no | micro | no | no | exploration | Optimization | no | planning | online | grasping | mathematical | not specified | KDE, Sampling | ECV | self-supervised | real robot | original study
Refining Grasp Affordance Models by Experience | Detry et al. | 2010 | agent | meso | 1st | stable | no | micro | no | no | exploration | Optimization | no | action selection | online | grasping | mathematical | | KDE, Sampling | ECV | self-supervised | real robot | original study
From primitive behaviors to goal-directed behavior using affordances | Dogar et al. | 2007 | agent | meso | 2nd | stable | yes | micro | yes | yes | exploration | Classification | yes | action selection | offline | traversability | mathematical | not specified | k-Means, SVM | shape, distance | unsupervised | real robot | original study
Ecological Robotics | Duchon et al. | 1998 | agent | global | 1st | variable | no | micro | no | no | exploration | Optimization | yes | planning | online | locomotion, survival | mathematical | | law of control | optical flow | not specified | simulation+real robot | original study
Predicting the Intention of Human Activities for Real-Time Human-Robot Interaction (HRI) | Dutta and Zielinski | 2016 | agent | meso | 1st | stable | no | micro | yes | no | ground truth | Optimization | yes | action selection | offline | reachable, pourable, movable, drinkable | mathematical | | Heat maps | angular, location + dist. to object, semantic labels | supervised | simulation | original study
Discrete fuzzy grasp affordance for robotic manipulators | Eizicovits et al. | 2012 | agent | meso | 1st | stable | no | micro | yes | no | demonstration | Optimization | no | planning | offline | grasping | mathematical | | Affordance manifolds | wrist location, roll angle | supervised | real robot | original study
Learning structural affordances through self-exploration | Erdemir et al. | 2012 | agent | global | 1st | stable | no | micro | no | no | exploration | Classification | yes | planning | offline | crawling | mathematical | | SOM, k-means, LVQ | fixation point, motor values | semi-supervised | real robot | original study
A robot rehearses internally and learns an affordance relation | Erdemir et al. | 2008 | agent | meso | 1st | stable | no | micro | no | no | exploration | Regression | no | planning | offline | traversability | mathematical | | GMM | object edges | self-supervised | simulation+real robot | original study
Learning probabilistic discriminative models of grasp affordances under limited supervision | Erkan et al. | 2010 | agent | meso | 1st | stable | no | micro | yes | no | ground truth | Optimization | yes | action selection | online | grasping | mathematical | | Kernel Logistic Regression | ECV | semi-supervised | real robot | original study
Bootstrapping Relational Affordances of Object Pairs using Transfer | Fichtl et al. | 2016 | agent | meso | 1st | stable | no | micro | yes | no | exploration | Classification | yes | single-/multi-step prediction | online | rake, pull/sh, move, lift, take, pour, slide | mathematical | | Random Forests | pose, size; relational hist. feat./PCA on PCL | semi-supervised | simulation | original study
Learning About Objects Through Action - Initial Steps Towards Artificial Cognition | Fitzpatrick et al. | 2003 | agent | meso | 1st | stable | no | micro | no | no | exploration | Optimization | no | action selection | offline | general | neural | F5/AIP | Histogram | shape, identity | unsupervised | real robot | original study
Neural Model for the Visual Recognition of Goal-Directed Movements | Fleischer et al. | 2008 | agent | meso | 1st | stable | no | micro | yes | no | ground truth | Classification | yes | action selection | offline | grasping | neural | AIP | NN, k-means | orientation, object + hand shape, saliency of feat. | unsupervised | benchmark | original study
Learning Predictive Features in Affordance based Robotic Perception Systems | Fritz et al. | 2006 | agent | meso | 1st | stable | no | micro | no | no | ground truth | Optimization | yes | action selection | offline | lifting | mathematical | | k-means, MAP, decision tree | SIFT, color, mass-center, shape descr., actuator | supervised | simulation | original study
Visual Learning of Affordance Based Cues | Fritz et al. | 2006 | agent | meso | 1st | stable | no | micro | no | no | ground truth | Classification | no | action selection | offline | lifting | mathematical | | Nearest Neighbor, C4.5 Decision tree | SIFT | self-supervised | simulation | original study
Synergy-based affordance learning for robotic grasping | Geng et al. | 2013 | agent | meso | 2nd | stable | no | micro | yes | no | demonstration+exploration | Classification | yes | planning | offline | grasping | neural | VIP, CIPS, 7a, 7b, AIP | Growing Neural Gas (GNG) | not specified | unsupervised | real robot | original study
Object recognition using visuo-affordance maps | Gijsberts et al. | 2010 | agent | meso | 1st | stable | no | micro | no | no | ground truth | Regression | yes | single-/multi-step prediction | offline | grasping | mathematical | | Regularized Least Squares | SIFT | supervised | benchmark | original study
Towards Lifelong Affordance Learning using a Distributed Markov Model | Glover and Wyeth | 2016 | agent | meso | 1st | stable | no | micro | no | no | exploration | Inference | yes | action selection | online | grasping | mathematical | | Distributed Markov Model | object pose, tactile readings | unsupervised | real robot | original study
Learning visual affordances of objects and tools through autonomous robot exploration | Gonçalves et al. | 2014 | agent | meso | 1st | stable | no | micro | no | no | exploration | Inference | yes | action selection | offline | pulling, dragging | mathematical | | BN | 2D geom. feat., 2D tracked object displacement | self-supervised | simulation | original study
Learning intermediate object affordances: Towards the development of a tool concept | Gonçalves et al. | 2014 | agent | meso | 1st | stable | no | micro | no | no | exploration | Inference | yes | action selection | offline | pulling, dragging | mathematical | | BN, PCA, BN structure learning | 2D geom. feat., 2D tracked object displacement | self-supervised | simulation+real robot | original study
A Behavior-Grounded Approach to Forming Object Categories: Separating Containers From Noncontainers | Griffith et al. | 2012 | agent | meso | 1st | stable | no | micro | no | no | exploration | Classification | yes | action selection | offline | drop, move, grasping, shake | mathematical | | SOM, Spectral clustering, PCA and k-NN | auditory and visual feature trajectories, depth | unsupervised | real robot | original study
Affordance in Autonomous Robot | Hakura et al. | 1996 | agent | meso | 1st | stable | no | micro | no | no | exploration | Classification | yes | action selection | online | traversability | mathematical | | ART, RL | pulse sensor readings | semi-supervised | simulation | original study
Intrinsically Motivated Affordance Discovery and Modeling | Hart and Grupen | 2013 | agent | meso | 1st | stable | no | micro | yes | no | exploration | Optimization | yes | action selection | online | grasping | mathematical | | RL | hue, shape, pose of object | unsupervised | real robot | original study
Attribute Based Affordance Detection from Human-Object Interaction Images | Hassan and Dharmaratne | 2016 | observer | meso | 1st | stable | no | micro | no | no | demonstration | Classification | yes | action selection | offline | general | mathematical | | BN, SVM | SIFT, HOG, textons, color hist., object attributes | supervised | benchmark | original study
Affordance-Based Grasp Planning for Anthropomorphic Hands from Human Demonstration | Hendrich and Bernardino | 2014 | agent | meso | 1st | stable | no | micro | no | no | demonstration | Regression | yes | action selection | offline | grasping | mathematical | | PCA, FK, IK | shape, size | supervised | real robot | original study
Decoupling behavior, perception, and control for autonomous learning of affordances | Hermans et al. | 2013 | agent | meso | 1st | stable | no | micro | no | no | exploration | Regression | yes | action selection | online | pushing, pulling | mathematical | | Feedback control | pose, depth | supervised | real robot | original study
Learning Contact Locations for Pushing and Orienting Unknown Objects | Hermans et al. | 2013 | agent | meso | 1st | stable | no | micro | yes | no | exploration | Regression | yes | action selection | offline | pushing | mathematical | | SVR | Histogram of points (in pose space) | semi-supervised | real robot | original study
Hallucinated Humans as the Hidden Context for Labeling 3D Scenes | Jiang et al. | 2013 | agent | meso | 1st | stable | no | micro | no | no | demonstration | Optimization | yes | not specified | offline | general | mathematical | | Infinite Factored Topic Models (DPPM) | human pose, object pose | supervised | benchmark | original study
Extracting whole-body affordances from multimodal exploration | Kaiser et al. | 2014 | agent | global | 1st | stable | no | micro | no | no | hardcoded | Inference | yes | action selection | offline | support, lean, grasping, hold | mathematical | | Reasoning | surface characteristics | supervised | real robot | original study
Validation of Whole-Body Loco-Manipulation Affordances for Pushability and Liftability | Kaiser et al. | 2015 | agent | global | 2nd | stable | no | micro | no | no | hardcoded | Inference | no | action selection | not specified | pushing, lifting | mathematical | | RANSAC, clustering | surface normals, area | not specified | real robot | original study
Representation and extraction of image feature associated with maneuvering affordance | Kamejima | 2002 | agent | global | 1st | stable | no | micro | no | no | exploration | Regression | yes | planning | online | maneuverability | mathematical | | Directional Fourier Imaging, Self similarity | scene image | unsupervised | real robot | original study
Anticipative generation and in-situ adaptation of maneuvering affordance in naturally complex scene | Kamejima | 2008 | agent | global | 1st | stable+variable | no | micro | no | no | exploration | Classification | yes | planning | online | maneuverability | mathematical | | Fractal Coding | scene image | unsupervised | real robot | original study
Perceiving, learning, and exploiting object affordances for autonomous pile manipulation | Katz et al. | 2014 | agent | local | 0th | stable | no | macro | yes | yes | ground truth | Classification | yes | action selection | offline | pushing, pulling, grasping | mathematical | | SVM, PCA, Mean shift | PCA axes, size, center of gravity | supervised | real robot | original study
Semantic Labeling of 3D Point Clouds with Object Affordance for Robot Manipulation | Kim et al. | 2014 | agent | local | 0th | stable | no | micro | no | no | ground truth | Regression | yes | not specified | offline | pushing, lifting, grasping | mathematical | | Logistic regression, k-means | geometric features | supervised | benchmark | original study
Interactive Affordance Map Building for a Robotic Task | Kim et al. | 2015 | agent | local | 2nd | variable | no | micro | no | no | ground truth | Optimization | yes | single-/multi-step prediction | offline | pushing | mathematical | | Logistic regression, MRF | geometric features | supervised | simulation | original study
Traversability classification using unsupervised on-line visual learning for outdoor robot navigation | Kim et al. | 2006 | agent | global | 1st | stable | no | macro | no | no | exploration | Classification | yes | planning | online | traversability | mathematical | | Clustering, Classification | 3D pixel information, texture | self-supervised | real robot | original study
Visual object-action recognition: Inferring object affordances from human demonstration | Kjellström et al. | 2011 | agent | meso | 0th | stable | no | micro | no | no | demonstration | Classification | no | not specified | offline | open, pour, hammer | mathematical | | Factorial CRF | spatial pyramids of HoG | supervised | benchmark | original study
Physically Grounded Spatio-temporal Object Affordances | Koppula et al. | 2014 | observer | meso | 2nd | stable | no | macro | yes | yes | ground truth | Optimization | yes | action selection | offline | general | mathematical | | Graphical model, GPR | human pose, feat. w.r.t. skeleton joints / objects | supervised | benchmark | original study
Learning human activities and object affordances from RGB-D videos | Koppula et al. | 2013 | observer | meso | 2nd | stable | yes | micro | yes | yes | ground truth | Classification | yes | action selection | online | general | mathematical | not specified | SVM (MRF, kNN, Particle Filter) | BB, centroid, SIFT | supervised | benchmark | original study
Collision risk assessment for autonomous robots by offline traversability learning | Kostavelis et al. | 2012 | agent | global | 1st | stable | no | micro | no | no | ground truth | Classification | yes | single-/multi-step prediction | offline | traversability | mathematical | | SVM | disparity maps, hist. of pixel distribution | supervised | real robot | original study
A kernel-based approach to direct action perception | Kroemer et al. | 2012 | agent | local | 1st | stable | no | micro | yes | no | demonstration | Regression | yes | action selection | offline | pouring, grasping | mathematical | not specified | non-parametric surface kernel, kernel logistic regression, DMP | pointclouds | supervised | real robot | original study
A Flexible Hybrid Framework for Modeling Complex Manipulation Tasks | Kroemer et al. | 2011 | observer | meso | 1st | stable | no | macro | yes | yes | hardcoded | Optimization | no | planning | offline | grasping, pushing, striking | mathematical | | RL | pose | supervised | real robot | original study
A perceptual system for vision-based evolutionary robotics | Kubota et al. | 2003 | agent | global | 1st | variable | no | micro | no | no | exploration | Optimization | yes | action selection | online | traversability | mathematical | | SSGA, clustering | optical flow | unsupervised | real robot | original study
Goal-oriented Dependable Action Selection using Probabilistic Affordance | Lee et al. | 2010 | agent | meso | 1st | stable | no | micro | yes | yes | ground truth | Classification | yes | action selection | offline | general | mathematical | | multilayer naive Bayesian classifier | not specified | supervised | real robot | original study
Skill Learning and Inference Framework for Skilligent Robot | Lee et al. | 2013 | agent | meso | 1st | stable | no | micro+macro | yes | yes | demonstration | Optimization | no | action selection | offline | general | mathematical | | BN, DMP | trajectories (joints and end-effectors) | supervised | real robot | original study
Foot Placement Selection Using Non-geometric Visual Properties | Lewis et al. | 2005 | agent | global | 1st | stable | no | micro | yes | no | exploration | Classification | yes | planning | online | locomotion | mathematical | | NN | color, texture | supervised | simulation+real robot | original study
Affordance-based imitation learning in robots | Lopes et al. | 2007 | agent | meso | 1st | stable | no | micro | yes | no | demonstration+exploration | Optimization | yes | action selection | offline | grasping, tapping, touching | mathematical | | BN, RL | shape, color, scale | semi-supervised | real robot | original study
Responding to affordances: Learning and Projecting a Sensorimotor Mapping | MacDorman | 2000 | agent | meso | 1st | stable | no | micro | no | yes | exploration | Classification | no | planning | online | navigation | mathematical | | Partition Nets | color | self-supervised | simulation | original study
Multi-model approach based on 3D functional features for tool affordance learning in robotics | Mar et al. | 2015 | agent | meso | 1st | stable+variable | no | micro | yes | no | exploration | Regression | yes | not specified | offline | pulling/dragging | mathematical | not specified | SOM, k-means, GRNN | OMS-EGI (3D) | unsupervised | real robot | original study
Self-supervised learning of grasp dependent tool affordances on the iCub Humanoid robot | Mar et al. | 2015 | agent | meso | 1st | stable+variable | no | micro | no | no | exploration | Classification | yes | single-/multi-step prediction | offline | pulling, dragging | mathematical | not specified | SVM, K-means | 2D geometrical features | self-supervised | simulation+real robot | original study
Extending sensorimotor contingency theory: prediction, planning, and action generation | Maye & Engel | 2013 | agent | global | 1st | stable+variable | no | micro | no | no | exploration | Inference | yes | single-/multi-step prediction | online | traversability | mathematical | | SMC, Markov models | not specified | unsupervised | real robot | original study
Better Vision through Manipulation | Metta & Fitzpatrick | 2003 | agent | meso | 1st | stable | no | micro | no | no | exploration | Classification | no | action selection | online | rollability | neural | AIP-F5 | Clustering | color | unsupervised | real robot | original study
Affordance Learning Based on Subtask's Optimal Strategy | Min et al. | 2015 | agent | meso | 1st | stable+variable | no | micro | no | no | exploration | Inference | no | action selection | online | locomotion | mathematical | | HRL | shape | supervised | simulation+real robot | original study
The initial development of object knowledge by a learning robot | Modayil et al. | 2008 | agent | meso | 1st | stable | yes | micro | no | no | exploration | Optimization | yes | planning | online | manipulability | mathematical | | clustering, utility functions | shape | unsupervised | real robot | original study
From object-action to property-action: Learning causally dominant properties through cumulative explorative interactions | Mohan et al. | 2014 | agent | meso | 1st | stable | no | micro | no | no | exploration | Classification | yes | planning | online | reach, grasp, push, search | mathematical | | SOMs | size, color, shape, world map | unsupervised | real robot | original study
Learning relational affordance models for robots in multi-object manipulation tasks | Moldovan et al. | 2012 | agent | global | 2nd | stable | no | micro | yes | yes | ground truth | Inference | no | single-/multi-step prediction | offline | general | mathematical | not specified | Statistical Relational Learning | not specified | unsupervised | real robot | original study
Occluded Object Search by Relational Affordances | Moldovan et al. | 2014 | agent | meso | 0th | stable | no | micro | yes | no | hardcoded | Optimization | no | action selection | offline | general | mathematical | | BN | geometric properties | supervised | simulation | original study
Learning Grasping Affordances From Local Visual Descriptors | Montesano and Lopes | 2009 | agent | local | 1st | stable | no | micro | no | no | exploration | Inference | yes | action selection | offline | grasping | mathematical | not specified | Bayes | Gaussian, Sobel, Laplacian Filters | unsupervised | real robot | original study
Modelling Affordances Using Bayesian Networks | Montesano et al. | 2007 | agent | meso | 1st | stable | no | micro | no | no | exploration | Inference | no | action selection | offline | general | mathematical | not specified | BN, MCMC | color, shape, size, position; robot gripper pose | unsupervised | real robot | original study
Learning Object Affordances: From Sensory-Motor Coordination to Imitation | Montesano et al. | 2008 | agent | meso | 2nd | stable | no | micro | yes | no | exploration | Inference | yes | single-/multi-step prediction | offline | general | mathematical | not specified | BN, MCMC | convexity, compactness, circleness, squareness | unsupervised | real robot | original study
Affordances, development and imitation | Montesano et al. | 2007 | agent | meso | 1st | stable | no | micro | yes | no | demonstration | Optimization | no | action selection | offline | grasping, tapping | mathematical | | BN | color, shape, size | supervised | real robot | original study
Case Studies of Applying Gibson’s Ecological Approach to Mobile Robots | Murphy | 1999 | agent | meso | 1st | stable | no | micro | no | no | hardcoded | Classification | no | action selection | offline | docking, path following, picking | mathematical | | Hard-coded perceptual affordance detectors | HC perceptual affordance detectors | not specified | real robot | original study
Affordance Estimation For Vision-Based Object Replacement on a Humanoid Robot | Mustafa et al. | 2016 | agent | meso | 2nd | stable | no | micro | yes | no | ground truth | Classification | yes | action selection | offline | general | mathematical | not specified | JointSVM | 3D texlets | unsupervised | real robot | original study
Affordance Detection of Tool Parts from Geometric Features | Myers et al. | 2015 | agent | local | 0th | stable | no | micro | no | no | ground truth | Inference | yes | action selection | offline | general | mathematical | not specified | SRF | Depth, SNorm, PCurv, SI+CV | unsupervised | benchmark | original study
Structural Feature Extraction based on Active Sensing Experiences | Nishide et al. | 2008 | agent | meso | 1st | stable | no | micro+macro | no | no | exploration | Regression | no | single-/multi-step prediction | offline | pushing | mathematical | | RNNPB, hierarchical NN | shape, motion | supervised | real robot | original study
Active Sensing based Dynamical Object Feature Extraction | Nishide et al. | 2008 | agent | meso | 1st | stable | no | micro+macro | no | no | exploration | Regression | no | single-/multi-step prediction | offline | pushing | mathematical | | RNNPB, hierarchical NN | shape, motion | supervised | real robot | original study
Modeling Tool-Body Assimilation using Second-order Recurrent Neural Network | Nishide et al. | 2009 | agent | global | 1st | stable | no | micro+macro | no | yes | exploration | Regression | yes | single-/multi-step prediction | offline | pulling, dragging | mathematical | | SOM, Multiple time-scales RNN | SOM object feature from image | supervised | real robot | original study
Tool–Body Assimilation of Humanoid Robot Using a Neurodynamical System | Nishide et al. | 2012 | agent | meso | 1st | stable | no | micro | no | no | exploration | Regression | yes | action selection | offline | manipulability | mathematical | | SOM, MTRNN, HNN | SOM output | semi-supervised | real robot | original study
Generation of behavior automaton on neural network | Ogata et al. | 1997 | agent | global | 1st | stable | no | micro | no | no | exploration | Classification | yes | planning | online | traversability | mathematical | | SOM, Temporal Sequence Network | SOM output | semi-supervised | simulation | original study
Symbol Generation and Feature Selection for Reinforcement Learning Agents Using Affordances and U-Trees | Oladell et al. | 2012 | agent | meso | 1st | stable | no | micro | yes | no | hardcoded | Optimization | no | action selection | offline | lifting, dropping, stacking | mathematical | | MDP | location, shape, color | supervised | simulation | original study
Autonomous acquisition of pushing actions to support object grasping with a humanoid robot | Omrcen et al. | 2009 | agent | meso | 1st | stable | not specified | micro+macro | yes | no | exploration | Optimization | yes | single-/multi-step prediction | offline | grasping, pushing | mathematical | | NN | object image | supervised | real robot | original study
Reinforcement Learning of Predictive Features in Affordance Perception | Paletta & Fritz | 2008 | agent | meso | 1st | stable+variable | no | micro | no | no | exploration | Classification | yes | planning | online | liftability | mathematical | | Q-Learning, k-means | SIFT | supervised | simulation | original study
Perception and Developmental Learning of Affordances in Autonomous Robots | Paletta et al. | 2007 | agent | local | 1st | stable | yes | micro | no | no | ground truth | Optimization | no | action selection | offline | lifting | mathematical | | MDP | SIFT, color, shape | supervised | real robot | original study
Affordance-feasible planning with manipulator wrench spaces | Price et al. | 2016 | agent | meso | 1st | stable | no | micro | no | yes | exploration | Classification | yes | planning | offline | grasping | mathematical | | BN | wrenches | not specified | simulation+real robot | original study
Bio-inspired Model of Robot Adaptive Learning and Mapping | Ramirez & Widel | 2006 | agent | global | 1st | stable | no | micro | yes | no | exploration | Classification | yes | action selection | online | traversability | neural | hippocampus | RL, Hebbian Learning | color | supervised | real robot | original study
Increasing the Autonomy of Mobile Robots by On-line Learning Simultaneously at Different Levels of Abstraction | Richert et al. | 2008 | agent | meso | 1st | stable | no | micro | yes | no | exploration | Classification | yes | action selection | online | traversability | mathematical | | RL, Decision Trees | color, distance, angle | supervised | simulation | original study
Action-grounded push affordance bootstrapping of unknown objects | Ridge and Ude | 2013 | agent | meso | 1st | stable | no | micro | no | no | exploration | Classification | yes | single-/multi-step prediction | online | pushing | mathematical | not specified | SOM, LVQ, Hebbian learning, K-means | action-grounded 3D shape | self-supervised | real robot | original study
Self-supervised cross-modal online learning of basic object affordances for developmental robotic systems | Ridge et al. | 2010 | agent | meso | 1st | stable | no | micro | no | no | exploration | Classification | yes | single-/multi-step prediction | online | pushing | mathematical | not specified | SOM, LVQ, Hebbian learning, K-means | 2D, 3D shape, 2D motion | self-supervised | real robot | original study
Self-supervised Online Learning of Basic Push Affordances | Ridge et al. | 2015 | agent | meso | 1st | stable | no | micro | no | no | exploration | Classification | yes | single-/multi-step prediction | online | pushing | mathematical | not specified | SOM, LVQ, Hebbian learning, K-means | 2D, 3D shape and motion | self-supervised | real robot | original study
The MACS Project: An Approach to Affordance-Inspired Robot Control | Rome et al. | 2008 | agent | global | 1st | stable | yes | micro | no | no | ground truth | Classification | no | planning | offline | lifting, traversability | mathematical | | Nearest Neighbor | SIFT | supervised | real robot | original study
A Multi-scale CNN for Affordance Segmentation in RGB Images | Roy et al. | 2016 | agent | global | 0th | stable | no | micro | no | no | ground truth | Classification | yes | not specified | offline | walkable, sittable, lyable, and reachable | mathematical | | Multi-Scale CNN | RGB+D, surface normals, semantic labels | supervised | benchmark | original study
Learning the Consequences of Actions: Representing Effects as Feature Changes | Rudolph et al. | 2010 | agent | meso | 1st | stable | no | micro | no | no | demonstration | Inference | yes | single-/multi-step prediction | online | general | mathematical | | BN | object, world, meta (object-object) features | supervised | simulation | original study
To Afford or Not to Afford: A New Formalization of Affordances Toward Affordance-Based Robot Control | Şahin et al. | 2007 | agent | meso | 1st | stable | no | micro | no | no | exploration | Classification | yes | planning | online | general | mathematical | not specified | SVM, STRIPS | not specified | not specified | real robot | original study
The acquisition of intentionally indexed and object centered affordance gradients: A biomimetic controller and mobile robotics benchmark | Sánchez-Fibla et al. | 2011 | agent | meso | 1st | stable | no | micro | no | no | exploration | Regression | yes | action selection | online | pushing | mathematical | | Affordance gradients | shape, position, orientation | unsupervised | real robot | original study
A Logic-based Computational Framework for Inferring Cognitive Affordances | Sarathy & Scheutz | 2016 | agent | meso | 1st | stable+variable | no | micro | no | no | demonstration+exploration | Inference | yes | planning | online | cognitive affordance | mathematical | | Logic Programming | visual information | unsupervised | simulation | original study
Bootstrapping the Semantics of Tools: Affordance Analysis of Real World Objects on a Per-part Basis | Schoeler and Wörgötter | 2015 | agent | local | 1st | stable | no | micro | no | no | ground truth | Classification | yes | action selection | offline | general | mathematical | not specified | SVM | SHOT, ESF | unsupervised | benchmark | original study
Bayesian Network Model for Object Concept | Shinchi et al. | 2007 | agent | meso | 0th | stable | no | micro | no | no | demonstration | Inference | yes | not specified | online | general | mathematical | | BN | color, contour, barycentric pos., num. of objects | unsupervised | benchmark | original study
Learning and generalization of behavior-grounded tool affordances | Sinapov and Stoytchev | 2007 | agent | meso | 1st | stable+variable | no | micro | no | no | exploration | Classification | yes | single-/multi-step prediction | offline | pulling, dragging | mathematical | not specified | K-NN, decision tree | changes in raw pixels | self-supervised | simulation | original study
Detecting the functional similarities between tools using a hierarchical representation of outcomes | Sinapov and Stoytchev | 2008 | agent | meso | 1st | stable | no | micro | no | no | exploration | Classification | no | single-/multi-step prediction | online | pulling, dragging | mathematical | not specified | X-Means, Ensemble of C4.5 Decision tree classifiers | raw pixels, trajectories | self-supervised | simulation | original study
Learning to Detect Visual Grasp Affordances | Song et al. | 2016 | agent | meso | 1st | stable | no | micro | no | no | ground truth | Classification | yes | action selection | offline | grasping | mathematical | not specified | MMR | BB, category, texture | supervised | real robot | original study
Learning Task Constraints for Robot Grasping using Graphical Models | Song et al. | 2010 | agent | meso | 1st | stable | no | micro | yes | no | ground truth | Optimization | yes | action selection | offline | grasping | mathematical | | BN (GMM, Multinomial distribution) | size, convexity, grasp pose | supervised | simulation | original study
Visual Grasp Affordances From Appearance-Based Cues | Song et al. | 2011 | agent | meso | 0th | stable | no | micro | yes | no | ground truth | Regression | no | not specified | offline | grasping | mathematical | | MMR | local features, HOG | supervised | benchmark | original study
Predicting Human Intention in Visual Observations of Hand/Object Interactions | Song et al. | 2013 | observer | meso | 1st | stable | not specified | micro | yes | no | demonstration | Optimization | no | action selection | offline | grasping | mathematical | | BN, GMM, SOM | grasp parameters, dimension | supervised | real robot | original study
Embodiment-Specific Representation of Robot Grasping using Graphical Models and Latent-Space Discretization Song et al.2011observermeso1ststablenot specifiedmicroyesnodemonstrationOptimizationyesaction selectionofflinegraspingmathematicalBN, Gaussian Latent Variable Modelgrasp parameters, dimensionsupervisedsimulationoriginal study
Task-Based Robot Grasp Planning Using Probabilistic Inference Song et al.2015agentmeso1ststablenot specifiedmicroyesnodemonstrationOptimizationyesaction selectionofflinegeneralmathematicalBNshape, grasp parameterssupervisedsimulation+real robotoriginal study
Functional object class detection based on learned affordance cuesStark et al.2008agentmeso0thstablenomicrononodemonstrationClassificationyesaction selectionofflinegraspingmathematicalnot specifiedKDE, Hough transformk-adjacent segments, ISMsupervisedreal robotoriginal study
Learning the Affordances of Tools Using a Behavior-Grounded ApproachStoytchev2008agentmeso2ndstable+variablenomicroyesyesexplorationInferencenoaction selectiononlinegraspingmathematicalnot specifiedGroundingposition, colorunsupervisedreal robotoriginal study
Behavior-Grounded Representation of Tool AffordancesStoytchev2005agentmeso1ststablenomicrononoexplorationClassificationnoaction selectionofflineextend, slide, contractmathematicalAffordance tableposition, colorself-supervisedreal robotoriginal study
Behavior-Grounded Representation of Tool AffordancesStoytchev2005agentglobal1ststable+variablenomicro+macronoyesexplorationRegressionnosingle-/multi-step predictiononlinepulling, dragging, pushing, graspingmathematicalProbabilistic lookup tableobject position, tool colorunsupervisedreal robotoriginal study
A Bayesian Approach Towards Affordance Learning in Artificial AgentsStramandinoli et al.2015agentmeso1ststablenomicrononoexplorationOptimizationnoaction selectionofflinegeneralmathematicalnot specifiedBN, MLEnot specifiedunsupervisedreal robotoriginal study
Learning Visual Object Categories for Robot Affordance PredictionSun et al.2010agentmeso1ststablenomicronoyesground truthClassificationnoplanningofflinelocomotionmathematicalBN, EM, GM, DIRECTcolor, edgesupervisedreal robotoriginal study
A model of shared grasp affordances from demonstrationSweeney and Grupen2007agentmeso1ststablenomicrononodemonstrationInferenceyesaction selectionofflinegraspingmathematicalnot specifiedBN, MLEmoment featuresupervisedreal robotoriginal study
Knowledge Propagation and Relation Learning for Predicting Action EffectsSzedmak et al.2014agentmeso1ststablenomicrononoground truthClassificationyesnot specifiedofflineobjectmathematicalnot specifiedMMMVRshape, sizesupervisedbenchmarkoriginal study
Perception driven robotic assembly based on ecological approachTagawa et al.2002agentmeso1ststablenomicrononoexplorationInferenceyesaction selectiononlinegeneral (positive and negative)mathematicalGenetic Algorithms, State-machinesobject positionunsupervisedsimulationoriginal study
Localizing Handle-like Grasp Affordances In 3D Point CloudsTen Pas et al.2016agentlocal0thstablenomicrononohardcodedOptimizationyesaction selectionnot specifiedgraspingmathematicalImportance sampling, quadratic surface fittingcurvature, circle fittingnot specifiedreal robotoriginal study
Exploring affordances and tool use on the iCubTikhanoff et al.2013agentmeso1ststablenomicroyesnoexplorationRegressionnosingle-/multi-step predictiononlinepulling, draggingmathematicalnot specifiedLinear SVM, least squaresSIFT, pull angle, tracked dist.supervisedreal robotoriginal study
Traversability: A Case Study for Learning and Perceiving Affordances in RobotsUgur et al.2010agentglobal1ststableyesmicrononoexplorationClassificationyesaction selectionofflinetraversabilitymathematicalnot specifiedSVMshape, sizeself-supervisedreal robotoriginal study
Goal emulation and planning in perceptual space using learned affordancesUgur et al.2011agentmeso1ststablenomicronoyesexplorationClassificationyesplanningofflineobjectmathematicalnot specifiedX-means, SVMshape, sizeunsupervisedreal robotoriginal study
Staged Development of Robot Skills: Behavior Formation, Affordance Learning and ImitationUgur et al.2015agentmeso1ststablenomicronoyesdemonstration+explorationClassificationyessingle-/multi-step predictionofflineobjectmathematicalnot specifiedDTW, X-means, SVM, EMshape, sizeunsupervisedreal robotoriginal study
Emergent structuring of interdependent affordance learning tasks using intrinsic motivation and empirical feature selectionUgur et al.2016agentmeso1ststableyesmicrononoground truthClassificationyesaction selectiononlineobjectmathematicalnot specifiedSVM, intrinsic motivationshape, sizesupervisedbenchmarkoriginal study
Bottom-Up Learning of Object Categories, Action Effects and Logical Rules: From Continuous Manipulative Exploration to Symbolic PlanningUgur et al.2015agentmeso1ststable+variablenomicronoyesexplorationClassificationyesplanningofflineobjectmathematicalnot specifiedSVM, C4.5 Decision tree, X-means, PDDLshape, sizeunsupervisedreal robotoriginal study
AfNet: The Affordance Network Varadarajan et al.2012agentlocal0thstablenomicrononoground truthClassificationyesaction selectionofflinegeneralmathematicalnot specifiedsuperquadricssupervisedbenchmarkoriginal study
AfRob: The Affordance Network Ontology for RobotsVaradarajan et al.2012agentlocal0thstablenomicrononoground truthClassificationyesaction selectionofflinegeneralmathematicalnot specifiedgradient image, superquadricssupervisedbenchmarkoriginal study
Predicting slippage and learning manipulation affordances through Gaussian Process regressionVina et al.2013agentmeso1ststablenomicroyesnoexplorationRegressionyesplanningofflinegraspingmathematicalGPhand-object relative posesupervisedreal robotoriginal study
Robot Learning and Use of Affordances in Goal-directed Tasks Wang et al.2013agentglobal1stvariablenomicrononoexplorationOptimizationnoaction selectiononlinemoveabilitymathematicalExtended classifier system (XCS)color, sizesemi-supervisedreal robotoriginal study
An Entropy-Based Approach to the Hierarchical Acquisition of Perception-Action CapabilitiesWindridge et al.2008agentmeso1ststablenomicroyesyesexplorationOptimizationyesaction selectiononlinesortingmathematicalSGDimage point entropyunsupervisedsimulationoriginal study
A novel formalization for robot cognition based on Affordance modelYi et al.2012agentmeso1ststablenomicrononoexplorationInferenceyesaction selectiononlinecarryable, stackable, liftable, moveablemathematicalFirst-order logic, analysis functionscolor, sizeunsupervisedsimulationoriginal study
Fill and Transfer: A Simple Physics-based Approach for Containability ReasoningYu et al.2015agentmeso0thstablenomicroyesnoground truthOptimizationyesaction selectionofflinecontainabilitymathematicalsmoothing-based optimization, Gaussian samplingvoxelssupervisedbenchmarkoriginal study
The learning of adjectives and nouns from affordance and appearance featuresYuruten et al.2013agentmeso1ststableyesmicrononoexplorationClassificationyeslanguageofflinemanipulationmathematical-ReliefF, SVM3D shape, sizeself-supervisedbenchmark+real robotoriginal study
Learning Adjectives and Nouns from Affordances on the iCub Humanoid RobotYuruten et al.2012agentmeso1ststableyesmicrononoexplorationClassificationnolanguageofflinemanipulationmathematical-ReliefF, SVM, Growing Neural Gas3D shape, sizeself-supervisedreal robotoriginal study
Reasoning about Object Affordances in a Knowledge Base RepresentationZhu et al.2014agentmeso1ststablenomicrononohardcodedInferenceyessingle-/multi-step predictionofflinegeneralmathematicalMarkov Logic Networkpose, human-object pose infosupervisedbenchmarkoriginal study
Understanding tools: Task-oriented object modeling, learning and recognitionZhu et al.2015agentlocal1ststablenomicroyesnodemonstrationOptimizationyesaction selectiononlinetool-usemathematicalSVM, ranking functionmaterial, volume, masssupervisedbenchmarkoriginal study
Learning the semantics of object–action relations by observationAksoy et al.2011agentmeso0thstablenomicrononodemonstrationClassificationnonot specifiedofflineMoving Object, Making Sandwich, Filling Liquid, and Opening BookmathematicalSemantic Object-Hand and Object-Object relationsColor and DepthsupervisedbenchmarkEren Aksoy (09/2017)
Model-free incremental learning of the semantics of manipulation actionsAksoy et al.2015agentmeso0thstablenomicrononodemonstrationClassificationnonot specifiedonlinePushing, hiding, cutting, chopping, uncovering, puttingmathematicalSemantic Object-Hand and Object-Object relationsColor and DepthunsupervisedbenchmarkEren Aksoy (09/2017)
Object-Action Complexes: Grounded Abstractions of Sensorimotor ProcessesKrüger et al.2011agentmeso1ststablenomicro+macronoyesexplorationRegressionyesplanningonlinepushing, graspingmathematicalNN, KDE, Samplingco-planar contours, location, ECVsupervisedreal robotTamim Asfour (09/2017)
What can I do with this tool? Self-supervised learning of tool affordances from their 3D geometryMar et al.2017agentmeso1ststablenomicrononoexplorationRegressionyesaction selectionofflinetool-usemathematicalSOM, regressionOMS-EGIself-supervisedsimulation+real robotTanis Mar (09/2017)
Towards a Hierarchy of Loco-Manipulation AffordancesKaiser et al.2016agentglobal1ststablenomacronoyesground truthOptimizationyesaction selectionnot specifiedloco-manipulationmathematicalsampling, decision functionsshape, distance, orientationnot specifiedsimulation+real robotPeter Kaiser (09/2017)
A modular Dynamic Sensorimotor Model for affordances learning, sequences planning and tool-useBraud et al.2017agentlocal1ststablenomicrononodemonstration+explorationOptimizationyessingle-/multi-step predictiononlinetool-usemathematicalSensorimotor Law Encoders/Simulator, Dynamic Sensorimotor Modelsensor and motor readingsself-supervisedsimulation+real robotAlexandre Pitti (10/2017)
Detecting object affordances using Convolutional Neural NetworksNguyen et al.2016agentmeso0thstablenomicrononoground truthClassificationyesaction selectionofflinegeneralmathematicalCNNRGB, depthsupervisedbenchmark+real robotPhilipp Zech (10/2017)
Object-Based Affordances Detection with Convolutional Neural Networks and Dense Conditional Random FieldsNguyen et al.2017agentmeso0thstablenomicrononoground truthClassificationyesaction selectionofflinegeneralmathematicalCNN, CRFRGB, depthsupervisedbenchmark+real robotPhilipp Zech (10/2017)
Iterative affordance learning with adaptive action generationMaestre et al.2017agentmeso1ststablenomicrononoexplorationInferenceyesaction selectiononlinepushingmathematicalBNpositionself-supervisedsimulation+real robotPhilipp Zech (10/2017)
Learning to Segment AffordancesLübbecke and Wörgötter2017agentglobal0thstablenomicrononoground truthClassificationyesnot specifiedofflinegeneralmathematicalCNNRGB, object segments, object partssupervisedbenchmarkPhilipp Zech (10/2017)
Discovering and Manipulating AffordancesChavez-Garcia et al.2016agentmeso1ststablenomicrononoexplorationOptimizationyesnot specifiedofflinepushingmathematicalclustering, BNsupervoxels, forcessupervisedreal robotMihai Andries (10/2017)
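Since the table above is also distributed as a downloadable spreadsheet, its rows lend themselves to simple programmatic queries along the taxonomy's dimensions. The sketch below is illustrative only: it hand-encodes three rows from the table as dicts (the field names mirror the column headers; `query` is a hypothetical helper, not part of the published dataset) and filters them by the Training column.

```python
# Illustrative sketch: querying affordance-model classification records.
# The dicts reproduce three rows from the table above; keys follow the
# table's column headers (lowercased). The query() helper is hypothetical.
records = [
    {"authors": "Ridge et al.", "year": 2015, "acquisition": "exploration",
     "training": "self-supervised", "evaluation": "real robot"},
    {"authors": "Roy et al.", "year": 2016, "acquisition": "ground truth",
     "training": "supervised", "evaluation": "benchmark"},
    {"authors": "Mar et al.", "year": 2017, "acquisition": "exploration",
     "training": "self-supervised", "evaluation": "simulation+real robot"},
]

def query(records, **criteria):
    """Return all records whose fields match every given field=value pair."""
    return [r for r in records
            if all(r.get(k) == v for k, v in criteria.items())]

# Example: which of these models were trained self-supervised?
self_supervised = query(records, training="self-supervised")
print([r["authors"] for r in self_supervised])  # ['Ridge et al.', 'Mar et al.']
```

The same pattern extends to any column combination, e.g. `query(records, acquisition="exploration", evaluation="real robot")` to isolate exploration-based studies evaluated on real robots.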
Please cite as follows:
@Article{Zech-2017-AB,
  title = {{Computational models of affordance in robotics: a taxonomy and systematic classification}},
  author = {Zech, Philipp and Haller, Simon and Rezapour Lakani, Safoura and Ridge, Barry and Ugur, Emre and Piater, Justus},
  journal = {{Adaptive Behavior}},
  year = 2017,
  month = 10,
  volume = 25,
  number = 5,
  pages = {235--271},
  publisher = {SAGE},
  doi = {10.1177/1059712317726357},
  url = {https://iis.uibk.ac.at/public/papers/Zech-2017-AB.pdf}
}

Updates:

  • Added papers requested by Mihai Andries (10/2017)
  • Added errata for original Adaptive Behavior Publication (10/2017)
  • Added papers requested by Philipp Zech (10/2017)
  • Added papers requested by Alexandre Pitti (10/2017)
  • Added papers requested by Peter Kaiser (09/2017)
  • Added papers requested by Tanis Mar (09/2017)
  • Added papers requested by Tamim Asfour (09/2017)
  • Added papers requested by Eren Aksoy (09/2017)
  • Initial release of this page (08/2017)