Action Representations in Robotics

Abstract: Understanding and defining the meaning of "action" is fundamental to robotics research. This becomes especially evident when aiming to equip autonomous robots with robust manipulation skills for action execution. Unfortunately, to this day we still lack both a clear understanding of the concept of an action and a set of established criteria that ultimately characterize an action. In this survey we therefore first review existing ideas and theories on the notion and meaning of action. Subsequently, we discuss the role of action in robotics and attempt to give a seminal definition of action in accordance with its use in robotics research. Given this definition, we introduce a taxonomy for categorizing action representations in robotics along various dimensions. Finally, we provide a systematic literature survey on action representations in robotics, categorizing the relevant literature along our taxonomy. After discussing the current state of the art, we conclude with an outlook on promising research directions.

Original publication: IJRR (open access)

The data are available for download as an MS Excel file (50 KB).


| Title | Author | Year | Perspective | Stimuli | Selective Attention | Granularity | Abstraction | Competition | Sequencing | Generalization | Motivation | Acquisition | Prediction | Exploitation | Learning | Discretization | Grounding | Associativity | Effect Correspondence | Formulation | Method | Features | Training | Evaluation | Datasets | Actions | Date Added |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| A biomimetic approach to robot table tennis | Mülling et al. | 2011 | agent | extroceptive | no | meso | atomic | no | yes | yes | not specified | Hard coded | optimization | Single-/Multi-step prediction | not specified | continuous | no | unidirectional | environment | biomimetic | FSM | points in 3D | not specified | Real Robot | | tennis strokes | Aug 2018 |
| A framework for heading-guided recognition of human activity | Rosales & Sclaroff | 2003 | observer | extroceptive | yes | global | atomic | no | no | yes | not specified | Demonstration | optimization | Recognition | offline | continuous | no | unidirectional | body | mathematical | EKF, PCA, EM | 3D trajectories | unsupervised | Benchmark | | walking, running, r.blading, biking | Aug 2018 |
| A generative model for developmental understanding of visuomotor experience | Noda; K. Kawamoto; T. Hasuo; K. Sabe | 2011 | limb | extroceptive | yes | global | atomic | no | no | no | extrinsic | Exploration | classification | Effect prediction | online | continuous | no | unidirectional | both | biomimetic | HMM | appearance in the vision, motor of arm and camera | unsupervised | Simulation | | reaching, interacting | Aug 2018 |
| A Multi-Scale Hierarchical Codebook Method for Human Action Recognition in Videos Using a Single Example | Roshtkhari; M. D. Levine | 2012 | observer | extroceptive | no | global | atomic | no | no | yes | intrinsic | Exploration | classification | Recognition | online | not specified | not specified | not specified | not specified | mathematical | Bag of video words, Code book | STV | supervised | Benchmark | KTH, Weizmann, MSR II | | Aug 2018 |
| A New Framework for View-Invariant Human Action Recognition | Ji et al. | 2010 | observer | extroceptive | no | global | atomic | yes | no | yes | not specified | Demonstration | classification | Recognition | offline | continuous | no | unidirectional | body | mathematical | HMM | body key poses, contour shape features | unsupervised | Benchmark | IXMAS | | Aug 2018 |
| A new invariant descriptor for action recognition based on spherical harmonics | Razzaghi et al. | 2012 | observer | extroceptive | no | global | atomic | yes | no | yes | not specified | Demonstration | classification | Recognition | offline | categorical | no | unidirectional | body | mathematical | SVM, spherical harmonics | spatio-temporal volume | supervised | Benchmark | KTH, Weizmann, IXMAS, Robust | | Aug 2018 |
| A novel hierarchical Bag-of-Words model for compact action representation | Sun et al. | 2016 | observer | extroceptive | yes | global | atomic | no | no | yes | not specified | Ground truth | classification | Recognition | offline | categorical | no | unidirectional | body | mathematical | Hierarchical BOW, SVM | 2D images | supervised | Benchmark | Hollywood2, Olympic Sports, YouTube, HMDB | | Aug 2018 |
| A Simple Ontology of Manipulation Actions Based on Hand-Object Relations | Wörgötter et al. | 2013 | observer | extroceptive | no | meso | atomic | no | no | yes | not specified | Demonstration | classification | Recognition | offline | categorical | no | unidirectional | environment | mathematical | SEC, Ontologies | object and hand poses | supervised | Benchmark | | manipulation | Aug 2018 |
| A Spiking Neural Network Model of Multi-modal Language Processing of Robot Instructions | Panchev | 2005 | agent | both | yes | global | atomic | yes | yes | yes | extrinsic | combination | regression | Single-/Multi-step prediction | offline | continuous | yes | unidirectional | body | biomimetic | Spiking Neural Network | object features, frequency map, tactile readings | unsupervised | Simulation | | navigation, manipulation | Aug 2018 |
| A sub-symbolic process underlying the usage-based acquisition of a compositional representation: Results of robotic learning experiments of goal-directed actions | Sugita; J. Tani | 2008 | agent | extroceptive | no | local | atomic | no | yes | yes | extrinsic | Demonstration | regression | Single-/Multi-step prediction | offline | categorical | yes | bidirectional | body | biomimetic | NN | colored patches, speed of wheel | supervised | Simulation | | reaching | Aug 2018 |

Showing 1 to 10 of 152 entries.
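Each of the 152 entries is classified along the 28 taxonomy dimensions listed in the table header, so the dataset lends itself to filtering by dimension (e.g., all observer-perspective approaches evaluated on benchmarks). Below is a minimal sketch of such a query with pandas, using a hand-copied toy subset of two rows and six columns; in practice the full downloadable Excel file would be loaded with `pd.read_excel(...)` (the exact file name depends on the download).

```python
import pandas as pd

# Toy subset of the survey table: 2 of the 152 entries, 6 of the 28
# taxonomy columns, copied from the rows shown above.
rows = [
    {"Title": "A biomimetic approach to robot table tennis",
     "Author": "Mülling et al.", "Year": 2011,
     "Perspective": "agent", "Evaluation": "Real Robot", "Method": "FSM"},
    {"Title": "A New Framework for View-Invariant Human Action Recognition",
     "Author": "Ji et al.", "Year": 2010,
     "Perspective": "observer", "Evaluation": "Benchmark", "Method": "HMM"},
]
df = pd.DataFrame(rows)

# Filter along taxonomy dimensions: observer-perspective entries
# that were evaluated on benchmark datasets.
benchmark = df[(df["Perspective"] == "observer") & (df["Evaluation"] == "Benchmark")]
print(benchmark["Title"].tolist())
```

The same boolean-indexing pattern extends to any of the 28 columns once the full spreadsheet is loaded.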

Please cite as follows:
  @Article{Zech-2019-IJRR,
    title     = {{Action representations in robotics: A taxonomy and systematic classification}},
    author    = {Zech, Philipp and Renaudo, Erwan and Haller, Simon and Zhang, Xiang and Piater, Justus},
    journal   = {{International Journal of Robotics Research}},
    year      = {2019},
    publisher = {SAGE},
    doi       = {10.1177/0278364919835020},
    url       = {http://dx.doi.org/10.1177/0278364919835020}
  }

Updates:

  • Initial release of this page (08/2018)
  • Updated publication details (03/2019)