Title | Author | Year | Perspective | Stimuli | Selective Attention | Granularity | Abstraction | Competition | Sequencing | Generalization | Motivation | Acquisition | Prediction | Exploitation | Learning | Discretization | Grounding | Associativity | Effect Correspondence | Formulation | Method | Features | Training | Evaluation | Datasets | Actions | Date Added |
Teaching new tricks to a robot learning to solve a task by imitation | Acosta et al. | 2010 | observer | extroceptive | yes | meso | atomic | no | yes | yes | extrinsic | Demonstration | classification | Single-/Multi-step prediction | offline | both | yes | bidirectional | both | mathematical | Forward and Inverse Models, FSM | marker poses | supervised | Real Robot | | pick and place | Aug 2018 |
Toward a library of manipulation actions based on semantic object-action relations | Aein et al. | 2013 | agent | both | no | meso | atomic | no | no | yes | not specified | Hard coded | classification | Single-/Multi-step prediction | offline | categorical | yes | unidirectional | both | mathematical | SEC, FSM, DMP | object and gripper poses | supervised | Real Robot | | pushing, reaching, grasping, hiding | Aug 2018 |
Action recognition by employing combined directional motion history and energy images | Ahad et al. | 2010 | observer | extroceptive | no | global | atomic | no | no | yes | not specified | Demonstration | classification | Recognition | offline | categorical | no | unidirectional | environment | mathematical | NN | cartesian moments, energy templates | unsupervised | Benchmark | | wave, bend, hug, jump | Aug 2018 |
Variable silhouette energy image representations for recognizing human actions | Ahmad & Lee | 2010 | observer | extroceptive | no | global | atomic | no | no | yes | not specified | Ground truth | classification | Recognition | offline | categorical | no | unidirectional | body | mathematical | MCSVM | silhouette images and shape | supervised | Benchmark | KTH, FBG | | Aug 2018 |
Learning symbolic representations of actions from human demonstrations | Ahmadzadeh et al. | 2015 | agent | both | no | meso | atomic | no | yes | yes | not specified | Demonstration | optimization | Planning | offline | both | yes | unidirectional | both | mathematical | DMPs, Visuospatial Skill Learning | trajectories, object motions | supervised | Real Robot | | pull, push | Aug 2018 |
Enriched manipulation action semantics for robot execution of time constrained tasks | Aksoy et al. | 2016 | agent | both | no | meso | atomic | no | no | yes | not specified | Demonstration | classification | Single-/Multi-step prediction | offline | continuous | yes | unidirectional | both | mathematical | MMM, SEC | object and body poses | supervised | Real Robot | | manipulation, interaction | Aug 2018 |
Structural bootstrapping at the sensorimotor level for the fast acquisition of action knowledge for cognitive robots | Aksoy et al. | 2013 | observer | both | yes | meso | atomic | no | no | yes | not specified | Demonstration | classification | Recognition | offline | categorical | yes | unidirectional | both | mathematical | SEC, DMP, Structural Bootstrapping | object and hand poses | supervised | Benchmark | | cutting, stirring, chopping | Aug 2018 |
Semantic Decomposition and Recognition of Long and Complex Manipulation Action Sequences | Aksoy et al. | 2016 | observer | extroceptive | yes | meso | both | yes | yes | yes | not specified | Demonstration | optimization | Recognition | offline | categorical | no | unidirectional | environment | mathematical | SEC, Similarity Matching | depth, color | unsupervised | Benchmark | ManiAc | | Aug 2018 |
Deep Feature-Action Processing with Mixture of Updates | Altahhan | 2015 | agent | extroceptive | yes | global | atomic | yes | yes | yes | intrinsic | Exploration | regression | Single-/Multi-step prediction | online | continuous | yes | unidirectional | body | mathematical | PCA, Actor-Critic | RGB images | supervised | Real Robot | | homing | Aug 2018 |
Learning Invariant Sensorimotor Behaviors: A Developmental Approach to Imitation Mechanisms | Andry et al. | 2004 | agent | both | yes | meso | atomic | no | no | yes | not specified | Demonstration | classification | Single-/Multi-step prediction | online | continuous | no | unidirectional | environment | mathematical | NF, DS, SOM | joint configurations | unsupervised | Real Robot | | pointing, navigation, gestures | Aug 2018 |
Perceiving Objects and Movements to Generate Actions on a Humanoid Robot | Asfour et al. | 2008 | observer | extroceptive | yes | global | atomic | no | yes | yes | not specified | Combination | regression | Single-/Multi-step prediction | offline | categorical | no | unidirectional | environment | mathematical | Particle filter, HMM, Clustering | invariant object features, affordances | unsupervised | Real Robot | | reaching, grasping, placing | Aug 2018 |
Human sensorimotor learning for humanoid robot skill synthesis | Babic et al. | 2011 | agent | extroceptive | no | global | atomic | no | no | yes | not specified | Demonstration | optimization | Single-/Multi-step prediction | offline | continuous | no | unidirectional | body | biomimetic | RBF network | joint configurations | supervised | Real Robot | | reaching | Aug 2018 |
Constraint-based movement representation grounded in geometric features | Bartels et al. | 2013 | agent | extroceptive | no | meso | atomic | no | yes | yes | not specified | Hard coded | optimization | Planning | offline | categorical | no | unidirectional | environment | mathematical | Task functions | points, lines, planes | supervised | Real Robot | | making pancakes | Aug 2018 |
Representing robot/environment interactions using probabilities: the "beam in the bin" experiment | Bessiere et al. | 1994 | agent | both | no | global | atomic | no | no | yes | not specified | Exploration | inference | Single-/Multi-step prediction | online | continuous | yes | bidirectional | both | mathematical | Probabilistic Inference | motor command, light intensity | unsupervised | Real Robot | | reachability | Aug 2018 |
How iCub Learns to Imitate Use of a Tool Quickly by Recycling the Past Knowledge Learnt During Drawing | Bhat & Mohan | 2015 | observer | extroceptive | yes | meso | atomic | yes | yes | yes | extrinsic | Demonstration | regression | Single-/Multi-step prediction | online | continuous | yes | bidirectional | both | mathematical | PMP, Motor imagery | trajectory, motion shape | self-supervised | Real Robot | | reaching | Aug 2018 |
Recognizing actions with the associative self-organizing map | Buonamente et al. | 2013 | observer | extroceptive | no | global | atomic | yes | no | yes | not specified | Ground truth | classification | Recognition | offline | not specified | not specified | not specified | not specified | mathematical | ASOM | posture vectors from movie | unsupervised | Benchmark | IXMAS | | Aug 2018 |
Learning actions from human-robot dialogues | Cantrell et al. | 2011 | agent | extroceptive | no | global | atomic | no | yes | yes | not specified | Hard coded | classification | Recognition | offline | categorical | no | unidirectional | environment | mathematical | Dependency Parsing, NN | language | unsupervised | Real Robot | | following | Aug 2018 |
An Efficient Approach for Multi-view Human Action Recognition Based on Bag-of-Key-Poses | Chaaraoui et al. | 2012 | observer | extroceptive | no | global | atomic | yes | no | yes | not specified | Ground truth | classification | Recognition | offline | categorical | no | not specified | body | mathematical | k-Means, NN | center of mass, euclidean distance, bag of key poses | supervised | Benchmark | MuHAVi | | Aug 2018 |
Towards a Conceptual Representation of Actions | Chella et al. | 2000 | agent | both | no | global | atomic | yes | yes | yes | not specified | Ground truth | inference | Planning | offline | categorical | yes | unidirectional | body | mathematical | CS, RNN | location, poses | supervised | Real Robot | | navigation | Aug 2018 |
ReadingAct RGB-D action dataset and human action recognition from local features | Chen et al. | 2014 | observer | extroceptive | no | global | atomic | no | no | yes | not specified | Ground truth | classification | Recognition | offline | categorical | no | unidirectional | body | mathematical | Interest Point Detection, DTW, SVM | 2D depth and intensity images | supervised | Benchmark | CAD-60, RGBD-HUDAACT, Chen_2014 | | Aug 2018 |
Learning of composite actions and visual categories via grounded linguistic instructions: Humanoid robot simulations | Chuang et al. | 2012 | agent | extroceptive | no | global | atomic | yes | no | yes | not specified | Demonstration | regression | Recognition | offline | continuous | yes | unidirectional | body | biomimetic | ANN | language, RGB images | supervised | Simulation | | open, close, lift, up, down, move left, right | Aug 2018 |
Platas—Integrating Planning and the Action Language Golog | Claßen et al. | 2012 | observer | extroceptive | no | global | atomic | no | yes | not specified | not specified | Language | optimization | Planning | not specified | categorical | no | bidirectional | both | mathematical | Non- and Deterministic Planning Language | language | not specified | Simulation | | delivering letters | Aug 2018 |
What Will You Do Next? A Cognitive Model for Understanding Others’ Intentions Based on Shared Representations | Dindo & Chella | 2013 | observer | both | no | global | compound | yes | yes | yes | not specified | Ground truth | classification | Single-/Multi-step prediction | offline | categorical | yes | unidirectional | both | mathematical | DBN | not specified | not specified | not specified | | | Aug 2018 |
Hankelet-based action classification for motor intention recognition | Dindo et al. | 2017 | observer | extroceptive | no | meso | atomic | no | no | yes | not specified | Ground truth | classification | Single-/Multi-step prediction | offline | continuous | no | unidirectional | body | mathematical | SVM | IMU readings | supervised | Real Robot | | walking | Aug 2018 |
Learn to wipe: A case study of structural bootstrapping from sensorimotor experience | Do et al. | 2014 | agent | both | no | not specified | atomic | no | yes | yes | not specified | Demonstration | regression | Effect prediction | online | continuous | yes | unidirectional | environment | mathematical | DMP, SVR | end effector forces, vision | supervised | Real Robot | | wiping | Aug 2018 |
Learning programs is better than learning dynamics: A programmable neural network hierarchical architecture in a multi-task scenario | Donnarumma et al. | 2015 | agent | proprioceptive | no | global | atomic | yes | yes | yes | extrinsic | Ground truth | optimization | Single-/Multi-step prediction | offline | continuous | no | unidirectional | body | mathematical | HPNNA | sonar readings | supervised | Simulation | | reaching | Aug 2018 |
Learning a repertoire of actions with deep neural networks | Droniou et al. | 2014 | limb | both | no | global | atomic | no | no | yes | not specified | Demonstration | regression | Single-/Multi-step prediction | offline | continuous | yes | unidirectional | both | mathematical | ANN | cartesian positions and velocities | self-supervised | Real Robot | | handwriting | Aug 2018 |
Bayesian Approaches for Learning of Primitive-Based Compact Representations of Complex Human Activities | Endres et al. | 2016 | observer | extroceptive | no | global | atomic | yes | yes | yes | not specified | Ground truth | regression | Recognition | offline | categorical | no | not specified | body | mathematical | PCA, ICA, Bayesian Binning | EMG data | unsupervised | Benchmark | | TaekWonDo kicks and moves | Aug 2018 |
Probabilistic model-based imitation learning | Englert et al. | 2013 | agent | proprioceptive | no | meso | atomic | no | no | yes | not specified | Demonstration | optimization | Single-/Multi-step prediction | offline | continuous | no | unidirectional | both | mathematical | GP | joint configurations | supervised | Real Robot | | pendulum swing, ball hitting | Aug 2018 |
Learning to Recognize Activities from the Wrong View Point | Farhadi & Tabrizi | 2008 | observer | extroceptive | no | global | atomic | yes | no | yes | not specified | Ground truth | classification | Recognition | offline | categorical | no | not specified | environment | mathematical | NN Classifier, MaxMargin Clustering, PCA | HOS, optical flow | supervised | Benchmark | IXMAS | | Aug 2018 |
Action Recognition Using Motion Primitives and Probabilistic Edit Distance | Fihl et al. | 2006 | observer | extroceptive | no | global | atomic | yes | no | yes | not specified | Ground truth | inference | Recognition | online | categorical | no | not specified | body | mathematical | PCA, Probabilistic Edit Distance | STV, 3D joint configurations, mahalanobis distance | supervised | Virtual Reality | | arm movements | Aug 2018 |
Incremental action recognition and generalizing motion generation based on goal-directed features | Gräve & Behnke | 2012 | observer | extroceptive | no | meso | atomic | no | no | yes | not specified | Demonstration | classification | Recognition | offline | continuous | no | unidirectional | both | mathematical | HMM, GPR | object and hand poses | supervised | Benchmark | | grasping, pushing | Aug 2018 |
Synaptic plasticity in a recurrent neural network for versatile and adaptive behaviors of a walking robot | Grinke et al. | 2015 | agent | extroceptive | no | global | atomic | yes | no | yes | extrinsic | Exploration | regression | Planning | online | continuous | no | unidirectional | body | biomimetic | Synaptic scaling | terrain information, obstacle height | unsupervised | Real Robot | | walking, avoiding/climbing obstacles, escaping from corners/narrow passages | Aug 2018 |
Matching Trajectories of Anatomical Landmarks Under Viewpoint, Anthropometric and Temporal Transforms | Gritai et al. | 2009 | observer | extroceptive | no | local | atomic | no | no | yes | not specified | Ground truth | classification | Recognition | offline | categorical | no | not specified | body | mathematical | Trajectory Matching | 2D motions of landmarks | supervised | Benchmark | | walking, waving, bending | Aug 2018 |
Minimalist plans for interpreting manipulation actions | Guha et al. | 2013 | observer | both | no | meso | atomic | no | yes | yes | not specified | Demonstration | classification | Recognition | offline | categorical | no | unidirectional | environment | mathematical | Tracking, Planning | gripper poses, grasp type | supervised | Benchmark | UMD | | Aug 2018 |
Conceptual imitation learning based on functional effects of action | Hajimirsadeghi | 2011 | agent | extroceptive | no | meso | atomic | no | no | yes | extrinsic | Demonstration | regression | Single-/Multi-step prediction | online | categorical | yes | bidirectional | environment | biomimetic | HMM | motion sequences | supervised | Real Robot | | hand gestures | Aug 2018 |
Realtime manipulation planning system integrating symbolic and geometric planning under interactive dynamics simulator | Haneda et al. | 2008 | agent | both | no | meso | atomic | yes | yes | yes | extrinsic | Exploration | optimization | Planning | online | both | yes | bidirectional | both | mathematical | Symbolic and Geometric Planning, Motor imagery | object and robot poses | supervised | Simulation | | stacking | Aug 2018 |
Tracking in Action Space | Herzog & Krüger | 2012 | limb | extroceptive | no | meso | atomic | yes | no | yes | not specified | Ground truth | classification | Recognition | offline | categorical | no | not specified | body | mathematical | PHMM, BP | edge features | supervised | Benchmark | | reaching, pointing, pushing, grasping | Aug 2018 |
Coupled learning of action parameters and forward models for manipulation | Höfer & Brock | 2016 | agent | proprioceptive | no | global | atomic | yes | yes | yes | not specified | Exploration | optimization | Planning | offline | categorical | yes | unidirectional | environment | mathematical | Iterative clustering | relational scene description | unsupervised | Simulation | | pushing, pulling | Aug 2018 |
Learning Causality and Intentional Actions | Hongeng & Wyatt | 2008 | observer | extroceptive | yes | meso | atomic | yes | yes | yes | not specified | Demonstration | inference | Recognition | offline | categorical | no | unidirectional | environment | mathematical | BN | object and gripper state, gripper-obj. and obj.-obj. relations | unsupervised | Benchmark | | reaching | Aug 2018 |
Computational modeling of observational learning inspired by the cortical underpinnings of human primates | Hourdakis & Trahanias | 2012 | observer | extroceptive | no | meso | atomic | no | no | yes | extrinsic | Demonstration | optimization | Single-/Multi-step prediction | online | continuous | no | unidirectional | not specified | biomimetic | LSM, SOM, ANN | joint configurations, trajectories | supervised | Simulation | | reaching | Aug 2018 |
Seamless Integration and Coordination of Cognitive Skills in Humanoid Robots: A Deep Learning Approach | Hwang & Tani | 2017 | limb | extroceptive | yes | meso | atomic | yes | no | yes | not specified | Demonstration | regression | Single-/Multi-step prediction | offline | continuous | yes | unidirectional | environment | biomimetic | Visuo-Motor Deep Dynamic Neural Network | RGB images | supervised | Real Robot | | reach and grasp | Aug 2018 |
Classification of human actions using pose-based features and stacked auto encoder | Ijjina & Mohan | 2016 | observer | extroceptive | no | global | atomic | no | no | yes | not specified | Ground truth | classification | Recognition | offline | categorical | no | unidirectional | body | mathematical | Stacked AE | pose data | supervised | Benchmark | CMU MoCap, Berkeley-MHAD | | Aug 2018 |
Robust Feature Extraction for Shift and Direction Invariant Action Recognition | Jeon et al. | 2015 | observer | extroceptive | no | global | atomic | no | no | yes | not specified | Ground truth | classification | Recognition | offline | categorical | no | not specified | body | mathematical | SVM | optical flow | supervised | Benchmark | KTH | | Aug 2018 |
View-Invariant Human Action Recognition Using Exemplar-Based Hidden Markov Models | Ji & Liu | 2009 | observer | extroceptive | no | global | atomic | no | no | yes | not specified | Ground truth | classification | Recognition | offline | categorical | no | not specified | body | mathematical | HMM, EM, k-Means Clustering | 3D-key poses, human silhouette, optical flow | supervised | Benchmark | IXMAS | | Aug 2018 |
A New Framework for View-Invariant Human Action Recognition | Ji et al. | 2010 | observer | extroceptive | no | global | atomic | yes | no | yes | not specified | Demonstration | classification | Recognition | offline | continuous | no | unidirectional | body | mathematical | HMM | body key poses, contour shape features | unsupervised | Benchmark | IXMAS | | Aug 2018 |
Study of Human Action Recognition Based on Improved Spatio-temporal Features | Ji et al. | 2014 | observer | extroceptive | no | global | atomic | no | no | yes | not specified | Ground truth | classification | Recognition | offline | categorical | no | not specified | body | mathematical | SVM | 3D SIFT, PDI | supervised | Benchmark | KTH | | Aug 2018 |
Finding Actions Using Shape Flows | Jiang & Martin | 2008 | observer | extroceptive | no | global | atomic | yes | no | yes | not specified | Hard coded | classification | Recognition | offline | categorical | no | not specified | environment | mathematical | Template Matching | flow lines | supervised | Benchmark | Weizmann | | Aug 2018 |
Cross-View Action Recognition from Temporal Self-similarities | Junejo et al. | 2015 | observer | extroceptive | no | global | atomic | no | no | yes | not specified | Ground truth | classification | Recognition | offline | categorical | no | not specified | body | mathematical | NNC, SVM | EDM, SSM | supervised | Benchmark | CMU MoCap, Weizmann, IXMAS | | Aug 2018 |
Transfer of Elementary Skills via Human-Robot Interaction | Kaiser | 1997 | agent | extroceptive | yes | meso | atomic | no | no | yes | not specified | Demonstration | optimization | Single-/Multi-step prediction | offline | continuous | no | unidirectional | environment | mathematical | RBF network | not specified | supervised | Real Robot | | force control, peg-in-hole, docking | Aug 2018 |
Improved GLOH Approach for One-Shot Learning Human Gesture Recognition | Karn & Jiang | 2016 | observer | extroceptive | no | global | atomic | no | no | yes | not specified | Ground truth | classification | Recognition | offline | categorical | no | not specified | body | mathematical | kNN, k-Means Clustering, LD | BOF, visual codebook, GLOH | supervised | Benchmark | ChaLearn Gesture | | Aug 2018 |
Natural Language Communication Between Human and Artificial Agents | Kemke | 2006 | agent | extroceptive | no | meso | atomic | no | yes | yes | not specified | Hard coded | classification | Language | offline | categorical | no | unidirectional | both | mathematical | NLP, Ontologies | poses, action verb | supervised | Simulation | | reaching, grasping, driving | Aug 2018 |
Visual object-action recognition: Inferring object affordances from human demonstration | Kjellström et al. | 2011 | limb | extroceptive | no | meso | atomic | no | no | yes | not specified | Ground truth | classification | Recognition | offline | categorical | no | unidirectional | environment | mathematical | SVM, CRF | HOG, hand pose | supervised | Benchmark | | pour, hammer, open | Aug 2018 |
Tensor Representations via Kernel Linearization for Action Recognition from 3D Skeletons | Koniusz et al. | 2016 | observer | extroceptive | no | global | atomic | yes | no | yes | not specified | Ground truth | classification | Recognition | offline | continuous | no | not specified | body | mathematical | SVM, RBF | joint configurations | supervised | Benchmark | UTKinect Action, Florence3D Action, MSR-Action 3D | | Aug 2018 |
Recognizing Action Primitives in Complex Actions Using Hidden Markov Models | Krüger | 2006 | observer | extroceptive | no | global | atomic | yes | yes | yes | not specified | Demonstration | regression | Recognition | offline | categorical | no | unidirectional | body | mathematical | HMM | joint configurations | unsupervised | Benchmark | MoPrim | | Aug 2018 |
Using Hidden Markov Models for Recognizing Action Primitives in Complex Actions | Krüger & Grest | 2007 | observer | extroceptive | no | global | atomic | no | no | yes | not specified | Ground truth | classification | Recognition | offline | categorical | no | not specified | body | mathematical | HMM | poses | supervised | Benchmark | MoPrim | | Aug 2018 |
Tracking in object action space | Krüger & Herzog | 2013 | observer | extroceptive | no | meso | atomic | no | yes | yes | not specified | Demonstration | optimization | Recognition | offline | continuous | no | unidirectional | both | mathematical | PHMM, Bayes | body-fixed points | unsupervised | Benchmark | | pointing, pouring | Aug 2018 |
Object–Action Complexes: Grounded abstractions of sensory–motor processes | Krüger et al. | 2011 | agent | extroceptive | no | meso | both | no | yes | yes | not specified | Combination | regression | Planning | offline | continuous | yes | unidirectional | environment | mathematical | NN, KDE, Sampling | co-planar contours, location, ECV | supervised | Real Robot | | grasping, pushing | Aug 2018 |
Learning Actions from Observations | Krüger et al. | 2010 | observer | extroceptive | yes | meso | compound | no | yes | yes | extrinsic | Demonstration | classification | Single-/Multi-step prediction | online | categorical | yes | bidirectional | both | mathematical | PHMM | motion sequences, object pose | unsupervised | Real Robot | | move, push, rotate with arm | Aug 2018 |
An Unsupervised Framework for Action Recognition Using Actemes | Kulkarni et al. | 2011 | observer | extroceptive | no | global | compound | yes | yes | yes | extrinsic | Language | regression | Single-/Multi-step prediction | online | categorical | no | unidirectional | environment | mathematical | k-Means, one-pass DP decoding, HMM | 3D Visual Hull | unsupervised | Benchmark | IXMAS | | Aug 2018 |
Action representation for planning using truth maintenance system | Kulkarni et al. | 1989 | agent | proprioceptive | no | meso | atomic | yes | yes | no | extrinsic | Hard coded | inference | Planning | offline | categorical | no | bidirectional | environment | mathematical | Truth maintenance system | action symbols, states | supervised | Simulation | | open, stack | Aug 2018 |
New approach for action recognition using motion based features | Kumar & Sivaprakash | 2013 | observer | extroceptive | no | global | atomic | no | no | yes | not specified | Ground truth | classification | Recognition | offline | not specified | not specified | not specified | not specified | mathematical | kNN | key points | supervised | Benchmark | KTH | | Aug 2018 |
Embodiment independent manipulation through action abstraction | Laaksonen et al. | 2010 | limb | extroceptive | no | meso | compound | no | yes | yes | extrinsic | Hard coded | not specified | not specified | offline | not specified | not specified | not specified | not specified | mathematical | Transfer Learning | predefined actions | supervised | Real Robot | | grasp, lift | Aug 2018 |
Linking language with embodied and teleological representations of action for humanoid cognition | Lallee et al. | 2010 | limb | extroceptive | yes | global | atomic | no | no | yes | not specified | Demonstration | inference | Effect prediction | online | categorical | yes | bidirectional | environment | biomimetic | Spikenet, Temporal segmentation, Rule engine | visibility, moving, contact, language | supervised | Real Robot | | cover, uncover | Aug 2018 |
Real-Time Biologically Inspired Action Recognition from Key Poses Using a Neuromorphic Architecture | Layher et al. | 2017 | observer | extroceptive | no | global | atomic | no | no | yes | not specified | Ground truth | classification | Recognition | offline | categorical | no | unidirectional | not specified | mathematical | DCNN | key poses | supervised | Benchmark | | bend, jack, jump, run, walk, skip, wave, pick up, sit, rope, push, raise, press, lunge, twist, stretch, touch | Aug 2018 |
Learning convolutional action primitives for fine-grained action recognition | Lea et al. | 2016 | agent | both | no | meso | both | no | yes | yes | not specified | Ground truth | classification | Recognition | offline | categorical | no | unidirectional | environment | mathematical | Latent Convolutional Skip Chain CRF | depth, positions, velocities | supervised | Benchmark | 50 Salads, JIGSAWS | | Aug 2018 |
Robot skill discovery based on observed data | Lee & Chen | 1996 | agent | extroceptive | no | global | atomic | no | no | yes | extrinsic | Demonstration | optimization | Planning | offline | not specified | yes | not specified | body | mathematical | Multiresolution Global Comp. and Local Coop. Algorithm | FSTM | unsupervised | Simulation | | navigation | Aug 2018 |
Salient pairwise spatio-temporal interest points for real-time activity recognition | Liu et al. | 2016 | observer | extroceptive | no | global | atomic | no | no | yes | not specified | Ground truth | inference | Recognition | offline | categorical | no | unidirectional | body | mathematical | Directed Graphs | TSP, SSP | supervised | Benchmark | KTH, ADLs, UT-Interaction | | Aug 2018 |
Action recognition using dynamics features | Mansur et al. | 2011 | observer | extroceptive | no | local | atomic | no | no | yes | not specified | Ground truth | classification | Recognition | offline | not specified | not specified | not specified | not specified | mathematical | HMM | torque | supervised | Benchmark | | walk, run, march, sit, jump forward, jump in place, hop | Aug 2018 |
Semi-supervised Learning of Action Ontology from Domain-Specific Corpora | Markievicz et al. | 2013 | observer | extroceptive | no | meso | atomic | yes | yes | yes | not specified | Language | classification | Recognition | offline | categorical | no | unidirectional | both | mathematical | NLP, Ontologies, WSM | text | semi-supervised | Benchmark | CHEMLAB corpus | | Aug 2018 |
Grounding Action Words in the Sensorimotor Interaction with the World: Experiments with a Simulated iCub Humanoid Robot | Marocco et al. | 2010 | agent | both | no | meso | atomic | no | no | yes | not specified | Ground truth | regression | Single-/Multi-step prediction | offline | continuous | no | unidirectional | environment | mathematical | RNN | joint configurations, tactile readings, language, roundness | supervised | Real Robot | | crawling | Aug 2018 |
Modelling the Cortical Columnar Organisation for Topological State-Space Representation, and Action Planning | Martinet et al. | 2008 | agent | proprioceptive | no | local | atomic | yes | not specified | yes | extrinsic | Exploration | optimization | Planning | offline | continuous | yes | bidirectional | body | biomimetic | Activation-Diffusion planning | location and orientation HP | unsupervised | Simulation | | navigation | Aug 2018 |
Extending sensorimotor contingency theory: prediction, planning and action generation | Maye & Engel | 2013 | agent | proprioceptive | no | global | atomic | yes | yes | yes | extrinsic | Exploration | regression | Single-/Multi-step prediction | online | continuous | yes | unidirectional | body | mathematical | HMM | motor readings | self-supervised | Real Robot | | navigation | Aug 2018 |
Action recognition with appearance–motion features and fast search trees | Mikolajczyk & Uemura | 2011 | observer | extroceptive | yes | global | atomic | no | yes | yes | not specified | Ground truth | classification | Recognition | offline | categorical | no | unidirectional | body | mathematical | KD-Tree | GLOH, Hessian/Harris-Laplace, MSER | supervised | Benchmark | KTH, Weizmann | | Aug 2018 |
Inference Through Embodied Simulation in Cognitive Robots | Mohan et al. | 2013 | agent | both | no | meso | atomic | yes | yes | yes | extrinsic | Exploration | regression | Single-/Multi-step prediction | online | continuous | yes | bidirectional | environment | biomimetic | Growing SOM, PMP | size, color, shape, object word | unsupervised | Real Robot | | pushing | Aug 2018 |
Teaching Humanoids to Imitate ‘Shapes’ of Movements | Mohan et al. | 2010 | limb | extroceptive | no | local | atomic | no | no | yes | extrinsic | Demonstration | regression | Effect prediction | offline | continuous | no | unidirectional | body | mathematical | Direct Linear Transform | lines, critical points | supervised | Real Robot | | drawing | Aug 2018 |
Autonomous Learning of High-Level States and Actions in Continuous Environments | Mugan & Kuipers | 2012 | agent | both | no | global | atomic | yes | yes | yes | intrinsic | Exploration | optimization | Single-/Multi-step prediction | online | continuous | yes | unidirectional | both | mathematical | RL, DBN | object poses and velocities | unsupervised | Simulation | | grasping, pushing | Aug 2018 |
Adaptive synthesis of dynamically feasible full-body movements for the humanoid robot HRP-2 by flexible combination of learned dynamic movement primitives | Mukovskiy et al. | 2017 | agent | extroceptive | no | global | atomic | no | yes | yes | not specified | Demonstration | regression | Planning | offline | continuous | no | unidirectional | not specified | mathematical | DMPs, Anechoic Mixing Model | joint angle trajectories | supervised | Real Robot | | walking-reaching | Aug 2018 |
A biomimetic approach to robot table tennis | Mülling et al. | 2011 | agent | extroceptive | no | meso | atomic | no | yes | yes | not specified | Hard coded | optimization | Single-/Multi-step prediction | not specified | continuous | no | unidirectional | environment | biomimetic | FSM | points in 3D | not specified | Real Robot | | tennis strokes | Aug 2018 |
Acquisition of viewpoint representation in imitative learning from own sensory-motor experiences | Nakajo et al. | 2015 | agent | both | no | meso | atomic | no | yes | yes | not specified | Ground truth | regression | Single-/Multi-step prediction | offline | continuous | yes | unidirectional | both | mathematical | CTRNN | joint configurations, RGB images | supervised | Real Robot | | touching, reaching | Aug 2018 |
Graphical framework for action recognition using temporally dense STIPs | Natarajan et al. | 2009 | observer | extroceptive | no | global | atomic | no | no | yes | not specified | Ground truth | classification | Recognition | offline | not specified | not specified | not specified | not specified | mathematical | CRFs | temporal dense-STIP | supervised | Benchmark | KTH | | Aug 2018 |
Development process of functional hierarchy for actions and motor imagery | Nishimoto & Tani | 2009 | agent | both | no | meso | atomic | yes | yes | yes | extrinsic | Demonstration | regression | Single-/Multi-step prediction | offline | both | yes | bidirectional | both | mathematical | MTRNN | joint configurations, camera angle, object pose | supervised | Real Robot | | push, pull, touch | Aug 2018 |
Learning Multiple Goal-Directed Actions Through Self-Organization of a Dynamic Neural Network Model: A Humanoid Robot Experiment | Nishimoto et al. | 2008 | agent | proprioceptive | no | meso | atomic | no | no | yes | not specified | Ground truth | regression | Single-/Multi-step prediction | offline | continuous | no | unidirectional | environment | biomimetic | CTRNN | joint configurations | supervised | Real Robot | | lift, move | Aug 2018 |
A generative model for developmental understanding of visuomotor experience | Noda et al. | 2011 | limb | extroceptive | yes | global | atomic | no | no | no | extrinsic | Exploration | classification | Effect prediction | online | continuous | no | unidirectional | both | biomimetic | HMM | appearance in the vision, motor of arm and camera | unsupervised | Simulation | | reaching, interacting | Aug 2018 |
Acquiring hand-action models by attention point analysis | Ogawara et al. | 2001 | limb | extroceptive | yes | meso | atomic | no | no | yes | not specified | Demonstration | regression | Single-/Multi-step prediction | offline | categorical | yes | unidirectional | environment | mathematical | Attention Point Analysis | object color and shape, time | unsupervised | Real Robot | | grasp, pick, pour, handover | Aug 2018 |
Hierarchies for Embodied Action Perception | Ognibene | 2013 | observer | extroceptive | no | meso | atomic | yes | yes | yes | extrinsic | combination | inference | Single-/Multi-step prediction | offline | both | yes | bidirectional | both | mathematical | BN | object state, gripper state, motor commands | unsupervised | Real Robot | | manipulation | Aug 2018 |
Learning Epistemic Actions in Model-Free Memory-Free Reinforcement Learning: Experiments with a Neuro-robotic Model | Ognibene et al. | 2013 | agent | extroceptive | yes | local | atomic | yes | yes | yes | not specified | Exploration | not specified | Effect prediction | online | not specified | yes | unidirectional | environment | biomimetic | RL, NN | eye posture map, arm posture map | semi-supervised | Simulation | | reaching | Aug 2018 |
A Spiking Neural Network Model of Multi-modal Language Processing of Robot Instructions | Panchev | 2005 | agent | both | yes | global | atomic | yes | yes | yes | extrinsic | combination | regression | Single-/Multi-step prediction | offline | continuous | yes | unidirectional | body | biomimetic | Spiking Neural Network | object features, frequency map, tactile readings | unsupervised | Simulation | | navigation, manipulation | Aug 2018 |
Comparing Hidden Markov Models and Long Short Term Memory Neural Networks for Learning Action Representations | Panzner & Cimiano | 2016 | agent | extroceptive | no | global | atomic | yes | yes | yes | not specified | Ground truth | classification | Recognition | offline | categorical | no | unidirectional | body | mathematical | HMM, LSTM | QTC | supervised | Simulation | | jumps over, jumps on, circling, pushing | Aug 2018 |
Self-organizing neural integration of pose-motion features for human action recognition | Parisi et al. | 2015 | observer | extroceptive | no | global | atomic | no | no | yes | not specified | Ground truth | classification | Recognition | offline | categorical | no | unidirectional | body | biomimetic | SOM, GWR | pose-motion vectors | unsupervised | Benchmark | | standing, walking, jogging, sitting, lying down, pick up object, jump, fall down, stand up | Aug 2018 |
Learning for Goal-directed Actions using RNNPB: Developmental Change of “What to Imitate” | Park et al. | 2017 | agent | proprioceptive | no | global | atomic | no | yes | yes | not specified | Demonstration | regression | Single-/Multi-step prediction | offline | continuous | yes | unidirectional | body | mathematical | RNNPB (parametric bias) | motor commands | supervised | Combination | | reaching | Aug 2018 |
Learning object, grasping and manipulation activities using hierarchical HMMs | Patel et al. | 2013 | observer | extroceptive | no | meso | compound | yes | yes | yes | not specified | Ground truth | classification | Single-/Multi-step prediction | offline | categorical | no | not specified | body | mathematical | HHMM, EM | hand and object motion/orientation, object class | unsupervised | Benchmark | | reach, grasp, lift, place | Aug 2018 |
Do what i want, not what i did: Imitation of skills by planning sequences of actions | Paxton et al. | 2016 | agent | both | no | meso | atomic | yes | yes | yes | not specified | Ground truth | inference | Planning | offline | categorical | no | unidirectional | environment | mathematical | GMM | time, gripper commands, object transforms | supervised | Real Robot | | structure building | Aug 2018 |
Joint movement similarities for robust 3D action recognition using skeletal data | Pazhoumand-Dar et al. | 2015 | observer | extroceptive | no | global | atomic | no | no | yes | not specified | Ground truth | classification | Recognition | offline | categorical | no | unidirectional | body | mathematical | LCSS | 3D body skeleton | supervised | Benchmark | MSR-Action 3D, HDM05 | | Aug 2018 |
What should I do next? Using shared representations to solve interaction problems | Pezzulo & Dindo | 2011 | observer | both | no | global | atomic | yes | yes | yes | not specified | Demonstration | inference | Single-/Multi-step prediction | online | continuous | yes | bidirectional | both | mathematical | DBN | movement kinematics | unsupervised | Virtual Reality | | reach, put, turn | Aug 2018 |
Clustering of human actions using invariant body shape descriptor and dynamic time warping | Pierobon et al. | 2005 | observer | extroceptive | no | global | atomic | no | no | yes | not specified | Ground truth | classification | Recognition | offline | not specified | not specified | not specified | not specified | mathematical | DTW | 3D shape descriptors | supervised | Benchmark | | pointing, crouching down, kick | Aug 2018 |
Audio-visual classification and detection of human manipulation actions | Pieropan et al. | 2014 | observer | extroceptive | no | meso | atomic | no | yes | yes | not specified | Ground truth | classification | Recognition | offline | categorical | no | unidirectional | environment | mathematical | HMM | poses, distances | supervised | Benchmark | | open, pour, close | Aug 2018 |
Cross-modal and scale-free action representations through enaction | Pitti et al. | 2009 | agent | both | yes | meso | atomic | no | no | yes | not specified | Demonstration | classification | Recognition | offline | continuous | no | unidirectional | environment | biomimetic | RNN, STDP | saliency maps, touch | self-supervised | Simulation | | grasping | Aug 2018 |
Fast action recognition using negative space features | Rahman et al. | 2014 | observer | extroceptive | yes | global | atomic | no | no | yes | not specified | Ground truth | classification | Recognition | offline | categorical | no | unidirectional | body | mathematical | NN | negative space action descriptors | supervised | Benchmark | KTH, Weizmann, Fish action | | Aug 2018 |
Transferring skills to humanoid robots by extracting semantic representations from observations of human activities | Ramirez-Amaro et al. | 2017 | limb | extroceptive | no | meso | both | no | yes | yes | extrinsic | Demonstration | inference | Planning | offline | categorical | no | unidirectional | not specified | mathematical | Decision Tree | color | supervised | Real Robot | | making a pancake | Aug 2018 |
A new invariant descriptor for action recognition based on spherical harmonics | Razzaghi et al. | 2012 | observer | extroceptive | no | global | atomic | yes | no | yes | not specified | Demonstration | classification | Recognition | offline | categorical | no | unidirectional | body | mathematical | SVM, spherical harmonics | spatio-temporal volume | supervised | Benchmark | KTH, Weizmann, IXMAS, Robust | | Aug 2018 |
View-independent human action recognition with Volume Motion Template on single stereo camera | Roh et al. | 2010 | observer | extroceptive | no | global | atomic | no | no | yes | not specified | Demonstration | inference | Recognition | offline | continuous | no | unidirectional | environment | mathematical | VMT, PMT | silhouette and depth maps | unsupervised | Benchmark | | hand movements, bowing | Aug 2018 |
A framework for heading-guided recognition of human activity | Rosales & Sclaroff | 2003 | observer | extroceptive | yes | global | atomic | no | no | yes | not specified | Demonstration | optimization | Recognition | offline | continuous | no | unidirectional | body | mathematical | EKF, PCA, EM | 3D trajectories | unsupervised | Benchmark | | walking, running, rollerblading, biking | Aug 2018 |
Hand-Object Interaction and Precise Localization in Transitive Action Recognition | Rosenfeld & Ullman | 2016 | observer | extroceptive | yes | local | atomic | no | no | no | not specified | Ground truth | regression | Recognition | offline | not specified | not specified | not specified | not specified | mathematical | DAG-RNN, SVM | face, hand object location probability map | supervised | Benchmark | Stanford-40 Actions | | Aug 2018 |
Human activity recognition in videos using a single example | Roshtkhari & Levine | 2013 | observer | extroceptive | yes | global | atomic | no | no | yes | not specified | Ground truth | regression | Recognition | offline | categorical | no | unidirectional | body | mathematical | Similarity mapping | HOG, SIFT | supervised | Benchmark | KTH, Weizmann, MSR II | | Aug 2018 |
A Multi-Scale Hierarchical Codebook Method for Human Action Recognition in Videos Using a Single Example | Roshtkhari & Levine | 2012 | observer | extroceptive | no | global | atomic | no | no | yes | intrinsic | Exploration | classification | Recognition | online | not specified | not specified | not specified | not specified | mathematical | Bag of video words, Code book | STV | supervised | Benchmark | KTH, Weizmann, MSR II | | Aug 2018 |
Learning the Consequences of Actions: Representing Effects as Feature Changes | Rudolph et al. | 2010 | agent | extroceptive | no | global | atomic | yes | no | yes | not specified | Demonstration | inference | Single-/Multi-step prediction | offline | continuous | yes | unidirectional | environment | mathematical | BN | object pose and presence | supervised | Simulation | | can tossing, ball throwing | Aug 2018 |
Learning sequential and continuous control | Ryan & Andreae | 1993 | agent | extroceptive | no | global | atomic | yes | yes | yes | not specified | Exploration | regression | Single-/Multi-step prediction | online | continuous | no | unidirectional | body | biomimetic | PP, CMAC | position | unsupervised | Simulation | | reachability | Aug 2018 |
Action Recognition Robust to Background Clutter by Using Stereo Vision | Sanchez-Riera et al. | 2012 | observer | extroceptive | no | global | atomic | no | no | yes | not specified | Ground truth | classification | Recognition | offline | categorical | no | unidirectional | body | mathematical | k-Means, SVM | BOW, Scene/Optical Flow | supervised | Benchmark | Ravel | | Aug 2018 |
Primitive Based Action Representation and Recognition | Sanmohan & Krüger | 2009 | observer | extroceptive | no | global | atomic | no | yes | yes | not specified | Demonstration | inference | Recognition | offline | categorical | no | unidirectional | body | mathematical | HMM, SCFG | trajectories | unsupervised | Combination | | walking, grasping, pushing | Aug 2018 |
Learning Discriminative Space–Time Action Parts from Weakly Labelled Videos | Sapienza et al. | 2014 | observer | extroceptive | no | meso | compound | yes | yes | yes | not specified | Ground truth | classification | Recognition | offline | categorical | no | not specified | body | mathematical | SVM-map | BOF | semi-supervised | Benchmark | KTH, YouTube, Hollywood2, HMDB | | Aug 2018 |
Encoding human actions with a frequency domain approach | Shah et al. | 2016 | observer | proprioceptive | no | meso | both | yes | no | yes | not specified | Demonstration | classification | Recognition | offline | not specified | not specified | not specified | not specified | mathematical | FFT, PCA, Clustering | joint configurations | semi-supervised | Benchmark | HDM05 | | Aug 2018 |
Learning Skeleton Stream Patterns with Slow Feature Analysis for Action Recognition | Shan et al. | 2014 | observer | extroceptive | no | global | atomic | yes | no | yes | not specified | Demonstration | classification | Recognition | offline | categorical | no | unidirectional | body | mathematical | SFA, k-Means | skeletal joint streams | unsupervised | Benchmark | MSR-Action 3D | | Aug 2018 |
Teaching Robots New Actions through Natural Language Instructions | She et al. | 2014 | agent | both | no | meso | atomic | no | yes | yes | extrinsic | Ground truth | inference | Planning | online | categorical | yes | unidirectional | both | mathematical | NLP, Vision Graphs | language, object poses, distances | self-supervised | Real Robot | | reaching, grasping | Aug 2018 |
Integration of spatial and temporal contexts for action recognition by self organizing neural networks | Shimozaki & Kuniyoshi | 2003 | observer | extroceptive | yes | global | atomic | no | no | yes | not specified | Ground truth | classification | Recognition | offline | not specified | not specified | not specified | not specified | biomimetic | SOM | 2D animation | supervised | Benchmark | | grasp, carry, place | Aug 2018 |
Navigating mobile robots with a modular neural architecture | Silva & Ribeiro | 2003 | agent | proprioceptive | no | meso | atomic | yes | no | yes | extrinsic | Demonstration | regression | Single-/Multi-step prediction | offline | categorical | yes | unidirectional | body | mathematical | MNN | robot state, world state | supervised | Real Robot | | navigation | Aug 2018 |
Spatiotemporal representation of 3D skeleton joints-based action recognition using modified spherical harmonics | Slaih & Chahir | 2016 | observer | extroceptive | no | global | atomic | no | no | yes | not specified | Ground truth | classification | Recognition | offline | continuous | no | unidirectional | body | mathematical | Spherical Harmonics, ELM | 3D body skeleton | supervised | Benchmark | MSR-Action 3D, G3D, Florence3D Action, UTKinect-Action | | Aug 2018 |
Leaving Some Stones Unturned: Dynamic Feature Prioritization for Activity Detection in Streaming Video | Su & Grauman | 2016 | observer | extroceptive | no | meso | compound | yes | yes | yes | not specified | Ground truth | regression | Single-/Multi-step prediction | offline | both | no | not specified | environment | mathematical | MDP, GMM | BOO, CNN-features | supervised | Benchmark | ADLs, UCF-101 | | Aug 2018 |
A sub-symbolic process underlying the usage-based acquisition of a compositional representation: Results of robotic learning experiments of goal-directed actions | Sugita & Tani | 2008 | agent | extroceptive | no | local | atomic | no | yes | yes | extrinsic | Demonstration | regression | Single-/Multi-step prediction | offline | categorical | yes | bidirectional | body | biomimetic | NN | colored patches, speed of wheel | supervised | Simulation | | reaching | Aug 2018 |
Action Disambiguation Analysis Using Normalized Google-Like Distance Correlogram | Sun & Liu | 2013 | observer | extroceptive | no | global | atomic | no | no | yes | not specified | Ground truth | classification | Recognition | offline | categorical | no | not specified | body | mathematical | NGLD+BOW, kNN, SVM | STIP, 3D-Sift, Spatial-Temporal Interest Point | supervised | Benchmark | Weizmann, UCF Sports | | Aug 2018 |
A novel hierarchical Bag-of-Words model for compact action representation | Sun et al. | 2016 | observer | extroceptive | yes | global | atomic | no | no | yes | not specified | Ground truth | classification | Recognition | offline | categorical | no | unidirectional | body | mathematical | Hierarchical BOW, SVM | 2D images | supervised | Benchmark | Hollywood2, Olympic Sports, YouTube, HMDB | | Aug 2018 |
A unified representation for reasoning about robot actions, processes, and their effects on objects | Tenorth & Beetz | 2012 | agent | both | yes | meso | atomic | yes | yes | yes | not specified | Hard coded | inference | Planning | offline | both | no | unidirectional | both | mathematical | Ontologies, Logic Programming | object shape and pose, body poses | supervised | Real Robot | | making pancake | Aug 2018 |
From motor to sensory processing in mirror neuron computational modelling | Tessitore et al. | 2010 | agent | both | no | meso | compound | yes | yes | yes | not specified | Ground truth | classification | Single-/Multi-step prediction | offline | continuous | yes | bidirectional | body | biomimetic | PCA, NN, MDN | scene descriptors | supervised | Benchmark | Human-Grasp | | Aug 2018 |
Ubiquitous robotics in physical human action recognition: A comparison between dynamic ANNs and GP | Theodoridis et al. | 2008 | observer | extroceptive | no | global | atomic | no | no | yes | not specified | Ground truth | classification | Recognition | offline | categorical | no | unidirectional | environment | mathematical | ANN, Genetic Programming | kinematic body model | supervised | Benchmark | | daily activities | Aug 2018 |
Behavior Histograms for Action Recognition and Human Detection | Thurau | 2007 | observer | extroceptive | no | global | atomic | yes | no | yes | not specified | Demonstration | classification | Recognition | offline | categorical | no | unidirectional | body | mathematical | k-Means | HOG | unsupervised | Benchmark | Weizmann | | Aug 2018 |
n-Grams of Action Primitives for Recognizing Human Behavior | Thurau & Hlavac | 2007 | observer | extroceptive | no | global | atomic | yes | no | yes | not specified | Demonstration | classification | Recognition | offline | categorical | no | unidirectional | body | mathematical | PCA, NGrams, Ward's clustering | eigenshapes | unsupervised | Benchmark | Weizmann | | Aug 2018 |
Recognizing Human Actions by Their Pose | Thurau & Hlavac | 2009 | observer | extroceptive | no | global | atomic | yes | yes | yes | not specified | Ground truth | classification | Single-/Multi-step prediction | offline | categorical | no | not specified | body | mathematical | k-Means Clustering, NMF | prototypical poses, HOG | semi-supervised | Benchmark | Weizmann | | Aug 2018 |
Joint classification of actions and object state changes with a latent variable discriminative model | Vafeias & Ramamoorthy | 2014 | observer | extroceptive | no | meso | atomic | no | no | yes | not specified | Ground truth | classification | Recognition | offline | categorical | no | unidirectional | environment | mathematical | CRF | object pose, body pose | supervised | Benchmark | | drink, push, stack, read | Aug 2018 |
Rational imitation for robots: the cost difference model | Vanderelst & Winfield | 2017 | agent | extroceptive | no | global | atomic | no | no | yes | not specified | Demonstration | optimization | Planning | offline | continuous | no | unidirectional | body | mathematical | Cost Difference Model | not specified | unsupervised | Real Robot | | navigation | Aug 2018 |
On the improvement of human action recognition from depth map sequences using Space–Time Occupancy Patterns | Vieira et al. | 2013 | observer | extroceptive | no | global | atomic | no | no | yes | not specified | Ground truth | classification | Recognition | offline | categorical | no | unidirectional | body | mathematical | Action graphs | 3D body skeleton | supervised | Benchmark | MSR-Action 3D | | Aug 2018 |
STOP: Space-Time Occupancy Patterns for 3D Action Recognition from Depth Map Sequences | Vieira et al. | 2012 | observer | extroceptive | no | global | atomic | no | no | yes | not specified | Ground truth | classification | Effect prediction | offline | categorical | no | not specified | body | mathematical | SVM, PCA | PCA-STOP | supervised | Benchmark | MSR-Action 3D | | Aug 2018 |
Multiple Kernel Learning and Optical Flow for Action Recognition in RGB-D Video | Viet et al. | 2015 | observer | extroceptive | no | global | atomic | yes | no | no | not specified | Demonstration | classification | Recognition | offline | not specified | not specified | not specified | not specified | mathematical | MKL classification | dense optical flow, SPHOF on RGB-D | supervised | Benchmark | MSR-Daily Activity, 3D ActionPairs | | Aug 2018 |
Probabilistic semantic models for manipulation action representation and extraction | Vuga et al. | 2014 | agent | extroceptive | no | meso | atomic | no | yes | yes | not specified | Demonstration | optimization | Recognition | offline | continuous | no | unidirectional | environment | mathematical | SEC | color, surface | supervised | Benchmark | | pouring, opening | Aug 2018 |
Power difference template for action recognition | Wang et al. | 2017 | observer | extroceptive | no | global | atomic | yes | no | yes | not specified | Demonstration | classification | Recognition | offline | categorical | no | unidirectional | body | mathematical | BOW, SVM | normalized projection histogram, motion kinetic velocity | supervised | Benchmark | KTH, UCF-Sports, UCF-101, HMDB | | Aug 2018 |
Hierarchical interpretation of human activities using competitive learning | Wechsler et al. | 2002 | observer | extroceptive | no | meso | compound | no | no | yes | not specified | Ground truth | classification | Recognition | offline | not specified | not specified | not specified | not specified | mathematical | LVQ, clustering | motion parameters | semi-supervised | Benchmark | | striking, grinding, swing, stirring | Aug 2018 |
Unsupervised learning of reflexive and action-based affordances to model adaptive navigational behavior | Weiller et al. | 2010 | agent | extroceptive | no | global | atomic | yes | yes | yes | not specified | Exploration | inference | Planning | online | continuous | yes | unidirectional | body | mathematical | Place Fields, Geometrical transition matrix | affordances | unsupervised | Real Robot | | avoidance, navigation | Aug 2018 |
Grounding Neural Robot Language in Action | Wermter et al. | 2005 | observer | extroceptive | no | global | atomic | yes | yes | yes | extrinsic | combination | regression | Single-/Multi-step prediction | offline | both | yes | bidirectional | body | biomimetic | Helmholtz machine, SOM | pose, motor direction, action word | unsupervised | Simulation | | go, pick, lift | Aug 2018 |
Efficient Action Recognition with MoFREAK | Whiten et al. | 2013 | observer | extroceptive | no | global | atomic | no | no | yes | not specified | Ground truth | classification | Recognition | offline | not specified | not specified | not specified | not specified | mathematical | SVM | MoFREAK descriptor | supervised | Benchmark | KTH, HMDB | | Aug 2018 |
A Simple Ontology of Manipulation Actions Based on Hand-Object Relations | Wörgötter et al. | 2013 | observer | extroceptive | no | meso | atomic | no | no | yes | not specified | Demonstration | classification | Recognition | offline | categorical | no | unidirectional | environment | mathematical | SEC, Ontologies | object and hand poses | supervised | Benchmark | | manipulation | Aug 2018 |
Integration of Heterogeneity for Human-Friendly Robotic Operations | Xi & Tarn | 1999 | agent | extroceptive | no | global | atomic | no | yes | yes | extrinsic | Demonstration | optimization | Single-/Multi-step prediction | online | continuous | yes | unidirectional | body | mathematical | Mapping function | force, poses | supervised | Real Robot | | obstacle avoidance | Aug 2018 |
Human action recognition framework by fusing multiple features | Xiao & Cheng | 2013 | observer | extroceptive | yes | global | atomic | no | no | yes | not specified | Ground truth | classification | Recognition | offline | not specified | not specified | not specified | not specified | mathematical | MMI, PSVM | depth motion maps, visual words, HOG | supervised | Benchmark | MSR-Action 3D | | Aug 2018 |
Human action learning via hidden Markov model | Yang et al. | 1997 | agent | extroceptive | no | meso | atomic | no | no | yes | not specified | Ground truth | inference | Single-/Multi-step prediction | offline | continuous | yes | unidirectional | environment | mathematical | HMM | cartesian gripper trajectory | supervised | Real Robot | | replace task | Aug 2018 |
Manipulation action tree bank: A knowledge resource for humanoids | Yang et al. | 2014 | not specified | not specified | no | not specified | compound | yes | yes | yes | not specified | Ground truth | classification | Planning | offline | not specified | not specified | not specified | not specified | mathematical | Tree banks | accelerometer, RGB-D / RGB | supervised | Benchmark | 50 Salads, TACoS | | Aug 2018 |
One-shot learning based pattern transition map for action early recognition | Yi et al. | 2017 | observer | extroceptive | no | global | both | yes | yes | yes | not specified | Demonstration | regression | Recognition | offline | categorical | no | unidirectional | environment | mathematical | Pattern transition maps, Q-Learning | pose data | semi-supervised | Benchmark | 3D ActionPairs, SYSU 3D HOI | | Aug 2018 |
Online Multimodal Ensemble Learning Using Self-Learned Sensorimotor Representations | Zambelli & Demiris | 2017 | agent | both | no | global | atomic | yes | yes | yes | not specified | Exploration | regression | Single-/Multi-step prediction | online | continuous | yes | unidirectional | both | mathematical | Ensemble Learning | joints, RGB images, touch, sound | self-supervised | Real Robot | | piano playing | Aug 2018 |
Learning the spatial semantics of manipulation actions through preposition grounding | Zampogiannis et al. | 2015 | agent | extroceptive | no | meso | atomic | yes | no | no | not specified | Ground truth | classification | Recognition | offline | both | yes | unidirectional | environment | mathematical | Collection of PVS | RGB-D video, spatial relation predicates on tracked objects | unsupervised | Benchmark | | pour, transfer, stack, stir | Aug 2018 |
View-Independent Human Action Recognition by Action Hypersphere in Nonlinear Subspace | Zhang & Zhuang | 2007 | observer | extroceptive | no | global | atomic | yes | no | yes | not specified | Demonstration | classification | Recognition | offline | categorical | no | unidirectional | body | mathematical | Hypersphere classification | motion history image, polar features | unsupervised | Benchmark | IXMAS | | Aug 2018 |
Unified robot learning of action labels and motion trajectories from 3D human skeletal data | Zhang et al. | 2016 | observer | proprioceptive | no | global | atomic | no | no | yes | not specified | Demonstration | classification | Single-/Multi-step prediction | offline | categorical | yes | unidirectional | environment | mathematical | DTW, GMM, SVM, GMR | bone vectors | semi-supervised | Benchmark | MSR-Daily Activity | | Aug 2018 |
Motion Context: A New Representation for Human Action Recognition | Zhang et al. | 2008 | observer | extroceptive | no | global | atomic | no | no | yes | not specified | Ground truth | classification | Recognition | offline | categorical | no | not specified | body | mathematical | SVM, PLSA | motion words, motion index, motion context | supervised | Benchmark | KTH, Weizmann | | Aug 2018 |
kPose: A New Representation For Action Recognition | Zhou et al. | 2010 | observer | extroceptive | no | global | atomic | yes | no | yes | not specified | Demonstration | classification | Recognition | offline | categorical | no | unidirectional | body | mathematical | Pose Weighted Distribution Model | prototypical poses, HOG | unsupervised | Benchmark | Weizmann | | Aug 2018 |
Human action recognition using multi-layer codebooks of key poses and atomic motions | Zhu et al. | 2016 | observer | extroceptive | no | global | atomic | no | no | yes | not specified | Ground truth | classification | Recognition | offline | categorical | no | unidirectional | body | mathematical | Codebooks, SVM, NBNN, RF | 3D body skeleton | supervised | Benchmark | CAD-60, MSRC-12 | | Aug 2018 |
Bootstrapping Q-Learning for Robotics from Neuro-Evolution Results | Zimmer & Doncieux | 2017 | agent | both | yes | global | atomic | yes | yes | yes | intrinsic | Exploration | optimization | Planning | offline | categorical | yes | unidirectional | environment | mathematical | Policy Search, Clustering | distance, tactile, poses | unsupervised | Simulation | | ball collecting, box pushing | Aug 2018 |