====== Research ======

Research at %%IIS%% is situated at the intersection of computer vision, machine learning and robotics, and focuses on adaptive perception-action systems as well as on image and video analysis. Our areas of activity include

  * object models for robotic interaction
  * perception for grasping and manipulation
  * systems that improve their performance with experience
  * video analysis for applications such as sports or human-computer interaction
  * links with the psychology and biology of perception

Most of our work is based on probabilistic models and inference.

The following list highlights some examples of our current and past work. Follow the links for more information.

{{ :research:visualcortex.png?nolink&260|Areas of the visual cortex}} **[[https://iis.uibk.ac.at/public/antonio/Research.html|Computational neuroscience]]** - Because vision tasks such as object recognition and motion and stereo analysis are so difficult for computers, many scientists and engineers have turned to neurophysiology for insights into how the human visual system solves them with astonishing efficiency and accuracy. Recent approaches to object recognition are mainly driven by the "edge doctrine" of visual processing pioneered by Hubel and Wiesel's work (which led to their 1981 Nobel Prize in Medicine). Edges - as detected by the responses of simple and complex cells - provide important information about the presence of shapes in visual scenes, but we consider their detection only a first step towards an interpretation of images. Our line of work focuses on intermediate-level processing areas, which operate on the initial simple- and complex-cell outputs to form neural representations of scene content that support robust shape inference and object recognition. \\ See our PLOS ONE publications, [[http://dx.doi.org/10.1371/journal.pone.0042058|Rodriguez-Sanchez and Tsotsos (2012)]] and [[http://dx.doi.org/10.1371/journal.pone.0098424|Azzopardi, Rodriguez-Sanchez, Piater and Petkov (2014)]], as well as our [[http://dx.doi.org/10.1109/TPAMI.2012.272|summary PAMI paper]].
<html><div style="clear:both"></div><br></html>
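To make the simple/complex-cell terminology concrete, here is a toy sketch of the classical energy model (illustrative only, not the models from the papers cited above): oriented Gabor filters stand in for simple cells, and a quadrature pair is combined into a phase-invariant complex-cell response.

<code python>
import numpy as np
from scipy.ndimage import convolve

def gabor(size=21, wavelength=6.0, theta=0.0, phase=0.0, sigma=4.0):
    """Oriented Gabor filter, the textbook model of a V1 simple cell."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr**2 + yr**2) / (2 * sigma**2))
    return envelope * np.cos(2 * np.pi * xr / wavelength + phase)

def complex_cell(image, theta):
    """Energy model: a quadrature pair of simple cells combined into a
    phase-invariant complex-cell response."""
    even = convolve(image, gabor(theta=theta, phase=0.0))
    odd = convolve(image, gabor(theta=theta, phase=np.pi / 2))
    return np.sqrt(even**2 + odd**2)

# Toy stimulus: a vertical line; the theta=0 channel responds most strongly.
image = np.zeros((64, 64))
image[:, 32] = 1.0
responses = {theta: complex_cell(image, theta)
             for theta in (0.0, np.pi / 4, np.pi / 2, 3 * np.pi / 4)}
</code>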
<html>
  <div style="border:0;float:right;margin:0 0 0 1.5em">
     <video width="280" height="170" controls preload="metadata">
       <source src="/public/videos/pacman_demo.ogg" type='video/ogg;codecs="theora, vorbis"'>
       <applet code="com.fluendo.player.Cortado.class" archive="/public/cortado.jar" width="280" height="170">
        <param name="autoPlay" value="false" />
        <param name="url" value="/public/videos/pacman_demo.ogg"/>
       </applet>
     </video> </div>
</html> **Multimodal, hierarchical models for object manipulation** - We have developed a generic framework for object grasping on our robotic platform. The main software modules perform object detection and pose estimation, grasp planning, path planning, and robot arm and hand trajectory execution. Over the course of the [[http://www.pacman-project.eu/|PaCMan project]], many of these modules will be replaced with advanced modules developed within the project. For example, the object detection component, currently based on state-of-the-art methods, will benefit from hierarchical compositional models that exploit both 2D and 3D information. This representation will be useful both for reasoning in cluttered environments, where only parts of an object are visible, and for improving object manipulation.
<html><div style="clear:both"></div><br></html>
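The module structure of such a pipeline can be sketched as follows; the function names, data types and placeholder bodies are purely illustrative stand-ins for the actual software modules.

<code python>
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Pose:
    position: tuple       # (x, y, z)
    orientation: tuple    # quaternion (x, y, z, w)

@dataclass
class Grasp:
    gripper_pose: Pose
    quality: float

def detect_and_estimate_pose(point_cloud) -> Pose:
    # Placeholder: a real module fits an object model to the point cloud.
    return Pose((0.5, 0.0, 0.1), (0.0, 0.0, 0.0, 1.0))

def plan_grasps(object_pose: Pose) -> List[Grasp]:
    # Placeholder: a real planner generates and ranks many candidates.
    x, y, z = object_pose.position
    return [Grasp(Pose((x, y, z + 0.15), (0.0, 1.0, 0.0, 0.0)), quality=0.9)]

def plan_path(grasp: Grasp) -> Optional[list]:
    # Placeholder: a real module plans a collision-free arm trajectory.
    return [grasp.gripper_pose]

def execute(trajectory) -> None:
    # Placeholder: a real module drives the arm and hand controllers.
    print("executing", len(trajectory), "waypoints")

def pick(point_cloud) -> Grasp:
    """End-to-end pick: every stage is a module that can be swapped out."""
    pose = detect_and_estimate_pose(point_cloud)
    for grasp in sorted(plan_grasps(pose), key=lambda g: -g.quality):
        trajectory = plan_path(grasp)   # fall back to the next-best grasp
        if trajectory is not None:
            execute(trajectory)
            return grasp
    raise RuntimeError("no feasible grasp found")
</code>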
<html>
  <div style="border:0;float:right;margin:0 0 0 1em">
     <video width="270" height="180" controls preload="metadata">
       <source src="/public/videos/bootstrapping.ogg" type='video/ogg;codecs="theora, vorbis"'>
       <applet code="com.fluendo.player.Cortado.class" archive="/public/cortado.jar" width="270" height="180">
        <param name="autoPlay" value="false" />
        <param name="url" value="/public/videos/bootstrapping.ogg"/>
       </applet>
     </video> </div>
</html>
**[[https://iis.uibk.ac.at/public/emre/research.html|Bootstrapped learning and emergent structuring of interdependent single- and multi-object affordances]]** - Inspired by infant development, we propose a learning framework for a developmental robotic system that benefits from bootstrapping: simpler learned structures (affordances), which encode the robot's interaction dynamics with the world, are used in learning more complex affordances ([[https://iis.uibk.ac.at/public/emre/papers/ICDL2014-Bootstrapping.pdf|ICDL2014-Bootstrapping]]). To discover the developmental order of the different affordances, we use an intrinsic-motivation approach that guides the robot to explore the actions that maximize its learning progress. During this learning, the robot also discovers structure by identifying and using the most distinctive object features for predicting affordances. The results show that the hierarchical structure and the developmental order emerge from learning dynamics guided by intrinsic-motivation mechanisms and distinctive-feature selection ([[https://iis.uibk.ac.at/public/emre/papers/ICDL2014-EmergentStructuring.pdf|ICDL2014-EmergentStructuring]]).
<html><div style="clear:both"></div><br></html>
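The learning-progress idea behind intrinsic motivation can be sketched in a few lines (a toy version, not the implementation from the papers): the agent keeps a running prediction-error history per action and prefers the action whose error is currently decreasing fastest.

<code python>
import random
from collections import defaultdict, deque

class LearningProgressExplorer:
    """Pick the action/skill whose prediction error is dropping fastest,
    a standard learning-progress signal for intrinsic motivation."""

    def __init__(self, actions, window=10, epsilon=0.1):
        self.actions = list(actions)
        self.window = window
        self.epsilon = epsilon
        self.errors = defaultdict(lambda: deque(maxlen=2 * window))

    def record(self, action, prediction_error):
        self.errors[action].append(prediction_error)

    def progress(self, action):
        e = list(self.errors[action])
        if len(e) < 2 * self.window:
            return float("inf")      # barely explored: maximally interesting
        old = sum(e[:self.window]) / self.window
        new = sum(e[self.window:]) / self.window
        return old - new             # drop in error = learning progress

    def choose(self):
        if random.random() < self.epsilon:
            return random.choice(self.actions)   # keep some pure exploration
        return max(self.actions, key=self.progress)
</code>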
{{ :research:teney-2013-crv4.jpg?nolink&240|Probabilistic models of object appearance}} **[[:research:appearance-models|Probabilistic models of appearance]] for object recognition and pose estimation in 2D images** - We developed methods to represent the appearance of objects, together with inference methods to identify them in images of cluttered scenes. The goal is to extract as much as possible of the information conveyed by 2D images alone, without resorting to stereo or other 3D sensing techniques. We are also interested in recovering the precise pose (3D orientation) of objects, so as to ultimately use this information for robotic interaction and grasping.
<html><div style="clear:both"></div><br></html>
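The generic inference idea, scoring a discrete set of pose hypotheses against learned appearance templates, can be sketched like this (a simplified stand-in with made-up feature vectors, not the cited method):

<code python>
import numpy as np

def pose_posterior(observed, templates, sigma=0.1):
    """Posterior over a discrete set of pose hypotheses for one object.

    observed  -- feature vector extracted from the image window
    templates -- dict: pose label -> expected features (from training views)
    Assumes a Gaussian appearance likelihood and a uniform pose prior.
    """
    poses = list(templates)
    scores = np.array([np.exp(-np.sum((observed - templates[p]) ** 2)
                              / (2 * sigma ** 2)) for p in poses])
    return dict(zip(poses, scores / scores.sum()))

# Toy usage with 2-D "features" and three pose hypotheses.
templates = {"front": np.array([1.0, 0.0]),
             "side":  np.array([0.0, 1.0]),
             "back":  np.array([-1.0, 0.0])}
print(pose_posterior(np.array([0.9, 0.1]), templates))
</code>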
{{ :research:grasp_density.jpg?nolink&100|Grasp density}} **//[[:research:grasp-densities|Grasp Densities]]//: Dense, Probabilistic Grasp Affordance Representations** - Motivated by autonomous robots that need to acquire object manipulation skills on the fly, we are developing methods for representing affordances nonparametrically, by samples drawn from an underlying affordance distribution. We started by representing graspability as distributions of object-relative gripper poses that yield successful grasps ([[@/publications#Detry-2011-PJBR|Detry et al. 2011]]).
<html><div style="clear:both"></div><br></html>
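The core idea, a continuous grasp affordance represented by samples, can be sketched with a kernel density estimate. Real grasp densities live on the 6-DOF pose space SE(3); this simplified sketch uses 3-D gripper positions and toy data so that scipy's KDE applies directly.

<code python>
import numpy as np
from scipy.stats import gaussian_kde

# Toy data: object-relative gripper positions of successful grasps.
rng = np.random.default_rng(0)
successful = np.vstack([
    rng.normal([0.00, 0.0, 0.12], 0.010, (40, 3)),   # grasps from the top
    rng.normal([0.08, 0.0, 0.02], 0.015, (25, 3)),   # grasps from the side
])

# Nonparametric "grasp density": a KDE over the grasp samples.
density = gaussian_kde(successful.T)

# Evaluate the graspability of candidate gripper positions ...
candidates = np.array([[0.0, 0.0, 0.12], [0.2, 0.2, 0.20]])
print(density(candidates.T))   # high for the first, near zero for the second

# ... or draw fresh grasp candidates from the affordance distribution.
new_grasps = density.resample(5)
</code>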
{{ :research:edge-model.png?nolink&100|Edge model}} **[[:research:mrf-object-models|Probabilistic Structural Object Models]] for Recognition and 3D Pose Estimation** - Motivated by autonomous robots that need to acquire object models and manipulation skills on the fly, we developed learnable object models that represent objects as Markov networks, where nodes represent feature types and arcs represent spatial relations ([[@/publications#Detry-2009-TPAMI|Detry et al. 2009]]). These models can handle deformations, occlusion and clutter. Object detection, recognition and pose estimation are solved using classical methods of probabilistic inference.
<html><div style="clear:both"></div><br></html>
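In a heavily simplified, discretized form (1-D positions instead of 6-DOF poses, and a star-shaped network with assumed feature types and offsets), inference in such a model reduces to passing sum-product messages from the feature nodes to the object node:

<code python>
import numpy as np

GRID = np.arange(20)   # candidate positions, discretized for brevity

def pairwise(offset, sigma=1.0):
    """psi(object, feature): the feature lies near object position + offset."""
    diff = GRID[None, :] - (GRID[:, None] + offset)
    return np.exp(-diff ** 2 / (2 * sigma ** 2))

def evidence(detections, sigma=0.8):
    """Unary potential from noisy feature detections in the image."""
    phi = sum(np.exp(-(GRID - d) ** 2 / (2 * sigma ** 2)) for d in detections)
    return phi + 1e-9

# Model: two feature types at fixed offsets from the object reference point.
relations = {"corner": pairwise(-3), "blob": pairwise(+4)}
observed = {"corner": [4, 5], "blob": [12]}

# Sum-product messages from each feature node into the object node.
belief = np.ones_like(GRID, dtype=float)
for name, psi in relations.items():
    belief *= psi @ evidence(observed[name])
belief /= belief.sum()
print("most likely object position:", GRID[np.argmax(belief)])
</code>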
{{ :research:rlvc-tree.gif?200|Animated decision tree}} **Reinforcement Learning of Visual Classes** - Applying learning directly to visual input is challenging because of the high dimensionality of raw pixel data. Introducing concepts from appearance-based computer vision into reinforcement learning, our RLVC algorithm ([[@/publications#Jodogne-2007-JAIR|Jodogne & Piater 2007]]) initially treats the visual input space as a single, perceptually aliased state, which is then iteratively split on local visual features, forming a decision tree. In this way, perceptual learning and policy learning are interleaved, and the system learns to focus its attention on relevant visual features.
<html><div style="clear:both"></div><br></html>
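The perceptual side of this scheme can be skeletonized as a decision tree over binary feature-presence tests; the statistics that decide when and on which feature to split a state (driven by TD errors in RLVC) are omitted in this sketch.

<code python>
class PerceptualNode:
    """Node of the perceptual decision tree: internal nodes test for the
    presence of one local visual feature, leaves are RL states."""

    def __init__(self, state_id):
        self.state_id = state_id   # valid at leaves
        self.feature = None        # set when the node becomes internal
        self.present = None
        self.absent = None

    def state(self, image_features):
        if self.feature is None:
            return self.state_id
        child = self.present if self.feature in image_features else self.absent
        return child.state(image_features)

class RLVCTree:
    def __init__(self):
        self.root = PerceptualNode(0)
        self.leaves = {0: self.root}
        self.n_states = 1

    def state(self, image_features):
        """Map an observation (the set of detected features) to a state."""
        return self.root.state(image_features)

    def split(self, state_id, feature):
        """Refine a perceptually aliased state with one more feature test."""
        leaf = self.leaves.pop(state_id)
        leaf.feature = feature
        leaf.present = PerceptualNode(state_id)        # reuse the old id
        leaf.absent = PerceptualNode(self.n_states)    # allocate a new state
        self.leaves[state_id] = leaf.present
        self.leaves[self.n_states] = leaf.absent
        self.n_states += 1

# Usage: one split turns the single aliased state into two states.
tree = RLVCTree()
tree.split(0, "red-blob-top-left")
print(tree.state({"red-blob-top-left"}), tree.state(set()))   # 0 1
</code>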
{{ :research:joint-space.png?120|Joint space}} **Reinforcement Learning of Perception-Action Categories** - Our RLJC algorithm ([[@/publications#Jodogne-2006-ECML-222|Jodogne & Piater 2006]]) extends RLVC to the combined perception-action space. This constitutes a promising approach to the long-standing problem of applying reinforcement learning to high-dimensional and/or continuous action spaces.
<html><div style="clear:both"></div><br></html>
{{ :research:maree-2005-cvpr-fig6.png?nolink|Patches}} **Image Classification with Extremely Randomized Trees** - We developed a generic method that achieves highly competitive results on several very different data sets ([[@/publications#Maree-2005-CVPR|Marée et al. 2005]]). It is based on three straightforward insights: //randomness// to keep classifier bias down, //local patches// to gain robustness to partial occlusions and to global phenomena such as viewpoint changes, and //normalization// to achieve invariance to various transformations. The key contribution was probably the demonstration of how far randomization can take us: local patches are extracted at random, rotational invariance is obtained by randomly rotating the training patches, and classification is done using Extremely Randomized Trees.
<html><div style="clear:both"></div><br></html>
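The recipe compresses into a few lines with scikit-learn's ExtraTreesClassifier (the paper predates scikit-learn, so parameters, the 90-degree rotations and the stand-in data below are illustrative only):

<code python>
import numpy as np
from sklearn.ensemble import ExtraTreesClassifier

def random_patches(image, n=10, size=16, rng=None):
    """Extract n randomly placed square patches as flat feature vectors;
    random 90-degree rotations stand in for the paper's arbitrary rotations."""
    rng = rng or np.random.default_rng()
    h, w = image.shape
    patches = []
    for _ in range(n):
        y, x = rng.integers(0, h - size), rng.integers(0, w - size)
        patch = np.rot90(image[y:y + size, x:x + size], k=rng.integers(4))
        patches.append(patch.ravel())
    return patches

# Each training patch inherits the label of its source image.
images = [np.random.rand(64, 64) for _ in range(20)]   # stand-in data
labels = [i % 2 for i in range(20)]
X, y = [], []
for img, lab in zip(images, labels):
    for p in random_patches(img):
        X.append(p)
        y.append(lab)
clf = ExtraTreesClassifier(n_estimators=100).fit(np.array(X), np.array(y))

def classify(image):
    """Classify an image by averaging tree votes over its random patches."""
    proba = clf.predict_proba(np.array(random_patches(image)))
    return clf.classes_[proba.mean(axis=0).argmax()]

print(classify(images[0]))
</code>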
{{ :research:pdm-tracking.jpg?nolink&200|}} **[[:research:object-tracking|Real-Time Object Tracking]] in Complex Scenes** - While most work at the time was based on background subtraction, we developed new methods for complex scenes that track local features, for robustness to occlusions and background changes, and that exploit spatial coherence, for robustness to overlapping, similar-looking targets. In the context of soccer, we also developed methods for robust terrain tracking, both model-based and model-free, absolute and incremental.
<html><div style="clear:both"></div><br></html>
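A minimal version of the local-feature idea, using OpenCV's pyramidal Lucas-Kanade tracker (generic KLT as a sketch, not our published trackers):

<code python>
import cv2

def track_features(video_path):
    """Follow local features from frame to frame with pyramidal Lucas-Kanade."""
    cap = cv2.VideoCapture(video_path)
    ok, frame = cap.read()
    if not ok:
        return
    prev = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    points = cv2.goodFeaturesToTrack(prev, maxCorners=200,
                                     qualityLevel=0.01, minDistance=7)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        points, status, _ = cv2.calcOpticalFlowPyrLK(prev, gray, points, None)
        points = points[status.ravel() == 1].reshape(-1, 1, 2)  # keep tracked
        prev = gray
        if len(points) < 50:   # re-detect when too many features are lost
            points = cv2.goodFeaturesToTrack(prev, maxCorners=200,
                                             qualityLevel=0.01, minDistance=7)
        # A full tracker would group features per target here and enforce
        # spatial coherence to keep overlapping, similar-looking targets apart.
</code>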