Neuromorphic vision and attention for mobile robots

Laurent Itti

Depts of Computer Science, Psychology and Neuroscience
University of Southern California, USA

Abstract. In recent years, a number of neurally-inspired computational models have emerged which demonstrate unparalleled performance, flexibility, and adaptability in coping with real-world inputs. In the visual domain, in particular, such models are making great strides in tasks including focusing attention onto the most important locations in a scene, recognizing attended objects, computing contextual information in the form of the 'gist' of the scene, and planning/executing visually-guided motor actions, among many other functions. However, these models have not yet been able to demonstrate much higher-level or cognitive computational ability. On the other hand, symbolic models from artificial intelligence have reached significant maturity in their cognitive reasoning abilities, but the worlds in which they can operate have been necessarily simplified (e.g., a chess board, a virtual maze). In this talk I will present the latest developments in our laboratory and others which attempt to bridge the gap between these two disciplines, neural modeling and artificial intelligence, in developing the next generation of robots. I will briefly review a number of efforts which aim at building models that can both process real-world inputs in robust and flexible ways, and perform cognitive reasoning on the symbols extracted from these inputs. I will draw from examples in the biological/computer vision fields, including algorithms for complex scene understanding and for robot navigation.
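To make the attention mechanism mentioned above concrete, the following is a minimal sketch of saliency-based attention in the spirit of the Itti & Koch model: center-surround differences computed at several spatial scales on an intensity channel, normalized and summed into a saliency map whose maximum marks the attended location. This is a simplified illustration (intensity only, fixed scales chosen here for demonstration), not the full multi-channel model with color and orientation features.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def saliency_map(image):
    """Crude intensity-only saliency map: absolute center-surround
    differences at a few scales, each normalized, then summed."""
    img = image.astype(float)
    sal = np.zeros_like(img)
    # (center sigma, surround sigma) pairs -- illustrative choices
    for center, surround in [(1, 4), (2, 8), (3, 12)]:
        c = gaussian_filter(img, center)    # fine-scale response
        s = gaussian_filter(img, surround)  # coarse-scale response
        diff = np.abs(c - s)                # center-surround contrast
        if diff.max() > 0:
            diff /= diff.max()              # per-scale normalization
        sal += diff
    return sal / sal.max() if sal.max() > 0 else sal

# A small bright blob on a dark background should attract attention.
img = np.zeros((64, 64))
img[30:34, 40:44] = 1.0
sal = saliency_map(img)
y, x = np.unravel_index(np.argmax(sal), sal.shape)  # attended location
```

In a robot, the attended location (y, x) would drive the next gaze shift or guide object recognition to that region; the full model additionally suppresses the winning location (inhibition of return) so attention moves on to the next most salient point.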