European scientists have developed a robot that navigates using human-like visual processing and object detection, as a tool for investigating how the brain responds to its environment while the body is moving. “It seems to be a trend, from neuroscience to computer science, to look at the brain for designing new systems,” says Tomaso Poggio of the Massachusetts Institute of Technology’s Center for Biological and Computational Learning.

The wheeled machine features a movable head that sees stereoscopically through a pair of cameras and is controlled by algorithms designed to imitate components of the human visual system. A simulated neural network updates the robot’s position relative to its surroundings, continually adjusting to each new input in imitation of human visual processing and movement planning. By mirroring object recognition, motion estimation, and decision making, the robot navigates a room, moving toward specific targets while avoiding walls and obstacles.

Heiko Neumann of the University of Ulm’s Vision and Perception Lab says neuroscientists typically concentrate on a specific aspect of vision and motion, but building a real, human-like computer navigation model requires integrating these aspects into a “coherent model architecture.” Project coordinator Mark Greenlee of Germany’s University of Regensburg says potential applications of the robot’s technology include intelligent wheelchairs capable of easy indoor navigation. Poggio says we are “on the cusp of a new stage where artificial intelligence is getting information from neuroscience.”
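The article does not describe the robot's actual control algorithms, but the loop it sketches (perceive the surroundings, update position, steer toward a target while avoiding obstacles) can be illustrated in miniature. The sketch below is purely hypothetical: the function name, the potential-field heuristic, and all parameters are illustrative assumptions, not the project's method.

```python
import math

def navigate(start, target, obstacles, step=0.2, safe_dist=1.0, max_iters=500):
    """Toy sense-plan-act loop (hypothetical illustration, not the robot's code).

    Each iteration "perceives" the target and nearby obstacles, combines an
    attractive pull toward the target with repulsive pushes away from
    obstacles (a classic potential-field heuristic), and takes one small step.
    Returns the list of 2-D positions visited.
    """
    x, y = start
    path = [(x, y)]
    for _ in range(max_iters):
        dx, dy = target[0] - x, target[1] - y
        dist = math.hypot(dx, dy)
        if dist < step:                      # close enough: snap to the target
            path.append(target)
            return path
        ax, ay = dx / dist, dy / dist        # unit attraction toward target
        for ox, oy in obstacles:
            d = math.hypot(x - ox, y - oy)
            if 1e-9 < d < safe_dist:         # obstacle inside the safety radius
                w = safe_dist / d - 1.0      # repulsion grows as distance shrinks
                ax += (x - ox) / d * w
                ay += (y - oy) / d * w
        norm = math.hypot(ax, ay) or 1.0     # avoid division by zero
        x += step * ax / norm                # act: take one fixed-length step
        y += step * ay / norm
        path.append((x, y))
    return path                              # gave up after max_iters steps
```

For example, `navigate((0.0, 0.0), (5.0, 0.0), [(2.5, 0.1)])` steers around the single obstacle and ends at the target. A real system would replace the hand-coded attraction/repulsion terms with the learned visual and motion-planning components the article describes.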
This entry was posted on Wednesday, July 1st, 2009 at 10:06 pm and is filed under Computer Science and Engineering News.