The Perception-on-Purpose (POP) project is an effort by European researchers to develop technology enabling a robot to integrate visual and audio data to facilitate purposeful perception. "It is not that easy to decide what is foreground and what is background using sound alone, but by combining the two modalities, sound and vision, it becomes much easier," says project coordinator Radu Horaud. "If you are able to locate 10 sound sources in 10 different directions, but if in one of these directions you see a face, then you can much more easily concentrate on that sound and throw out the other ones."

The researchers followed this strategy in their development of algorithms that allowed their robot, Popeye, to reliably identify speakers. "Most often, sound research is conducted in specialized labs, with arrays of microphones and a very controlled acoustic environment," Horaud says. "But we integrated our two microphones and two cameras onto the head of our Popeye. The idea is to have an agent-centered cognitive system."

Horaud believes there is a link between multi-sensory perception and cognition, and that some modern artificial intelligence applications are constrained by their inability to learn from their environment.
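The fusion strategy Horaud describes can be sketched in a few lines: given several candidate sound directions and the directions of detected faces, keep only the sound source that lies closest to a face and discard the rest. This is a minimal illustrative sketch, not the POP project's actual algorithm; the function name, the angular threshold, and the nearest-neighbour rule are assumptions made for the example.

```python
def select_speaker_direction(sound_dirs, face_dirs, max_sep=15.0):
    """Return the sound direction (azimuth, degrees) closest to a
    detected face, or None if no sound source lies within max_sep
    degrees of any face.

    Hypothetical sketch of audio-visual fusion: real systems would
    work with localization likelihoods, not clean angle lists.
    """
    best = None
    best_sep = max_sep
    for s in sound_dirs:
        for f in face_dirs:
            # Angular separation, wrapped into [0, 180] degrees.
            sep = abs((s - f + 180.0) % 360.0 - 180.0)
            if sep <= best_sep:
                best, best_sep = s, sep
    return best

# Ten candidate sound directions, one of which coincides with a face:
sounds = [0, 36, 72, 108, 144, 180, 216, 252, 288, 324]
faces = [110.0]
print(select_speaker_direction(sounds, faces))  # prints 108
```

The visual channel acts as a filter here: the nine sound directions with no face nearby are thrown out, mirroring the "concentrate on that sound" behaviour described above.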
This entry was posted on Thursday, July 26th, 2012 at 5:24 am and is filed under CS, Web Design and web optimization.