Enhancing the Visual Perception Capabilities of Kompaï Robot Using Parallel Processing
Kompeye
Duration:
01.02.2012 - 31.07.2013
Robots could be the perfect home helpers, since they can work without breaks and without complaining. Unfortunately, current technology in autonomous robotics is still far from this dream. Nevertheless, improving the perception of today's robots will let them evolve towards higher levels of intelligence. In recent years, academic research has achieved important advances in the field of computer vision that allow humans to interact with machines in a more natural way. Such advances include the detection of human figures, heads, faces and facial expressions, even with moving cameras, along with gesture detection and recognition. The gap between these perceptive technologies on one side and their effective implementation in autonomous service robots aimed at looking after people on the other is what the KOMPEYE experiment sets out to narrow.
The general objective is to apply, in a combined and complementary way, computer vision techniques for markerless, view-independent detection of human figures, heads and faces, facial emotion recognition, and gesture spotting and recognition, in order to give an autonomous robot advanced cognitive visual perception. Thanks to the results of this experiment, service robots will be able to look after humans, detecting when they are in trouble and acting accordingly. The experiment will be guided and evaluated through four demonstrators in a real, uncontrolled indoor environment using a modified version of the Kompaï companion robot from Robosoft.
The main developments in the experiment focus on integrating, in an optimal way, a set of advanced computer vision algorithms that will ultimately allow the robot to detect people in trouble using only its own cameras. The experiment consists of four technical tasks. The main innovation throughout the project is the synergistic application of all these techniques for advanced robot visual perception, exploiting parallel processing on GPUs to close the gaps in human-robot communication present in the current state of the art.
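As an illustration of the kind of GPU-accelerated building block such a pipeline relies on, the sketch below runs a Haar-cascade face detector on the graphics card using OpenCV's gpu module (the API available around OpenCV 2.4, contemporary with the project). This is a minimal sketch under stated assumptions, not the project's actual implementation: the cascade file, camera index and detector parameters are placeholders, and the KOMPEYE pipeline combines several detectors (figure, head, face, gestures), not just this one.

```cpp
// Minimal sketch: Haar-cascade face detection offloaded to the GPU with
// OpenCV 2.4's gpu module. Illustrative only; not the KOMPEYE implementation.
#include <opencv2/opencv.hpp>
#include <opencv2/gpu/gpu.hpp>

int main()
{
    // Placeholder cascade file shipped with OpenCV; any detector model could be used.
    cv::gpu::CascadeClassifier_GPU cascade;
    if (!cascade.load("haarcascade_frontalface_default.xml"))
        return 1;

    cv::VideoCapture cap(0);                     // placeholder camera index
    cv::Mat frame, detections_host;
    cv::gpu::GpuMat frame_gpu, gray_gpu, detections_gpu;

    while (cap.read(frame))
    {
        frame_gpu.upload(frame);                 // copy the frame to GPU memory
        cv::gpu::cvtColor(frame_gpu, gray_gpu, CV_BGR2GRAY);

        // Detection runs entirely on the GPU; scale factor and minimum
        // neighbours are illustrative defaults.
        int n = cascade.detectMultiScale(gray_gpu, detections_gpu, 1.2, 4);

        if (n > 0)
        {
            // Bring only the detected rectangles back to the CPU for drawing.
            detections_gpu.colRange(0, n).download(detections_host);
            const cv::Rect* faces = detections_host.ptr<cv::Rect>();
            for (int i = 0; i < n; ++i)
                cv::rectangle(frame, faces[i], cv::Scalar(0, 255, 0), 2);
        }

        cv::imshow("faces", frame);
        if (cv::waitKey(1) == 27) break;         // Esc quits
    }
    return 0;
}
```

Keeping the image data on the GPU for colour conversion and detection, and downloading only the resulting rectangles, avoids repeated host-device transfers; this is the kind of parallel-processing gain the experiment exploits to run several perception modules on the robot's cameras in real time.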