Efficient Generic Face Model Fitting to Images and Videos

Authors: Luis Unzueta, Waldir Pimenta, Jon Goenetxea, Luís Paulo Santos, Fadi Dornaika

Date: 01.05.2014

Image and Vision Computing



Abstract

In this paper we present a robust and lightweight method for the automatic fitting of deformable 3D face models on facial images. Popular fitting techniques such as those based on statistical models of shape and appearance require a training stage based on a set of facial images and their corresponding facial landmarks, which have to be manually labeled. Therefore, new images in which to fit the model cannot differ too much in shape and appearance (including illumination variation, facial hair, wrinkles, etc.) from those used for training. By contrast, our approach can fit a generic face model in two steps: (1) the detection of facial features based on local image gradient analysis and (2) the backprojection of a deformable 3D face model through the optimization of its deformation parameters. The proposed approach retains the advantages of both learning-free and learning-based approaches. Thus, we can estimate the position, orientation, shape and actions of faces, and initialize user-specific face tracking approaches, such as Online Appearance Models (OAM), which have been shown to be more robust than generic user tracking approaches. Experimental results show that our method outperforms other fitting alternatives under challenging illumination conditions and with a computational cost that allows its implementation on devices with low hardware specifications, such as smartphones and tablets. Our proposed approach lends itself nicely to many frameworks addressing semantic inference in face images and videos.
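The second step described above, backprojecting a 3D face model by optimizing its parameters, amounts to minimizing the 2D reprojection error of the model's vertices against the detected facial landmarks. The sketch below illustrates that idea for the rigid (pose-only) case; the 4-point toy model, the fixed focal length, and all function names are illustrative assumptions and not the paper's implementation, which additionally optimizes shape and action deformation parameters of a deformable model.

```python
import numpy as np
from scipy.optimize import least_squares

# Hypothetical toy setup: a rigid 4-point 3D "face" and a pinhole camera.
# Only the 6 rigid pose parameters are optimized here, to keep the sketch short.
MODEL_3D = np.array([
    [0.0, 0.0, 0.0],    # nose tip
    [-3.0, 3.0, -2.0],  # left eye corner
    [3.0, 3.0, -2.0],   # right eye corner
    [0.0, -4.0, -1.5],  # chin
])
FOCAL = 500.0  # assumed focal length in pixels

def rodrigues(rvec):
    """Axis-angle vector -> 3x3 rotation matrix (Rodrigues' formula)."""
    theta = np.linalg.norm(rvec)
    if theta < 1e-12:
        return np.eye(3)
    k = rvec / theta
    K = np.array([[0.0, -k[2], k[1]],
                  [k[2], 0.0, -k[0]],
                  [-k[1], k[0], 0.0]])
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)

def project(params, pts3d):
    """Project 3D points with pose params = [rx, ry, rz, tx, ty, tz]."""
    R = rodrigues(params[:3])
    cam = pts3d @ R.T + params[3:]          # rotate, then translate into camera frame
    return FOCAL * cam[:, :2] / cam[:, 2:3]  # perspective division

def fit_pose(landmarks_2d, init=(0, 0, 0, 0, 0, 50.0)):
    """Minimize the reprojection error between projected model and landmarks."""
    residual = lambda p: (project(p, MODEL_3D) - landmarks_2d).ravel()
    return least_squares(residual, np.asarray(init, dtype=float)).x

# Synthesize landmarks from a known pose, then recover the pose from them.
true_pose = np.array([0.1, -0.2, 0.05, 1.0, -0.5, 60.0])
landmarks = project(true_pose, MODEL_3D)
est = fit_pose(landmarks)
print("max reprojection error:", np.abs(project(est, MODEL_3D) - landmarks).max())
```

In the paper's deformable setting, the parameter vector would additionally contain shape and action coefficients, and the residual would be computed against landmarks detected via the gradient-based step rather than synthesized.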

BibTeX

@Article{unzueta2014efficient,
  author   = {Luis Unzueta and Waldir Pimenta and Jon Goenetxea and Luís Paulo Santos and Fadi Dornaika},
  title    = {Efficient Generic Face Model Fitting to Images and Videos},
  journal  = {Image and Vision Computing},
  pages    = {321-334},
  number   = {5},
  volume   = {32},
  keywords = {Face model fitting, Head pose estimation, Facial feature detection, Face tracking},
  abstract = {In this paper we present a robust and lightweight method for the automatic fitting of deformable 3D face models on facial images. Popular fitting techniques such as those based on statistical models of shape and appearance require a training stage based on a set of facial images and their corresponding facial landmarks, which have to be manually labeled. Therefore, new images in which to fit the model cannot differ too much in shape and appearance (including illumination variation, facial hair, wrinkles, etc.) from those used for training. By contrast, our approach can fit a generic face model in two steps: (1) the detection of facial features based on local image gradient analysis and (2) the backprojection of a deformable 3D face model through the optimization of its deformation parameters. The proposed approach retains the advantages of both learning-free and learning-based approaches. Thus, we can estimate the position, orientation, shape and actions of faces, and initialize user-specific face tracking approaches, such as Online Appearance Models (OAM), which have been shown to be more robust than generic user tracking approaches. Experimental results show that our method outperforms other fitting alternatives under challenging illumination conditions and with a computational cost that allows its implementation on devices with low hardware specifications, such as smartphones and tablets. Our proposed approach lends itself nicely to many frameworks addressing semantic inference in face images and videos.},
  isi      = {1},
  date     = {2014-05-01},
  year     = {2014},
}
Vicomtech
