Efficient Deformable 3D Face Model Fitting to Monocular Images

Authors: Luis Unzueta Irurtia, Waldir Pimenta, Jon Goenetxea Imaz, Luís Paulo Santos, Fadi Dornaika

Date: 29.07.2015

Abstract

In this work, we present a robust and lightweight method for the automatic fitting of deformable 3D face models to facial images. Popular fitting techniques, such as those based on statistical models of shape and appearance, require a training stage in which facial landmarks are manually tagged on a set of facial images. As a result, new images to which the model is fitted cannot differ too much in shape and appearance (including illumination changes, facial hair, wrinkles, etc.) from those used for training. By contrast, our approach fits a generic face model in two steps: (1) the localization of facial features based on local image gradient analysis, and (2) the backprojection of a deformable 3D face model through the optimization of its deformation parameters. The proposed approach retains the advantages of both learning-free and learning-based methods. Consequently, we can estimate the position, orientation, shape and actions of faces, and initialize user-specific face tracking approaches, such as Online Appearance Models (OAMs), which have proven more robust than generic user tracking approaches. Experimental results show that our method outperforms other fitting alternatives under challenging illumination conditions, with a computational footprint that allows it to run on devices with limited processing power, such as smartphones and tablets. The proposed approach lends itself nicely to many frameworks addressing semantic inference in face images and videos.
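The second step of the abstract, backprojecting a deformable 3D model by optimizing its deformation parameters, can be illustrated with a minimal sketch. The following is not the authors' implementation: the landmark count, the number of deformation modes, the pinhole camera, and all numeric values are illustrative assumptions. It fits the coefficients of a linear deformable shape model by minimizing the 2D reprojection error against observed landmarks.

```python
# Illustrative sketch (not the paper's implementation) of deformable model
# backprojection: recover deformation coefficients of a linear 3D shape model
# by minimizing the reprojection error against observed 2D landmarks.
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(0)

N_LANDMARKS = 8   # hypothetical number of facial landmarks
N_MODES = 3       # hypothetical number of deformation modes

# Linear deformable model: shape = mean + sum_k alpha_k * basis_k
mean_shape = rng.normal(size=(N_LANDMARKS, 3))
basis = rng.normal(size=(N_MODES, N_LANDMARKS, 3))

def project(points_3d, focal=500.0):
    """Simple pinhole projection onto the image plane (camera offset so z > 0)."""
    z = points_3d[:, 2] + 10.0
    return focal * points_3d[:, :2] / z[:, None]

def residuals(alpha, observed_2d):
    """Stacked 2D reprojection errors for deformation coefficients alpha."""
    shape = mean_shape + np.tensordot(alpha, basis, axes=1)
    return (project(shape) - observed_2d).ravel()

# Synthesize "observed" landmarks from known parameters, then recover them.
true_alpha = np.array([0.5, -0.3, 0.2])
observed = project(mean_shape + np.tensordot(true_alpha, basis, axes=1))

fit = least_squares(residuals, x0=np.zeros(N_MODES), args=(observed,))
print(np.round(fit.x, 3))
```

In a real fitting pipeline the "observed" landmarks would come from step (1), the gradient-based facial feature localization, and the optimization would also include the head pose; here pose is held fixed to keep the sketch short.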

BibTeX

@Article{Unzueta2015,
  title    = {Efficient Deformable 3D Face Model Fitting to Monocular Images},
  author   = {Luis Unzueta Irurtia and Waldir Pimenta and Jon Goenetxea Imaz and Luís Paulo Santos and Fadi Dornaika},
  pages    = {143-168},
  keywords = {2D shape landmarks, 3D face model, deformable model backprojection, facial actions, facial expression recognition, facial feature extraction, facial parts, face gesture analysis, face model fitting, face recognition, face tracking, gradient maps, head},
  abstract = {In this work, we present a robust and lightweight method for the automatic fitting of deformable 3D face models to facial images. Popular fitting techniques, such as those based on statistical models of shape and appearance, require a training stage in which facial landmarks are manually tagged on a set of facial images. As a result, new images to which the model is fitted cannot differ too much in shape and appearance (including illumination changes, facial hair, wrinkles, etc.) from those used for training. By contrast, our approach fits a generic face model in two steps: (1) the localization of facial features based on local image gradient analysis, and (2) the backprojection of a deformable 3D face model through the optimization of its deformation parameters. The proposed approach retains the advantages of both learning-free and learning-based methods. Consequently, we can estimate the position, orientation, shape and actions of faces, and initialize user-specific face tracking approaches, such as Online Appearance Models (OAMs), which have proven more robust than generic user tracking approaches. Experimental results show that our method outperforms other fitting alternatives under challenging illumination conditions, with a computational footprint that allows it to run on devices with limited processing power, such as smartphones and tablets. The proposed approach lends itself nicely to many frameworks addressing semantic inference in face images and videos.},
  isbn     = {978-1-68108-111-3},
  date     = {2015-07-29},
  year     = {2015},
}
Vicomtech

Parque Científico y Tecnológico de Gipuzkoa,
Paseo Mikeletegi 57,
20009 Donostia / San Sebastián (Spain)

+(34) 943 309 230