3D face analysis for identification and human-machine interfaces

The 3Dsensation consortium aims to fundamentally redefine human-machine interaction. This includes the automatic 3D analysis of human faces and facial expressions, which is useful for many tasks such as access control, the creation of new dialog systems for human-computer interaction, or even medical therapy.

Current methods for geometric facial expression analysis and synthesis often rely purely on linear models. They represent facial expressions as a linear combination of a small number of basis expressions, which are typically learned from a large training set using well-known machine learning techniques such as PCA. While this approach is popular due to its simplicity, it lacks fine details and the ability to appropriately cover the space of possible facial expressions. Several techniques have been proposed to circumvent these limitations, for example part-based models, adaptation of basis expressions, or extension of the expression space via corrective shapes.

In contrast to the aforementioned methods, we want to develop a new model-based approach that is able to refine the estimated result using a hierarchical approach, i.e., a state-based facial expression model that adapts its expressiveness and complexity to the current facial expression state. This allows for more detailed deformations without unnecessarily increasing the overall complexity. Finally, we combine the model-based reconstruction technique with a model-free refinement step to ensure highly detailed 3D reconstructions while simultaneously guaranteeing temporal as well as semantic consistency.

Principal Investigators
Eisert, Peter, Prof. Dr.-Ing. (Visual Computing)

Duration of Project
Start date: 10/2015
End date: 09/2017

Research Areas
Interactive and Intelligent Systems, Image and Language Processing, Computer Graphics and Visualisation, Human-Computer Interaction (HCI), Computer Science

Last updated on 2020-10-14 at 10:27