3D Facial Analysis for Identification and Human Machine Interaction


The 3D-sensation consortium aims at fundamentally redefining human-machine interaction. This also includes the automatic 3D analysis of human faces and facial expressions, which is useful in many tasks such as access control, the creation of new dialogue systems for human-computer interaction, or even medical therapy. Based on the results of a previous project (3DGIM), we are working on hybrid representations of the human face that allow for more realistic rendering and animation.

This project focuses on facial regions that are ignored by most other models. Problematic regions such as the eyes, mouth and lips are usually not represented in face models, although they have a huge impact on the quality of still and moving images generated from the facial model. While in 3DGIM the face was treated as one deforming object, we are now extending the 'one-model-fits-all' approach by implementing specialized deformation and rendering models for these complex regions, allowing for more realistic renderings and animations. Another extension of this project will be a hybrid animation pipeline that uses dynamic textures as well as geometry to model the deformation of a human face. Through this extension, we hope to overcome typical limitations of pure geometric animation.

In the second half of this project, we will focus on temporal deformation models, since not only the quality of the 'static' geometry affects the realism of a face model, but also its dynamics. Pure linear interpolation between different blendshapes usually results in unsatisfactory animations of the facial geometry, unless highly complex or manually rigged models are used. Texture-based animation can partially circumvent these problems by capturing and later replaying the actual video input. However, texture-based animation also requires the ability to interpolate between different facial expressions, for example to concatenate two or more captured sequences. To solve this problem, we will use recent machine learning techniques to analyse captured data and extract more realistic deformation models.
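To make the blendshape limitation concrete, the following sketch shows a standard linear blendshape evaluation and a naive linear transition between two expression weight vectors. This is a minimal illustration only, assuming vertex data stored as NumPy arrays; all names are hypothetical and not taken from the project's actual code.

import numpy as np

def blend(neutral, targets, weights):
    # Linear blendshape model: the neutral pose plus a weighted sum
    # of per-target displacement fields.
    # neutral: (V, 3) vertices, targets: (K, V, 3), weights: (K,)
    offsets = targets - neutral
    return neutral + np.tensordot(weights, offsets, axes=1)

def linear_transition(neutral, targets, w_start, w_end, steps):
    # Naive transition between two expressions: every vertex travels
    # along a straight line at constant speed. This uniform, purely
    # linear motion is exactly what tends to look unnatural without
    # complex rigs or learned temporal deformation models.
    for t in np.linspace(0.0, 1.0, steps):
        w = (1.0 - t) * w_start + t * w_end
        yield blend(neutral, targets, w)

A learned temporal model, as envisaged in the second project phase, would instead replace the straight-line weight schedule with expression trajectories extracted from captured performance data.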


Principal investigators
Eisert, Peter Prof. Dr.-Ing. (Visual Computing)

Funding body
Federal Ministry of Education and Research

Duration of project
Start date: 10/2017
End date: 09/2019

Research Areas
Image and Language Processing, Computer Graphics and Visualisation, Human Computer Interaction, Ubiquitous and Wearable Computing
Computer Science

Last updated on 2025-01-16 at 13:46