4D Capture - Moving Video

Aims

Design and build a system for generic 3D facial macro- and micro-movement capture.

  • Real-time reconstruction of moving shapes and 3D texture information at unprecedented sub-pixel resolution.
  • Realistic rendering of moving faces with real-time visualisation and interaction.
  • Design and implement novel automated feature extraction and classification methods that exploit the spatio-temporal information of moving 3D faces and allow fast detection of macro- and micro-movements.
  • Generate a new facial expression taxonomy using a robust and accurate model to map facial micro- and macro-movements to corresponding expressions.

Real-time capture and recovery

The capture and recovery of moving 3D faces in real time involves transferring and processing high volumes of data. To address this challenge, we combine expertise in both hardware and software, running all reconstruction and analysis processing in parallel on a combined CPU and GPU platform. In addition to enabling high-performance recognition on moving 3D data, this approach has the advantage of using development platforms that will support portable applications.
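The sketch below illustrates one generic way to achieve this overlap: a producer/consumer pipeline in which frame acquisition keeps running on the CPU while reconstruction of the previous frame proceeds on the accelerator. The threading layout, queue size and placeholder reconstruction step are assumptions for illustration only and do not reflect the system's actual hardware/software split or GPU kernels.

# Illustrative producer/consumer pipeline: the capture thread keeps feeding
# frames while the reconstruction stage works on the previous one, so data
# transfer and processing overlap. reconstruct() is a stand-in for the
# GPU-accelerated reconstruction; here it runs on the CPU with NumPy.
import queue
import threading

import numpy as np

frames = queue.Queue(maxsize=4)   # bounded queue gives back-pressure if reconstruction lags
STOP = object()

def capture(n_frames, shape=(480, 640)):
    """Acquisition side: push raw frames (random data stands in for the camera)."""
    for _ in range(n_frames):
        frames.put(np.random.rand(*shape).astype(np.float32))
    frames.put(STOP)

def reconstruct():
    """Processing side: consume frames and reconstruct each one as it arrives."""
    while True:
        frame = frames.get()
        if frame is STOP:
            break
        # Placeholder for the real per-frame work (e.g. photometric-stereo
        # normal recovery on the GPU): a cheap gradient-based pseudo-normal map.
        gy, gx = np.gradient(frame)
        _normals = np.dstack([-gx, -gy, np.ones_like(frame)])

producer = threading.Thread(target=capture, args=(100,))
consumer = threading.Thread(target=reconstruct)
producer.start(); consumer.start()
producer.join(); consumer.join()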

Facial Expression Modelling and Classification

Facial expression recognition is commonly undertaken in the 2D imaging domain. Recent developments at the Centre for Machine Vision, such as the Photoface and 4D Vision projects, allow analysis of dense 3D surface information in static and dynamic settings respectively. These systems employ a classification method that is both pose and illumination invariant, thereby overcoming the limitations of 2D approaches. Unlike other commonly used 3D capture techniques, photometric stereo provides dense, high-frequency spatial information, allowing the capture of fine details such as wrinkles and transient furrows. This high-density information also enables the extraction of curvature-based features. Through statistical feature selection and SVM-based classification, we are then able to classify facial expressions with relatively high accuracy.
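As a rough illustration of this processing chain, the sketch below recovers per-pixel normals by least-squares photometric stereo, derives a simple curvature-based feature vector, and feeds it to statistical feature selection followed by an SVM. The light directions, curvature approximation and use of scikit-learn are assumptions for illustration rather than the project's implementation.

# Illustrative sketch of the chain described above: photometric-stereo normal
# recovery, a curvature-derived feature vector, and statistically selected
# features fed to an SVM. Not the project's code; values are placeholders.
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def photometric_stereo(images, lights):
    """Recover per-pixel albedo and unit surface normals.

    images: (K, H, W) grey-level images taken under K known light directions
    lights: (K, 3) unit light-direction vectors
    """
    K, H, W = images.shape
    I = images.reshape(K, -1)               # stack pixels: (K, H*W)
    G = np.linalg.pinv(lights) @ I          # least squares: G = albedo * normal
    albedo = np.linalg.norm(G, axis=0) + 1e-8
    normals = (G / albedo).T.reshape(H, W, 3)
    return albedo.reshape(H, W), normals

def curvature_features(normals, bins=32):
    """Histogram of approximate mean curvature, H ~ -0.5 * div(n)."""
    dnx_dx = np.gradient(normals[..., 0], axis=1)
    dny_dy = np.gradient(normals[..., 1], axis=0)
    mean_curv = -0.5 * (dnx_dx + dny_dy)
    hist, _ = np.histogram(mean_curv, bins=bins, range=(-0.5, 0.5), density=True)
    return hist

def build_classifier(k_best=16):
    """Statistical feature selection (ANOVA F-test) followed by an RBF SVM."""
    return make_pipeline(StandardScaler(),
                         SelectKBest(f_classif, k=k_best),
                         SVC(kernel="rbf"))

# Hypothetical usage, given per-frame image stacks, light directions L and labels y:
#   X = np.stack([curvature_features(photometric_stereo(f, L)[1]) for f in frame_stacks])
#   clf = build_classifier().fit(X, y)

In practice the feature set would be far richer and the selection and classification stages cross-validated, but the structure mirrors the pipeline described above.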

Figure: Results of applying a bandpass filter to a single 3D frame, and local binary pattern analysis used for inter-frame registration.
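For readers unfamiliar with these operations, the sketch below shows generic versions of both: a difference-of-Gaussians bandpass filter applied to a single range frame, and a local binary pattern code whose histogram can be compared between consecutive frames as a simple registration cue. Filter scales, LBP parameters and library choices (scipy, scikit-image) are illustrative assumptions, not the project's implementation.

# Illustrative versions of the two operations in the caption: a difference-of-
# Gaussians bandpass filter on a single range frame, and a local binary
# pattern (LBP) code whose histogram can be compared between consecutive
# frames as a simple registration cue. Parameter values are arbitrary.
import numpy as np
from scipy.ndimage import gaussian_filter
from skimage.feature import local_binary_pattern

def bandpass(depth_frame, sigma_low=1.0, sigma_high=4.0):
    """Keep mid-frequency surface detail (wrinkles, furrows); suppress the rest."""
    return gaussian_filter(depth_frame, sigma_low) - gaussian_filter(depth_frame, sigma_high)

def lbp_code(depth_frame, points=8, radius=1):
    """Uniform LBP texture code of a (filtered) range image."""
    return local_binary_pattern(depth_frame, points, radius, method="uniform")

def frame_similarity(frame_a, frame_b, bins=10):
    """Cheap similarity score between two frames from their LBP histograms."""
    ha, _ = np.histogram(lbp_code(bandpass(frame_a)), bins=bins, density=True)
    hb, _ = np.histogram(lbp_code(bandpass(frame_b)), bins=bins, density=True)
    return -float(np.linalg.norm(ha - hb))   # higher (closer to 0) = more similar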

Theme Leader

Centre for Machine Vision

Professor Melvyn Smith
Bristol Robotics Laboratory,
University of the West of England,
Frenchay Campus, T Building,
Coldharbour Lane,
Bristol, BS16 1QY
Tel: +44 (0)117 32 86358
E-mail: Melvyn.Smith@uwe.ac.uk
