Generating thermal image data samples using 3D facial modelling techniques and deep learning methodologies
Farooq, Muhammad Ali
Farooq, Muhammad Ali, & Corcoran, Peter. (2020). Generating thermal image data samples using 3D facial modelling techniques and deep learning methodologies. Paper presented at the 12th International Conference on Quality of Multimedia Experience (QoMEX), Athlone, Ireland, 26-28 May. https://doi.org/10.1109/qomex48832.2020.9123079
Methods for generating synthetic data have become increasingly important for building the large datasets required by Convolutional Neural Network (CNN) based deep learning techniques across a wide range of computer vision applications. In this work, we extend existing methodologies to show how 2D thermal facial data can be mapped to 3D facial models. For the proposed research we used the Tufts dataset to generate 3D faces with varying poses from a single frontal face pose. The system first refines the existing image quality by performing fusion-based image preprocessing operations. The refined outputs have better contrast, lower noise levels, and greater exposure of dark regions, which makes the facial landmarks and temperature patterns on the human face more discernible and visible than in the original raw data. Different image quality metrics are used to compare the refined images with the originals. In the next phase of the proposed study, the refined images are used to create 3D facial geometry structures with a Convolutional Neural Network (CNN). The generated outputs are then imported into Blender to extract the final 3D thermal facial models of both males and females. The same technique is also applied to our own thermal face data, acquired with a prototype thermal camera (developed under the Heliaus EU project) in an indoor lab environment, to generate synthetic 3D face data with varying yaw angles; lastly, a facial depth map is generated.
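The abstract does not specify the exact fusion algorithm used in the preprocessing stage, but the described effects (improved contrast, reduced noise, brighter dark regions) are characteristic of Mertens-style exposure fusion. The following is a minimal illustrative sketch, assuming a simple scheme in which several gamma-corrected variants of a normalized grayscale thermal frame are blended with per-pixel "well-exposedness" weights; the function name and parameters are hypothetical, not taken from the paper:

```python
import numpy as np

def fuse_exposures(img, gammas=(0.5, 1.0, 2.0), sigma=0.2):
    """Blend gamma-corrected variants of a grayscale thermal frame,
    weighting each pixel by its closeness to mid-gray (0.5), in the
    spirit of Mertens-style exposure fusion. Illustrative sketch only."""
    img = img.astype(np.float64)
    # Normalize raw thermal intensities to [0, 1]
    img = (img - img.min()) / (img.max() - img.min() + 1e-12)
    # Brighter (gamma < 1) and darker (gamma > 1) variants of the frame
    variants = [img ** g for g in gammas]
    # Gaussian "well-exposedness" weight centered on mid-gray
    weights = [np.exp(-((v - 0.5) ** 2) / (2 * sigma ** 2)) for v in variants]
    wsum = np.sum(weights, axis=0) + 1e-12
    fused = sum(w * v for w, v in zip(weights, variants)) / wsum
    return fused

# Example: a dark synthetic "thermal" frame spanning a narrow intensity range
frame = np.linspace(0.0, 0.3, 64 * 64).reshape(64, 64)
out = fuse_exposures(frame)
```

In practice, a library implementation such as OpenCV's `cv2.createMergeMertens()` would be a more robust choice than this hand-rolled weighting.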
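The paper compares refined and original images with image quality metrics but does not list them here; PSNR is one common example of such a full-reference metric. A minimal self-contained sketch (the helper name is ours, not the paper's):

```python
import numpy as np

def psnr(ref, test, data_range=1.0):
    """Peak signal-to-noise ratio between a reference image and a
    processed image, both with intensities spanning `data_range`."""
    mse = np.mean((ref.astype(np.float64) - test.astype(np.float64)) ** 2)
    if mse == 0.0:
        return float("inf")  # identical images
    return 10.0 * np.log10((data_range ** 2) / mse)

# A uniform error of 0.1 on a [0, 1] image gives MSE = 0.01, i.e. 20 dB
ref = np.zeros((8, 8))
noisy = np.full((8, 8), 0.1)
score = psnr(ref, noisy)
```

Libraries such as scikit-image (`skimage.metrics.peak_signal_noise_ratio`, `structural_similarity`) provide tested implementations of PSNR and SSIM for real evaluations.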