Accurate 2D facial depth models derived from a 3D synthetic dataset
Khan, Faisal, Basak, Shubhajit, & Corcoran, Peter. (2021). Accurate 2D facial depth models derived from a 3D synthetic dataset. Paper presented at the 39th IEEE International Conference on Consumer Electronics (ICCE 2021), Las Vegas, USA, 10-12 January, doi: 10.1109/ICCE50685.2021.9427595
As Consumer Technologies (CT) seek to engage and interact more closely with the end-user, it becomes important to observe and analyze a user's interaction with CT devices and associated services. One of the most useful ways to monitor a user is to analyze a real-time video stream of their face. Facial expressions, movements and biometrics all provide important information, but obtaining a calibrated input with 3D accuracy from a single camera requires accurate knowledge of the depth of the face and the distances of its features from the camera. In this paper, a method is proposed for generating high-accuracy human facial depth data from synthetic 3D face models. The generated synthetic facial dataset is then used to train Convolutional Neural Networks (CNNs) for monocular facial depth estimation, and the results of these experiments are presented.
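The abstract does not specify the evaluation protocol, but monocular depth estimators are conventionally scored with per-pixel error metrics such as root-mean-square error (RMSE) and mean absolute relative error between the predicted and ground-truth depth maps. A minimal NumPy sketch of such metrics (function and variable names are illustrative, not from the paper):

```python
import numpy as np

def depth_metrics(pred, gt):
    """Compute common monocular depth-estimation error metrics.

    pred, gt: same-shape arrays of per-pixel depth values (e.g. in metres).
    Returns (rmse, abs_rel): root-mean-square error and mean absolute
    relative error, computed only over pixels with valid ground truth.
    """
    pred = np.asarray(pred, dtype=float)
    gt = np.asarray(gt, dtype=float)
    valid = gt > 0                          # ignore pixels with no ground truth
    diff = pred[valid] - gt[valid]
    rmse = np.sqrt(np.mean(diff ** 2))
    abs_rel = np.mean(np.abs(diff) / gt[valid])
    return rmse, abs_rel

# Toy example: a flat depth map at 0.5 m with a uniformly biased prediction.
gt = np.full((4, 4), 0.5)
pred = gt + 0.05
rmse, abs_rel = depth_metrics(pred, gt)
# rmse -> 0.05, abs_rel -> 0.1
```

A synthetic dataset such as the one described here is well suited to these metrics, since the rendered ground-truth depth is dense and exact at every pixel, unlike depth captured by consumer sensors.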