
dc.contributor.advisor: Corcoran, Peter (en)
dc.contributor.author: Bacivarov, Ioana (en)
dc.date.accessioned: 2010-12-06T13:44:30Z (en)
dc.date.available: 2010-12-06T13:44:30Z (en)
dc.date.issued: 2009-05-01 (en)
dc.identifier.uri: http://hdl.handle.net/10379/1482 (en)
dc.description.abstract: Advances are presented in the modelling of facial sub-regions, in the use of enhanced whole-face models, and, based on these sub-region models, in the determination of facial expressions. Models are derived using techniques from the field of active appearance modelling (AAM). A technical description and review of such techniques, together with a number of additional state-of-the-art techniques for face detection and face-region analysis, are provided. A detailed literature review covering a range of topics relating to facial expression analysis is provided. In particular, the prior use of AAM techniques for facial feature extraction is reviewed. A range of methodologies for classifying facial expressions is also reviewed. Improved eye-region and lips-region models are presented. These models employ the concept of overlapping landmark points, enabling the resulting models to handle eye gaze, different degrees of closure of the eye, and texture variations in the lips due to the appearance of teeth when the mouth opens in a smile. The eye model is further improved by providing a component-AAM implementation enabling independent modelling of the state of each eye. Initialisation of the lips model is improved using a hue-based pre-filter. A whole-face component-AAM model is provided, combining the improved eye and lips models in an overall framework which significantly increases the accuracy of fitting of the AAM to facial expressions. A range of experiments is performed to tune and test this model for the purpose of accurate classification of facial expressions in unseen images. Both nearest neighbour (NN) and support vector machine (SVM) classification methodologies are used. Testing of the system to classify the six universal emotions and the neutral face state shows that an accuracy of 83% can be achieved when using SVM classification. Preliminary investigations of additional enhancements to improve on this performance are provided, including the use of (i) pre-filters for gender, race, and age, (ii) person-specific AAM models, and (iii) expression tracking across multiple images in a video sequence. All of these techniques are shown to have the potential to further enhance the accuracy of expression recognition of the underlying component-AAM face model with eye and lips sub-region models. (en)
dc.rights: Attribution-NonCommercial-NoDerivs 3.0 Ireland
dc.rights.uri: https://creativecommons.org/licenses/by-nc-nd/3.0/ie/
dc.subject: Active appearance modelling (en)
dc.subject: Face modelling (en)
dc.subject: Eye model (en)
dc.subject: Lips model (en)
dc.subject: Facial expressions modelling (en)
dc.title: Advances in the Modelling of Facial Sub-Regions and Facial Expressions using Active Appearance Techniques (en)
dc.type: Thesis (en)
nui.item.downloads: 2359
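
The abstract above mentions SVM classification of component-AAM appearance parameters into the six universal emotions plus the neutral state. The snippet below is a minimal illustrative sketch of that classification step only, not the thesis's actual pipeline: it trains a scikit-learn SVM on synthetic feature vectors that merely stand in for AAM appearance parameters, and every size, class label, and data value in it is an assumption made for demonstration.

    # Illustrative sketch only: synthetic vectors stand in for the AAM appearance
    # parameters that the thesis extracts from fitted face models (assumption).
    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.svm import SVC

    EXPRESSIONS = ["anger", "disgust", "fear", "happiness",
                   "sadness", "surprise", "neutral"]   # six universal emotions + neutral

    rng = np.random.default_rng(0)
    n_per_class, n_params = 40, 30                     # hypothetical sizes
    # Each class is a slightly shifted Gaussian cloud so the toy problem is learnable.
    X = np.vstack([rng.normal(loc=i, scale=1.0, size=(n_per_class, n_params))
                   for i in range(len(EXPRESSIONS))])
    y = np.repeat(np.arange(len(EXPRESSIONS)), n_per_class)

    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0, stratify=y)

    clf = SVC(kernel="rbf", C=1.0)                     # SVM classifier, as named in the abstract
    clf.fit(X_train, y_train)
    print("held-out accuracy:", clf.score(X_test, y_test))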


