Show simple item record

dc.contributor.author: VON NUMERS, CHARLOTTE
dc.date.accessioned: 2022-06-22T06:23:29Z
dc.date.available: 2022-06-22T06:23:29Z
dc.date.issued: 2022-06-22
dc.identifier.uri: https://hdl.handle.net/2077/72277
dc.description.abstract: Facial expressions have long been linked to human emotions, physiology, and behavior. Recent work suggests that the facial expressions of clinical trial subjects relate to self-reported quality of life. This implies that automated facial expression recognition tools have the potential to deepen understanding of emotional and social functioning in response to treatment. The limited availability of labeled facial image datasets has, however, historically made it difficult to build robust facial expression recognition tools that generalize to diverse populations. This thesis project presents a network that predicts gold-standard facial expression labels called action units. To overcome data scarcity, a multi-stage pretraining approach is introduced. The model achieves a ROC AUC of 0.88 over six action units in cross-validation with held-out subjects. Predictions for action units that are rare in the training data generalize to unseen subjects, which highlights the benefit of the pretraining approach. The results further indicate a possible advantage of including a subject-level baseline. [en_US]
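The evaluation the abstract describes (cross-validation with held-out subjects, ROC AUC averaged over six action units) can be sketched as follows. This is an illustrative reconstruction, not the thesis code: the features, labels, subject IDs, and logistic-regression classifier are all placeholders standing in for the actual image data and network.

```python
# Hypothetical sketch of subject-held-out cross-validation for multi-label
# action unit (AU) prediction. scikit-learn's GroupKFold guarantees that no
# subject contributes frames to both the training and validation folds, so
# the reported ROC AUC reflects generalization to unseen subjects.
import numpy as np
from sklearn.model_selection import GroupKFold
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

n_frames, n_features, n_aus = 600, 32, 6
X = rng.normal(size=(n_frames, n_features))            # stand-in image features
y = (rng.random((n_frames, n_aus)) < 0.3).astype(int)  # binary AU labels per frame
subjects = rng.integers(0, 20, size=n_frames)          # subject ID for each frame

fold_aucs = []
for train_idx, val_idx in GroupKFold(n_splits=5).split(X, groups=subjects):
    per_au = []
    for au in range(n_aus):
        # One binary classifier per action unit (placeholder for the network).
        clf = LogisticRegression(max_iter=1000)
        clf.fit(X[train_idx], y[train_idx, au])
        scores = clf.predict_proba(X[val_idx])[:, 1]
        per_au.append(roc_auc_score(y[val_idx, au], scores))
    # Average ROC AUC over the six AUs for this fold.
    fold_aucs.append(float(np.mean(per_au)))

mean_auc = float(np.mean(fold_aucs))
print(f"mean ROC AUC over {n_aus} AUs: {mean_auc:.3f}")
```

With random labels as above the score hovers near chance (0.5); the point of the sketch is the grouping by subject, which prevents the identity leakage that would otherwise inflate cross-validation scores.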
dc.language.iso: eng [en_US]
dc.subject: Facial Expression Recognition [en_US]
dc.subject: FACS [en_US]
dc.subject: Action Units [en_US]
dc.subject: Subject-level Baseline [en_US]
dc.subject: Multi-stage Pretraining [en_US]
dc.title: Facial Expression Recognition for Clinical Trial Self-recordings [en_US]
dc.type: text
dc.setspec.uppsok: Technology
dc.type.uppsok: H2
dc.contributor.department: Göteborgs universitet/Institutionen för data- och informationsteknik [swe]
dc.contributor.department: University of Gothenburg/Department of Computer Science and Engineering [eng]
dc.type.degree: Student essay


