dc.contributor.author | VON NUMERS, CHARLOTTE | |
dc.date.accessioned | 2022-06-22T06:23:29Z | |
dc.date.available | 2022-06-22T06:23:29Z | |
dc.date.issued | 2022-06-22 | |
dc.identifier.uri | https://hdl.handle.net/2077/72277 | |
dc.description.abstract | Facial expressions have long been linked to human emotions, physiology, and behavior.
Recent work suggests that the facial expressions of clinical trial subjects relate to
self-reported quality of life. This implies that automated facial expression recognition
tools have the potential to improve understanding of emotional and social functioning
in response to treatment. However, the limited availability of labeled facial image
datasets has historically posed challenges for building robust facial expression
recognition tools that generalize to diverse populations. This thesis project presents
a network that predicts gold-standard facial expression labels known as action units.
To overcome data scarcity, a multi-stage pretraining approach is introduced. The model
achieves a ROC AUC of 0.88 over six action units in cross-validation with held-out
subjects. Predictions for action units that are rare in the training data generalize
to unseen subjects, highlighting the benefit of the pretraining approach. The
results further indicate a possible advantage of including a subject-level baseline. | en_US |
dc.language.iso | eng | en_US |
dc.subject | Facial Expression Recognition | en_US |
dc.subject | FACS | en_US |
dc.subject | Action Units | en_US |
dc.subject | Subject-level Baseline | en_US |
dc.subject | Multi-stage Pretraining | en_US |
dc.title | Facial Expression Recognition for Clinical Trial Self-recordings | en_US |
dc.type | text | |
dc.setspec.uppsok | Technology | |
dc.type.uppsok | H2 | |
dc.contributor.department | Göteborgs universitet/Institutionen för data- och informationsteknik | swe |
dc.contributor.department | University of Gothenburg/Department of Computer Science and Engineering | eng |
dc.type.degree | Student essay | |
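The following is an illustrative sketch, not code from the thesis: it shows how per-action-unit ROC AUC can be computed under cross-validation with held-out subjects, the evaluation setup named in the abstract. The synthetic data and the logistic-regression stand-in for the pretrained network are assumptions made for this example; scikit-learn's GroupKFold is used so that every subject's frames fall entirely in either the training or the test fold.

# Illustrative sketch (not the thesis implementation): subject-held-out
# cross-validation with one ROC AUC per action unit. A simple multi-label
# logistic regression stands in for the pretrained network, and random
# arrays stand in for frame embeddings, AU labels, and subject IDs.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import GroupKFold
from sklearn.multioutput import MultiOutputClassifier

rng = np.random.default_rng(0)
n_frames, n_features, n_aus, n_subjects = 600, 128, 6, 20

X = rng.normal(size=(n_frames, n_features))              # stand-in frame embeddings
y = (rng.random((n_frames, n_aus)) < 0.3).astype(int)    # binary AU activations
subjects = rng.integers(0, n_subjects, size=n_frames)    # subject ID per frame

model = MultiOutputClassifier(LogisticRegression(max_iter=1000))
fold_aucs = []
for train_idx, test_idx in GroupKFold(n_splits=5).split(X, y, groups=subjects):
    # GroupKFold keeps each subject's frames entirely in one fold,
    # so every test fold contains only unseen subjects.
    model.fit(X[train_idx], y[train_idx])
    proba = model.predict_proba(X[test_idx])
    scores = np.column_stack([p[:, 1] for p in proba])
    # average=None returns one ROC AUC per action unit; with real data,
    # rare AUs may need a guard against single-class test folds.
    fold_aucs.append(roc_auc_score(y[test_idx], scores, average=None))

print("mean ROC AUC per action unit:", np.mean(fold_aucs, axis=0))

Grouping the split by subject is what makes the reported AUC reflect generalization to unseen subjects rather than to unseen frames of already-seen subjects.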