Multi-Turn Contextual Sentiment Analysis of IT Support Dialogues: A BERT-based Approach

Date

2025-10-09

Abstract

This thesis investigates whether modern natural language processing (NLP) techniques can extract reliable sentiment insights from the textual content of IT-support dialogues. Declining survey response rates and limited resources for manually assessing user satisfaction motivate the use of transformer-based models as a scalable alternative. The study explores utterance-level sentiment classification that explicitly incorporates dialogue context, by evaluating the performance of a pre-trained BERT model before and after fine-tuning on multi-turn, domain-specific e-mail conversations. Furthermore, the study explores automatic anonymisation of sensitive entities within the data by integrating named entity recognition into an anonymisation pipeline. The results show that fine-tuning significantly improved contextual understanding and sentiment classification performance within the IT-support domain, while still yielding encouraging results at the single-turn context level. However, performance dropped on out-of-domain data, indicating only moderate generalisability. Additionally, the evaluation on out-of-domain data suggests label noise in the training data of the base model. The anonymisation pipeline successfully masked most person, location, and organisation names, but showed a tagging error rate of over 40%, which exposes potential legal risk if it is applied without human oversight. The study demonstrates the potential of dialogue-aware fine-tuning for sentiment analysis while underscoring the importance of domain-specific, high-quality, human-annotated data and robust anonymisation to ensure safe, transferable applications.
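
As a concrete illustration of the dialogue-aware classification described above, the sketch below pairs a target utterance with its preceding turns as a BERT sentence pair, so the model can attend to the dialogue history when judging the target turn. This is a minimal sketch only: the checkpoint name, the three-way label set, and the preprocessing are assumptions for illustration, not the thesis's actual configuration.

# Minimal sketch of utterance-level sentiment classification with dialogue context,
# assuming a generic pre-trained BERT checkpoint and a three-way sentiment label set;
# the thesis's actual model, labels, and preprocessing are not specified here.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

MODEL_NAME = "bert-base-uncased"  # placeholder; the fine-tuned domain model would be loaded instead
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME, num_labels=3)
LABELS = ["negative", "neutral", "positive"]  # assumed label set

def classify_utterance(context_turns, target_utterance):
    """Classify the sentiment of one utterance given the preceding dialogue turns."""
    # Encode the dialogue history and the target utterance as a sentence pair,
    # so the model sees the context alongside the turn being classified.
    context = " ".join(context_turns)
    inputs = tokenizer(context, target_utterance,
                       truncation=True, max_length=512, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    return LABELS[int(logits.argmax(dim=-1))]

# Example multi-turn IT-support exchange (invented for illustration)
history = ["User: My VPN client keeps disconnecting every few minutes.",
           "Agent: Could you try reinstalling the client and rebooting?"]
print(classify_utterance(history, "User: I already did that twice and it still fails."))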
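
The anonymisation step can likewise be sketched with an off-the-shelf named entity recognition pipeline that masks person, location, and organisation names. The entity model, label names, and placeholder tokens used here are assumptions for illustration rather than the pipeline evaluated in the thesis.

# Minimal sketch of an NER-driven anonymisation step, assuming a generic pre-trained
# NER pipeline; the thesis's actual entity model and masking rules may differ.
from transformers import pipeline

ner = pipeline("ner", aggregation_strategy="simple")  # default English NER checkpoint

MASKS = {"PER": "[PERSON]", "LOC": "[LOCATION]", "ORG": "[ORGANISATION]"}

def anonymise(text):
    """Replace detected person, location, and organisation names with placeholders."""
    entities = ner(text)
    # Replace from the end of the string so earlier character offsets stay valid.
    for ent in sorted(entities, key=lambda e: e["start"], reverse=True):
        mask = MASKS.get(ent["entity_group"])
        if mask:
            text = text[:ent["start"]] + mask + text[ent["end"]:]
    return text

print(anonymise("Hi Anna, the Stockholm office of Contoso reported the same login issue."))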

Keywords

NLP, sentiment analysis, data science, dialogue context, anonymisation, BERT
