Laughter Prediction in Text-Based Dialogues: Predicting Laughter Using Transformer-Based Models

Date

2022-06-20

Abstract

In this paper we predict laughter in dialogue and assess the performance of a BERT model (Devlin et al., 2019) and of a BERT model fine-tuned on the OpenSubtitles dataset, with and without dialogue act classes and with a sliding window over dialogue turns. We hypothesize that fine-tuning BERT on OpenSubtitles may improve performance. Our results are compared with those of Maraev et al. (2021a), who address the task of predicting actual laughs in dialogue with various deep learning models, namely recurrent neural networks (RNNs), convolutional neural networks (CNNs), and combinations of these. The Switchboard Dialogue Act Corpus (SWDA; Jurafsky et al., 1997a), consisting of US English phone conversations in which two participants who are not familiar with each other discuss a potentially controversial subject, such as gun control or the school system, is first preprocessed to make it suitable for the BERT model. We then analyze the dialogue acts in SWDA, examine their collocation with laughter, and provide some qualitative insights. SWDA is annotated with a collection of 220 dialogue act tags which, following Jurafsky et al. (1997b), we cluster into a smaller set of 42 tags. The main purpose of this research is to show that a BERT model outperforms the CNN and RNN models presented in the IWSDS publication.
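As a rough illustration of the approach described in the abstract, the Python sketch below shows how a BERT classifier might be fine-tuned for binary laughter prediction over a sliding window of dialogue turns, using the Hugging Face transformers library. The window size, model checkpoint, toy dialogue, and training details are illustrative assumptions, not the setup reported in the paper.

# Illustrative sketch (not the authors' code): binary laughter prediction
# with a sliding window of dialogue turns. Window size, checkpoint, and
# hyperparameters are assumptions for illustration only.
import torch
from transformers import BertTokenizerFast, BertForSequenceClassification

WINDOW = 3  # hypothetical number of preceding turns used as context

tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
model = BertForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2  # two classes: laughter / no laughter
)
model.train()

def make_window(turns, i, window=WINDOW):
    # Concatenate the current turn with up to `window` preceding turns,
    # separated by BERT's [SEP] token.
    context = turns[max(0, i - window):i + 1]
    return f" {tokenizer.sep_token} ".join(context)

# Toy dialogue: each turn paired with a binary label (1 = followed by laughter)
turns = ["So how do you feel about gun control?",
         "Honestly I try not to bring it up at dinner.",
         "That is probably wise."]
labels = torch.tensor([0, 1, 0])

inputs = tokenizer([make_window(turns, i) for i in range(len(turns))],
                   padding=True, truncation=True, return_tensors="pt")

# One fine-tuning step; a real run would iterate over batches of SWDA turns.
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
loss = model(**inputs, labels=labels).loss
loss.backward()
optimizer.step()
print(f"training loss: {loss.item():.4f}")

In an actual experiment, the toy turns above would be replaced by SWDA utterances, with labels marking whether a turn is accompanied or followed by laughter.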

Keywords

Transformer, BERT, Laughter, Sliding Window
