
dc.contributor.author: Lanzini, Stefano
dc.contributor.author: Allwood, Jens
dc.date.accessioned: 2018-01-17T10:07:22Z
dc.date.available: 2018-01-17T10:07:22Z
dc.date.issued: 2015
dc.identifier.uri: http://hdl.handle.net/2077/54978
dc.description.abstract: Face-to-face communication is multimodal, with varying contributions from all sensory modalities; see e.g. Kopp (2013), Kendon (1980) and Allwood (1979). This paper reports a study of how respondents interpret vocal and gestural, verbal and non-verbal behavior. 10 clips from 5 different short video + audio recordings of two persons meeting for the first time were used as stimuli in a perception/classification study. The respondents were divided into 3 groups: the first group watched only the video part of the clips, without any sound; the second group listened to the audio track without video; the third group was exposed to both the audio and video tracks of the clips. The data were collected with a crowdsourcing questionnaire. The study reports how respondents classified clips containing 4 different types of behavior (looking up, looking down, nodding and laughing), found to be frequent in a previous study (Lanzini 2013), according to which Affective Epistemic State (AES) the behaviors were perceived as expressing. We grouped the linguistic terms for the affective epistemic states that the respondents used into 27 different semantic fields. In this paper we focus on the 7 most common fields, i.e. Thinking, Nervousness, Happiness, Assertiveness, Embarrassment, Indifference and Interest. The aim of the study is to increase understanding of how exposure to the video and/or audio modality affects the interpretation of vocal and gestural, verbal and non-verbal behavior when it is displayed unimodally and multimodally. [sv]
dc.language.iso: eng [sv]
dc.subject: Affective Epistemic States [sv]
dc.subject: Multimodality [sv]
dc.subject: Gesture [sv]
dc.subject: Speech [sv]
dc.subject: Verbal [sv]
dc.subject: Non-verbal [sv]
dc.subject: Communication [sv]
dc.subject: Vocal [sv]
dc.subject: Auditory [sv]
dc.title: On the attribution of affective-epistemic states to communicative behavior in different modes of recording [sv]
dc.type: Text [sv]
dc.type.svep: conference paper, peer reviewed [sv]
dc.contributor.organization: SCCIIL Interdisciplinary Center, University of Gothenburg [sv]

