Show simple item record

dc.contributor.author  Lanzini, Stefano
dc.date.accessioned  2013-09-20T08:55:09Z
dc.date.available  2013-09-20T08:55:09Z
dc.date.issued  2013-09-20
dc.identifier.uri  http://hdl.handle.net/2077/34010
dc.description.abstract  For every human being, it is essential to convey information about one's own affective epistemic states to others. Conversely, it is also indispensable to be able to interpret the states of the people with whom one interacts. By the term affective epistemic states (AES) we mean all those states that involve cognition, perception and feeling, or, as Schroder suggests, "states which involve both knowledge and feeling" (Schroder, 2011). The aim of this paper is to show how different modes of representation (video, audio, a combination of audio and video, and written words) can influence the understanding and interpretation of AES. We also examine the effect of multimodality (using the visual and auditory sensory modalities simultaneously) compared to unimodality (using only the visual or only the auditory modality). Although some studies have investigated emotions and affective states, research that involves epistemic features is still very hard to find. More studies are essential in order to understand the mechanisms by which humans interpret AES. We conducted an experiment at the University of Gothenburg with 12 Swedish participants. Four recordings of face-to-face first-encounter interactions were displayed to each respondent, each in a different mode. The modes used in the experiment were a transcription (T), a video with audio (V+A), a video without audio (V) and an audio recording (A). Each recording showed two people meeting for the first time. We asked the respondents to identify which kinds of AES were displayed by the people in the recording and to justify their answers. Several interesting outcomes were observed. Participants interpreted different AES when exposed to the same behavior in different modes; that is, the mode in which a behavior is displayed often influences the respondent's perception in different ways. The same AES can be expressed through vocal and gestural behavior, and it can be perceived visually, auditorily or through both modalities together, depending on the mode displayed. We observed that AES are highly multimodal and that, in the majority of cases, the same behavior is perceived differently depending on whether it is presented multimodally or unimodally.  sv
dc.language.iso  eng  sv
dc.relation.ispartofseries  1651-4769  sv
dc.relation.ispartofseries  2013:071  sv
dc.subject  Affective epistemic states  sv
dc.subject  perception  sv
dc.subject  multimodality  sv
dc.subject  unimodality  sv
dc.subject  communication  sv
dc.title  How do different modes contribute to the interpretation of affective epistemic states?  sv
dc.title.alternative  How different modes of representation (video, audio, video+audio and written words) can influence the understanding and interpretation of AES  sv
dc.type  Text  eng
dc.setspec.uppsok  Technology
dc.type.uppsok  H2
dc.contributor.department  IT-universitetet i Göteborg/Tillämpad informationsteknologi  swe
dc.contributor.department  IT University of Gothenburg/Applied Information Technology  eng
dc.type.degree  Master theses  eng