dc.contributor.author | Lanzini, Stefano | |
dc.date.accessioned | 2013-09-20T08:55:09Z | |
dc.date.available | 2013-09-20T08:55:09Z | |
dc.date.issued | 2013-09-20 | |
dc.identifier.uri | http://hdl.handle.net/2077/34010 | |
dc.description.abstract | For every human being, it is essential to convey information about one's own affective epistemic states to other people. It is equally indispensable to be able to interpret the states of the people with whom one interacts. By the term affective epistemic states (AES) we mean all those states that involve cognition, perception and feeling, or, as Schroder suggests, "states which involve both knowledge and feeling" (Schroder, 2011).
The aim of this paper is to show how different modes of representation (video, audio, a combination of audio and video, and written words) can influence the understanding and interpretation of AES. We also examine the effect of multimodality (using the visual and auditory sensory modalities simultaneously) compared to unimodality (using only the visual or only the auditory modality). Although some studies have investigated emotions and affective states, research that includes epistemic features remains very scarce. More studies are essential in order to understand the mechanisms by which humans interpret AES.
We conducted an experiment at the University of Gothenburg with 12 Swedish participants. Four recordings of face-to-face first-encounter interactions were shown to each respondent, each in a different mode. The modes used in the experiment were a transcription (T), a video with audio (V+A), a video without audio (V) and an audio recording (A). All the recordings showed two people meeting for the first time. We asked the respondents to identify which kinds of AES were displayed by the people in each recording and to justify their answers.
Several interesting outcomes were observed. Participants interpreted the same behavior as different AES when it was presented in different modes; that is, the mode of presentation often influences respondents' perception in different ways. The same AES can be expressed through vocal and gestural behavior, and it can be perceived through the visual modality, the auditory modality, or both together, depending on the mode displayed. We observed that AES are highly multimodal and that, in the majority of cases, the same behavior is perceived differently depending on whether it is presented in a multimodal or a unimodal mode. | sv |
dc.language.iso | eng | sv |
dc.relation.ispartofseries | 1651-4769 | sv |
dc.relation.ispartofseries | 2013:071 | sv |
dc.subject | Affective epistemic states | sv |
dc.subject | perception | sv |
dc.subject | multimodality | sv |
dc.subject | unimodality | sv |
dc.subject | communication | sv |
dc.title | How do different modes contribute to the interpretation of affective epistemic states? | sv |
dc.title.alternative | How different modes of representation (video, audio, video+audio and written words) can influence the understanding and interpretation of AES | sv |
dc.type | Text | eng |
dc.setspec.uppsok | Technology | |
dc.type.uppsok | H2 | |
dc.contributor.department | IT-universitetet i Göteborg/Tillämpad informationsteknologi | swe |
dc.contributor.department | IT University of Gothenburg/Applied Information Technology | eng |
dc.type.degree | Master thesis | eng |