Browsing by Author "Lanzini, Stefano"
Now showing 1 - 3 of 3
Item: How do different modes contribute to the interpretation of affective epistemic states? (2013-09-20)
Lanzini, Stefano; IT-universitetet i Göteborg/Tillämpad informationsteknologi; IT University of Gothenburg/Applied Information Technology

For every human being, it is essential to transmit information and advise other people about their own affective epistemic states. On the other hand, it is also indispensable for every person to be able to interpret the states of the other people with whom they interact. By the term affective epistemic states (AES) we indicate all those states that involve cognition, perception and feeling, or, as Schroder (2011) suggests, "states which involve both knowledge and feeling". The aim of this paper is to show how different modes of representation (video, audio, a combination of audio and video, and written words) can influence the understanding and interpretation of AES. We also examine the effect of multimodality (using the visual and auditory sensory modalities simultaneously) compared to unimodality (using only the visual or only the auditory sensory modality). Although some studies have investigated emotions and affective states, it is still hard to find research that involves epistemic features, and more studies are needed to understand the mechanisms by which humans interpret AES. We conducted an experiment at the University of Gothenburg with 12 Swedish participants. Four recordings of first-encounter face-to-face interactions were displayed to each respondent, each in a different mode. The modes used for the experiment were a transcription (T), video with audio (V+A), video without audio (V) and an audio recording (A). The recordings all showed two people meeting for the first time. We asked the respondents to identify which kinds of AES were displayed by the people in the recordings. Respondents were also asked to justify their answers. Several interesting outcomes were observed. Participants interpreted different AES when exposed to the same behavior in different modes; that is, when the same behavior is displayed in different modes, the respondents' perception is often influenced in different ways. The same AES can be expressed through vocal and gestural behaviors, and it can be perceived by the visual modality, the auditory modality or both together, depending on the mode displayed. We observed that AES are highly multimodal and that, in the majority of cases, behaviors are perceived differently depending on whether they are presented multimodally or unimodally.

Item: Multimodal health communication in two cultures – A comparison of Swedish and Malaysian Youtube videos (2017)
Allwood, Jens; Ahlsén, Elisabeth; Lanzini, Stefano; Attaran, Ali; SCCIIL Interdisciplinary Center, University of Gothenburg

Youtube video health information about overweight and obesity was analyzed in two countries, Sweden and Malaysia. The videos were analyzed using Activity-based Communication Analysis, Critical Discourse Analysis and Rhetorical Analysis, pointing to possible cultural differences in rhetorical approach. The analysis focused on the use of multimodality. Considerable differences in the use of spoken and written words, pictures, animations, colour, music and other sounds were found: Swedish videos tended to rely more on spoken words from experts and on logos, while Malaysian videos tended to rely heavily on animations, vivid colours, music and other sounds, and to appeal to pathos. In both countries, ethos is important, but it is conveyed in somewhat different ways. The length of the videos differs considerably, with Malaysian videos being very short and Swedish videos quite long.

Item: On the attribution of affective-epistemic states to communicative behavior in different modes of recording (2015)
Lanzini, Stefano; Allwood, Jens; SCCIIL Interdisciplinary Center, University of Gothenburg

Face-to-face communication is multimodal, with varying contributions from all sensory modalities; see, e.g., Kopp (2013), Kendon (1980) and Allwood (1979). This paper reports a study of respondents interpreting vocal and gestural, verbal and non-verbal behavior. 10 clips from 5 different short video + audio recordings of two persons meeting for the first time were used as stimuli in a perception/classification study. The respondents were divided into 3 groups. The first group watched only the video part of the clips, without any sound. The second group listened to the audio track without video. The third group was exposed to both the audio and video tracks of the clips. In order to collect the data, we used a crowdsourcing questionnaire. The study reports on how respondents classified clips containing 4 types of behavior (looking up, looking down, nodding and laughing), found to be frequent in a previous study (Lanzini, 2013), according to which Affective Epistemic State (AES) the behaviors were perceived as expressing. We grouped the linguistic terms that the respondents used for the affective epistemic states into 27 semantic fields; in this paper we focus on the 7 most common fields, i.e. Thinking, Nervousness, Happiness, Assertiveness, Embarrassment, Indifference and Interest. The aim of the study is to increase understanding of how exposure to video and/or audio modalities affects the interpretation of vocal and gestural, verbal and non-verbal behavior when it is displayed unimodally and multimodally.