Open Universiteit

Please use this identifier to cite or link to this item: http://hdl.handle.net/1820/5568
Title: FILTWAM and Voice Emotion Recognition
Authors: Bahreini, Kiavash
Nadolski, Rob
Westera, Wim
Keywords: Learner support in serious games
game-based learning
human-computer interaction
multimodal emotion recognition
real-time voice emotion recognition
microphone
Issue Date: 26-Oct-2014
Publisher: Springer International Publishing
Citation: Bahreini, K., Nadolski, R., & Westera, W. (2014). FILTWAM and Voice Emotion Recognition. In A. De Gloria (Ed.), Games and Learning Alliance, Second International Conference, GALA 2013, Lecture Notes in Computer Science, Vol. 8605 (pp. 116-129), October, 23-25, 2013, Paris, France: Springer International Publishing Switzerland.
Series/Report no.: Lecture Notes in Computer Science;
Abstract: This paper introduces the voice emotion recognition part of our framework for improving learning through webcams and microphones (FILTWAM). This framework enables multimodal emotion recognition of learners during game-based learning. The main goal of this study is to validate the use of microphone data for a real-time and adequate interpretation of vocal expressions into emotional states, where the software is calibrated with end users. FILTWAM already incorporates a valid face emotion recognition module and is now extended with a voice emotion recognition module. This extension aims to provide relevant and timely feedback based upon learners' vocal intonations. The feedback is expected to enhance learners' awareness of their own behavior. Six test persons received the same computer-based tasks in which they were requested to mimic specific vocal expressions. Each test person mimicked 82 emotions, yielding a dataset of 492 emotions. All sessions were recorded on video. The overall accuracy of our software, comparing the requested emotions with the recognized emotions, is 74.6% for the emotions happy and neutral; accuracy for the lower-scoring emotions of the extended set will be improved. In contrast with existing software, our solution continuously and unobtrusively monitors learners' intonations and converts these into emotional states. This paves the way for enhancing the quality and efficacy of game-based learning by including the learner's emotional states and linking these to pedagogical scaffolding.
URI: http://hdl.handle.net/1820/5568
ISBN: 978-3-319-12156-7
978-3-319-12157-4
ISSN: 0302-9743
Appears in Collections:1. FEEEL Publications, books and conference papers

Files in This Item:
File: Bahreini, Nadolski, and Westera-2013.pdf
Size: 1.05 MB
Format: Adobe PDF


This item is licensed under a Creative Commons License.