Open Universiteit

Title: Improved Multimodal Emotion Recognition for Better Game-Based Learning
Authors: Bahreini, Kiavash
Nadolski, Rob
Westera, Wim
Keywords: Game-based learning
Human-computer interaction
Multimodal emotion recognition
Real-time emotion recognition
Affective computing
Issue Date: 1-Sep-2015
Publisher: Springer International Publishing
Citation: Bahreini, K., Nadolski, R., & Westera, W. (2015). Improved Multimodal Emotion Recognition for Better Game-Based Learning. In A. De Gloria (Ed.), Proceedings of the GALA Serious Gaming Conference 2014, LNCS 9221 (pp. 107-120). Bucharest, Romania. Springer International Publishing. doi:10.1007/978-3-319-22960-7_11
Series/Report no.: Lecture Notes in Computer Science;Games and Learning Alliance, 9221
Abstract: This paper introduces the integration of the face emotion recognition and voice emotion recognition components of our FILTWAM framework, which uses webcams and microphones. The framework enables real-time multimodal emotion recognition of learners during game-based learning in order to trigger feedback that improves learning. The main goal of this study is to validate the integration of webcam and microphone data for a real-time and adequate interpretation of facial and vocal expressions into emotional states, with the software modules calibrated on end users. This integration aims to improve timely and relevant feedback, which is expected to increase learners' awareness of their own behavior. Twelve test persons received the same computer-based tasks, in which they were requested to mimic specific facial and vocal expressions. Each test person mimicked 80 emotions, yielding a dataset of 960 emotions. All sessions were recorded on video. The overall accuracy, expressed as a Kappa value based on the requested emotions, expert opinions, and the recognized emotions, is 0.61; for the face emotion recognition software alone it is 0.76, and for the voice emotion recognition software 0.58. A multimodal fusion of the two software modules can increase the accuracy to 78%. In contrast with existing software, our software modules allow real-time, continuous, and unobtrusive monitoring of learners' facial expressions and voice intonations and convert these into emotional states. Including learners' emotional states paves the way for more effective, efficient, and enjoyable game-based learning.
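The multimodal fusion mentioned in the abstract can be sketched as a decision-level (late) fusion of the two modalities. This is an illustrative outline only, not the authors' FILTWAM implementation: the emotion labels, the function name `fuse`, and the 0.6 face weight are assumptions, with the face channel weighted more heavily to reflect its higher accuracy in this study (Kappa 0.76 vs. 0.58).

```python
# Illustrative decision-level fusion sketch (not the FILTWAM code):
# each modality outputs a probability distribution over emotion labels;
# a weighted average combines them and the top-scoring label wins.

EMOTIONS = ["happy", "sad", "angry", "surprised", "scared", "disgusted", "neutral"]

def fuse(face_probs, voice_probs, face_weight=0.6):
    """Weighted late fusion of two per-emotion probability dicts.

    face_weight is an illustrative assumption; missing labels in either
    dict are treated as probability 0.
    """
    voice_weight = 1.0 - face_weight
    fused = {e: face_weight * face_probs.get(e, 0.0)
                + voice_weight * voice_probs.get(e, 0.0)
             for e in EMOTIONS}
    return max(fused, key=fused.get)

# Example: the face channel strongly suggests "happy", the voice channel
# leans "neutral"; the weighted fusion resolves the disagreement.
face = {"happy": 0.7, "neutral": 0.2, "sad": 0.1}
voice = {"happy": 0.4, "neutral": 0.5, "sad": 0.1}
print(fuse(face, voice))  # -> happy (0.58 vs. 0.32 for neutral)
```

A late-fusion scheme like this keeps the two recognizers independent, so either channel can still produce an emotional state when the other (webcam or microphone) is unavailable.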
Description: This paper has already been published on the Springer website and is available online via the following link:
ISBN: 978-3-319-22959-1
Appears in Collections: 1. FEEEL Publications, books and conference papers

Files in This Item:
File: Improved Multimodal Emotion Recognition for Better Game-Based Learning.pdf
Size: 1.2 MB
Format: Adobe PDF

This item is licensed under a Creative Commons License.