Open Universiteit

Title: Data Fusion for Real-time Multimodal Emotion Recognition through Webcams and Microphones in E-Learning
Authors: Bahreini, Kiavash
Nadolski, Rob
Westera, Wim
Keywords: Hybrid Data Fusion
Multimodal Emotion Recognition
Emotion Detection
Real-time Software Development
Software Development
Data Mining
WEKA Classifiers
Machine Learning
Issue Date: 2-Mar-2016
Publisher: Taylor & Francis Group
Citation: Bahreini, K., Nadolski, R., & Westera, W. (2016). Data Fusion for Real-time Multimodal Emotion Recognition through Webcams and Microphones in E-Learning. International Journal of Human-Computer Interaction, 32(5), 415-430. Taylor & Francis Group. DOI: 10.1080/10447318.2016.1159799.
Abstract: This paper describes the validation study of our software that uses combined webcam and microphone data for real-time, continuous, unobtrusive emotion recognition as part of our FILTWAM framework. FILTWAM aims at deploying a real-time multimodal emotion recognition method to provide more adequate feedback to learners during online communication skills training. For this, timely feedback is needed that reflects the emotions learners intend to show and that increases learners' awareness of their own behaviour. At a minimum, a reliable and valid software interpretation of performed facial and vocal emotions is needed to warrant such adequate feedback. This validation study therefore calibrates our software. The study uses a multimodal fusion method. Twelve test persons performed computer-based tasks in which they were asked to mimic specific facial and vocal emotions. All test persons' behaviour was recorded on video, and two raters independently scored the shown emotions, which were contrasted with the software recognition outcomes. The hybrid multimodal fusion method of our software achieves accuracies between 96.1% and 98.6% for the best-chosen WEKA classifiers over the predicted emotions. The software fulfils its requirements of real-time data interpretation and reliable results.
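The hybrid fusion described in the abstract can be illustrated as decision-level fusion: one classifier per modality, with the per-class probabilities combined into a single prediction. The sketch below is illustrative only; the paper used WEKA classifiers on real facial and vocal features, whereas the classifier choice, the synthetic features, the emotion label set, and the fusion weights here are assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Illustrative emotion label set (assumption, not taken from the paper).
EMOTIONS = ["happy", "sad", "angry", "surprised", "fearful", "disgusted", "neutral"]

rng = np.random.default_rng(0)

# Synthetic stand-ins for real feature vectors: face features (e.g. facial
# landmark distances) and voice features (e.g. pitch/energy statistics).
n = 200
face_X = rng.normal(size=(n, 10))
voice_X = rng.normal(size=(n, 6))
y = rng.integers(0, len(EMOTIONS), size=n)

# One classifier per modality; RandomForest is an illustrative substitute
# for the WEKA classifiers used in the paper.
face_clf = RandomForestClassifier(random_state=0).fit(face_X, y)
voice_clf = RandomForestClassifier(random_state=0).fit(voice_X, y)

def fuse_predict(face_feats, voice_feats, w_face=0.6, w_voice=0.4):
    """Decision-level fusion: weighted average of per-modality class probabilities."""
    p_face = face_clf.predict_proba(face_feats.reshape(1, -1))[0]
    p_voice = voice_clf.predict_proba(voice_feats.reshape(1, -1))[0]
    fused = w_face * p_face + w_voice * p_voice
    return EMOTIONS[int(np.argmax(fused))]

print(fuse_predict(face_X[0], voice_X[0]))
```

Because both classifiers are trained on the same label vector, their `classes_` orderings agree, so the probability vectors can be averaged element-wise before taking the argmax.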
Description: The original article is available on the Taylor & Francis Online website at the following link:
ISSN: 1532-7590 (Online)
1044-7318 (Print)
Appears in Collections: 1. FEEEL Publications, books and conference papers

Files in This Item:
File: Bahreini, Nadolski, Westera, 2016a.pdf
Size: 1.33 MB
Format: Adobe PDF

This item is licensed under a Creative Commons License.