Open Universiteit

Please use this identifier to cite or link to this item: http://hdl.handle.net/1820/9822
Title: Multimodal Challenge: Analytics Beyond User-computer Interaction Data
Authors: Di Mitri, Daniele
Schneider, Jan
Specht, Marcus
Drachsler, Hendrik
Keywords: multimodal learning analytics
wearables
CrossMMLA
sensor-based learning
Issue Date: Mar-2018
Publisher: Society for Learning Analytics Research
Citation: Di Mitri, D., Schneider, J., Specht, M., & Drachsler, H. (2018). Multimodal Challenge: Analytics Beyond User-computer Interaction Data. In Pardo, A., Bartimote, K., Lynch, G., Buckingham Shum, S., Ferguson, R., Merceron, A., & Ochoa, X. (Eds.), Companion Proceedings of the 8th International Conference on Learning Analytics and Knowledge (pp. 362-365). Sydney, Australia: Society for Learning Analytics Research
Abstract: This contribution describes one of the challenges explored in the Fourth LAK Hackathon. The challenge aims at shifting the focus from learning situations that can be easily traced through user-computer interaction data towards user-world interaction events, which are typical of co-located and practice-based learning experiences. This mission, pursued by the multimodal learning analytics (MMLA) community, seeks to bridge the gap between digital and physical learning spaces. The “multimodal” approach consists of combining learners’ motoric actions with physiological responses and data about the learning context. These data can be collected through multiple wearable sensors and Internet of Things (IoT) devices. This Hackathon table will confront three main challenges arising from the analysis and valorisation of multimodal datasets: 1) data collection and storage, 2) data annotation, and 3) data processing and exploitation. The research questions considered in this Hackathon challenge include: How can the raw sensor data streams be processed and relevant features extracted? Which data mining and machine learning techniques can be applied? How can two action recordings be compared? How can sensor data be combined with the Experience API (xAPI)? What are meaningful visualisations for these data?
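As a purely illustrative sketch of the last question (not part of the paper), a sensor reading could be attached to a standard xAPI statement as a context extension; the actor, activity, and extension IRIs and the sensor payload below are hypothetical placeholders.

```python
import json

# Minimal sketch (assumptions): a single wearable-sensor sample embedded in an
# xAPI statement via a context extension. All identifiers marked "example.org"
# are made up for illustration; only the verb IRI is a standard ADL verb.
sensor_sample = {
    "timestamp": "2018-03-05T10:15:30Z",
    "accelerometer": {"x": 0.02, "y": -0.98, "z": 0.11},  # acceleration in g
    "heart_rate_bpm": 82,
}

statement = {
    "actor": {"name": "Learner 01", "mbox": "mailto:learner01@example.org"},
    "verb": {
        "id": "http://adlnet.gov/expapi/verbs/experienced",
        "display": {"en-US": "experienced"},
    },
    "object": {
        "id": "http://example.org/activities/practice-session",
        "definition": {"name": {"en-US": "Practice-based learning session"}},
    },
    "timestamp": sensor_sample["timestamp"],
    "context": {
        "extensions": {
            # xAPI extension keys must be IRIs; this one is a hypothetical example.
            "http://example.org/xapi/extensions/sensor-sample": sensor_sample
        }
    },
}

# The resulting statement could then be sent to a Learning Record Store (LRS).
print(json.dumps(statement, indent=2))
```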
URI: http://hdl.handle.net/1820/9822
Appears in Collections: 1. TELI Publications, books and conference papers

Files in This Item:
File: Hackathon_CameraReady.pdf
Size: 78.92 kB
Format: Adobe PDF


This item is licensed under a Creative Commons License.