Title: Digital Learning Projection. Learning performance estimation from multimodal learning experiences
Authors: Di Mitri, Daniele
Publisher: Springer International Publishing
Citation: Di Mitri, D. (2017). Digital Learning Projection. Learning performance estimation from multimodal learning experiences. In E. André, R. Baker, X. Hu, Ma. M.T. Rodrigo, & B. du Boulay (Eds.), Proceedings of AIED 2017, 18th International Conference on Artificial Intelligence in Education (pp. 609–612). Wuhan, China: Springer International Publishing, 28 June–1 July 2017.
Abstract: Multiple modalities of the learning process can now be captured in real time through wearable and contextual sensors. By annotating these multimodal data (the input space) with expert assessments or self-reports (the output space), machine learning models can be trained to predict learning performance. This can lead to continuous formative assessment and feedback generation, which can be used to personalise and contextualise content, improve awareness, and support informed decisions about learning.
Appears in Collections: 1. TELI Publications, books and conference papers
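The abstract describes a standard supervised-learning setup: multimodal sensor features form the input space, expert assessments or self-reports form the output space, and a model learns the mapping between them. The following is a minimal illustrative sketch of that idea, not the authors' actual pipeline; the feature names (heart-rate z-score, posture score), the labels, and the perceptron learner are all assumptions chosen for brevity.

```python
# Illustrative sketch only (not the method from the paper).
# Input space: multimodal sensor features per learning window,
# here a hypothetical (heart_rate_z, posture_score) pair.
# Output space: an expert annotation, 1 = "good performance",
# 0 = "needs support". A tiny perceptron stands in for any
# supervised model trained on such annotated data.

def train_perceptron(samples, labels, epochs=20, lr=0.1):
    """Fit perceptron weights and bias on annotated windows."""
    w = [0.0] * len(samples[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = y - pred  # 0 when correct; +/-1 when wrong
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

def predict(model, x):
    """Estimate performance for a new multimodal window."""
    w, b = model
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

# Hypothetical annotated windows (the "multimodal learning experiences")
X = [(-1.0, 0.9), (-0.8, 0.8), (1.2, 0.2), (0.9, 0.1)]
y = [1, 1, 0, 0]  # expert assessments (the output space)

model = train_perceptron(X, y)
print(predict(model, (-0.9, 0.85)))  # estimate for an unseen window
```

In this toy setup, the trained model generalises to a new window, which is the step the abstract frames as continuous formative assessment: each incoming window of sensor data yields an immediate performance estimate that feedback generation could build on.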
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.