Open Universiteit

Full metadata record
DC Field | Value | Language
dc.contributor.author | Di Mitri, Daniele | -
dc.identifier.citation | Di Mitri, D. (2017). Digital Learning Projection. Learning performance estimation from multimodal learning experiences. In E. André, R. Baker, X. Hu, Ma. M. T. Rodrigo, & B. du Boulay (Eds.), Proceedings of AIED 2017, 18th International Conference on Artificial Intelligence in Education (pp. 609–612). Wuhan, China: Springer International Publishing, 28 June–1 July 2017. | en_US
dc.description.abstract | Multiple modalities of the learning process can now be captured in real time through wearable and contextual sensors. By annotating these multimodal data (the input space) with expert assessments or self-reports (the output space), machine learning models can be trained to predict learning performance. This can enable continuous formative assessment and feedback generation, which can be used to personalise and contextualise content, improve awareness, and support informed decisions about learning. | en_US
dc.publisher | Springer International Publishing | en_US
dc.subject | multimodal data | en_US
dc.subject | learning analytics | en_US
dc.subject | PhD project | en_US
dc.subject | doctoral consortium | en_US
dc.title | Digital Learning Projection. Learning performance estimation from multimodal learning experiences | en_US
dc.type | Conference paper | en_US
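The setup described in the abstract — multimodal sensor samples as the input space, expert or self-report annotations as the output space, and a model trained to map one to the other — can be illustrated with a minimal sketch. All feature names, values, and labels below are hypothetical, and the nearest-neighbour classifier is a stand-in, since the abstract does not name a specific model.

```python
# Hypothetical sketch of the abstract's pipeline: multimodal samples
# (input space) annotated with performance labels (output space),
# then used to predict the label of a new observation.
# Features and labels are invented for illustration only.
import math

# Each entry: ((heart_rate, audio_level, posture_score), performance label)
annotated = [
    ((72.0, 0.20, 0.9), "high"),
    ((95.0, 0.65, 0.4), "low"),
    ((70.0, 0.25, 0.8), "high"),
    ((98.0, 0.70, 0.3), "low"),
]

def predict(sample):
    """Predict performance via the nearest annotated sample (1-NN)."""
    return min(annotated, key=lambda pair: math.dist(pair[0], sample))[1]

# A new multimodal observation close to the "high"-performance examples:
print(predict((74.0, 0.22, 0.85)))
```

In practice the paper's setting implies richer feature streams and a trained model rather than a lookup, but the input-space/output-space division is the same.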
Appears in Collections: 1. TELI Publications, books and conference papers

Files in This Item:
File | Description | Size | Format
AIED2017_paper_187.pdf | | 144.99 kB | Adobe PDF

Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.