Open Universiteit

Please use this identifier to cite or link to this item: http://hdl.handle.net/1820/5751
Title: D2.2.2 Final Version of the LinkedUp Evaluation Framework
Authors: Drachsler, Hendrik
Stoyanov, Slavi
Guy, Marieke
Scheffel, Maren
Keywords: Linked Data
data competition
education
evaluation framework
Issue Date: 17-Dec-2014
Citation: Drachsler, H., Stoyanov, S., Guy, M., & Scheffel, M. (2014). D2.2.2 Final Version of the LinkedUp Evaluation Framework. LinkedUp project. Heerlen, The Netherlands.
Abstract: This document (D2.2.2) describes the LinkedUp consortium's experience in developing and continuously improving the LinkedUp Evaluation Framework (EF) throughout three open web data competitions on educational data: Veni, Vidi, Vici. D2.2.2 is the final report on the Evaluation Framework. It synthesises the work done in the previous WP2 deliverables (D2.1, D2.2.1, D2.3.1, D2.3.2, D2.3.3), reporting on best practices and providing suggestions for improvements and possible adjustments for additional application areas. The initial version of the EF was developed by applying the Group Concept Mapping (GCM) methodology, which used advanced statistical techniques to objectively identify the shared vision of experts in the domain of technology-enhanced learning on the criteria and indicators of the EF. The GCM contributed to the construct and content validity of the EF. The first version of the EF was tested during the Learning Analytics and Knowledge Conference 2013 (LAK13). After each competition round (Veni, Vidi, Vici), the usefulness and ease of use of the EF were tested with a number of experts through a questionnaire and interviews; the analysis of the data suggested several improvements. In this final report on the EF we summarise the lessons learned and provide six main suggestions for future data competition organisers:
1. Designing a data competition starts with the definition of evaluation criteria.
2. Test the understandability of your evaluation criteria before publishing them.
3. Do not use a 'not applicable' option for evaluation indicators.
4. Fewer indicators are preferable.
5. Unify the scales of the evaluation indicators.
6. Weighting important evaluation criteria can be very informative.
Finally, we present the final version of the LinkedUp EF and refer to the LinkedUp toolbox, which provides all lessons learned and further information for future data competition organisers.
URI: http://hdl.handle.net/1820/5751
Appears in Collections:3. TELI Deliverables and reports

Files in This Item:
File: LinkedUp_D2.2.2.pdf (2 MB, Adobe PDF)


This item is licensed under a Creative Commons License.