Open Universiteit

Please use this identifier to cite or link to this item: http://hdl.handle.net/1820/8650
Title: User-centric Evaluation of Recommender Systems in Social Learning Platforms: Accuracy is Just the Tip of the Iceberg
Authors: Fazeli, Soudé
Drachsler, Hendrik
Bitter-Rijpkema, Marlies
Brouns, Francis
Van der Vegt, Wim
Sloep, Peter
Keywords: Recommender systems
Measurement
Prediction algorithms
Metadata
Crowdsourcing
Social network services
Tagging
Evaluation
Social
Learning
Accuracy
Performance
Issue Date: 27-Jul-2017
Publisher: IEEE Transactions on Learning Technologies
Citation: Fazeli, S., Drachsler, H., Bitter-Rijpkema, M., Brouns, F., Van der Vegt, W., & Sloep, P. B. (2017). User-centric Evaluation of Recommender Systems in Social Learning Platforms: Accuracy is Just the Tip of the Iceberg. IEEE Transactions on Learning Technologies, PP(99), 1-1. doi: 10.1109/TLT.2017.2732349
Series/Report no.: Soude Fazeli;
Abstract: Recommender systems provide users with content they might be interested in. Conventionally, recommender systems are evaluated mostly by prediction accuracy metrics alone. But the ultimate goal of a recommender system is to increase user satisfaction. Therefore, evaluations that measure user satisfaction should also be performed before deploying a recommender system in a real target environment. Such evaluations are, however, laborious and complicated compared to traditional, data-centric evaluations. In this study, we carried out a user-centric evaluation of state-of-the-art recommender systems, as well as a graph-based approach, in the ecologically valid setting of an authentic social learning platform. We also conducted a data-centric evaluation on the same data to investigate the added value of user-centric evaluations and how the user satisfaction delivered by a recommender system relates to its performance in terms of accuracy metrics. Our findings suggest that user-centric evaluation results are not necessarily in line with data-centric evaluation results. We conclude that the traditional evaluation of recommender systems in terms of prediction accuracy alone does not suffice to judge their performance on the user side. Moreover, the user-centric evaluation provides valuable insights into how candidate algorithms perform on each of five quality metrics for recommendations: usefulness, accuracy, novelty, diversity, and serendipity.
URI: http://hdl.handle.net/1820/8650
ISSN: 1939-1382
Appears in Collections: 1. TELI Publications, books and conference papers

Files in This Item:
File: IEEE_TLT2017_pre_print.pdf
Size: 2.08 MB
Format: Adobe PDF (View/Open)


This item is licensed under a Creative Commons License.