Abstract
In this paper, we propose two metrics, unexpectedness and unexpectedness_r, for measuring the serendipity of recommendation lists produced by recommender systems. Recommender systems have been evaluated in many ways. Although prediction quality is frequently measured by various accuracy metrics, recommender systems must be not only accurate but also useful. A few researchers have argued that the bottom-line measure of the success of a recommender system should be user satisfaction. The basic idea of our metrics is that unexpectedness is the distance between the results produced by the method to be evaluated and those produced by a primitive prediction method. Here, unexpectedness is a metric for a whole recommendation list, while unexpectedness_r is a variant that takes the ranking within the list into account. From the viewpoints of both accuracy and serendipity, we evaluated the results obtained by three prediction methods in experimental studies on television program recommendations.
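The abstract does not reproduce the formal definitions, so the following is a rough sketch only, not necessarily the paper's exact formulas: one way to instantiate "distance from a primitive prediction method" is a relevance-weighted, per-item score difference between the evaluated recommender and the primitive baseline, with a rank discount added for the ranked variant. The symbols Pr, Prim, isrel, and count below are illustrative assumptions.

\[
\mathrm{unexpectedness} \;=\; \frac{1}{N}\sum_{i=1}^{N} \max\bigl(\mathrm{Pr}(s_i) - \mathrm{Prim}(s_i),\ 0\bigr)\,\mathrm{isrel}(s_i)
\]

\[
\mathrm{unexpectedness\_r} \;=\; \frac{1}{N}\sum_{i=1}^{N} \max\bigl(\mathrm{Pr}(s_i) - \mathrm{Prim}(s_i),\ 0\bigr)\,\mathrm{isrel}(s_i)\,\frac{\mathrm{count}(i)}{i}
\]

Here \(s_i\) is the item at rank \(i\) in an \(N\)-item recommendation list, \(\mathrm{Pr}\) and \(\mathrm{Prim}\) are the confidence scores of the evaluated method and the primitive method, \(\mathrm{isrel}(s_i)\) is 1 if the user finds \(s_i\) relevant and 0 otherwise, and \(\mathrm{count}(i)\) is the number of relevant items among the top \(i\). Under these assumptions, the factor \(\mathrm{count}(i)/i\) rewards methods that place unexpected yet relevant items near the head of the list.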
Copyright information
© 2008 Springer-Verlag Berlin Heidelberg
About this paper
Cite this paper
Murakami, T., Mori, K., Orihara, R. (2008). Metrics for Evaluating the Serendipity of Recommendation Lists. In: Satoh, K., Inokuchi, A., Nagao, K., Kawamura, T. (eds) New Frontiers in Artificial Intelligence. JSAI 2007. Lecture Notes in Computer Science, vol 4914. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-540-78197-4_5
DOI: https://doi.org/10.1007/978-3-540-78197-4_5
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-540-78196-7
Online ISBN: 978-3-540-78197-4