Abstract
Collaborative filtering-based recommender systems leverage vast amounts of behavioral user data, which poses severe privacy risks. Thus, random noise is often added to the data to ensure Differential Privacy (DP). However, to date, it is not well understood in which ways this impacts personalized recommendations. In this work, we study how DP affects recommendation accuracy and popularity bias when applied to the training data of state-of-the-art recommendation models. Our findings are threefold: First, we observe that nearly all users’ recommendations change when DP is applied. Second, recommendation accuracy drops substantially, while the popularity of recommended items increases sharply, suggesting that popularity bias worsens. Finally, we find that DP exacerbates popularity bias more severely for users who prefer unpopular items than for users who prefer popular items.
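As a rough illustration of how DP noise can be injected into training data, below is a minimal sketch that perturbs a binary user-item interaction matrix with randomized response, a standard mechanism for local DP on binary data. The mechanism choice, the function name randomized_response, and the value of epsilon are illustrative assumptions, not the paper's actual setup.

```python
import numpy as np


def randomized_response(interactions: np.ndarray, epsilon: float, rng=None) -> np.ndarray:
    """Perturb a binary interaction matrix with epsilon-DP randomized response.

    Each entry is kept with probability e^eps / (1 + e^eps) and flipped
    otherwise. This is an illustrative stand-in for input perturbation of
    the training data, not the specific mechanism evaluated in the paper.
    """
    rng = np.random.default_rng() if rng is None else rng
    keep_prob = np.exp(epsilon) / (1.0 + np.exp(epsilon))
    flip = rng.random(interactions.shape) >= keep_prob
    return np.where(flip, 1 - interactions, interactions)


# Toy usage: 4 users x 5 items of implicit feedback, perturbed at epsilon = 1.0.
X = np.array([[1, 0, 0, 1, 0],
              [0, 1, 0, 0, 1],
              [1, 1, 0, 0, 0],
              [0, 0, 1, 1, 1]])
X_private = randomized_response(X, epsilon=1.0)
```

The smaller epsilon is, the more entries are flipped, i.e., the stronger the protection and the larger the distortion of the training data; this trade-off underlies the accuracy drop and the increase in popularity bias reported above.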
Notes
- 1.
The number of recommended relevant items is divided by the number of all relevant items (i.e., Recall) or by the length of the recommendation list (i.e., Precision). When DP is applied, \(\varDelta Recall\) and \(\varDelta Precision\) depend only on how the number of recommended relevant items changes; therefore, the relative change of both metrics is the same (see the short derivation after these notes).
- 2.
- 3.
No clear pattern across datasets can be observed [5]; thus, this behavior of MultVAE needs to be investigated in future work.
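To make the reasoning in Note 1 explicit, here is a short derivation under the usual top-\(k\) definitions; the symbols \(h_u\) (number of recommended relevant items for user \(u\)), \(Rel_u\) (set of relevant items of user \(u\)), and \(k\) (length of the recommendation list) are notation introduced here for illustration only:

\[ Recall@k = \frac{h_u}{|Rel_u|}, \qquad Precision@k = \frac{h_u}{k}. \]

Since DP changes neither \(|Rel_u|\) nor \(k\), the denominators cancel in the relative changes:

\[ \varDelta Recall = \frac{Recall^{DP} - Recall}{Recall} = \frac{h_u^{DP} - h_u}{h_u} = \frac{Precision^{DP} - Precision}{Precision} = \varDelta Precision. \]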
References
Abdollahpouri, H., et al.: Multistakeholder recommendation: survey and research directions. User Model. User-Adap. Inter. 30, 127–158 (2020)
Abdollahpouri, H., Mansoury, M., Burke, R., Mobasher, B.: The unfairness of popularity bias in recommendation. In: Workshop on Recommendation in Multi-stakeholder Environments (RMSE), in Conjunction with the 13th ACM Conference on Recommender Systems (RecSys) (2019)
Abdollahpouri, H., Mansoury, M., Burke, R., Mobasher, B.: The connection between popularity bias, calibration, and fairness in recommendation. In: Proceedings of the 14th ACM Conference on Recommender Systems (RecSys), pp. 726–731 (2020)
Agarwal, S.: Trade-offs between fairness, interpretability, and privacy in machine learning. Master’s thesis, University of Waterloo (2020)
Anelli, V.W., Bellogín, A., Di Noia, T., Jannach, D., Pomo, C.: Top-n recommendation algorithms: a quest for the state-of-the-art. In: Proceedings of the 30th ACM Conference on User Modeling, Adaptation and Personalization (UMAP), pp. 121–131 (2022)
Bagdasaryan, E., Poursaeed, O., Shmatikov, V.: Differential privacy has disparate impact on model accuracy. In: Proceedings of the 33rd International Conference on Neural Information Processing Systems (NeurIPS), pp. 15479–15488 (2019)
Beigi, G., Liu, H.: A survey on privacy in social media: identification, mitigation, and applications. ACM Trans. Data Sci. (TDS) 1(1), 1–38 (2020)
Berkovsky, S., Kuflik, T., Ricci, F.: The impact of data obfuscation on the accuracy of collaborative filtering. Expert Syst. Appl. 39(5), 5033–5042 (2012)
Bishop, C.M.: Training with noise is equivalent to Tikhonov regularization. Neural Comput. 7(1), 108–116 (1995)
Calandrino, J.A., Kilzer, A., Narayanan, A., Felten, E.W., Shmatikov, V.: “You might also like:” privacy risks of collaborative filtering. In: 2011 IEEE Symposium on Security and Privacy (S&P), pp. 231–246 (2011)
Chen, C., Zhou, J., Wu, B., Fang, W., Wang, L., Qi, Y., Zheng, X.: Practical privacy preserving POI recommendation. ACM Trans. Intell. Syst. Technol. (TIST) 11(5), 1–20 (2020)
Chen, C., Zhang, M., Zhang, Y., Liu, Y., Ma, S.: Efficient neural matrix factorization without sampling for recommendation. ACM Trans. Inf. Syst. (TOIS) 38(2), 1–28 (2020)
Ding, B., Kulkarni, J., Yekhanin, S.: Collecting telemetry data privately. In: Proceedings of the 31st International Conference on Neural Information Processing Systems (NeurIPS), pp. 3574–3583 (2017)
Dwork, C.: Differential privacy: a survey of results. In: International Conference on Theory and Applications of Models of Computation (TAMC), pp. 1–19 (2008)
Dwork, C., Hardt, M., Pitassi, T., Reingold, O., Zemel, R.: Fairness through awareness. In: Proceedings of the 3rd Innovations in Theoretical Computer Science Conference (ITCS), pp. 214–226 (2012)
Dwork, C., Roth, A., et al.: The algorithmic foundations of differential privacy. Now Publishers, Inc. (2014)
Ekstrand, M.D., Joshaghani, R., Mehrpouyan, H.: Privacy for all: ensuring fair and equitable privacy protections. In: Proceedings of ACM Conference on Fairness, Accountability, and Transparency (FAccT), pp. 35–47 (2018)
Eskandanian, F., Sonboli, N., Mobasher, B.: Power of the few: analyzing the impact of influential users in collaborative recommender systems. In: Proceedings of the 27th ACM Conference on User Modeling, Adaptation and Personalization, pp. 225–233 (2019)
Friedman, A., Berkovsky, S., Kaafar, M.A.: A differential privacy framework for matrix factorization recommender systems. User Model. User-Adap. Inter. (UMUAI) 26(5), 425–458 (2016)
Friedman, A., Knijnenburg, B.P., Vanhecke, K., Martens, L., Berkovsky, S.: Privacy aspects of recommender systems. In: Ricci, F., Rokach, L., Shapira, B. (eds.) Recommender Systems Handbook, pp. 649–688. Springer, Boston, MA (2015). https://doi.org/10.1007/978-1-4899-7637-6_19
Ganhör, C., Penz, D., Rekabsaz, N., Lesota, O., Schedl, M.: Unlearning protected user attributes in recommendations with adversarial training. In: Proceedings of the 45th International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR), pp. 2142–2147 (2022)
Gentry, C.: A fully homomorphic encryption scheme. Ph.D. thesis, Stanford university (2009)
Harper, F.M., Konstan, J.A.: The MovieLens datasets: history and context. ACM Trans. Interact. Intell. Syst. (TiiS) 5(4), 1–19 (2015)
Hashemi, H., et al.: Data leakage via access patterns of sparse features in deep learning-based recommendation systems. In: Workshop on Trustworthy and Socially Responsible Machine Learning (TSRML), in Conjunction with the 36th Conference on Neural Information Processing Systems (NeurIPS) (2022)
He, X., Deng, K., Wang, X., Li, Y., Zhang, Y., Wang, M.: LightGCN: simplifying and powering graph convolution network for recommendation. In: Proceedings of the 43rd International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR), pp. 639–648 (2020)
Kim, S., Kim, J., Koo, D., Kim, Y., Yoon, H., Shin, J.: Efficient privacy-preserving matrix factorization via fully homomorphic encryption. In: Proceedings of the 11th ACM on Asia Conference on Computer and Communications Security (ASIACCS), pp. 617–628 (2016)
Kingma, D.P., Ba, J.: Adam: a method for stochastic optimization. In: Proceedings of 3rd International Conference on Learning Representations (ICLR) (2015)
Klimashevskaia, A., Elahi, M., Jannach, D., Trattner, C., Skjærven, L.: Mitigating popularity bias in recommendation: potential and limits of calibration approaches. In: Advances in Information Retrieval: Workshop on Algorithmic Bias in Search and Recommendation (BIAS) in conjunction with the 42nd European Conference on IR Research (ECIR), pp. 82–90. Springer, Heidelberg (2022). https://doi.org/10.1007/978-3-031-09316-6_8
Kowald, D., Schedl, M., Lex, E.: The unfairness of popularity bias in music recommendation: a reproducibility study. In: Jose, J.M., et al. (eds.) ECIR 2020. LNCS, vol. 12036, pp. 35–42. Springer, Cham (2020). https://doi.org/10.1007/978-3-030-45442-5_5
Lacic, E., Reiter-Haas, M., Kowald, D., Reddy Dareddy, M., Cho, J., Lex, E.: Using autoencoders for session-based job recommendations. User Model. User-Adap. Inter. 30, 617–658 (2020)
Lesota, O., et al.: Analyzing item popularity bias of music recommender systems: are different genders equally affected? In: Proceedings of the 15th ACM Conference on Recommender Systems (RecSys), pp. 601–606 (2021)
Lex, E., Kowald, D., Schedl, M.: Modeling popularity and temporal drift of music genre preferences. Trans. Int. Soc. Music Inf. Retr. 3(1) (2020)
Liang, D., Krishnan, R.G., Hoffman, M.D., Jebara, T.: Variational autoencoders for collaborative filtering. In: Proceedings of the World Wide Web Conference (TheWebConf), pp. 689–698 (2018)
Lin, Y., et al.: Meta matrix factorization for federated rating predictions. In: Proceedings of the 43rd International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR), pp. 981–990 (2020)
Long, J., Chen, T., Nguyen, Q.V.H., Yin, H.: Decentralized collaborative learning framework for next POI recommendation. ACM Trans. Inf. Syst. 41(3) (2023). https://doi.org/10.1145/3555374
McMahan, B., Moore, E., Ramage, D., Hampson, S., Arcas, B.A.: Communication-efficient learning of deep networks from decentralized data. In: Proceedings of the 20th International Conference on Artificial Intelligence and Statistics (AISTATS), pp. 1273–1282 (2017)
Mehrabi, N., Morstatter, F., Saxena, N., Lerman, K., Galstyan, A.: A survey on bias and fairness in machine learning. ACM Comput. Surv. (CSUR) 54(6), 1–35 (2021)
Melchiorre, A.B., Rekabsaz, N., Parada-Cabaleiro, E., Brandl, S., Lesota, O., Schedl, M.: Investigating gender fairness of recommendation algorithms in the music domain. Inf. Process. Manag. (IP&M) 58(5), 102666 (2021)
Müllner, P., Lex, E., Schedl, M., Kowald, D.: ReuseKNN: neighborhood reuse for differentially private KNN-based recommendations. ACM Trans. Intell. Syst. Technol. (2023). https://doi.org/10.1145/3608481
Muellner, P., Kowald, D., Lex, E.: Robustness of meta matrix factorization against strict privacy constraints. In: Hiemstra, D., Moens, M.-F., Mothe, J., Perego, R., Potthast, M., Sebastiani, F. (eds.) ECIR 2021. LNCS, vol. 12657, pp. 107–119. Springer, Cham (2021). https://doi.org/10.1007/978-3-030-72240-1_8
Müllner, P., Lex, E., Schedl, M., Kowald, D.: Differential privacy in collaborative filtering recommender systems: a review. Front. Big Data 6 (2023). https://doi.org/10.3389/fdata.2023.1249997
Nasr, M., Shokri, R., Houmansadr, A.: Comprehensive privacy analysis of deep learning: passive and active white-box inference attacks against centralized and federated learning. In: Proceedings of the IEEE Symposium on Security and Privacy (S&P), pp. 739–753 (2019)
Ni, J., Li, J., McAuley, J.: Justifying recommendations using distantly-labeled reviews and fine-grained aspects. In: Proceedings of the Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pp. 188–197 (2019)
Parra, D., Sahebi, S.: Recommender systems: sources of knowledge and evaluation metrics. In: Advanced Techniques in Web Intelligence-2: Web User Browsing Behaviour and Preference Analysis, pp. 149–175. Springer, Heidelberg (2013). https://doi.org/10.1007/978-3-642-33326-2_7
Ren, H., Deng, J., Xie, X.: GRNN: generative regression neural network-a data leakage attack for federated learning. ACM Trans. Intell. Syst. Technol. (TIST) 13(4), 1–24 (2022)
Saveski, M., Mantrach, A.: Item cold-start recommendations: learning local collective embeddings. In: Proceedings of the 8th ACM Conference on Recommender Systems (RecSys), pp. 89–96 (2014)
Schedl, M., Bauer, C.: Distance- and rank-based music mainstreaminess measurement. In: Adjunct Publication of the 25th Conference on User Modeling, Adaptation and Personalization (UMAP): Workshop on Surprise, Opposition, and Obstruction in Adaptive and Personalized Systems (SOAP), pp. 364–367 (2017)
Schedl, M., Bauer, C., Reisinger, W., Kowald, D., Lex, E.: Listener modeling and context-aware music recommendation based on country archetypes. Front. Artif. Intell. 3, 508725 (2021)
Lam, S.K., Frankowski, D., Riedl, J.: Do you trust your recommendations? an exploration of security and privacy issues in recommender systems. In: Müller, G. (ed.) ETRICS 2006. LNCS, vol. 3995, pp. 14–29. Springer, Heidelberg (2006). https://doi.org/10.1007/11766155_2
Sun, J.A., Pentyala, S., Cock, M.D., Farnadi, G.: Privacy-preserving fair item ranking. In: Kamps, J., et al. (eds.) ECIR 2023, vol. 13981, pp. 188–203. Springer, Heidelberg (2023). https://doi.org/10.1007/978-3-031-28238-6_13
Sun, Z., et al.: Are we evaluating rigorously? Benchmarking recommendation for reproducible evaluation and fair comparison. In: Proceedings of the 14th ACM Conference on Recommender Systems (RecSys), pp. 23–32 (2020)
Weinsberg, U., Bhagat, S., Ioannidis, S., Taft, N.: BlurMe: inferring and obfuscating user gender based on ratings. In: Proceedings of the 6th ACM Conference on Recommender Systems (RecSys), pp. 195–202 (2012)
Xin, X., et al.: On the user behavior leakage from recommender system exposure. ACM Trans. Inf. Syst. (TOIS) 41(3), 1–25 (2023)
Xin, Y., Jaakkola, T.: Controlling privacy in recommender systems. In: Proceedings of the 27th International Conference on Neural Information Processing Systems (NeurIPS), pp. 2618–2626. MIT Press, Cambridge (2014)
Yang, Z., Ge, Y., Su, C., Wang, D., Zhao, X., Ying, Y.: Fairness-aware differentially private collaborative filtering. In: Companion Proceedings of the ACM Web Conference (TheWebConf), pp. 927–931 (2023)
Zemel, R., Wu, Y., Swersky, K., Pitassi, T., Dwork, C.: Learning fair representations. In: International Conference on Machine Learning (ICML), pp. 325–333 (2013)
Zhang, M., et al.: Membership inference attacks against recommender systems. In: Proceedings of the ACM SIGSAC Conference on Computer and Communications Security (CCS), pp. 864–879 (2021)
Zhang, S., Yin, H.: Comprehensive privacy analysis on federated recommender system against attribute inference attacks. IEEE Trans. Knowl. Data Eng. (TKDE) (2023)
Zhu, T., Li, G., Ren, Y., Zhou, W., Xiong, P.: Differential privacy for neighborhood-based collaborative filtering. In: Proceedings of the IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining (ASONAM), pp. 752–759 (2013)
Acknowledgments
This research is funded by the “DDAI” COMET Module within the COMET - Competence Centers for Excellent Technologies Programme, funded by the Austrian Federal Ministry for Transport, Innovation and Technology (bmvit), the Austrian Federal Ministry for Digital and Economic Affairs (bmdw), the Austrian Research Promotion Agency (FFG), the province of Styria (SFG) and partners from industry and academia. The COMET Programme is managed by FFG. Moreover, this research received support by the Austrian Science Fund (FWF): DFH-23 and P36413; and by the State of Upper Austria and the Federal Ministry of Education, Science, and Research, through grants LIT-2020-9-SEE-113 and LIT-2021-10-YOU-215. For open access purposes, the author has applied a CC BY public copyright license to any author accepted manuscript version arising from this submission.
Copyright information
© 2024 The Author(s), under exclusive license to Springer Nature Switzerland AG
About this paper
Cite this paper
Müllner, P., Lex, E., Schedl, M., Kowald, D. (2024). The Impact of Differential Privacy on Recommendation Accuracy and Popularity Bias. In: Goharian, N., et al. Advances in Information Retrieval. ECIR 2024. Lecture Notes in Computer Science, vol 14611. Springer, Cham. https://doi.org/10.1007/978-3-031-56066-8_33