RecSys 2019 – Tutorials

Tutorials

Bandit Algorithms in Recommender Systems

by Dorota Glowacka (Helsinki University)

The multi-armed bandit problem models an agent that simultaneously attempts to acquire new knowledge (exploration) and to optimize its decisions based on existing knowledge (exploitation). The agent attempts to balance these competing tasks in order to maximize its total value over the period of time considered. The bandit model has many practical applications, such as clinical trials, adaptive routing, and portfolio design. Over the last decade there has been increasing interest in developing bandit algorithms for specific problems in recommender systems, such as news and ad recommendation, the cold-start problem, personalization, collaborative filtering with bandits, and combining social networks with bandits to improve product recommendation. The aim of this tutorial is to provide an overview of the various applications of bandit algorithms in recommendation.
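To make the exploration–exploitation trade-off described above concrete, here is a minimal sketch of an epsilon-greedy bandit in Python. It is not taken from the tutorial material; the arm count, epsilon value, and the simulated click-through rates are hypothetical, chosen only to illustrate how an agent can balance trying new arms against pulling the best-known one.

```python
import random


class EpsilonGreedyBandit:
    """Epsilon-greedy multi-armed bandit: with probability epsilon pick a
    random arm (exploration), otherwise pick the arm with the highest
    observed mean reward (exploitation)."""

    def __init__(self, n_arms: int, epsilon: float = 0.1):
        self.epsilon = epsilon
        self.counts = [0] * n_arms    # number of pulls per arm
        self.values = [0.0] * n_arms  # running mean reward per arm

    def select_arm(self) -> int:
        if random.random() < self.epsilon:
            return random.randrange(len(self.counts))  # explore
        return max(range(len(self.values)), key=self.values.__getitem__)  # exploit

    def update(self, arm: int, reward: float) -> None:
        self.counts[arm] += 1
        n = self.counts[arm]
        # incremental update of the mean reward for this arm
        self.values[arm] += (reward - self.values[arm]) / n


# Toy simulation with hypothetical click-through rates; arm 2 is best.
true_ctr = [0.02, 0.05, 0.10]
bandit = EpsilonGreedyBandit(n_arms=3, epsilon=0.1)
for _ in range(10_000):
    arm = bandit.select_arm()
    reward = 1.0 if random.random() < true_ctr[arm] else 0.0
    bandit.update(arm, reward)
print(bandit.counts, [round(v, 3) for v in bandit.values])
```

Over many rounds the agent pulls the highest-reward arm most often while still occasionally sampling the others, which is the balance the tutorial abstract refers to.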

This introductory 90-minute tutorial is aimed at an audience with some background in computer science, information retrieval, or recommender systems, and a general interest in the application of machine learning techniques to recommender systems.

Date

Thursday, Sept 19, 2019, 09:00-10:30

Location

Auditorium
