TY - GEN
T1 - Neural Contextual Bandits for Personalized Recommendation
AU - Ban, Yikun
AU - Qi, Yunzhe
AU - He, Jingrui
N1 - Publisher Copyright:
© 2024 Copyright held by the owner/author(s). Publication rights licensed to ACM.
PY - 2024/5/13
Y1 - 2024/5/13
N2 - In the dynamic landscape of online businesses, recommender systems are pivotal in enhancing user experiences. While traditional approaches have relied on static supervised learning, the quest for adaptive, user-centric recommendations has led to the formulation of contextual bandits. This tutorial investigates contextual bandits as a powerful framework for personalized recommendations. We delve into the challenges, advanced algorithms and theories, collaborative strategies, and open challenges and future prospects within this field. Different from existing related tutorials, (1) we focus on the exploration perspective of contextual bandits to alleviate the “Matthew Effect” in recommender systems, i.e., the rich get richer and the poor get poorer, concerning the popularity of items; (2) in addition to conventional linear contextual bandits, we will also dedicate attention to neural contextual bandits, which have emerged as an important branch in recent years, to investigate how neural networks benefit contextual bandits for personalized recommendation both empirically and theoretically; (3) we will cover the latest topic, collaborative neural contextual bandits, which incorporate both user heterogeneity and user correlations customized for recommender systems; (4) we will present and discuss newly emerging challenges and open questions for neural contextual bandits with applications in personalized recommendation, especially for large neural models. Compared with other greedy personalized recommendation approaches, contextual bandit techniques provide distinct ways of modeling user preferences. We believe this tutorial can benefit researchers and practitioners by appreciating the power of exploration and the performance guarantees brought by neural contextual bandits, as well as by rethinking the challenges caused by the increasing complexity of neural models and the magnitude of data.
AB - In the dynamic landscape of online businesses, recommender systems are pivotal in enhancing user experiences. While traditional approaches have relied on static supervised learning, the quest for adaptive, user-centric recommendations has led to the formulation of contextual bandits. This tutorial investigates contextual bandits as a powerful framework for personalized recommendations. We delve into the challenges, advanced algorithms and theories, collaborative strategies, and open challenges and future prospects within this field. Different from existing related tutorials, (1) we focus on the exploration perspective of contextual bandits to alleviate the “Matthew Effect” in recommender systems, i.e., the rich get richer and the poor get poorer, concerning the popularity of items; (2) in addition to conventional linear contextual bandits, we will also dedicate attention to neural contextual bandits, which have emerged as an important branch in recent years, to investigate how neural networks benefit contextual bandits for personalized recommendation both empirically and theoretically; (3) we will cover the latest topic, collaborative neural contextual bandits, which incorporate both user heterogeneity and user correlations customized for recommender systems; (4) we will present and discuss newly emerging challenges and open questions for neural contextual bandits with applications in personalized recommendation, especially for large neural models. Compared with other greedy personalized recommendation approaches, contextual bandit techniques provide distinct ways of modeling user preferences. We believe this tutorial can benefit researchers and practitioners by appreciating the power of exploration and the performance guarantees brought by neural contextual bandits, as well as by rethinking the challenges caused by the increasing complexity of neural models and the magnitude of data.
KW - Contextual Bandits
KW - Personalized Recommendation
UR - http://www.scopus.com/inward/record.url?scp=85194457379&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85194457379&partnerID=8YFLogxK
U2 - 10.1145/3589335.3641241
DO - 10.1145/3589335.3641241
M3 - Conference contribution
AN - SCOPUS:85194457379
T3 - WWW 2024 Companion - Companion Proceedings of the ACM Web Conference
SP - 1246
EP - 1249
BT - WWW 2024 Companion - Companion Proceedings of the ACM Web Conference
PB - Association for Computing Machinery
T2 - 33rd ACM Web Conference, WWW 2024
Y2 - 13 May 2024 through 17 May 2024
ER -