This tutorial weaves together diverse strands of modern Learning to Rank (LtR) research and presents them in a unified full-day tutorial. First, we introduce the fundamentals of LtR and give an overview of its various sub-fields. Then, we discuss recent advances in gradient boosting methods such as LambdaMART, focusing on their efficiency/effectiveness trade-offs and optimizations. Subsequently, we present TF-Ranking, a new open-source TensorFlow package for neural LtR models, and show how it can be used for modeling sparse textual features. Finally, we conclude the tutorial by covering unbiased LtR, a new research field aiming at learning from biased implicit user feedback. The tutorial consists of three two-hour sessions, each focusing on one of the topics described above. It provides a mix of theoretical and hands-on sessions and should benefit both academics interested in learning more about the current state of the art in LtR and practitioners who want to use LtR techniques in their applications.
|Publication date:||2019|
|Title:||Learning to Rank in Theory and Practice: From Gradient Boosting to Neural Networks and Unbiased Learning|
|Book title:||Proceedings of the 42nd International ACM SIGIR Conference on Research and Development in Information Retrieval|
|Digital Object Identifier (DOI):||http://dx.doi.org/10.1145/3331184.3334824|
|Appears in types:||4.1 Article in conference proceedings|