Does LambdaMART Do What You Expect?
Lucchese C.; Marcuzzi F.; Orlando S.
2023-01-01
Abstract
We analyse the idiosyncrasies of LambdaMART gradients and introduce strategies to remove or reduce gradient incoherencies. Specifically, we design three selection strategies that compute the full gradient only for those documents that should be ranked in the top-k positions of the ranking. We empirically demonstrate on publicly available datasets that the proposed approach leads to models achieving statistically significant improvements in terms of NDCG, while maintaining the same training efficiency as optimising truncated metrics.

Files in this record:
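The abstract refers to LambdaMART's pairwise lambda gradients under a truncated metric such as NDCG@k. As a rough illustration of why truncation can create gradient incoherencies (this is not the paper's code; the function name, signature, and per-query setup are assumptions), here is a minimal LambdaRank-style sketch in which documents ranked below position k get a zero discount:

```python
import numpy as np

# Illustrative sketch (not the paper's implementation): LambdaRank-style
# gradients for a single query, weighted by |delta NDCG@k| under a swap.
# Documents ranked below k receive a zero discount, so a relevant document
# stuck below the cutoff may receive no gradient at all -- the kind of
# incoherence the paper's selection strategies are designed to address.
def lambda_gradients(scores, labels, k=10, sigma=1.0):
    n = len(scores)
    order = np.argsort(-scores)              # current ranking by score
    rank = np.empty(n, dtype=int)
    rank[order] = np.arange(n)
    gains = 2.0 ** labels - 1.0
    d = 1.0 / np.log2(np.arange(2, n + 2))   # DCG position discounts
    disc = np.where(rank < k, d[rank], 0.0)  # truncated: zero below cutoff
    ideal = np.sort(gains)[::-1]
    idcg = (ideal[:k] * d[: min(k, n)]).sum()
    lambdas = np.zeros(n)
    for i in range(n):
        for j in range(n):
            if labels[i] <= labels[j]:
                continue                     # only pairs where i beats j
            # |delta NDCG@k| if documents i and j swapped positions
            delta = abs((gains[i] - gains[j]) * (disc[i] - disc[j])) / max(idcg, 1e-12)
            rho = 1.0 / (1.0 + np.exp(sigma * (scores[i] - scores[j])))
            lambdas[i] += sigma * delta * rho
            lambdas[j] -= sigma * delta * rho
    return lambdas
```

With k=1, a relevant document ranked below the cutoff paired with an irrelevant one also below it yields delta = 0, so it accumulates no gradient and can never be pushed into the top positions: the truncated metric is blind to it.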
File | Type | License | Size | Format
---|---|---|---|---
paper-16.pdf (open access) | Publisher's version | Creative Commons | 443.51 kB | Adobe PDF
Documents in ARCA are protected by copyright and all rights are reserved, unless otherwise indicated.