
The Group Loss++: A deeper look into group loss for deep metric learning

Elezi, Ismail; Vascon, Sebastiano; Torcinovich, Alessandro; Pelillo, Marcello; Leal-Taixe, Laura
2022-01-01

Abstract

Deep metric learning has yielded impressive results in tasks such as clustering and image retrieval by leveraging neural networks to obtain highly discriminative feature embeddings, which can be used to group samples into different classes. Much research has been devoted to the design of smart loss functions or data mining strategies for training such networks. Most methods consider only pairs or triplets of samples within a mini-batch to compute the loss function, which is commonly based on the distance between embeddings. We propose Group Loss, a loss function based on a differentiable label-propagation method that enforces embedding similarity across all samples of a group while promoting, at the same time, low-density regions amongst data points belonging to different groups. Guided by the smoothness assumption that "similar objects should belong to the same group", the proposed loss trains the neural network for a classification task, enforcing a consistent labelling amongst samples within a class. We design a set of inference strategies tailored towards our algorithm, named Group Loss++, that further improve the results of our model. We show state-of-the-art results on clustering and image retrieval on four retrieval datasets, and present competitive results on two person re-identification datasets, providing a unified framework for retrieval and re-identification.
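The label-propagation idea in the abstract can be sketched with a small multiplicative (replicator-style) update: each sample's class probabilities are reinforced by the probabilities of similar samples until the labelling within a group becomes consistent. The sketch below is illustrative only, assuming cosine similarity and NumPy; function and variable names are hypothetical, not the authors' API, and the paper's exact formulation may differ.

```python
import numpy as np

def propagate_labels(features, probs, n_iter=3, eps=1e-12):
    """Illustrative label-propagation step, in the spirit of the Group Loss.

    features: (n, d) embeddings produced by the network
    probs:    (n, c) initial class probabilities (e.g. softmax outputs)
    Returns refined probabilities after replicator-style updates.
    """
    # Pairwise similarity between samples (cosine here, as a stand-in)
    f = features / (np.linalg.norm(features, axis=1, keepdims=True) + eps)
    W = f @ f.T
    np.fill_diagonal(W, 0.0)        # a sample gives no support to itself
    W = np.clip(W, 0.0, None)       # keep support non-negative

    X = probs.astype(float).copy()
    for _ in range(n_iter):
        # Each class score is reinforced by similar samples' scores,
        # then rows are renormalized back onto the probability simplex.
        X = X * (W @ X)
        X = X / (X.sum(axis=1, keepdims=True) + eps)
    return X
```

Applied to a mini-batch where one sample per class has a confident label, the update pulls uncertain samples toward the label of their most similar neighbours, which is the "consistent labelling amongst samples within a class" that the loss enforces.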
Files in this item:

File: 2204.01509.pdf
Access: open access
Description: preprint
Type: Pre-print document
License: Open access (no restrictions)
Size: 6.36 MB
Format: Adobe PDF

File: The_Group_Loss_A_deeper_look_into_group_loss_for_deep_metric_learning.pdf
Access: not available
Description: final version
Type: Publisher's version
License: Closed access (personal use)
Size: 3.94 MB
Format: Adobe PDF

Documents in ARCA are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/10278/5003334
Citations
  • PMC: 0
  • Scopus: 7
  • ISI: 6