Exploiting Distribution Constraints for Scalable and Efficient Image Retrieval

The University of Texas at Austin

Abstract

Image retrieval is a crucial problem in robotics and computer vision, with downstream applications in robot place recognition and vision-based product recommendations. Modern retrieval systems face two key challenges: scalability and efficiency. State-of-the-art image retrieval systems train a dedicated neural network for each dataset, an approach that does not scale. Furthermore, since retrieval cost grows directly with embedding size, existing systems that use large embeddings are inefficient. To tackle scalability, recent works propose using off-the-shelf foundation models. However, these models, though applicable across datasets, fall short of the performance of dataset-specific models. Our key observation is that, while foundation models capture the subtleties necessary for effective retrieval, the underlying distribution of their embedding space can negatively impact cosine-similarity search. We introduce Autoencoders with Strong Variance Constraints (AE-SVC), which, when used for projection, significantly improves the performance of foundation models, and we provide an in-depth theoretical analysis of AE-SVC. To address efficiency, we introduce Single-shot Similarity Space Distillation ((SS)2D), a novel approach that learns embeddings with adaptive sizes, offering a better trade-off between size and performance. We conduct extensive experiments on four retrieval datasets, including Stanford Online Products (SoP) and Pittsburgh30k, with four different off-the-shelf foundation models, including DinoV2 and CLIP. AE-SVC demonstrates up to a 16% improvement in retrieval performance, while (SS)2D shows a further 10% improvement at smaller embedding sizes.

Methodology


Figure: The proposed approach follows a two-step pipeline. (A) AE-SVC trains an autoencoder with our constraints to improve foundation model embeddings. (B) (SS)2D uses the improved embeddings from AE-SVC to learn adaptive embeddings for improved retrieval at any embedding size. (C) Once trained, (SS)2D can be directly applied to foundation model embeddings to generate adaptive embeddings for improved retrieval. (D) AE-SVC (orange) boosts performance significantly, while (SS)2D (green) further enhances results with smaller embeddings. Dino (blue) reaches its best performance at 9 GFLOPs, whereas (SS)2D on top of AE-SVC matches it at only 2.5 GFLOPs.
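To make step (A) concrete, below is a minimal PyTorch sketch of an autoencoder trained with a variance constraint on its latent embeddings. The single-layer encoder/decoder, the latent size, and the penalty weight lam are illustrative assumptions, not the released implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F

class AESVC(nn.Module):
    """Autoencoder with a strong variance constraint on its latent space (sketch)."""
    def __init__(self, in_dim=768, latent_dim=512):
        super().__init__()
        self.encoder = nn.Linear(in_dim, latent_dim)
        self.decoder = nn.Linear(latent_dim, in_dim)

    def forward(self, x):
        z = self.encoder(x)
        return z, self.decoder(z)

def aesvc_loss(x, z, x_hat, lam=1.0):
    # Reconstruction keeps the subtleties the foundation model already captures.
    recon = F.mse_loss(x_hat, x)
    # Variance constraint (assumed form): push each latent dimension toward unit
    # variance so no single direction dominates the cosine-similarity search.
    var_penalty = ((z.var(dim=0, unbiased=False) - 1.0) ** 2).mean()
    return recon + lam * var_penalty

Once trained, only the encoder is kept: database and query embeddings are both projected through it before the usual cosine-similarity search.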

Impact of the Proposed Constraints on the Cosine Similarity Distribution


Figure: AE-SVC reduces the variance of the cosine-similarity distribution for both foundation models (a) and dataset-specific models (b), with a more pronounced shift for foundation models (a). This yields a larger retrieval improvement for the foundation model (Dino) than for the dataset-specific model (Cosplace), as shown in (c).
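The variance reduction in (a) and (b) can be measured directly. The sketch below computes the mean and variance of pairwise cosine similarities for a batch of embeddings; running it on raw Dino embeddings and on their AE-SVC projections should reproduce the qualitative shift shown above (the function name is ours, for illustration).

import torch
import torch.nn.functional as F

def cosine_similarity_stats(embeddings):
    # Normalize rows so the Gram matrix holds pairwise cosine similarities.
    z = F.normalize(embeddings, dim=1)
    sims = z @ z.T
    # Exclude the diagonal, since self-similarity is always 1.
    mask = ~torch.eye(len(z), dtype=torch.bool, device=z.device)
    off_diag = sims[mask]
    return off_diag.mean().item(), off_diag.var().item()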

AE-SVC Results


Figure: AE-SVC significantly improves the retrieval performance of foundation models. AE-SVC (solid lines) consistently outperforms the off-the-shelf foundation-model embeddings projected with PCA (dashed lines) on four datasets, achieving a 15.5% average improvement in retrieval performance.
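The dashed PCA baseline in this comparison can be sketched as follows, assuming scikit-learn; PCA is fit on the database embeddings and both sides are projected to the same size before cosine-similarity retrieval, exactly as the AE-SVC encoder would be used in its place.

import numpy as np
from sklearn.decomposition import PCA

def pca_baseline(db_emb: np.ndarray, query_emb: np.ndarray, dim: int):
    # Fit PCA on the database embeddings and project both sides to `dim`.
    pca = PCA(n_components=dim).fit(db_emb)
    return pca.transform(db_emb), pca.transform(query_emb)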

(SS)2D Results


Figure: Applying (SS)2D on top of AE-SVC yields a further performance boost at lower embedding sizes. Compared to VAE and SSD, (SS)2D offers superior single-shot dimensionality reduction, achieving up to a 10% improvement at smaller embedding sizes and closely approaching SSD's theoretical upper bound.
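One way to read the single-shot objective: distill the cosine-similarity matrix of the AE-SVC embeddings into every prefix of a single student embedding in one training pass, so a truncation to any target size stays faithful to the full similarity space. The loss below is our interpretation of that idea; the prefix sizes and the MSE distillation target are assumptions, not the paper's exact formulation.

import torch.nn.functional as F

def ss2d_loss(z_teacher, z_student, sizes=(32, 64, 128, 256)):
    # Teacher similarity space: cosine similarities of AE-SVC embeddings.
    t = F.normalize(z_teacher, dim=1)
    target = t @ t.T
    loss = 0.0
    # Single shot: match the teacher's similarities at every prefix size at
    # once, so one trained model serves every embedding-size budget.
    for d in sizes:
        s = F.normalize(z_student[:, :d], dim=1)
        loss = loss + F.mse_loss(s @ s.T, target)
    return loss / len(sizes)

In contrast, SSD-style distillation trains one model per target size, which is why it serves as the (more expensive) upper bound in the figure above.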


BibTeX

@misc{omama2025exploitingdistributionconstraintsscalable,
  title={Exploiting Distribution Constraints for Scalable and Efficient Image Retrieval},
  author={Mohammad Omama and Po-han Li and Sandeep P. Chinchali},
  year={2025},
  eprint={2410.07022},
  archivePrefix={arXiv},
  primaryClass={cs.IR},
  url={https://arxiv.org/abs/2410.07022}
}