On The Similarities Of Embeddings In Contrastive Learning

Chungpa Lee, Sehee Lim, Kibok Lee, Jy-Yong Sohn · International Conference on Machine Learning (ICML) 2025

Contrastive learning operates on a simple yet effective principle: Embeddings of positive pairs are pulled together, while those of negative pairs are pushed apart. In this paper, we propose a unified framework for understanding contrastive learning through the lens of cosine similarity, and present two key theoretical insights derived from this framework. First, in full-batch settings, we show that perfect alignment of positive pairs is unattainable when negative-pair similarities fall below a threshold, and this misalignment can be mitigated by incorporating within-view negative pairs into the objective. Second, in mini-batch settings, smaller batch sizes induce stronger separation among negative pairs in the embedding space, i.e., higher variance in their similarities, which in turn degrades the quality of learned representations compared to full-batch settings. To address this, we propose an auxiliary loss that reduces the variance of negative-pair similarities in mini-batch settings. Empirical results show that incorporating the proposed loss improves performance in small-batch settings.
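The abstract's proposal can be sketched in code: a standard InfoNCE-style contrastive loss augmented with a penalty on the variance of negative-pair cosine similarities. This is an illustrative reconstruction, not the paper's exact formulation; the function name `contrastive_with_variance_penalty` and the weighting parameter `lam` are assumptions introduced here.

```python
import numpy as np

def cosine_sim(a, b):
    """Pairwise cosine similarity between rows of a and rows of b."""
    a = a / np.linalg.norm(a, axis=1, keepdims=True)
    b = b / np.linalg.norm(b, axis=1, keepdims=True)
    return a @ b.T

def contrastive_with_variance_penalty(z1, z2, temperature=0.5, lam=1.0):
    """InfoNCE-style contrastive loss plus an auxiliary penalty on the
    variance of negative-pair similarities (illustrative sketch only;
    `lam` weights the auxiliary term and is a hypothetical parameter)."""
    n = z1.shape[0]
    sim = cosine_sim(z1, z2)          # (n, n); diagonal entries are positive pairs
    logits = sim / temperature
    # Row-wise log-softmax; the positive pair sits on the diagonal.
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    info_nce = -np.mean(np.diag(log_prob))
    # Negative-pair similarities are the off-diagonal entries.
    neg_sims = sim[~np.eye(n, dtype=bool)]
    aux = np.var(neg_sims)            # the quantity the paper proposes to shrink
    return info_nce + lam * aux
```

Per the abstract, the auxiliary term matters most at small batch sizes, where negative-pair similarities spread out more than in the full-batch setting; shrinking their variance is what the proposed loss targets.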
