Vision and Language (VL) models offer an effective method for aligning
representation spaces of images and text, leading to numerous applications such
as cross-modal retrieval, visual question answering, captioning, and more.
However, the aligned image-text spaces learned by all the popular VL models
still suffer from the so-called `object bias': their representations behave
as `bags of nouns', mostly ignoring or downsizing the attributes, relations,
and states of objects described in the texts or appearing in the images. Although
some notable attempts at fixing these `compositional reasoning' issues have been
proposed in the recent literature, the problem is still far from being solved. In this paper,
we uncover two factors limiting the VL models' compositional reasoning
performance. These two factors are properties of the paired VL dataset used for
pre-training and fine-tuning the VL model: (i) the caption quality, or in other
words, the `image-alignment' of the texts; and (ii) the `density' of the captions,
in the sense of mentioning all the details appearing in the image. We propose a
fine-tuning approach for automatically treating these factors, leveraging a
standard VL dataset (CC3M). Applied to CLIP, we demonstrate a significant increase
in its compositional reasoning performance of up to $\sim27\%$ over the base
model, up to $\sim20\%$ over the strongest baseline, and of $6.7\%$ on average.