
Fusion-supervised Deep Cross-modal Hashing

Li Wang, Lei Zhu, En Yu, Jiande Sun, Huaxiang Zhang · 2019 IEEE International Conference on Multimedia and Expo (ICME)

Deep hashing has recently received attention in cross-modal retrieval for its compact storage and fast Hamming-distance search. However, existing hashing methods for cross-modal retrieval cannot fully capture the correlation among heterogeneous multi-modal data or exploit semantic information. In this paper, we propose a novel Fusion-supervised Deep Cross-modal Hashing (FDCH) approach. First, FDCH learns unified binary codes through a fusion hash network that takes paired samples as input, which effectively enhances the modeling of heterogeneous multi-modal correlation. These high-quality unified hash codes then supervise the training of the modality-specific hash networks used to encode out-of-sample queries. Meanwhile, both pair-wise similarity information and classification information are embedded in the hash networks within a one-stream framework, which simultaneously preserves cross-modal similarity and maintains semantic consistency. Experimental results on two benchmark datasets demonstrate the state-of-the-art performance of FDCH.
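To make the two-stage idea in the abstract concrete, here is a minimal PyTorch sketch: a fusion network hashes paired (image, text) samples under pair-wise similarity and classification losses, and its signed outputs then supervise per-modality encoders. All layer sizes, the feature dimensions, the loss weight `alpha`, and the specific pairwise likelihood loss are illustrative assumptions, not values or definitions taken from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Sketch of the FDCH training scheme described in the abstract.
# Dimensions and loss weights below are assumed for illustration.

class FusionHashNet(nn.Module):
    """Maps a paired (image, text) sample to a unified hash code."""
    def __init__(self, img_dim=4096, txt_dim=1386, code_len=64, n_classes=24):
        super().__init__()
        self.fuse = nn.Sequential(
            nn.Linear(img_dim + txt_dim, 1024), nn.ReLU(),
            nn.Linear(1024, code_len), nn.Tanh(),  # relaxed codes in (-1, 1)
        )
        self.classifier = nn.Linear(code_len, n_classes)  # semantic head

    def forward(self, img, txt):
        h = self.fuse(torch.cat([img, txt], dim=1))
        return h, self.classifier(h)

class ModalityHashNet(nn.Module):
    """Encodes one modality; trained to reproduce the unified codes."""
    def __init__(self, in_dim, code_len=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 1024), nn.ReLU(),
            nn.Linear(1024, code_len), nn.Tanh(),
        )

    def forward(self, x):
        return self.net(x)

def pairwise_similarity_loss(h, sim):
    """Negative log-likelihood of pairwise labels given code inner products.

    sim[i, j] = 1.0 if samples i and j share a label, else 0.0.
    """
    theta = h @ h.t() / 2
    return (F.softplus(theta) - sim * theta).mean()

# Stage 1: train the fusion network on paired samples (one step shown).
def fusion_step(fusion_net, img, txt, labels, sim, opt, alpha=1.0):
    h, logits = fusion_net(img, txt)
    loss = (pairwise_similarity_loss(h, sim)
            + alpha * F.binary_cross_entropy_with_logits(logits, labels))
    opt.zero_grad(); loss.backward(); opt.step()
    return h.detach().sign()  # unified binary codes used as supervision

# Stage 2: the unified codes supervise each modality-specific network,
# so an out-of-sample image or text query can be encoded on its own.
def modality_step(mod_net, x, unified_codes, opt):
    loss = F.mse_loss(mod_net(x), unified_codes)
    opt.zero_grad(); loss.backward(); opt.step()
```

The design point the sketch highlights: only the fusion network ever sees both modalities, so cross-modal correlation is modeled jointly once, while the lightweight per-modality encoders only need to regress onto the resulting binary codes to serve single-modality queries at retrieval time.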

Explore more on: Deep Hashing · Cross-Modal Hashing · Survey Paper