
Irrelevance reduction with locality-sensitive hash learning for efficient cross-media retrieval

Published in Multimedia Tools and Applications.

Abstract

Cross-media retrieval is an important approach to handling the explosive growth of multimodal data on the web. However, existing approaches to cross-media retrieval are computationally expensive because of the high dimensionality of the data. To retrieve multimodal data efficiently, it is essential to reduce the proportion of irrelevant documents that must be examined. In this paper, we propose a fast cross-media retrieval approach (FCMR) based on locality-sensitive hashing (LSH) and neural networks. One modality of the multimodal data is projected by the LSH algorithm so that similar objects fall into the same hash bucket and dissimilar objects into different ones; the other modality is then mapped into these buckets using hash functions learned by neural networks. Given a textual or visual query, it can be efficiently mapped to a hash bucket whose stored objects are likely near neighbors of the query. Experimental results show that the proposed method substantially increases the proportion of relevant documents in the set of a query's near neighbors, indicating that retrieval based on near neighbors can be conducted effectively. Further evaluations on two public datasets demonstrate the efficacy of the proposed method compared with the baselines.
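The bucketing scheme described in the abstract can be sketched as follows. This is a minimal illustration under stated assumptions, not the paper's implementation: it uses random-hyperplane LSH for cosine similarity, the dimensions `DIM` and `BITS` are hypothetical, and the neural-network step that learns hash functions for the second modality is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

# Random-hyperplane LSH for cosine similarity: each hash bit is the sign
# of the projection onto a random direction, so nearby vectors tend to
# share a bucket code. DIM and BITS are hypothetical parameters; the
# paper's exact hash family and code length are not given in this preview.
DIM, BITS = 128, 16
hyperplanes = rng.standard_normal((BITS, DIM))

def lsh_bucket(vec):
    """Map a feature vector to a BITS-bit bucket code (an integer)."""
    bits = (hyperplanes @ vec) >= 0
    return int("".join("1" if b else "0" for b in bits), 2)

# Index one modality (say, image features) into hash buckets.
images = rng.standard_normal((1000, DIM))
buckets = {}
for i, vec in enumerate(images):
    buckets.setdefault(lsh_bucket(vec), []).append(i)

# At query time, hash the query (from either modality, once its hash
# function is learned) and scan only its bucket's candidates rather
# than all 1000 indexed items.
query = images[0]
candidates = buckets.get(lsh_bucket(query), [])
```

In a full cross-media setting, a second hash function (learned by a neural network in the paper's approach) would map, say, text features into the same bucket space, so a textual query retrieves visual candidates from its bucket.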



Notes

  1. http://www.svcl.ucsd.edu/projects/crossmodal/.

  2. http://lms.comp.nus.edu.sg/research/NUS-WIDE.htm.


Acknowledgments

This work was supported by the Natural Science Foundation of China under Grants No. 61571453, No. 61502264, and No. 61405252; the Natural Science Foundation of Hunan Province, China, under Grant No. 14JJ3010; and the Research Funding of the National University of Defense Technology under Grant No. ZK16-03-37.

Author information

Corresponding author

Correspondence to Peng Wang.


About this article


Cite this article

Jia, Y., Bai, L., Wang, P. et al. Irrelevance reduction with locality-sensitive hash learning for efficient cross-media retrieval. Multimed Tools Appl 77, 29435–29455 (2018). https://doi.org/10.1007/s11042-018-5692-3

