
Learning Latent Features with Infinite Non-negative Binary Matrix Tri-factorization

  • Conference paper

Neural Information Processing (ICONIP 2016)

Part of the book series: Lecture Notes in Computer Science (LNTCS, volume 9947)

Abstract

Non-negative Matrix Factorization (NMF) has been widely exploited to learn latent features from data. However, previous NMF models often assume a fixed number of features, say p features, where p is simply chosen by experiment. Moreover, it is difficult to learn binary features, since binary matrices involve more challenging optimization problems. In this paper, we propose a new Bayesian model called the infinite non-negative binary matrix tri-factorization model (iNBMT), capable of automatically learning the latent binary features as well as the feature number, based on the Indian Buffet Process (IBP). Moreover, iNBMT engages a tri-factorization process that decomposes a non-negative matrix into the product of three components: two binary matrices and a non-negative real matrix. Compared with traditional bi-factorization, tri-factorization can better reveal the latent structure among items (samples) and attributes (features). Specifically, we impose an IBP prior on the two infinite binary matrices, while a truncated Gaussian distribution is assumed on the weight matrix. To optimize the model, we develop an efficient modified maximization-expectation algorithm (ME algorithm), whose iteration complexity is one order lower than that of the recently proposed Maximization-Expectation-IBP model [9]. We present the model definition, detail the optimization, and finally conduct a series of experiments. Experimental results demonstrate that the proposed iNBMT model significantly outperforms the comparison algorithms on both synthetic and real data.
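The tri-factorization form described in the abstract, X ≈ Z W Hᵀ with two binary matrices Z, H and a non-negative real weight matrix W, can be sketched as a small generative example. This is only an illustration of the factorization's shape, not the paper's IBP-based inference or ME algorithm; all variable names and dimensions here are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

n, m, k = 6, 5, 3                        # samples, attributes, latent features
Z = rng.integers(0, 2, size=(n, k))      # binary sample-by-feature matrix
H = rng.integers(0, 2, size=(m, k))      # binary attribute-by-feature matrix
W = np.abs(rng.normal(size=(k, k)))      # non-negative real weight matrix

# Data generated by the model: non-negative, since every factor is non-negative.
X = Z @ W @ H.T                          # shape (n, m)

# The three components reconstruct X exactly by construction.
err = np.linalg.norm(X - Z @ W @ H.T)
print(err)  # 0.0
```

In the actual model, Z and H would have an unbounded (IBP-governed) number of columns, and inference would recover them from an observed X rather than generate X from known factors.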


Notes

  1. Since IBP-IBP is mainly designed for clustering, we do not show its (rather messy) reconstruction results, for fairness.

References

  1. Ding, C., He, X., Simon, H.D.: On the equivalence of nonnegative matrix factorization and spectral clustering. In: SIAM International Conference on Data Mining (2005)


  2. Ding, C., Li, T., Peng, W., Park, H.: Orthogonal nonnegative matrix tri-factorizations for clustering. In: Proceedings of the Twelfth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 126–135. ACM Press (2006)


  3. Doshi-Velez, F., Miller, K.T., Van Gael, J., Teh, Y.W.: Variational inference for the Indian buffet process. In: Proceedings of the Twelfth International Conference on Artificial Intelligence and Statistics, AISTATS, pp. 137–144 (2009)


  4. Ghahramani, Z., Beal, M.J.: Propagation algorithms for variational Bayesian learning. In: Advances in Neural Information Processing Systems, vol. 13, pp. 507–513 (2000)


  5. Griffiths, T.L., Ghahramani, Z.: Infinite latent feature models and the Indian buffet process. In: Advances in Neural Information Processing Systems, vol. 18, pp. 475–482 (2005)


  6. Knowles, D., Ghahramani, Z.: Infinite sparse factor analysis and infinite independent components analysis. In: Davies, M.E., James, C.J., Abdallah, S.A., Plumbley, M.D. (eds.) ICA 2007. LNCS, vol. 4666, pp. 381–388. Springer, Heidelberg (2007)


  7. Kurihara, K., Welling, M.: Bayesian k-means as a "maximization-expectation" algorithm. Neural Comput. 21(4), 1145–1172 (2009)


  8. Lee, D.D., Seung, H.S.: Algorithms for non-negative matrix factorization. In: NIPS, pp. 556–562. MIT Press (2001)


  9. Reed, C., Ghahramani, Z.: Scaling the Indian buffet process via submodular maximization. In: Proceedings of the 30th International Conference on Machine Learning, pp. 1013–1021 (2013)


  10. Zhang, Z., Li, T., Ding, C.H.Q., Ren, X.-W., Zhang, X.-S.: Binary matrix factorization for analyzing gene expression data. Data Min. Knowl. Discov. 20(1), 28–52 (2010)



Acknowledgement

The paper was supported by the National Basic Research Program of China (2012CB316301), National Science Foundation of China (NSFC 61473236), and Jiangsu University Natural Science Research Programme (14KJB520037).

Author information


Corresponding author

Correspondence to Kaizhu Huang.


Copyright information

© 2016 Springer International Publishing AG

About this paper

Cite this paper

Yang, X., Huang, K., Zhang, R., Hussain, A. (2016). Learning Latent Features with Infinite Non-negative Binary Matrix Tri-factorization. In: Hirose, A., Ozawa, S., Doya, K., Ikeda, K., Lee, M., Liu, D. (eds) Neural Information Processing. ICONIP 2016. Lecture Notes in Computer Science, vol 9947. Springer, Cham. https://doi.org/10.1007/978-3-319-46687-3_65


  • DOI: https://doi.org/10.1007/978-3-319-46687-3_65


  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-46686-6

  • Online ISBN: 978-3-319-46687-3

  • eBook Packages: Computer Science (R0)
