Reversible Data Hiding for Encrypted Images Based on Statistical Learning | SpringerLink

Reversible Data Hiding for Encrypted Images Based on Statistical Learning

  • Conference paper
Information Security and Privacy (ACISP 2016)

Part of the book series: Lecture Notes in Computer Science (LNSC, volume 9722)


Abstract

In this paper, we propose a novel reversible data hiding (RDH) approach for encrypted images based on statistical learning. To hide the data, a new random permutation algorithm using a high-speed stream cipher is proposed to secure the data hiding process. A secret message is embedded into the permuted image blocks in a checkerboard pattern by modifying the least significant encrypted bits. To detect the hidden data, prior works rely on a single spatial-correlation measure. In contrast, our approach is novel in that it uses a high-dimensional statistical feature vector, on which a new boosting algorithm is built to achieve high reversibility. A complete encoding and decoding procedure of RDH for encrypted images is elaborated. The experimental results show that the proposed method can detect secret message bits and restore the original image simultaneously with \(100\,\%\) reversibility and a higher capacity, significantly outperforming state-of-the-art RDH methods on encrypted images.



References

  1. Fridrich, J., Goljan, M., Du, R.: Invertible authentication. In: Electronic Imaging Photonics West, pp. 197–208. International Society for Optics and Photonics (2001)

  2. Celik, M.U., Sharma, G., Tekalp, A.M., Saber, E.: Reversible data hiding. In: IEEE International Conference on Image Processing, vol. 2, pp. II–157 (2002)

  3. Shi, Y.Q., Ni, Z., Zou, D., Liang, C., Xuan, G.: Lossless data hiding: fundamentals, algorithms and applications. In: Proceedings of the 2004 International Symposium on Circuits and Systems, vol. 2, pp. II–33. IEEE (2004)

  4. Ni, Z., Shi, Y.Q., Ansari, N., Su, W.: Reversible data hiding. IEEE Trans. Circ. Syst. Video Technol. 16(3), 354–362 (2006)

  5. Lu, Z.-M., Li, Z.: High capacity reversible data hiding for 3D meshes in the PVQ domain. In: Shi, Y.Q., Kim, H.-J., Katzenbeisser, S. (eds.) IWDW 2007. LNCS, vol. 5041, pp. 233–243. Springer, Heidelberg (2008)

  6. Boxcryptor (May 2011). https://www.boxcryptor.com/en/google-drive

  7. Dennis, O. (August 2013). http://www.cnet.com/how-to/two-free-ways-to-encrypt-google-drive-files/

  8. Ra, M.R., Govindan, R., Ortega, A.: P3: toward privacy-preserving photo sharing. In: Presented as Part of the 10th USENIX Symposium on Networked Systems Design and Implementation, pp. 515–528 (2013)

  9. CNET (July 2013). http://news.cnet.com/8301-13578_3-57594171-38/google-tests-encryption-to-protect-users-drive-files-against-government-demands/

  10. Puech, W., Chaumont, M., Strauss, O.: A reversible data hiding method for encrypted images. In: Electronic Imaging, p. 68191E. International Society for Optics and Photonics (2008)

  11. Zhang, X.: Reversible data hiding in encrypted image. IEEE Signal Process. Lett. 18(4), 255–258 (2011)

  12. Hong, W., Chen, T.S., Wu, H.Y.: An improved reversible data hiding in encrypted images using side match. IEEE Signal Process. Lett. 19(4), 199–202 (2012)

  13. Zhang, X.: Separable reversible data hiding in encrypted image. IEEE Trans. Inf. Forensics Secur. 7(2), 826–832 (2012)

  14. Ma, K., Zhang, W., Zhao, X., Yu, N., Li, F.: Reversible data hiding in encrypted images by reserving room before encryption. IEEE Trans. Inf. Forensics Secur. 8(3), 553–562 (2013)

  15. Freund, Y., Schapire, R.E.: A decision-theoretic generalization of on-line learning and an application to boosting. In: Vitányi, P.M.B. (ed.) EuroCOLT 1995. LNCS, vol. 904, pp. 23–37. Springer, Heidelberg (1995)

  16. Bühlmann, P., Yu, B.: Additive logistic regression: a statistical view of boosting (discussion) (2000)

  17. Zhu, J., Rosset, S., Zou, H., Hastie, T.: Multi-class AdaBoost. Ann Arbor 1001(48109), 1612 (2006)

  18. Henricksen, M.: Two Dragons - a family of fast word-based stream ciphers. In: International Conference on Security and Cryptography, Iceland, pp. 35–44 (2012)

  19. Weber, A.G.: The USC-SIPI image database. USC-SIPI Report 315, pp. 1–24 (1997)

  20. Hao, P. (2004). http://www.eecs.qmul.ac.uk/~phao/cip/images/

  21. UIUC (2006). http://www-cvr.ai.uiuc.edu/ponce_grp/data/

  22. Wicker, S.B., Bhargava, V.K.: Reed-Solomon Codes and Their Applications. Wiley, New York (1999)


Acknowledgment

We wish to express our sincere thanks to Dr. Matt Henricksen from the Institute for Infocomm Research for providing valuable advice and suggestions. Our heartfelt thanks also go to Dr. Jiayuan Fan for her work in the initial phase of the algorithm implementation. Wei Wu is supported by the National Natural Science Foundation of China (61472083, 61402110), the Program for New Century Excellent Talents in Fujian University (JA14067) and the Distinguished Young Scholars Fund of Fujian (2016J06013).

Author information


Corresponding author

Correspondence to Wei Wu.


Appendices

A Appendix

Spatial Smoothness Features

$$\begin{aligned} {v_{3}}=\sum \limits _{i=2}^{n-1}{\sum \limits _{j=2}^{n-1}{\left( {\left| {{c_{i,j}}-{c_{i-1,j}}}\right| +\left| {{c_{i,j}}-{c_{i+1,j}}}\right| +\left| {{c_{i,j}}-{c_{i,j-1}}}\right| +\left| {{c_{i,j}}-{c_{i,j+1}}}\right| }\right) }} \end{aligned}$$
(A.1)
$$\begin{aligned} {v_{4}}=\sum \limits _{i=2}^{n-1}{\sum \limits _{j=2}^{n-1}{\left( {\left| {{c_{i,j}}-{c_{i-1,j-1}}}\right| +\left| {{c_{i,j}}-{c_{i+1,j+1}}}\right| +\left| {{c_{i,j}}-{c_{i+1,j-1}}}\right| +\left| {{c_{i,j}}-{c_{i-1,j+1}}}\right| }\right) }} \end{aligned}$$
(A.2)

where \(c_{i,j}\) is a pixel value in an \(n\times n\) square image block. Natural images tend to have lower values of these features, while unnatural images tend to have higher values.
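As a concrete reading of Eqs. (A.1) and (A.2), the two smoothness features can be computed with vectorised neighbour differences. This is our own NumPy sketch for illustration, not the authors' implementation; the function name is invented.

```python
import numpy as np

def smoothness_features(c):
    """Spatial smoothness features v3 (4-neighbour, Eq. A.1) and
    v4 (diagonal-neighbour, Eq. A.2) of an n x n block c,
    summed over the interior pixels i, j = 2..n-1."""
    c = np.asarray(c, dtype=np.int64)   # signed type avoids uint8 wrap-around
    inner = c[1:-1, 1:-1]
    v3 = (np.abs(inner - c[:-2, 1:-1]).sum()    # up
          + np.abs(inner - c[2:, 1:-1]).sum()   # down
          + np.abs(inner - c[1:-1, :-2]).sum()  # left
          + np.abs(inner - c[1:-1, 2:]).sum())  # right
    v4 = (np.abs(inner - c[:-2, :-2]).sum()     # up-left
          + np.abs(inner - c[2:, 2:]).sum()     # down-right
          + np.abs(inner - c[2:, :-2]).sum()    # down-left
          + np.abs(inner - c[:-2, 2:]).sum())   # up-right
    return int(v3), int(v4)
```

A flat block yields (0, 0), while flipping encrypted LSBs drives the differences up sharply, which is what makes these features discriminative for detection.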

Average pixel values:

$$\begin{aligned} {v_{5}}=\bar{c}=\frac{1}{{n^{2}}}\sum \limits _{i=1}^{n}{\sum \limits _{j=1}^{n}{\left| {c_{i,j}}\right| }} \end{aligned}$$
(A.3)

Absolute mean value difference between black and white marked pixels:

$$\begin{aligned} {v_{6}}=\left| {\sum {\sum \nolimits _{\left( {i+j}\right) \,\bmod \,2=0}{c_{i,j}}}-\sum {\sum \nolimits _{\left( {i+j}\right) \,\bmod \,2=1}{c_{i,j}}}}\right| \end{aligned}$$
(A.4)

Geometric mean value:

$$\begin{aligned} {v_{7}}={\left( {\prod \limits _{i=1}^{n}{\prod \limits _{j=1}^{n}{c_{i,j}}}}\right) ^{\frac{1}{{n^{2}}}}} \end{aligned}$$
(A.5)

Mean absolute deviation:

$$\begin{aligned} {v_{8}}=\frac{1}{{n^{2}}}\sum \limits _{i=1}^{n}{\sum \limits _{j=1}^{n}{\left| {{c_{i,j}}-{\bar{c}}}\right| }} \end{aligned}$$
(A.6)

Second- to seventh-order central sample moments:

$$\begin{aligned} {v_{7+k}}=M_{k}=\frac{1}{n^{2}}\sum \limits _{i=1}^{n}{\sum \limits _{j=1}^{n}\left| c_{i,j}-\bar{c}\right| ^{k}},\,k=2,\cdots ,7 \end{aligned}$$
(A.7)

where the second-order statistic is also used in [10].

Discrete Cosine Transform. The DCT coefficients constitute \(n\times n\) features:

$$\begin{aligned} {V_{u,w}}=\sum \limits _{i=1}^{n}{\sum \limits _{j=1}^{n}{{c_{i,j}}\cos \left[ {\frac{\pi }{n}\left( {i-\frac{1}{2}}\right) u}\right] }}\cos \left[ {\frac{\pi }{n}\left( {j-\frac{1}{2}}\right) w}\right] ,1\le u,w\le n. \end{aligned}$$
(A.8)
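Since Eq. (A.8) is separable, the whole coefficient matrix is sketchable as one matrix product. Building the basis matrix directly from the definition (rather than calling a library DCT routine) is our choice here, so that the code matches the equation term by term.

```python
import numpy as np

def dct_features(c):
    """Cosine-transform features V_{u,w} of Eq. (A.8) for an n x n
    block c, with u, w = 1..n, computed directly from the definition."""
    c = np.asarray(c, dtype=np.float64)
    n = c.shape[0]
    u = np.arange(1, n + 1)          # frequency indices
    i = np.arange(1, n + 1)          # spatial indices
    # B[u-1, i-1] = cos(pi/n * (i - 1/2) * u); separability gives B c B^T
    B = np.cos(np.pi / n * np.outer(u, i - 0.5))
    return B @ c @ B.T
```

Note that since u, w start at 1 rather than 0, the DC basis vector is excluded, so a constant block maps to an all-zero feature matrix.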

Other features include the skewness \(v_{15}\), kurtosis \(v_{16}\), median \(v_{17}\), trimmed mean excluding outliers \(v_{18}\), range \(v_{19}\), and interquartile range \(v_{20}\).
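The remaining per-block statistics listed above can be sketched as follows. The 10% trimming fraction for \(v_{18}\) and the +1 offset inside the geometric mean (to tolerate zero-valued pixels) are illustrative assumptions, as the paper does not specify them here.

```python
import numpy as np

def statistical_features(c):
    """Sketch of the per-block statistics of Appendix A (v5-v20)."""
    c = np.asarray(c, dtype=np.float64)
    flat = np.sort(c.ravel())
    mean = flat.mean()                                    # v5
    i, j = np.indices(c.shape)                            # checkerboard marking
    v6 = abs(c[(i + j) % 2 == 0].sum() - c[(i + j) % 2 == 1].sum())
    v7 = np.exp(np.mean(np.log(flat + 1.0)))              # geometric mean (offset assumed)
    v8 = np.mean(np.abs(flat - mean))                     # mean absolute deviation
    moments = [np.mean(np.abs(flat - mean) ** k) for k in range(2, 8)]  # v9..v14
    std = np.sqrt(moments[0])
    skew = np.mean((flat - mean) ** 3) / std ** 3 if std else 0.0       # v15
    kurt = np.mean((flat - mean) ** 4) / std ** 4 if std else 0.0       # v16
    median = np.median(flat)                              # v17
    t = int(round(0.1 * flat.size))                       # assumed 10% trim
    trimmed = flat[t:flat.size - t].mean() if flat.size > 2 * t else mean  # v18
    rng = flat[-1] - flat[0]                              # v19
    iqr = np.percentile(flat, 75) - np.percentile(flat, 25)             # v20
    return dict(v5=mean, v6=v6, v7=v7, v8=v8, moments=moments,
                v15=skew, v16=kurt, v17=median, v18=trimmed, v19=rng, v20=iqr)
```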

B Appendix

Inspired by [15, 16], a new two-class AdaBoost algorithm is developed here based on additive logistic regression and the empirical loss:

$$\begin{aligned} \mathop {\min }\limits _{\mathbf{F}\left( \mathbf{X}\right) }\;\sum \limits _{i=1}^{N}{\exp \left( -\frac{1}{2}\mathbf{y}_{i}^{T}\mathbf{F}\left( \mathbf{X}_{i}\right) \right) } \end{aligned}$$
(B.1)

with the symmetric constraint \(\small F_{1}\left( \mathbf{X}_{i}\right) +F_{2}\left( \mathbf{X}_{i}\right) =0\), where \(\small \mathbf{F}\left( \mathbf{X}\right) \) is a continuous-valued AdaBoost classifier. We consider \(\small \mathbf{F}\left( \mathbf{X}\right) \) in the following forward stagewise additive modelling form:

$$\begin{aligned} \mathbf{F}\left( \mathbf{X}\right) =\sum \limits _{l=1}^{L}{\beta ^{\left( l\right) }g^{\left( l\right) }\left( \mathbf{X}\right) } \end{aligned}$$
(B.2)

where \(\small \beta ^{\left( l\right) }\in \mathfrak {R}\) are weighting coefficients and \(\small g^{\left( l\right) }\left( \mathbf{X}\right) \) are basis functions satisfying the symmetric constraint \(\small g_{1}\left( \mathbf{X}\right) +g_{2}\left( \mathbf{X}\right) =0\). The additive model is \(\small \mathbf{F}^{\left( l\right) }\left( \mathbf{X}\right) =\mathbf{F}^{\left( l-1\right) }\left( \mathbf{X}\right) +\beta ^{\left( l\right) }\mathbf{g}^{\left( l\right) }\left( \mathbf{X}\right) \), and thus the optimization can be represented as

$$\begin{aligned} \left( \beta ^{\left( l\right) },\mathbf{g}^{\left( l\right) }\right)= & {} \arg \;\mathop {\min }\limits _{\beta ,\mathbf{g}}\sum \limits _{i=1}^{N}{\exp \left( -\frac{1}{2}\mathbf{y}_{i}^{T}\left( \mathbf{F}^{\left( l-1\right) }\left( \mathbf{X}_{i}\right) +\beta ^{\left( l\right) }\mathbf{g}^{\left( l\right) }\left( \mathbf{X}_{i}\right) \right) \right) }\nonumber \\= & {} \arg \;\mathop {\min }\limits _{\beta ,\mathbf{g}}\sum \limits _{i=1}^{N}{w_{i}\exp \left( -\frac{1}{2}\mathbf{y}_{i}^{T}\beta ^{\left( l\right) }\mathbf{g}^{\left( l\right) }\left( \mathbf{X}_{i}\right) \right) } \end{aligned}$$
(B.3)

where \(\small w_{i}=\exp \left( -\frac{1}{2}\mathbf{y}_{i}^{T}\mathbf{F}^{\left( l-1\right) }\left( \mathbf{X}_{i}\right) \right) \) is the current sample weight. In this work, the basis function \(\mathbf{g}\left( \mathbf{X}\right) \) is defined as:

$$\begin{aligned} g_{k}\left( \mathbf{X}\right) =\frac{1}{A}\cdot \frac{2\exp \left( -d_{k}\left( \mathbf{X}\right) \right) }{1+\exp \left( -d_{k}\left( \mathbf{X}\right) \right) }-\frac{1}{2} \end{aligned}$$
(B.4)

where A is a normalization factor and \(\small d_{k}\left( \mathbf{X}\right) \) is defined as the \(l_{1}\)-norm distance between the feature vector \(\small \mathbf{X}\) and the k-th class of detection vectors, as follows

$$\begin{aligned} d_{k}\left( \mathbf{X}\right) =\frac{1}{\left| \mathbf{Y}_{k}\right| }\sum \limits _{\mathbf{V}\in \mathbf{Y}_{k}}\left\| \mathbf{X}-\mathbf{V}\right\| ,\,k=0,1 \end{aligned}$$
(B.5)

where \(\small \mathbf{V}\) is a feature vector, \(\small \mathbf{Y}_{k}\) is the set of the feature vectors in the k-th class, and \(\left| \cdot \right| \) denotes the cardinality of a set. A small \(\small d_{k}\left( \mathbf{X}\right) \) means \(\small \mathbf {X}\) is close to the k-th class, and \(\small g_{k}\left( \mathbf{X}\right) \) will be large.
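The pieces of Appendix B fit together as sketched below. Two points are our own reading rather than the paper's: the normalisation \(A = 2(s_0 + s_1)\), chosen so that \(g_0 + g_1 = 0\) holds exactly, and the symmetric two-class coding \(\mathbf{y}_i \in \{(1,-1),(-1,1)\}\) assumed by the sample-weight helper.

```python
import numpy as np

def class_distance(x, Yk):
    """Mean l1 distance between feature vector x and class set Yk (Eq. B.5)."""
    return float(np.mean([np.abs(np.asarray(x) - np.asarray(v)).sum() for v in Yk]))

def basis(x, Y0, Y1):
    """Symmetric basis pair (g_0(X), g_1(X)) of Eq. (B.4).
    A = 2 * (s_0 + s_1) is an assumed normalisation that enforces
    the symmetric constraint g_0 + g_1 = 0 exactly."""
    d = [class_distance(x, Y) for Y in (Y0, Y1)]
    s = [np.exp(-dk) / (1.0 + np.exp(-dk)) for dk in d]   # decreasing in d
    A = 2.0 * (s[0] + s[1])
    return tuple(2.0 * sk / A - 0.5 for sk in s)

def sample_weights(Y, F):
    """Current sample weights w_i = exp(-1/2 y_i^T F(X_i)), as in Eq. (B.3).
    Y, F: (N, 2) arrays of symmetric class codes and classifier outputs."""
    Y, F = np.asarray(Y, float), np.asarray(F, float)
    return np.exp(-0.5 * np.einsum('ij,ij->i', Y, F))
```

Because \(s_k\) decreases with \(d_k\), a feature vector close to class k receives the larger \(g_k\), and samples the current classifier already fits well get weights below 1 in the next boosting round.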

Rights and permissions

Reprints and permissions

Copyright information

© 2016 Springer International Publishing Switzerland

About this paper

Cite this paper

Li, Z., Wu, W. (2016). Reversible Data Hiding for Encrypted Images Based on Statistical Learning. In: Liu, J., Steinfeld, R. (eds) Information Security and Privacy. ACISP 2016. Lecture Notes in Computer Science, vol 9722. Springer, Cham. https://doi.org/10.1007/978-3-319-40253-6_12


  • DOI: https://doi.org/10.1007/978-3-319-40253-6_12

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-40252-9

  • Online ISBN: 978-3-319-40253-6

  • eBook Packages: Computer Science, Computer Science (R0)
