Abstract
In this paper, we propose a novel reversible data hiding (RDH) approach for encrypted images based on statistical learning. A new random permutation algorithm built on a high-speed stream cipher secures the data hiding process. A secret message is embedded into the permuted image blocks in a checkerboard pattern by modifying the least significant encrypted bits. To detect the hidden data, prior works rely on a single spatial-correlation measure. In contrast, our approach is novel in that it uses a high-dimensional statistical feature vector, on which a new boosting algorithm is built to achieve high reversibility. A complete encoding and decoding procedure of RDH for encrypted images is elaborated. The experimental results show that the proposed method can detect the secret message bits and restore the original image simultaneously, with \(100\,\%\) reversibility and higher capacity, significantly outperforming state-of-the-art RDH methods for encrypted images.
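The embedding step summarised above can be sketched as follows. This is a minimal illustration, assuming a Zhang-style rule in which the message bit selects which checkerboard colour has its encrypted least significant bits flipped; the paper's exact embedding rule may differ.

```python
import numpy as np

def embed_checkerboard_lsb(enc_block, bit):
    """Hide one message bit in an encrypted n-by-n block by flipping the
    LSBs of the pixels on one colour of a checkerboard pattern.

    Illustrative sketch only: the concrete rule in the paper (how many
    LSBs, which colour encodes which bit) is an assumption here.
    """
    c = np.array(enc_block, dtype=np.uint8, copy=True)
    n = c.shape[0]
    ij = np.add.outer(np.arange(n), np.arange(n))
    # bit 0 -> flip "black" cells (i+j even); bit 1 -> flip "white" cells
    mask = (ij % 2 == 0) if bit == 0 else (ij % 2 == 1)
    c[mask] ^= 1  # flip the least significant encrypted bits
    return c
```

Because the flip is an XOR, applying the same function again with the same bit restores the block, which is what makes the hiding step reversible once the embedded bit is detected.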
References
Fridrich, J., Goljan, M., Du, R.: Invertible authentication. In: Electronic Imaging Photonics West, pp. 197–208. International Society for Optics and Photonics (2001)
Celik, M.U., Sharma, G., Tekalp, A.M., Saber, E.: Reversible data hiding. In: IEEE International Conference on Image Processing, vol. 2, pp. II–157 (2002)
Shi, Y.Q., Ni, Z., Zou, D., Liang, C., Xuan, G.: Lossless data hiding: fundamentals, algorithms and applications. In: Proceedings of the 2004 International Symposium on Circuits and Systems, vol. 2, pp. II–33. IEEE (2004)
Ni, Z., Shi, Y.Q., Ansari, N., Su, W.: Reversible data hiding. IEEE Trans. Circ. Syst. Video Technol. 16(3), 354–362 (2006)
Lu, Z.-M., Li, Z.: High capacity reversible data hiding for 3D meshes in the PVQ domain. In: Shi, Y.Q., Kim, H.-J., Katzenbeisser, S. (eds.) IWDW 2007. LNCS, vol. 5041, pp. 233–243. Springer, Heidelberg (2008)
Boxcryptor: May 2011. https://www.boxcryptor.com/en/google-drive
Dennis, O.: August 2013. http://www.cnet.com/how-to/two-free-ways-to-encrypt-google-drive-files/
Ra, M.R., Govindan, R., Ortega, A.: P3: toward privacy-preserving photo sharing. In: Presented as Part of the 10th USENIX Symposium on Networked Systems Design and Implementation, pp. 515–528 (2013)
CNET: July 2013. http://news.cnet.com/8301-13578_3-57594171-38/google-tests-encryption-to-protect-users-drive-files-against-government-demands/
Puech, W., Chaumont, M., Strauss, O.: A reversible data hiding method for encrypted images. In: Electronic Imaging, p. 68191E. International Society for Optics and Photonics (2008)
Zhang, X.: Reversible data hiding in encrypted image. IEEE Signal Process. Lett. 18(4), 255–258 (2011)
Hong, W., Chen, T.S., Wu, H.Y.: An improved reversible data hiding in encrypted images using side match. IEEE Signal Process. Lett. 19(4), 199–202 (2012)
Zhang, X.: Separable reversible data hiding in encrypted image. IEEE Trans. Inf. Forensics Secur. 7(2), 826–832 (2012)
Ma, K., Zhang, W., Zhao, X., Yu, N., Li, F.: Reversible data hiding in encrypted images by reserving room before encryption. IEEE Trans. Inf. Forensics Secur. 8(3), 553–562 (2013)
Freund, Y., Schapire, R.E.: A decision-theoretic generalization of on-line learning and an application to boosting. In: Vitányi, P.M.B. (ed.) EuroCOLT 1995. LNCS, vol. 904, pp. 23–37. Springer, Heidelberg (1995)
Bühlmann, P., Yu, B.: Additive logistic regression: a statistical view of boosting (discussion) (2000)
Zhu, J., Rosset, S., Zou, H., Hastie, T.: Multi-class AdaBoost. Ann Arbor 1001(48109), 1612 (2006)
Henricksen, M.: Two Dragons: a family of fast word-based stream ciphers. In: International Conference on Security and Cryptography, Iceland, pp. 35–44 (2012)
Weber, A.G.: The USC-SIPI image database. USC-SIPI Report 315, pp. 1–24 (1997)
Hao, P.: (2004). http://www.eecs.qmul.ac.uk/~phao/cip/images/
UIUC (2006). http://www-cvr.ai.uiuc.edu/ponce_grp/data/
Wicker, S.B., Bhargava, V.K.: Reed-Solomon Codes and Their Applications. Wiley, New York (1999)
Acknowledgment
We wish to express our sincere thanks to Dr. Matt Henricksen from the Institute for Infocomm Research for providing valuable advice and suggestions. Our heartfelt thanks also go to Dr. Jiayuan Fan for her work in the initial phase of the algorithm implementation. Wei Wu is supported by the National Natural Science Foundation of China (61472083, 61402110), the Program for New Century Excellent Talents in Fujian University (JA14067), and the Distinguished Young Scholars Fund of Fujian (2016J06013).
Appendices
A Appendix
Spatial Smoothness Features
where \(c_{i,j}\) is a pixel value in an \(n\times n\) square image block. Natural images tend to have lower values of these features, while unnatural images tend to have higher values.
Average pixel values:
Absolute mean value difference between black and white marked pixels:
Geometric mean value:
Mean absolute deviation:
Second- to seventh-order central sample moments:
where the second-order statistic is also used in [10].
Discrete Cosine Transform. The \(n\times n\) DCT coefficients constitute \(n\times n\) features:
Other features include skewness \(v_{15}\), kurtosis \(v_{16}\), median value \(v_{17}\), trimmed mean value excluding the outliers \(v_{18}\), range of the values \(v_{19}\) and interquartile range of the values \(v_{20}\).
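The feature set above can be sketched in code. The equations themselves are typeset in the paper and not reproduced in this extract, so the formulas below are the standard definitions matching each feature name; treat them, and the trimming fraction for \(v_{18}\), as illustrative assumptions rather than the authors' exact forms.

```python
import numpy as np

def dct2(c):
    """Orthonormal 2-D DCT-II built from an explicit basis matrix (numpy only)."""
    n = c.shape[0]
    k = np.arange(n)
    D = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    D[0, :] /= np.sqrt(2.0)
    return D @ c @ D.T

def block_features(block):
    """Compute the Appendix A smoothness features for one n-by-n block."""
    c = np.asarray(block, dtype=float)
    n = c.shape[0]
    feats = {}
    feats["mean"] = c.mean()                                   # average pixel value
    # checkerboard split: "black" where (i+j) even, "white" where odd
    ij = np.add.outer(np.arange(n), np.arange(n))
    black, white = c[ij % 2 == 0], c[ij % 2 == 1]
    feats["bw_diff"] = abs(black.mean() - white.mean())        # |mean(black) - mean(white)|
    feats["geo_mean"] = float(np.exp(np.log(np.maximum(c, 1e-12)).mean()))  # clipped to avoid log 0
    feats["mad"] = np.abs(c - c.mean()).mean()                 # mean absolute deviation
    for k in range(2, 8):                                      # 2nd..7th central sample moments
        feats[f"moment{k}"] = ((c - c.mean()) ** k).mean()
    feats["dct"] = dct2(c)                                     # n*n DCT coefficients
    m2 = feats["moment2"]
    feats["skewness"] = feats["moment3"] / (m2 ** 1.5 + 1e-12)
    feats["kurtosis"] = feats["moment4"] / (m2 ** 2 + 1e-12)
    feats["median"] = float(np.median(c))
    lo, hi = np.percentile(c, [10, 90])                        # assumed trimming fraction
    feats["trimmed_mean"] = c[(c >= lo) & (c <= hi)].mean()
    feats["range"] = c.max() - c.min()
    q75, q25 = np.percentile(c, [75, 25])
    feats["iqr"] = q75 - q25
    return feats
```

A flat (natural, smooth) block drives the deviation-type features toward zero, while a block of encrypted noise drives them up, which is exactly the separation the boosting classifier exploits.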
B Appendix
Inspired by [15, 16], a new two-class AdaBoost algorithm is developed here based on additive logistic regression and empirical loss:
with the symmetric constraint \(\small F_{1}\left( \mathbf{X}_{i}\right) +F_{2}\left( \mathbf{X}_{i}\right) =0\), where \(\small \mathbf{F}\left( \mathbf{X}\right) \) is a continuous-valued AdaBoost classifier. We consider \(\small \mathbf{F}\left( \mathbf{X}\right) \) that has the following forward stagewise additive modelling form:
where \(\small \beta ^{\left( l\right) }\in \mathfrak {R}\) are weighting coefficients and \(\small g^{\left( l\right) }\left( \mathbf{X}\right) \) are basis functions satisfying the symmetric constraint \(\small g_{1}\left( \mathbf{X}\right) +g_{2}\left( \mathbf{X}\right) =0\). The additive model is \(\small \mathbf{F}^{\left( l\right) }\left( \mathbf{X}\right) =\mathbf{F}^{\left( l-1\right) }\left( \mathbf{X}\right) +\beta ^{\left( l\right) }\mathbf{g}^{\left( l\right) }\left( \mathbf{X}\right) \), so the optimization can be represented as
where \(\small w_{i}=\exp \left( -\frac{1}{2}\mathbf{y}_{i}^{T}\mathbf{F}^{\left( l-1\right) }\left( \mathbf{X}\right) \right) \) is the current sample weight. In this work, the basis function \(\mathbf{g}\left( \mathbf{X}\right) \) is defined as:
where \(A\) is a normalization factor and \(\small d_{k}\left( \mathbf{X}\right) \) is defined as the \(\ell _{1}\)-norm distance between the feature vector \(\small \mathbf{X}\) and the k-th class of the detection vectors, as follows
where \(\small \mathbf{V}\) ranges over \(\small \mathbf{Y}_{k}\), the set of the feature vectors in the k-th class, and \(\left| \cdot \right| \) denotes the cardinality of a set. A small \(\small d_{k}\left( \mathbf{X}\right) \) means \(\small \mathbf {X}\) is close to the k-th class, so \(\small g_{k}\left( \mathbf{X}\right) \) will be large.
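The class distance \(d_{k}\) and basis function \(g_{k}\) can be sketched as follows. Since the paper's equations are not reproduced in this extract, the concrete form of \(g\) below (a signed, normalised distance gap) is an assumption chosen only to satisfy the stated symmetric constraint \(g_{1}+g_{2}=0\); the authors' normalisation factor \(A\) may be defined differently.

```python
import numpy as np

def l1_class_distance(x, class_vectors):
    """d_k(X): mean l1-norm distance from feature vector x to the set Y_k."""
    Y = np.asarray(class_vectors, dtype=float)
    return np.abs(Y - x).sum(axis=1).mean()

def basis_g(x, Y1, Y2):
    """Two-class basis function g(X) with g1 + g2 = 0 (assumed form).

    g1 is large and positive when x is close to class 1 (small d1),
    negative when x is closer to class 2.
    """
    d1 = l1_class_distance(x, Y1)
    d2 = l1_class_distance(x, Y2)
    s = d1 + d2
    A = s if s > 0 else 1.0   # hypothetical normalisation; guards division by zero
    g1 = (d2 - d1) / A
    return np.array([g1, -g1])
```

In the boosting loop, each round would reweight the training samples by \(w_{i}=\exp (-\tfrac{1}{2}\mathbf{y}_{i}^{T}\mathbf{F}^{(l-1)}(\mathbf{X}))\) and add \(\beta ^{(l)}\mathbf{g}^{(l)}\) to the running classifier, as in the additive model above.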
Copyright information
© 2016 Springer International Publishing Switzerland
About this paper
Cite this paper
Li, Z., Wu, W. (2016). Reversible Data Hiding for Encrypted Images Based on Statistical Learning. In: Liu, J., Steinfeld, R. (eds) Information Security and Privacy. ACISP 2016. Lecture Notes in Computer Science(), vol 9722. Springer, Cham. https://doi.org/10.1007/978-3-319-40253-6_12
DOI: https://doi.org/10.1007/978-3-319-40253-6_12
Publisher Name: Springer, Cham
Print ISBN: 978-3-319-40252-9
Online ISBN: 978-3-319-40253-6
eBook Packages: Computer Science (R0)