
Water Conservancy Remote Sensing Image Classification Based on Target-Scene Deep Semantic Enhancement

  • Conference paper
  • First Online:
Artificial Neural Networks and Machine Learning – ICANN 2023 (ICANN 2023)

Part of the book series: Lecture Notes in Computer Science ((LNCS,volume 14256))


Abstract

Water conservancy remote sensing image classification is an important task in water conservancy image interpretation, providing indispensable analysis results for applications of water conservancy remote sensing imagery. However, in high-resolution water conservancy remote sensing images, the objects and water bodies are typically highly diverse and the image semantics are ambiguous, which degrades classification performance. To address this problem, this paper proposes a novel classification method based on target-scene deep semantic enhancement for high-resolution water conservancy remote sensing images, which consists of two key branches. The upper branch is an improved ResNet18 network based on dilated convolution, used to extract scene-level features of images. The lower branch is a novel multi-level semantic-understanding-based Faster R-CNN, used to extract target-level features of images. The experimental results show that the features extracted by the proposed method contain more discriminative scene-level information as well as more detailed target-level information, which effectively improves classification performance.
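The dilated convolution used in the scene-level branch enlarges the receptive field without downsampling or adding parameters: a dilation of d inserts d−1 implicit zeros between kernel taps, so a k×k kernel covers a window of (d·(k−1)+1)² input pixels. A minimal NumPy sketch of this operation (the 7×7 input, 3×3 kernel, and valid-mode convolution below are illustrative assumptions, not the paper's actual configuration):

```python
import numpy as np

def dilated_conv2d(image, kernel, dilation=1):
    """Valid-mode 2-D dilated convolution over a single-channel image.

    With dilation d, a k x k kernel samples the input at offsets spaced
    d apart, covering a (d*(k-1)+1) x (d*(k-1)+1) receptive field while
    keeping the same number of weights.
    """
    kh, kw = kernel.shape
    eff_h = dilation * (kh - 1) + 1  # effective kernel height
    eff_w = dilation * (kw - 1) + 1  # effective kernel width
    out_h = image.shape[0] - eff_h + 1
    out_w = image.shape[1] - eff_w + 1
    out = np.zeros((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            # sample the input at dilated (strided) offsets
            patch = image[i:i + eff_h:dilation, j:j + eff_w:dilation]
            out[i, j] = np.sum(patch * kernel)
    return out

# A 3x3 kernel with dilation 2 covers a 5x5 input window.
img = np.arange(49, dtype=float).reshape(7, 7)
k = np.ones((3, 3))
print(dilated_conv2d(img, k, dilation=1).shape)  # (5, 5)
print(dilated_conv2d(img, k, dilation=2).shape)  # (3, 3)
```

In the improved ResNet18 described above, replacing stride with dilation in the later stages would preserve spatial resolution in the scene-level feature maps; deep-learning frameworks expose the same idea directly (e.g. a dilation parameter on 2-D convolution layers).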



Author information

Correspondence to Xin Wang.


Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Wang, X., Zuo, G., Li, K., Li, L., Shi, A. (2023). Water Conservancy Remote Sensing Image Classification Based on Target-Scene Deep Semantic Enhancement. In: Iliadis, L., Papaleonidas, A., Angelov, P., Jayne, C. (eds) Artificial Neural Networks and Machine Learning – ICANN 2023. ICANN 2023. Lecture Notes in Computer Science, vol 14256. Springer, Cham. https://doi.org/10.1007/978-3-031-44213-1_20

  • DOI: https://doi.org/10.1007/978-3-031-44213-1_20

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-44212-4

  • Online ISBN: 978-3-031-44213-1

  • eBook Packages: Computer Science, Computer Science (R0)
