
DAP: A Framework for Driver Attention Prediction

  • Conference paper
Intelligent Systems and Applications (IntelliSys 2023)

Part of the book series: Lecture Notes in Networks and Systems (LNNS, volume 824)


Abstract

Human drivers rely on their attentional systems while driving to focus on critical items and make judgments. Because gaze data can indicate human attention, collecting and analyzing gaze data has emerged in recent years as a way to improve autonomous driving technologies. In safety-critical situations, it is important to predict not only where drivers focus their attention but also on which objects. In this work, we propose DAP, a novel framework for driver attention prediction that bridges the attention prediction gap between pixels and objects. The DAP framework is evaluated on the Berkeley DeepDrive Attention (BDD-A) dataset. DAP achieves state-of-the-art performance in both pixel-level and object-level attention prediction, notably improving object detection accuracy from 78% to 90%.
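
To make the pixel-to-object bridging concrete, the sketch below shows one plausible recipe, not the authors' published implementation: pool a predicted pixel-level attention map inside each detected bounding box (e.g. boxes from an off-the-shelf detector such as YOLOv5) and rank or threshold objects by that pooled score. The function names, the mean-pooling choice, and the 0.5 threshold are all illustrative assumptions.

    # Hypothetical sketch: lift a pixel-level attention map to object-level
    # attention scores by pooling saliency inside detected bounding boxes.
    import numpy as np

    def object_attention_scores(saliency, boxes):
        """Score each (x1, y1, x2, y2) box by the mean predicted saliency it covers."""
        scores = []
        for x1, y1, x2, y2 in boxes:
            region = saliency[y1:y2, x1:x2]
            scores.append(float(region.mean()) if region.size else 0.0)
        return scores

    def attended_objects(saliency, boxes, threshold=0.5):
        """Mark a box as attended if its score reaches a fraction of the top score."""
        scores = object_attention_scores(saliency, boxes)
        top = max(scores) if scores else 0.0
        return [s >= threshold * top for s in scores]

    # Toy usage with a random map standing in for a model's predicted attention.
    rng = np.random.default_rng(0)
    saliency = rng.random((720, 1280)).astype(np.float32)
    boxes = [(100, 200, 300, 400), (900, 100, 1100, 300)]  # e.g. from a detector
    print(object_attention_scores(saliency, boxes))
    print(attended_objects(saliency, boxes))

How the paper computes its object-level accuracy figure is specific to DAP itself; the pooling above is only one common way to turn pixel-level saliency into per-object attention decisions.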

Author information

Corresponding author

Correspondence to Ahmed Kamel.

Copyright information

© 2024 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper

Cite this paper

Kamel, A., Sobh, I., Al-Atabany, W. (2024). DAP: A Framework for Driver Attention Prediction. In: Arai, K. (eds) Intelligent Systems and Applications. IntelliSys 2023. Lecture Notes in Networks and Systems, vol 824. Springer, Cham. https://doi.org/10.1007/978-3-031-47715-7_6
