
Explaining Taxi Demand Prediction Models Based on Feature Importance

  • Conference paper
  • First Online:
Artificial Intelligence. ECAI 2023 International Workshops (ECAI 2023)

Abstract

The prediction of city-wide taxi demand is used to proactively relocate idle taxis. Neural network-based models are often applied to this problem, which is challenging because of its multivariate input and output space. Since these models are composed of multiple layers, their predictions are opaque, and this opaqueness makes debugging, optimising, and using the models difficult. To address this, we propose the use of eXplainable AI (XAI), specifically feature importance methods.

In this paper, we build and train four city-wide taxi demand prediction models of commonly used neural network types on the New York City Yellow Taxi Trip data set. To explain their predictions, we select three existing XAI techniques, namely reduced Layer-wise Relevance Propagation, Local Interpretable Model-agnostic Explanations, and Shapley Additive Explanations, and enable their use on this problem. In addition, we propose a suite of five quantitative evaluation metrics suitable for explaining models that tackle regression problems with a multivariate input and output space. Lastly, we compare the selected XAI techniques using the proposed evaluation metrics across four real-world scenarios.
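
As a concrete illustration of the kind of feature importance explanation described above, the following sketch applies SHAP's model-agnostic KernelExplainer to a small multi-output demand regressor. This is a minimal sketch only: the grid size, lag count, synthetic data, and MLP model are assumptions made for illustration here and are not the models, data set, or reduced LRP/LIME implementations evaluated in the paper.

```python
# Minimal sketch (not the paper's code): SHAP attributions for a regressor
# with multivariate input (past demand per grid cell and lag) and
# multivariate output (next-step demand per grid cell).
# Grid size, lag count, data, and model are illustrative assumptions.
import numpy as np
import shap
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
n_cells, n_lags, n_samples = 4, 3, 500  # hypothetical city grid and history length

# Synthetic demand history: each row holds n_cells * n_lags past counts;
# the targets are noisy linear mixtures standing in for next-step demand.
X = rng.poisson(5.0, size=(n_samples, n_cells * n_lags)).astype(float)
W = rng.normal(size=(n_cells * n_lags, n_cells))
y = X @ W + rng.normal(scale=0.5, size=(n_samples, n_cells))

# A small multi-output neural network stands in for the demand prediction model.
model = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
model.fit(X, y)

# KernelExplainer treats the model as a black box; a small background sample
# keeps the Shapley value estimation tractable.
background = shap.sample(X, 50, random_state=0)
explainer = shap.KernelExplainer(model.predict, background)

# Attributions for one input: one importance vector over all inputs per
# output cell (a list of arrays or a 3-D array, depending on the shap version).
shap_values = explainer.shap_values(X[:1], nsamples=200)
```

Each attribution vector indicates how strongly every past-demand input pushed one cell's predicted demand up or down relative to the background average; evaluating the quality of such per-output attributions is what the metrics proposed in the paper target.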



Author information


Corresponding author

Correspondence to Eric Loff.


Copyright information

© 2024 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Loff, E., Schleibaum, S., Müller, J.P., Säfken, B. (2024). Explaining Taxi Demand Prediction Models Based on Feature Importance. In: Nowaczyk, S., et al. Artificial Intelligence. ECAI 2023 International Workshops. ECAI 2023. Communications in Computer and Information Science, vol 1947. Springer, Cham. https://doi.org/10.1007/978-3-031-50396-2_15


  • DOI: https://doi.org/10.1007/978-3-031-50396-2_15

  • Published:

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-50395-5

  • Online ISBN: 978-3-031-50396-2

  • eBook Packages: Computer Science, Computer Science (R0)
