Resource Allocation Optimization in Business Processes Supported by Reinforcement Learning and Process Mining | SpringerLink

Resource Allocation Optimization in Business Processes Supported by Reinforcement Learning and Process Mining

  • Conference paper
  • In: Intelligent Systems (BRACIS 2022)

Abstract

Resource allocation for executing business processes is increasingly crucial for organizations. Because the cost of executing process tasks depends on several dynamic factors, optimizing resource allocation can be addressed as a sequential decision problem. Process mining can support this optimization through event log data, which record historical information about past executions of the corresponding business process. Probabilistic approaches are relevant to solving process mining problems, especially when applied to the usually unstructured and noisy real-world business processes. We present an approach in which the resource allocation problem in a business process is modeled as a Markov decision process, and a batch reinforcement learning algorithm is applied to obtain a resource allocation policy that minimizes cycle time. With batch reinforcement learning algorithms, the knowledge underlying the event log data is used both to learn the policy and to model the environment. Resource allocation is performed considering the task to be executed and the resources' current workload. The results with both the Fitted Q-Iteration and Neural Fitted Q-Iteration batch reinforcement learning algorithms demonstrate that this approach enables resource allocation better aligned with business interests. In an evaluation on data from a real-world business process, our approach could have avoided up to 37.2% of the time spent executing all tasks, compared with the allocation recorded in the historical event log data.
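The pipeline the abstract describes can be sketched concretely: extract (state, action, reward, next-state) transitions from the event log, then run Fitted Q-Iteration offline over that batch. The sketch below is illustrative only; the state encoding (task features plus resource workloads), the reward sign (negative task duration, so maximizing return minimizes cycle time), and the Random Forest regressor and hyperparameters are assumptions, not the authors' exact formulation.

```python
# Illustrative batch RL: Fitted Q-Iteration over event-log transitions.
# State encoding, reward definition, and hyperparameters are assumptions.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def fitted_q_iteration(transitions, n_actions, n_iters=10, gamma=0.95):
    """transitions: iterable of (state, action, reward, next_state), where
    `state` is a feature vector (e.g. task type + resource workloads) and
    `reward` could be the negative task duration, so that maximizing
    return minimizes cycle time."""
    S = np.array([t[0] for t in transitions], dtype=float)
    A = np.array([t[1] for t in transitions], dtype=float)
    R = np.array([t[2] for t in transitions], dtype=float)
    S2 = np.array([t[3] for t in transitions], dtype=float)
    X = np.column_stack([S, A])  # Q-function input: (state, action) pairs
    model = None
    for _ in range(n_iters):
        if model is None:
            y = R  # first iteration: Q approximates the immediate reward
        else:
            # Offline Bellman backup: r + gamma * max_a' Q(s', a')
            q_next = np.column_stack([
                model.predict(np.column_stack([S2, np.full(len(S2), a)]))
                for a in range(n_actions)
            ])
            y = R + gamma * q_next.max(axis=1)
        model = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, y)
    return model

def greedy_allocation(model, state, n_actions):
    """Allocate the resource (action) with the highest estimated Q-value."""
    X = np.array([list(state) + [a] for a in range(n_actions)], dtype=float)
    return int(np.argmax(model.predict(X)))
```

Neural Fitted Q-Iteration follows the same loop, replacing the tree ensemble with a neural network regressor refit on each Bellman backup.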

This study was partially supported by CAPES (Finance Code 001) and FAPESP (Process Number 2020/05248-4).



Notes

  1. www.win.tue.nl/bpi/2012/challenge.

  2. https://apromore.org/.

  3. Colaboratory virtual machine with an Intel Xeon CPU (2.30 GHz, two cores), 12 GB of RAM, and 25 GB of disk.

  4. Developed code available at: https://github.com/pm-usp/RL-resource-allocation.

  5. More information about the implementations of Linear Regression, Random Forest, MLP, and RPROP used is available at: https://bit.ly/3imZQRx, https://bit.ly/3hJqMMi, https://bit.ly/3xNfGvh, and https://bit.ly/36Gr6VR.


Author information


Correspondence to Thais Rodrigues Neubauer.


Copyright information

© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Neubauer, T.R., da Silva, V.F., Fantinato, M., Peres, S.M. (2022). Resource Allocation Optimization in Business Processes Supported by Reinforcement Learning and Process Mining. In: Xavier-Junior, J.C., Rios, R.A. (eds.) Intelligent Systems. BRACIS 2022. Lecture Notes in Computer Science, vol. 13653. Springer, Cham. https://doi.org/10.1007/978-3-031-21686-2_40


  • DOI: https://doi.org/10.1007/978-3-031-21686-2_40

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-21685-5

  • Online ISBN: 978-3-031-21686-2

  • eBook Packages: Computer Science (R0)
