Deep Reinforcement Learning for Control of Probabilistic Boolean Networks | SpringerLink

Deep Reinforcement Learning for Control of Probabilistic Boolean Networks

  • Conference paper
Complex Networks & Their Applications IX (COMPLEX NETWORKS 2020)

Part of the book series: Studies in Computational Intelligence (SCI, volume 944)


Abstract

Probabilistic Boolean Networks (PBNs) were introduced as a computational model for the study of complex dynamical systems, such as Gene Regulatory Networks (GRNs). Controllability in this context is the process of making strategic interventions to the state of a network in order to drive it towards some other state that exhibits favourable biological properties. In this paper we study the ability of a Double Deep Q-Network with Prioritized Experience Replay to learn, within a finite number of time steps, control strategies that drive a PBN towards a target state, typically an attractor. The control method is model-free and does not require knowledge of the network's underlying dynamics, making it suitable for applications where inference of such dynamics is intractable. We present extensive experimental results on two synthetic PBNs and on the PBN model constructed directly from gene-expression data of a study on metastatic melanoma.
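The control setting described in the abstract can be sketched with a toy example. The sketch below is illustrative only and is not the paper's method: the paper trains a Double Deep Q-Network with Prioritized Experience Replay on much larger networks, whereas this sketch uses tabular double Q-learning (the same double-estimator idea, without function approximation) on a hypothetical 3-gene PBN. The update functions in `FUNCS`, the `TARGET` state, and all hyperparameters are invented for illustration. Each step, the agent may flip one gene (or do nothing); the network then performs one synchronous stochastic update, and the agent is rewarded for landing on the target state.

```python
import random

# Hypothetical 3-gene PBN: each gene has candidate Boolean update
# functions, one of which is selected independently at every step
# with the given probability.
FUNCS = {
    0: [(lambda s: s[1] and s[2], 0.6), (lambda s: s[1], 0.4)],
    1: [(lambda s: not s[0], 0.7), (lambda s: s[2], 0.3)],
    2: [(lambda s: s[0] or s[1], 0.5), (lambda s: not s[1], 0.5)],
}

TARGET = (1, 1, 1)        # hypothetical target state
ACTIONS = [-1, 0, 1, 2]   # -1 = no intervention, i = flip gene i

def pbn_step(state, action):
    """Apply the intervention, then one synchronous stochastic update."""
    s = list(state)
    if action >= 0:
        s[action] = 1 - s[action]
    nxt = []
    for i in range(3):
        fs, ps = zip(*FUNCS[i])
        f = random.choices(fs, weights=ps)[0]  # pick an update function
        nxt.append(int(f(s)))
    return tuple(nxt)

def train(episodes=3000, horizon=11, alpha=0.1, gamma=0.95, eps=0.1):
    """Tabular double Q-learning: two tables, one selects the argmax
    action at the next state, the other evaluates it."""
    qa, qb = {}, {}
    def q(tbl, s, a):
        return tbl.get((s, a), 0.0)
    for _ in range(episodes):
        s = tuple(random.randint(0, 1) for _ in range(3))
        for _ in range(horizon):
            if s == TARGET:
                break
            if random.random() < eps:
                a = random.choice(ACTIONS)
            else:
                a = max(ACTIONS, key=lambda x: q(qa, s, x) + q(qb, s, x))
            s2 = pbn_step(s, a)
            r = 1.0 if s2 == TARGET else 0.0
            t1, t2 = (qa, qb) if random.random() < 0.5 else (qb, qa)
            best = max(ACTIONS, key=lambda x: q(t1, s2, x))
            t1[(s, a)] = q(t1, s, a) + alpha * (
                r + gamma * q(t2, s2, best) - q(t1, s, a))
            s = s2
    return qa, qb
```

Deep RL replaces the lookup tables with neural networks and uniform replay with prioritized sampling, which is what makes the approach scale to state spaces of size 2^n where a table is infeasible.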

This research was partly funded by EIT Digital IVZW, under the Real-Time Flow project, activity 18387-SGA2018, and partly by the EPSRC project AGELink (EP/R511791/1). We would also like to thank Vytenis Sliogeris for implementing the PBN inference pipeline from the gene-expression data of the metastatic-melanoma study.



Author information


Correspondence to Georgios Papagiannis.


Electronic supplementary material

Below is the link to the electronic supplementary material.

Supplementary material 1 (pdf 156 KB)


Copyright information

© 2021 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Papagiannis, G., Moschoyiannis, S. (2021). Deep Reinforcement Learning for Control of Probabilistic Boolean Networks. In: Benito, R.M., Cherifi, C., Cherifi, H., Moro, E., Rocha, L.M., Sales-Pardo, M. (eds) Complex Networks & Their Applications IX. COMPLEX NETWORKS 2020. Studies in Computational Intelligence, vol 944. Springer, Cham. https://doi.org/10.1007/978-3-030-65351-4_29


  • DOI: https://doi.org/10.1007/978-3-030-65351-4_29

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-65350-7

  • Online ISBN: 978-3-030-65351-4

  • eBook Packages: Engineering (R0)
