{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2023,10,3]],"date-time":"2023-10-03T09:12:55Z","timestamp":1696324375747},"reference-count":8,"publisher":"Association for Computing Machinery (ACM)","issue":"2","content-domain":{"domain":["dl.acm.org"],"crossmark-restriction":true},"short-container-title":["SIGMETRICS Perform. Eval. Rev."],"published-print":{"date-parts":[[2023,9,28]]},"abstract":"<jats:p>Lack of explainability is hindering the practical adoption of high-performance Deep Reinforcement Learning (DRL) controllers. Prior work focused on explaining the controller by identifying salient features of the controller's input. However, these feature-based methods focus solely on inputs and do not fully explain the controller's policy. In this paper, we put forward future-based explainers as an essential tool for providing insights into the controller's decision-making process and, thereby, facilitating the practical deployment of DRL controllers. We highlight two applications of future-based explainers in the networking domain: online safety assurance and guided controller design. 
Finally, we provide a roadmap for the practical development and deployment of future-based explainers for DRL network controllers.<\/jats:p>","DOI":"10.1145\/3626570.3626607","type":"journal-article","created":{"date-parts":[[2023,10,2]],"date-time":"2023-10-02T22:16:57Z","timestamp":1696285017000},"page":"100-102","update-policy":"http:\/\/dx.doi.org\/10.1145\/crossmark-policy","source":"Crossref","is-referenced-by-count":0,"title":["Towards Future-Based Explanations for Deep RL Network Controllers"],"prefix":"10.1145","volume":"51","author":[{"given":"Sagar","family":"Patel","sequence":"first","affiliation":[{"name":"University of California, Irvine, Irvine, CA, USA"}]},{"given":"Sangeetha","family":"Abdu Jyothi","sequence":"additional","affiliation":[{"name":"University of California, Irvine, Irvine, CA, USA"}]},{"given":"Nina","family":"Narodytska","sequence":"additional","affiliation":[{"name":"VMware Research"}]}],"member":"320","published-online":{"date-parts":[[2023,10,2]]},"reference":[{"key":"e_1_2_1_1_1","doi-asserted-by":"publisher","DOI":"10.1613\/jair.1.12228"},{"key":"e_1_2_1_2_1","first-page":"1","volume-title":"Neural Computing and Applications","author":"Cruz Francisco","year":"2021","unstructured":"Francisco Cruz, Richard Dazeley, Peter Vamplew, and Ithan Moreira. Explainable robotic systems: Understanding goal-driven actions in a reinforcement learning scenario. Neural Computing and Applications, pages 1--18, 2021."},{"key":"e_1_2_1_3_1","doi-asserted-by":"publisher","DOI":"10.1145\/3548606.3560609"},{"key":"e_1_2_1_4_1","volume-title":"IJCAI\/ECAI Workshop on explainable artificial intelligence","author":"Juozapaitis Zoe","year":"2019","unstructured":"Zoe Juozapaitis, Anurag Koul, Alan Fern, Martin Erwig, and Finale Doshi-Velez. Explainable reinforcement learning via reward decomposition. In IJCAI\/ECAI Workshop on explainable artificial intelligence, 2019."},{"key":"e_1_2_1_5_1","doi-asserted-by":"publisher","DOI":"10.1145\/3387514.3405859"},{"key":"e_1_2_1_6_1","doi-asserted-by":"publisher","DOI":"10.1145\/3422604.3425940"},{"key":"e_1_2_1_7_1","volume-title":"Reliable post hoc explanations: Modeling uncertainty in explainability. Advances in neural information processing systems, 34:9391--9404","author":"Slack Dylan","year":"2021","unstructured":"Dylan Slack, Anna Hilgard, Sameer Singh, and Himabindu Lakkaraju. Reliable post hoc explanations: Modeling uncertainty in explainability. Advances in neural information processing systems, 34:9391--9404, 2021."},{"key":"e_1_2_1_8_1","doi-asserted-by":"publisher","DOI":"10.1145\/3375627.3375830"}],"container-title":["ACM SIGMETRICS Performance Evaluation Review"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3626570.3626607","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2023,10,2]],"date-time":"2023-10-02T22:17:01Z","timestamp":1696285021000},"score":1,"resource":{"primary":{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3626570.3626607"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2023,9,28]]},"references-count":8,"journal-issue":{"issue":"2","published-print":{"date-parts":[[2023,9,28]]}},"alternative-id":["10.1145\/3626570.3626607"],"URL":"https:\/\/doi.org\/10.1145\/3626570.3626607","relation":{},"ISSN":["0163-5999"],"issn-type":[{"value":"0163-5999","type":"print"}],"subject":[],"published":{"date-parts":[[2023,9,28]]},"assertion":[{"value":"2023-10-02","order":2,"name":"published","label":"Published","group":{"name":"publication_history","label":"Publication History"}}]}}