{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2024,9,22]],"date-time":"2024-09-22T04:17:36Z","timestamp":1726978656246},"reference-count":24,"publisher":"European Alliance for Innovation n.o.","issue":"1","license":[{"start":{"date-parts":[[2023,1,3]],"date-time":"2023-01-03T00:00:00Z","timestamp":1672704000000},"content-version":"unspecified","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/3.0\/"}],"content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":["EAI Endorsed Trans Ind Net Intel Syst"],"abstract":"<jats:p>In this paper, we propose a deep reinforcement learning (DRL) approach for solving the optimisation problem of the network\u2019s sum-rate in device-to-device (D2D) communications supported by an intelligent reflecting surface (IRS). The IRS is deployed to mitigate the interference and enhance the signal between the D2D transmitter and the associated D2D receiver. Our objective is to jointly optimise the transmit power at the D2D transmitter and the phase shift matrix at the IRS to maximise the network sum-rate. We formulate a Markov decision process and then propose the proximal policy optimisation for solving the maximisation game. 
Simulation results show impressive performance in terms of the achievable rate and processing time.<\/jats:p>","DOI":"10.4108\/eetinis.v10i1.2864","type":"journal-article","created":{"date-parts":[[2023,1,3]],"date-time":"2023-01-03T13:05:33Z","timestamp":1672751133000},"page":"e1","source":"Crossref","is-referenced-by-count":6,"title":["Deep Reinforcement Learning for Intelligent Reflecting Surface-assisted D2D Communications"],"prefix":"10.4108","volume":"10","author":[{"given":"Khoi Khac","family":"Nguyen","sequence":"first","affiliation":[]},{"ORCID":"http:\/\/orcid.org\/0000-0002-2299-8487","authenticated-orcid":false,"given":"Antonino","family":"Masaracchia","sequence":"additional","affiliation":[]},{"given":"Cheng","family":"Yin","sequence":"additional","affiliation":[]}],"member":"2587","published-online":{"date-parts":[[2023,1,3]]},"reference":[{"key":"15732","doi-asserted-by":"crossref","unstructured":"Huang, J., Xing, C.C. and Guizani, M. (2020) Power allocation for D2D communications with SWIPT. IEEE Trans. Wireless Commun. 19(4): 2308\u20132320.","DOI":"10.1109\/TWC.2019.2963833"},{"key":"15733","doi-asserted-by":"crossref","unstructured":"Nguyen, K.K., Duong, T.Q., Vien, N.A., Le-Khac, N.A. and Nguyen, L.D. (2019) Distributed deep deterministic policy gradient for power allocation control in D2D-based V2V communications. IEEE Access 7: 164533\u2013164543.","DOI":"10.1109\/ACCESS.2019.2952411"},{"key":"15734","doi-asserted-by":"crossref","unstructured":"Mousavifar, S.A., Liu, Y., Leung, C., Elkashlan, M. and Duong, T.Q. (September 2014) Wireless energy harvesting and spectrum sharing in cognitive radio. In Proc. IEEE 80th Vehicular Technology Conference (VTC2014-Fall), Vancouver, BC, Canada: 1\u20135.","DOI":"10.1109\/VTCFall.2014.6966232"},{"key":"15735","doi-asserted-by":"crossref","unstructured":"Yu, H., Tuan, H.D., Nasir, A.A., Duong, T.Q. and Poor, H. V. 
(2020) Joint design of reconfigurable intelligent surfaces and transmit beamforming under proper and improper Gaussian signaling. IEEE J. Select. Areas Commun. 38(11): 2589\u20132603.","DOI":"10.1109\/JSAC.2020.3007059"},{"key":"15736","doi-asserted-by":"crossref","unstructured":"Zou, Y., Gong, S., Xu, J., Cheng, W., Hoang, D.T. and Niyato, D. (2020) Wireless powered intelligent reflecting surfaces for enhancing wireless communications. IEEE Transactions on Vehicular Technology 69(10): 12369\u201312373.","DOI":"10.1109\/TVT.2020.3011942"},{"key":"15737","doi-asserted-by":"crossref","unstructured":"Zheng, B., You, C. and Zhang, R. (2021) Efficient channel estimation for double-IRS aided multi-user MIMO system. IEEE Trans. Commun. 69(6): 3818\u20133832.","DOI":"10.1109\/TCOMM.2021.3064947"},{"key":"15738","doi-asserted-by":"crossref","unstructured":"Nguyen, K.K., Khosravirad, S., Costa, D.B.D., Nguyen, L. D. and Duong, T.Q. (2022) Reconfigurable intelligent surface-assisted multi-UAV networks: Efficient resource allocation with deep reinforcement learning. IEEE J. Selected Topics in Signal Process. 16(3): 358\u2013368.","DOI":"10.1109\/JSTSP.2021.3134162"},{"key":"15739","doi-asserted-by":"crossref","unstructured":"Chen, Y., Ai, B., Zhang, H., Niu, Y., Song, L., Han, Z. and Poor, H.V. (2021) Reconfigurable intelligent surface assisted device-to-device communications. IEEE Trans. Wireless Commun. 20(5): 2792\u20132804.","DOI":"10.1109\/TWC.2020.3044302"},{"key":"15740","doi-asserted-by":"crossref","unstructured":"Jia, S., Yuan, X. and Liang, Y.C. (2021) Reconfigurable intelligent surfaces for energy efficiency in D2D communication network. IEEE Wireless Commun. Lett. 10(3): 683\u2013687.","DOI":"10.1109\/LWC.2020.3046358"},{"key":"15741","doi-asserted-by":"crossref","unstructured":"Pradhan, C., Li, A., Song, L., Li, J., Vucetic, B. and Li, Y. (2020) Reconfigurable intelligent surface (RIS)-enhanced two-way OFDM communications. 
IEEE Transactions on Vehicular Technology 69(12): 16270\u201316275.","DOI":"10.1109\/TVT.2020.3038942"},{"key":"15742","doi-asserted-by":"crossref","unstructured":"Cao, Y., Lv, T., Ni, W. and Lin, Z. (2021) Sum-rate maximization for multi-reconfigurable intelligent surface-assisted device-to-device communications. IEEE Trans. Commun. 69(11): 7283\u20137296.","DOI":"10.1109\/TCOMM.2021.3106334"},{"key":"15743","doi-asserted-by":"crossref","unstructured":"Yang, G., Liao, Y., Liang, Y.C., Tirkkonen, O., Wang, G. and Zhu, X. (2021) Reconfigurable intelligent surface empowered device-to-device communication underlaying cellular networks. IEEE Trans. Commun. 69(11): 7790\u20137805.","DOI":"10.1109\/TCOMM.2021.3102640"},{"key":"15744","doi-asserted-by":"crossref","unstructured":"Nguyen, K.K., Vien, N.A., Nguyen, L.D., Le, M.T., Hanzo, L. and Duong, T.Q. (2021) Real-time energy harvesting aided scheduling in UAV-assisted D2D networks relying on deep reinforcement learning. IEEE Access 9: 3638\u20133648.","DOI":"10.1109\/ACCESS.2020.3046499"},{"key":"15745","doi-asserted-by":"crossref","unstructured":"Huang, C., Mo, R. and Yuen, C. (2020) Reconfigurable intelligent surface assisted multiuser MISO systems exploiting deep reinforcement learning. IEEE J. Select. Areas Commun. 38(8): 1839\u20131850.","DOI":"10.1109\/JSAC.2020.3000835"},{"key":"15746","doi-asserted-by":"crossref","unstructured":"Shokry, M., Elhattab, M., Assi, C., Sharafeddine, S. and Ghrayeb, A. (2021) Optimizing age of information through aerial reconfigurable intelligent surfaces: A deep reinforcement learning approach. IEEE Transactions on Vehicular Technology 70(4): 3978\u20133983.","DOI":"10.1109\/TVT.2021.3063953"},{"key":"15747","doi-asserted-by":"crossref","unstructured":"Feng, K., Wang, Q., Li, X. and Wen, C.K. (2020) Deep reinforcement learning based intelligent reflecting surface optimization for MISO communication systems. IEEE Wireless Commun. Lett. 
9(5): 745\u2013749.","DOI":"10.1109\/LWC.2020.2969167"},{"key":"15748","doi-asserted-by":"crossref","unstructured":"Nguyen, K.K., Duong, T.Q., Do-Duy, T., Claussen, H. and Hanzo, L. (2022) 3D UAV trajectory and data collection optimisation via deep reinforcement learning. IEEE Trans. Commun. 70(4): 2358\u20132371.","DOI":"10.1109\/TCOMM.2022.3148364"},{"key":"15749","unstructured":"Bertsekas, D.P. (1995) Dynamic Programming and Optimal Control, 1 (Athena Scientific, Belmont, MA)."},{"key":"15750","unstructured":"Schulman, J., Wolski, F., Dhariwal, P., Radford, A. and Klimov, O. (2017), Proximal policy optimization algorithms. URL https:\/\/arxiv.org\/abs\/1707.06347."},{"key":"15751","unstructured":"Schulman, J., Moritz, P., Levine, S., Jordan, M.I. and Abbeel, P. (2016) High-dimensional continuous control using generalized advantage estimation. In Proc. 4th International Conf. Learning Representations (ICLR)."},{"key":"15752","unstructured":"Mnih, V. et al. (2016) Asynchronous methods for deep reinforcement learning. In Proc. Int. Conf. Mach. Learn. (PMLR): 1928\u20131937."},{"key":"15753","unstructured":"Kingma, D.P. and Ba, J.L. (2014), Adam: A method for stochastic optimization. URL https:\/\/arxiv.org\/abs\/1412.6980."},{"key":"15754","unstructured":"Abadi, M. et al. (2016) Tensorflow: A system for large-scale machine learning. In Proc. 12th USENIX Sym. Opr. Syst. Design and Imp. (OSDI 16): 265\u2013283."},{"key":"15755","unstructured":"Sutton, R.S., McAllester, D., Singh, S. and Mansour, Y. (2000) Policy gradient methods for reinforcement learning with function approximation. In Adv. Neural Inf. Process. 
Syst.: 1057\u20131063."}],"container-title":["EAI Endorsed Transactions on Industrial Networks and Intelligent Systems"],"original-title":[],"link":[{"URL":"https:\/\/publications.eai.eu\/index.php\/inis\/article\/download\/2864\/2278","content-type":"application\/pdf","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/publications.eai.eu\/index.php\/inis\/article\/download\/2864\/2278","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2024,9,21]],"date-time":"2024-09-21T16:52:21Z","timestamp":1726937541000},"score":1,"resource":{"primary":{"URL":"https:\/\/publications.eai.eu\/index.php\/inis\/article\/view\/2864"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2023,1,3]]},"references-count":24,"journal-issue":{"issue":"1","published-online":{"date-parts":[[2023,1,3]]}},"URL":"https:\/\/doi.org\/10.4108\/eetinis.v10i1.2864","relation":{},"ISSN":["2410-0218"],"issn-type":[{"type":"electronic","value":"2410-0218"}],"subject":[],"published":{"date-parts":[[2023,1,3]]}}}