{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2024,4,13]],"date-time":"2024-04-13T05:09:55Z","timestamp":1712984995942},"reference-count":0,"publisher":"Association for the Advancement of Artificial Intelligence (AAAI)","issue":"12","content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":["AAAI"],"abstract":"Despite the recent success of deep reinforcement learning (RL), domain adaptation remains an open problem. Although the generalization ability of RL agents is critical for the real-world applicability of Deep RL, zero-shot policy transfer is still a challenging problem since even minor visual changes could make the trained agent completely fail in the new task. To address this issue, we propose a two-stage RL agent that first learns a latent unified state representation (LUSR) which is consistent across multiple domains in the first stage, and then do RL training in one source domain based on LUSR in the second stage. The cross-domain consistency of LUSR allows the policy acquired from the source domain to generalize to other target domains without extra training. We first demonstrate our approach in variants of CarRacing games with customized manipulations, and then verify it in CARLA, an autonomous driving simulator with more complex and realistic visual observations. Our results show that this approach can achieve state-of-the-art domain adaptation performance in related RL tasks and outperforms prior approaches based on latent-representation based RL and image-to-image translation.<\/jats:p>","DOI":"10.1609\/aaai.v35i12.17251","type":"journal-article","created":{"date-parts":[[2022,9,8]],"date-time":"2022-09-08T19:37:48Z","timestamp":1662665868000},"page":"10452-10459","source":"Crossref","is-referenced-by-count":14,"title":["Domain Adaptation In Reinforcement Learning Via Latent Unified State Representation"],"prefix":"10.1609","volume":"35","author":[{"given":"Jinwei","family":"Xing","sequence":"first","affiliation":[]},{"given":"Takashi","family":"Nagata","sequence":"additional","affiliation":[]},{"given":"Kexin","family":"Chen","sequence":"additional","affiliation":[]},{"given":"Xinyun","family":"Zou","sequence":"additional","affiliation":[]},{"given":"Emre","family":"Neftci","sequence":"additional","affiliation":[]},{"given":"Jeffrey L.","family":"Krichmar","sequence":"additional","affiliation":[]}],"member":"9382","published-online":{"date-parts":[[2021,5,18]]},"container-title":["Proceedings of the AAAI Conference on Artificial 
Intelligence"],"original-title":[],"link":[{"URL":"https:\/\/ojs.aaai.org\/index.php\/AAAI\/article\/download\/17251\/17058","content-type":"application\/pdf","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/ojs.aaai.org\/index.php\/AAAI\/article\/download\/17251\/17058","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2022,9,8]],"date-time":"2022-09-08T19:37:48Z","timestamp":1662665868000},"score":1,"resource":{"primary":{"URL":"https:\/\/ojs.aaai.org\/index.php\/AAAI\/article\/view\/17251"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2021,5,18]]},"references-count":0,"journal-issue":{"issue":"12","published-online":{"date-parts":[[2021,5,28]]}},"URL":"https:\/\/doi.org\/10.1609\/aaai.v35i12.17251","relation":{},"ISSN":["2374-3468","2159-5399"],"issn-type":[{"value":"2374-3468","type":"electronic"},{"value":"2159-5399","type":"print"}],"subject":[],"published":{"date-parts":[[2021,5,18]]}}}