{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2025,4,12]],"date-time":"2025-04-12T11:41:59Z","timestamp":1744458119202,"version":"3.37.3"},"reference-count":79,"publisher":"Association for Computing Machinery (ACM)","issue":"6","funder":[{"DOI":"10.13039\/501100000038","name":"NSERC","doi-asserted-by":"crossref","id":[{"id":"10.13039\/501100000038","id-type":"DOI","asserted-by":"crossref"}]},{"DOI":"10.13039\/https:\/\/doi.org\/10.13039\/100000001","name":"NSF","doi-asserted-by":"publisher","award":["IIS-2047632 and IIS-2232066"],"id":[{"id":"10.13039\/https:\/\/doi.org\/10.13039\/100000001","id-type":"DOI","asserted-by":"publisher"}]}],"content-domain":{"domain":["dl.acm.org"],"crossmark-restriction":true},"short-container-title":["ACM Trans. Graph."],"published-print":{"date-parts":[[2023,12,5]]},"abstract":"\n Motivated by humans' ability to adapt skills in the learning of new ones, this paper presents AdaptNet, an approach for modifying the latent space of existing policies to allow new behaviors to be quickly learned from like tasks in comparison to learning from scratch. Building on top of a given reinforcement learning controller, AdaptNet uses a two-tier hierarchy that augments the original state embedding to support modest changes in a behavior and further modifies the policy network layers to make more substantive changes. The technique is shown to be effective for adapting existing physics-based controllers to a wide range of new styles for locomotion, new task targets, changes in character morphology and extensive changes in environment. Furthermore, it exhibits significant increase in learning efficiency, as indicated by greatly reduced training times when compared to training from scratch or using other approaches that modify existing policies. 
Code is available at https:\/\/motion-lab.github.io\/AdaptNet.","DOI":"10.1145\/3618375","type":"journal-article","created":{"date-parts":[[2023,12,5]],"date-time":"2023-12-05T15:20:48Z","timestamp":1701789648000},"page":"1-17","update-policy":"https:\/\/doi.org\/10.1145\/crossmark-policy","source":"Crossref","is-referenced-by-count":10,"title":["AdaptNet: Policy Adaptation for Physics-Based Character Control"],"prefix":"10.1145","volume":"42","author":[{"ORCID":"https:\/\/orcid.org\/0000-0001-7851-3971","authenticated-orcid":false,"given":"Pei","family":"Xu","sequence":"first","affiliation":[{"name":"Clemson University, USA and Roblox, USA"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-5877-9374","authenticated-orcid":false,"given":"Kaixiang","family":"Xie","sequence":"additional","affiliation":[{"name":"McGill University, Canada"}]},{"ORCID":"https:\/\/orcid.org\/0000-0001-9776-117X","authenticated-orcid":false,"given":"Sheldon","family":"Andrews","sequence":"additional","affiliation":[{"name":"\u00c9cole de Technologie Sup\u00e9rieure, Canada and Roblox, USA"}]},{"ORCID":"https:\/\/orcid.org\/0000-0003-4176-6857","authenticated-orcid":false,"given":"Paul G.","family":"Kry","sequence":"additional","affiliation":[{"name":"McGill University, Canada"}]},{"ORCID":"https:\/\/orcid.org\/0000-0003-0226-2808","authenticated-orcid":false,"given":"Michael","family":"Neff","sequence":"additional","affiliation":[{"name":"University of California, Davis, USA"}]},{"ORCID":"https:\/\/orcid.org\/0000-0003-1074-0953","authenticated-orcid":false,"given":"Morgan","family":"McGuire","sequence":"additional","affiliation":[{"name":"Roblox, USA and University of Waterloo, Canada"}]},{"ORCID":"https:\/\/orcid.org\/0009-0000-4315-6556","authenticated-orcid":false,"given":"Ioannis","family":"Karamouzas","sequence":"additional","affiliation":[{"name":"University of California, Riverside, 
USA"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-7309-7013","authenticated-orcid":false,"given":"Victor","family":"Zordan","sequence":"additional","affiliation":[{"name":"Roblox, USA and Clemson University, USA"}]}],"member":"320","published-online":{"date-parts":[[2023,12,5]]},"reference":[{"volume-title":"Proc. of the IEEE\/CVF Int. Conf. on Computer Vision. 4432--4441","author":"Abdal R.","key":"e_1_2_2_1_1","unstructured":"R. Abdal, Y. Qin, and P. Wonka. 2019. Image2StyleGAN: How to Embed Images Into the StyleGAN Latent Space?. In Proc. of the IEEE\/CVF Int. Conf. on Computer Vision. 4432--4441."},{"key":"e_1_2_2_2_1","doi-asserted-by":"publisher","DOI":"10.1145\/3386569.3392469"},{"key":"e_1_2_2_3_1","doi-asserted-by":"crossref","unstructured":"A. Aghajanyan S. Gupta and L. Zettlemoyer. 2021. Intrinsic Dimensionality Explains the Effectiveness of Language Model Fine-Tuning. In 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers). 7319--7328.","DOI":"10.18653\/v1\/2021.acl-long.568"},{"key":"e_1_2_2_4_1","volume-title":"Conf. on Robot Learning (Proc. of Machine Learning Research","volume":"868","author":"Alet F.","unstructured":"F. Alet, T. Lozano-Perez, and L. P. Kaelbling. 2018. Modular meta-learning. In Conf. on Robot Learning (Proc. of Machine Learning Research, Vol. 87). 856--868."},{"key":"e_1_2_2_5_1","unstructured":"M. Andrychowicz M. Denil S. G. Colmenarejo M. W. Hoffman D. Pfau T. Schaul B. Shillingford and N. de Freitas. 2016. Learning to Learn by Gradient Descent by Gradient Descent. In Neural Information Processing Systems. 3988--3996."},{"key":"e_1_2_2_6_1","doi-asserted-by":"publisher","DOI":"10.1145\/3355089.3356536"},{"key":"e_1_2_2_7_1","volume-title":"BEGAN: Boundary Equilibrium Generative Adversarial Networks. arXiv:1703.10717 [cs.LG]","author":"Berthelot D.","year":"2017","unstructured":"D. Berthelot, T. Schumm, and L. Metz. 
2017. BEGAN: Boundary Equilibrium Generative Adversarial Networks. arXiv:1703.10717 [cs.LG]"},{"key":"e_1_2_2_8_1","volume-title":"Optimizing the Latent Space of Generative Networks. In Int. Conf. on Machine Learning (Proc. of Machine Learning Research","volume":"609","author":"Bojanowski P.","unstructured":"P. Bojanowski, A. Joulin, D. Lopez-Pas, and A. Szlam. 2018. Optimizing the Latent Space of Generative Networks. In Int. Conf. on Machine Learning (Proc. of Machine Learning Research, Vol. 80). 600--609."},{"volume-title":"ACM SIGGRAPH Conf. on Motion, Interaction and Games. Article 3.","author":"Chemin J.","key":"e_1_2_2_9_1","unstructured":"J. Chemin and J. Lee. 2018. A Physics-Based Juggling Simulation Using Reinforcement Learning. In ACM SIGGRAPH Conf. on Motion, Interaction and Games. Article 3."},{"volume-title":"Empirical Evaluation of Gated Recurrent Neural Networks on Sequence Modeling. In NIPS 2014 Workshop on Deep Learning.","author":"Chung J.","key":"e_1_2_2_10_1","unstructured":"J. Chung, C. Gulcehre, K. Cho, and Y. Bengio. 2014. Empirical Evaluation of Gated Recurrent Neural Networks on Sequence Modeling. In NIPS 2014 Workshop on Deep Learning."},{"volume-title":"Learning Modular Neural Network Policies for Multi-Task and Multi-Robot Transfer. In IEEE Int. Conf. on Robotics and Automation. 2169--2176","author":"Devin C.","key":"e_1_2_2_11_1","unstructured":"C. Devin, A. Gupta, T. Darrell, P. Abbeel, and S. Levine. 2017. Learning Modular Neural Network Policies for Multi-Task and Multi-Robot Transfer. In IEEE Int. Conf. on Robotics and Automation. 2169--2176."},{"key":"e_1_2_2_12_1","unstructured":"Y. Duan J. Schulman X. Chen P. L. Bartlett I. Sutskever and P. Abbeel. 2016. RL2: Fast Reinforcement Learning via Slow Reinforcement Learning. arXiv:1611.02779 [cs.AI]"},{"key":"e_1_2_2_13_1","first-page":"616","article-title":"BlobGAN","volume":"2022","author":"Epstein D.","year":"2022","unstructured":"D. Epstein, T. Park, R. Zhang, E. Shechtman, and A. 
A. Efros. 2022. BlobGAN: Spatially Disentangled Scene Representations. In Computer Vision - ECCV 2022. 616--635.","journal-title":"Spatially Disentangled Scene Representations. In Computer Vision - ECCV"},{"volume-title":"Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks. In Int. Conf. on Machine Learning. 1126--1135","author":"Finn C.","key":"e_1_2_2_14_1","unstructured":"C. Finn, P. Abbeel, and S. Levine. 2017. Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks. In Int. Conf. on Machine Learning. 1126--1135."},{"key":"e_1_2_2_15_1","doi-asserted-by":"publisher","DOI":"10.1111\/cgf.14636"},{"key":"e_1_2_2_16_1","doi-asserted-by":"publisher","DOI":"10.5555\/2946645.2946704"},{"key":"e_1_2_2_17_1","first-page":"5769","article-title":"Improved Training of Wasserstein GANs","volume":"30","author":"Gulrajani I.","year":"2017","unstructured":"I. Gulrajani, F. Ahmed, M. Arjovsky, V. Dumoulin, and A. Courville. 2017. Improved Training of Wasserstein GANs. In Neural Information Processing Systems, Vol. 30. 5769--5779.","journal-title":"Neural Information Processing Systems"},{"key":"e_1_2_2_18_1","first-page":"5307","article-title":"Meta-Reinforcement Learning of Structured Exploration Strategies","volume":"31","author":"Gupta A.","year":"2018","unstructured":"A. Gupta, R. Mendonca, Y. Liu, P. Abbeel, and S. Levine. 2018. Meta-Reinforcement Learning of Structured Exploration Strategies. In Neural Information Processing Systems, Vol. 31. 5307--5316.","journal-title":"Neural Information Processing Systems"},{"volume-title":"Reinforcement Learning with Deep Energy-Based Policies. In Int. Conf. on Machine Learning. 1352--1361","author":"Haarnoja T.","key":"e_1_2_2_19_1","unstructured":"T. Haarnoja, H. Tang, P. Abbeel, and S. Levine. 2017. Reinforcement Learning with Deep Energy-Based Policies. In Int. Conf. on Machine Learning. 
1352--1361."},{"key":"e_1_2_2_20_1","first-page":"494","article-title":"Quantitative Evaluation Method for Pose and Motion Similarity Based on Human Perception","volume":"1","author":"Harada T.","year":"2004","unstructured":"T. Harada, S. Taoka, T. Mori, and T. Sato. 2004. Quantitative Evaluation Method for Pose and Motion Similarity Based on Human Perception. In IEEE\/RAS Int. Conf. on Humanoid Robots, Vol. 1. 494--512.","journal-title":"IEEE\/RAS Int. Conf. on Humanoid Robots"},{"key":"e_1_2_2_21_1","doi-asserted-by":"publisher","DOI":"10.1145\/3386569.3392480"},{"key":"e_1_2_2_22_1","unstructured":"N. Heess J. J. Hunt T. P. Lillicrap and D. Silver. 2015. Memory-based control with recurrent neural networks. arXiv:1512.04455 [cs.LG]"},{"key":"e_1_2_2_23_1","unstructured":"N. Heess D. TB S. Sriram J. Lemmon J. Merel G. Wayne Y. Tassa T. Erez Z. Wang S. M. A. Eslami M. Riedmiller and D. Silver. 2017. Emergence of Locomotion Behaviours in Rich Environments. arXiv:1707.02286 [cs.AI]"},{"key":"e_1_2_2_24_1","first-page":"4159","article-title":"Hierarchically Decoupled Imitation For Morphological Transfer. In 37th","volume":"119","author":"Hejna D.","year":"2020","unstructured":"D. Hejna, L. Pinto, and P. Abbeel. 2020. Hierarchically Decoupled Imitation For Morphological Transfer. In 37th Int. Conf. on Machine Learning, Vol. 119. 4159--4171.","journal-title":"Int. Conf. on Machine Learning"},{"key":"e_1_2_2_25_1","unstructured":"J. Ho and S. Ermon. 2016. Generative Adversarial Imitation Learning. Advances in Neural Information Processing Systems 29 (2016)."},{"key":"e_1_2_2_26_1","first-page":"5405","article-title":"Evolved policy gradients","volume":"31","author":"Houthooft R.","year":"2018","unstructured":"R. Houthooft, Y. Chen, P. Isola, B. Stadie, F. Wolski, O. Jonathan Ho, and P. Abbeel. 2018. Evolved policy gradients. In Neural Information Processing Systems, Vol. 31. 
5405--5414.","journal-title":"Neural Information Processing Systems"},{"key":"e_1_2_2_27_1","unstructured":"E. J. Hu Y. Shen P. Wallis Z. Allen-Zhu Y. Li S. Wang L. Wang and W. Chen. 2021. LoRA: Low-Rank Adaptation of Large Language Models. arXiv:2106.09685 [cs.CL]"},{"volume-title":"Int. Conf. on Learning Representations.","author":"Jahanian A.","key":"e_1_2_2_28_1","unstructured":"A. Jahanian, L. Chai, and P. Isola. 2020. On the \"Steerability\" of Generative Adversarial Networks. In Int. Conf. on Learning Representations."},{"volume-title":"PADL: Language-Directed Physics-Based Character Control. In SIGGRAPH Asia 2022 Conf. Papers. Article 19","author":"Juravsky J.","key":"e_1_2_2_29_1","unstructured":"J. Juravsky, Y. Guo, S. Fidler, and X. B. Peng. 2022. PADL: Language-Directed Physics-Based Character Control. In SIGGRAPH Asia 2022 Conf. Papers. Article 19."},{"volume-title":"Curriculum Learning for Motor Skills. In Canadian Conf. on Artificial Intelligence. Springer, 325--330","author":"Karpathy A.","key":"e_1_2_2_30_1","unstructured":"A. Karpathy and M. van de Panne. 2012. Curriculum Learning for Motor Skills. In Canadian Conf. on Artificial Intelligence. Springer, 325--330."},{"volume-title":"Proc. of the IEEE\/CVF Conf. on Computer Vision and Pattern Recognition. 8110--8119","author":"Karras T.","key":"e_1_2_2_31_1","unstructured":"T. Karras, S. Laine, M. Aittala, J. Hellsten, J. Lehtinen, and T. Aila. 2020. Analyzing and Improving the Image Quality of StyleGAN. In Proc. of the IEEE\/CVF Conf. on Computer Vision and Pattern Recognition. 8110--8119."},{"key":"e_1_2_2_32_1","volume-title":"Adam: A Method for Stochastic Optimization. arXiv:1412.6980 [cs.LG]","author":"Kingma D. P.","year":"2017","unstructured":"D. P. Kingma and J. Ba. 2017. Adam: A Method for Stochastic Optimization. 
arXiv:1412.6980 [cs.LG]"},{"key":"e_1_2_2_33_1","doi-asserted-by":"publisher","DOI":"10.1111\/cgf.14504"},{"volume-title":"Measuring the Intrinsic Dimension of Objective Landscapes. In Int. Conf. on Learning Representations.","author":"Li C.","key":"e_1_2_2_34_1","unstructured":"C. Li, H. Farkhoor, R. Liu, and J. Yosinski. 2018. Measuring the Intrinsic Dimension of Objective Landscapes. In Int. Conf. on Learning Representations."},{"key":"e_1_2_2_35_1","unstructured":"J. H. Lim and J. C. Ye. 2017. Geometric GAN. arXiv:1705.02894 [stat.ML]"},{"key":"e_1_2_2_36_1","doi-asserted-by":"publisher","DOI":"10.1145\/3386569.3392422"},{"key":"e_1_2_2_37_1","doi-asserted-by":"publisher","DOI":"10.1145\/3072959.3083723"},{"key":"e_1_2_2_38_1","doi-asserted-by":"publisher","DOI":"10.1145\/3197517.3201315"},{"volume-title":"ACM SIGGRAPH Conf. on Motion, Interaction and Games. Article 15","author":"Luo Y.","key":"e_1_2_2_39_1","unstructured":"Y. Luo, K. Xie, S. Andrews, and P. Kry. 2021. Catching and Throwing Control of a Physically Simulated Hand. In ACM SIGGRAPH Conf. on Motion, Interaction and Games. Article 15."},{"key":"e_1_2_2_40_1","volume-title":"Isaac Gym: High Performance GPU-Based Physics Simulation For Robot Learning. arXiv:2108.10470 [cs.RO]","author":"Makoviychuk V.","year":"2021","unstructured":"V. Makoviychuk, L. Wawrzyniak, Y. Guo, M. Lu, K. Storey, M. Macklin, D. Hoeller, N. Rudin, A. Allshire, A. Handa, and G. State. 2021. Isaac Gym: High Performance GPU-Based Physics Simulation For Robot Learning. arXiv:2108.10470 [cs.RO]"},{"key":"e_1_2_2_41_1","doi-asserted-by":"publisher","DOI":"10.1111\/cgf.13555"},{"key":"e_1_2_2_42_1","unstructured":"J. Merel Y. Tassa D. TB S. Srinivasan J. Lemmon Z. Wang G. Wayne and N. Heess. 2017. Learning human behaviors from motion capture by adversarial imitation. 
arXiv:1707.02201 [cs.RO]"},{"key":"e_1_2_2_43_1","doi-asserted-by":"publisher","DOI":"10.1145\/3386569.3392474"},{"key":"e_1_2_2_44_1","doi-asserted-by":"crossref","unstructured":"C. Mou X. Wang L. Xie Y. Wu J. Zhang Z. Qi Y. Shan and X. Qie. 2023. T2I-Adapter: Learning Adapters to Dig out More Controllable Ability for Text-to-Image Diffusion Models. arXiv:2302.08453 [cs.CV]","DOI":"10.1609\/aaai.v38i5.28226"},{"key":"e_1_2_2_45_1","unstructured":"A. Nichol J. Achiam and J. Schulman. 2018. On First-Order Meta-Learning Algorithms. arXiv:1803.02999 [cs.LG]"},{"volume-title":"Actor-Mimic: Deep Multitask and Transfer Reinforcement Learning. In Int. Conf. on Learning Representations.","author":"Parisotto E.","key":"e_1_2_2_46_1","unstructured":"E. Parisotto, L. J. Ba, and R. Salakhutdinov. 2016. Actor-Mimic: Deep Multitask and Transfer Reinforcement Learning. In Int. Conf. on Learning Representations."},{"key":"e_1_2_2_47_1","doi-asserted-by":"publisher","DOI":"10.1145\/3197517.3201311"},{"volume-title":"Sim-to-Real Transfer of Robotic Control with Dynamics Randomization. In IEEE Int. Conf. on Robotics and Automation. 3803--3810","author":"Peng X. B.","key":"e_1_2_2_48_1","unstructured":"X. B. Peng, M. Andrychowicz, W. Zaremba, and P. Abbeel. 2018b. Sim-to-Real Transfer of Robotic Control with Dynamics Randomization. In IEEE Int. Conf. on Robotics and Automation. 3803--3810."},{"key":"e_1_2_2_49_1","volume-title":"MCP: Learning Composable Hierarchical Control with Multiplicative Compositional Policies. In Advances in Neural Information Processing Systems. 3681--3692.","author":"Peng X. B.","year":"2019","unstructured":"X. B. Peng, M. Chang, G. Zhang, P. Abbeel, and S. Levine. 2019. MCP: Learning Composable Hierarchical Control with Multiplicative Compositional Policies. In Advances in Neural Information Processing Systems. 
3681--3692."},{"key":"e_1_2_2_50_1","doi-asserted-by":"publisher","DOI":"10.1145\/3528223.3530110"},{"key":"e_1_2_2_51_1","doi-asserted-by":"publisher","DOI":"10.1145\/3450626.3459670"},{"volume-title":"The Intrinsic Dimension of Images and Its Impact on Learning. In Int. Conf. on Learning Representations.","author":"Pope P.","key":"e_1_2_2_52_1","unstructured":"P. Pope, C. Zhu, A. Abdelkader, M. Goldblum, and T. Goldstein. 2021. The Intrinsic Dimension of Images and Its Impact on Learning. In Int. Conf. on Learning Representations."},{"key":"e_1_2_2_53_1","unstructured":"A. Radford L. Metz and S. Chintala. 2016. Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks. arXiv:1511.06434 [cs.LG]"},{"volume-title":"EPOpt: Learning Robust Neural Network Policies Using Model Ensembles. In Int. Conf. on Learning Representations.","author":"Rajeswaran A.","key":"e_1_2_2_54_1","unstructured":"A. Rajeswaran, S. Ghotra, B. Ravindran, and S. Levine. 2017. EPOpt: Learning Robust Neural Network Policies Using Model Ensembles. In Int. Conf. on Learning Representations."},{"volume-title":"Int. Conf. on Learning Representations.","author":"Ravi S.","key":"e_1_2_2_55_1","unstructured":"S. Ravi and H. Larochelle. 2017. Optimization as a Model for Few-Shot Learning. In Int. Conf. on Learning Representations."},{"key":"e_1_2_2_56_1","unstructured":"A. A. Rusu S. G. Colmenarejo C. Gulcehre G. Desjardins J. Kirkpatrick R. Pascanu V. Mnih K. Kavukcuoglu and R. Hadsell. 2016a. Policy Distillation. arXiv:1511.06295 [cs.LG]"},{"key":"e_1_2_2_57_1","unstructured":"A. A. Rusu N. C. Rabinowitz G. Desjardins H. Soyer J. Kirkpatrick K. Kavukcuoglu R. Pascanu and R. Hadsell. 2016b. Progressive Neural Networks. arXiv:1606.04671 [cs.LG]"},{"volume-title":"Conf. on Robot Learning. 262--270","author":"Rusu A. A.","key":"e_1_2_2_58_1","unstructured":"A. A. Rusu, M. Ve\u010der\u00edk, T. Roth\u00f6rl, N. Heess, R. Pascanu, and R. Hadsell. 2017. 
Sim-to-Real Robot Learning from Pixels with Progressive Nets. In Conf. on Robot Learning. 262--270."},{"key":"e_1_2_2_59_1","unstructured":"J. Schulman F. Wolski P. Dhariwal A. Radford and O. Klimov. 2017. Proximal Policy Optimization Algorithms. arXiv:1707.06347 [cs.LG]"},{"volume-title":"Proc. of the IEEE\/CVF Conf. on Computer Vision and Pattern Recognition. 9243--9252","author":"Shen Y.","key":"e_1_2_2_60_1","unstructured":"Y. Shen, J. Gu, X. Tang, and B. Zhou. 2020. Interpreting the Latent Space of GANs for Semantic Face Editing. In Proc. of the IEEE\/CVF Conf. on Computer Vision and Pattern Recognition. 9243--9252."},{"key":"e_1_2_2_61_1","unstructured":"T. Silver K. Allen J. Tenenbaum and L. Kaelbling. 2019. Residual Policy Learning. arXiv:1812.06298 [cs.RO]"},{"key":"e_1_2_2_62_1","doi-asserted-by":"publisher","DOI":"10.1145\/3528223.3530178"},{"key":"e_1_2_2_63_1","doi-asserted-by":"publisher","DOI":"10.1002\/cav.260"},{"volume-title":"Proceedings. Article 47","author":"Tao T.","key":"e_1_2_2_64_1","unstructured":"T. Tao, M. Wilson, R. Gou, and M. van de Panne. 2022. Learning to Get Up. In ACM SIGGRAPH 2022 Conf. Proceedings. Article 47."},{"volume-title":"Proceedings. Article 37","author":"Tessler C.","key":"e_1_2_2_65_1","unstructured":"C. Tessler, Y. Kasten, Y. Guo, S. Mannor, G. Chechik, and X. B. Peng. 2023. CALM: Conditional Adversarial Latent Models for Directable Virtual Characters. In ACM SIGGRAPH 2023 Conf. Proceedings. Article 37."},{"volume-title":"Adversarial Discriminative Domain Adaptation. In IEEE Conf. on Computer Vision and Pattern Recognition. 2962--2971","author":"Tzeng E.","key":"e_1_2_2_66_1","unstructured":"E. Tzeng, J. Hoffman, K. Saenko, and T. Darrell. 2017. Adversarial Discriminative Domain Adaptation. In IEEE Conf. on Computer Vision and Pattern Recognition. 2962--2971."},{"volume-title":"Int. Conf. on Learning Representations.","author":"Wang D.","key":"e_1_2_2_67_1","unstructured":"D. Wang, E. Shelhamer, S. Liu, B. A. 
Olshausen, and T. Darrell. 2021. Tent: Fully Test-Time Adaptation by Entropy Minimization. In Int. Conf. on Learning Representations."},{"key":"e_1_2_2_68_1","doi-asserted-by":"publisher","DOI":"10.1145\/3450626.3459761"},{"key":"e_1_2_2_69_1","volume-title":"Advances in Neural Information Processing Systems","volume":"29","author":"Wu J.","unstructured":"J. Wu, C. Zhang, T. Xue, B. Freeman, and J. Tenenbaum. 2016. Learning a Probabilistic Latent Space of Object Shapes via 3D Generative-Adversarial Modeling. In Advances in Neural Information Processing Systems, Vol. 29."},{"key":"e_1_2_2_70_1","doi-asserted-by":"publisher","DOI":"10.1111\/cgf.14115"},{"volume-title":"Proceedings. Article 25","author":"Xie Z.","key":"e_1_2_2_71_1","unstructured":"Z. Xie, S. Starke, H. Y. Ling, and M. van de Panne. 2022. Learning Soccer Juggling Skills with Layer-Wise Mixture-of-Experts. In ACM SIGGRAPH 2022 Conf. Proceedings. Article 25."},{"key":"e_1_2_2_72_1","volume-title":"Proc. of the ACM on Computer Graphics and Interactive Techniques 4, 3, Article 44","author":"Xu P.","year":"2021","unstructured":"P. Xu and I. Karamouzas. 2021. A GAN-Like Approach for Physics-Based Imitation Learning and Interactive Character Control. Proc. of the ACM on Computer Graphics and Interactive Techniques 4, 3, Article 44 (2021)."},{"key":"e_1_2_2_73_1","doi-asserted-by":"publisher","DOI":"10.1145\/3592447"},{"key":"e_1_2_2_74_1","volume-title":"Meta-Gradient Reinforcement Learning. In Advances in Neural Information Processing Systems","volume":"31","author":"Xu Z.","unstructured":"Z. Xu, H. P. van Hasselt, and D. Silver. 2018. Meta-Gradient Reinforcement Learning. In Advances in Neural Information Processing Systems, Vol. 
31."},{"key":"e_1_2_2_75_1","doi-asserted-by":"publisher","DOI":"10.1145\/3550454.3555434"},{"key":"e_1_2_2_76_1","doi-asserted-by":"publisher","DOI":"10.1145\/3450626.3459817"},{"key":"e_1_2_2_77_1","doi-asserted-by":"publisher","DOI":"10.1145\/3197517.3201397"},{"key":"e_1_2_2_78_1","doi-asserted-by":"crossref","unstructured":"L. Zhang and M. Agrawala. 2023. Adding Conditional Control to Text-to-Image Diffusion Models. arXiv:2302.05543 [cs.CV]","DOI":"10.1109\/ICCV51070.2023.00355"},{"volume-title":"Int. Conf. on Learning Representations.","author":"Zhuang P.","key":"e_1_2_2_79_1","unstructured":"P. Zhuang, O. O. Koyejo, and A. Schwing. 2021. Enjoy Your Editing: Controllable GANs for Image Editing via Latent Space Navigation. In Int. Conf. on Learning Representations."}],"container-title":["ACM Transactions on Graphics"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3618375","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2024,12,4]],"date-time":"2024-12-04T11:43:53Z","timestamp":1733312633000},"score":1,"resource":{"primary":{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3618375"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2023,12,5]]},"references-count":79,"journal-issue":{"issue":"6","published-print":{"date-parts":[[2023,12,5]]}},"alternative-id":["10.1145\/3618375"],"URL":"https:\/\/doi.org\/10.1145\/3618375","relation":{},"ISSN":["0730-0301","1557-7368"],"issn-type":[{"type":"print","value":"0730-0301"},{"type":"electronic","value":"1557-7368"}],"subject":[],"published":{"date-parts":[[2023,12,5]]},"assertion":[{"value":"2023-12-05","order":3,"name":"published","label":"Published","group":{"name":"publication_history","label":"Publication History"}}]}}