SciTePress - Publication Details

Title: BOOK: Storing Algorithm-Invariant Episodes for Deep Reinforcement Learning

Authors: Simyung Chang 1; YoungJoon Yoo 2; Jaeseok Choi 3 and Nojun Kwak 3

Affiliations: 1 Seoul National University, Seoul, Korea and Samsung Electronics, Suwon, Korea; 2 Clova AI Research, NAVER Corp., Seongnam, Korea; 3 Seoul National University, Seoul, Korea

Keyword(s): BOOK, Book Learning, Reinforcement Learning.

Abstract: We introduce a novel method to train reinforcement learning (RL) agents by sharing knowledge in a way similar to the human use of a book. Recorded information in the form of a book is one of the main means by which humans acquire knowledge. Nevertheless, conventional deep RL methods have mainly focused either on experiential learning, where the agent learns through interactions with the environment from scratch, or on imitation learning, which tries to mimic a teacher. In contrast, our proposed book learning shares key information among different agents in a book-like manner by exploiting the following two characteristic features: (1) By defining a linguistic function, input states can be clustered semantically into a relatively small number of core clusters, which are forwarded to other RL agents in a prescribed manner. (2) By defining state priorities and the contents to record, core experiences can be selected and stored in a small container. We call this container 'BOOK'. Our method learns hundreds to thousands of times faster than conventional methods by learning from only a handful of core cluster information, which shows that deep RL agents can effectively learn through knowledge shared by other agents.
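The abstract describes the BOOK container only at a high level. As a rough illustrative sketch (not the paper's actual implementation), the Python snippet below assumes a hypothetical Book class: a hash-based cluster_id stands in for the paper's learned linguistic function, and priority is a generic salience score (e.g., TD-error or reward magnitude); all names and details here are assumptions.

import numpy as np

class Book:
    """Small container of core experiences ('BOOK'), sketched from the abstract.

    States are mapped to a small number of clusters (a placeholder for the
    paper's linguistic function), and only the highest-priority experience
    per cluster is kept, so the stored knowledge stays compact and can be
    read by agents trained with different RL algorithms.
    """

    def __init__(self, num_clusters=64):
        self.num_clusters = num_clusters
        # cluster_id -> (priority, state, action, reward, next_state)
        self.entries = {}

    def cluster_id(self, state):
        # Placeholder clustering: coarse discretization hashed into a fixed
        # number of bins; the paper instead defines a learned linguistic function.
        coarse = tuple(np.round(np.atleast_1d(np.asarray(state, dtype=float)), 1))
        return hash(coarse) % self.num_clusters

    def record(self, state, action, reward, next_state, priority):
        # Keep only the most salient experience seen so far for each cluster.
        cid = self.cluster_id(state)
        if cid not in self.entries or priority > self.entries[cid][0]:
            self.entries[cid] = (priority, state, action, reward, next_state)

    def read(self):
        # Another agent 'reads' the book: a handful of core transitions that
        # can be mixed into its own replay or used for auxiliary updates.
        return [entry[1:] for entry in self.entries.values()]

if __name__ == "__main__":
    book = Book(num_clusters=32)
    book.record(state=[0.12, -0.5], action=1, reward=1.0,
                next_state=[0.20, -0.4], priority=0.8)
    print(len(book.read()), "core experience(s) stored")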

CC BY-NC-ND 4.0


Paper citation in several formats:
Chang, S.; Yoo, Y.; Choi, J. and Kwak, N. (2019). BOOK: Storing Algorithm-Invariant Episodes for Deep Reinforcement Learning. In Proceedings of the 8th International Conference on Pattern Recognition Applications and Methods - ICPRAM; ISBN 978-989-758-351-3; ISSN 2184-4313, SciTePress, pages 73-82. DOI: 10.5220/0007308000730082

@conference{icpram19,
author={Simyung Chang and YoungJoon Yoo and Jaeseok Choi and Nojun Kwak},
title={BOOK: Storing Algorithm-Invariant Episodes for Deep Reinforcement Learning},
booktitle={Proceedings of the 8th International Conference on Pattern Recognition Applications and Methods - ICPRAM},
year={2019},
pages={73-82},
publisher={SciTePress},
organization={INSTICC},
doi={10.5220/0007308000730082},
isbn={978-989-758-351-3},
issn={2184-4313},
}

TY - CONF

JO - Proceedings of the 8th International Conference on Pattern Recognition Applications and Methods - ICPRAM
TI - BOOK: Storing Algorithm-Invariant Episodes for Deep Reinforcement Learning
SN - 978-989-758-351-3
IS - 2184-4313
AU - Chang, S.
AU - Yoo, Y.
AU - Choi, J.
AU - Kwak, N.
PY - 2019
SP - 73
EP - 82
DO - 10.5220/0007308000730082
PB - SciTePress