IEEE Trans Neural Netw Learn Syst. 2022 May;33(5):1925-1934.
doi: 10.1109/TNNLS.2021.3111019. Epub 2022 May 2.

Triple-Memory Networks: A Brain-Inspired Method for Continual Learning

Liyuan Wang et al. IEEE Trans Neural Netw Learn Syst. 2022 May.

Abstract

Continual acquisition of novel experience without interfering with previously learned knowledge, i.e., continual learning, is critical for artificial neural networks, but it is limited by catastrophic forgetting: a neural network adjusts its parameters when learning a new task and then fails to perform the old tasks well. By contrast, the biological brain effectively avoids catastrophic forgetting by consolidating memories in more specific or more generalized forms that complement each other, an interplay between the hippocampus and the neocortex mediated by the prefrontal cortex. Inspired by this brain strategy, we propose a novel approach named triple-memory networks (TMNs) for continual learning. TMNs model the interplay of the three brain regions as a triple-network architecture of generative adversarial networks (GANs). The input information is encoded as specific representations of the data distributions in a generator, or as generalized knowledge for solving tasks in a discriminator and a classifier, and appropriate brain-inspired algorithms are implemented to alleviate catastrophic forgetting in each module. In particular, the generator replays generated data of the learned tasks to the discriminator and the classifier, both of which use a weight consolidation regularizer to complement the information lost in the generation process. TMNs achieve state-of-the-art performance among generative memory replay methods on a variety of class-incremental learning benchmarks on MNIST, SVHN, CIFAR-10, and ImageNet-50.
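As a rough illustration of how generative replay can be combined with a weight-consolidation regularizer, the sketch below shows a single classifier update step in PyTorch. It is not the authors' implementation of TMNs; the generator interface (`generator.latent_dim`), the Fisher-style importance estimates, the frozen labeling copy `old_label_fn`, and the hyperparameters `lam` and `replay_ratio` are assumptions made only for this example.

```python
# Illustrative sketch (not the authors' code): one continual-learning update that
# combines generative replay with an EWC-style weight-consolidation penalty.
import torch
import torch.nn.functional as F

def consolidation_penalty(model, fisher, old_params):
    """Quadratic penalty that keeps parameters important for old tasks
    (per a Fisher-information-like importance estimate) close to their
    previously consolidated values."""
    loss = 0.0
    for name, p in model.named_parameters():
        if name in fisher:
            loss = loss + (fisher[name] * (p - old_params[name]) ** 2).sum()
    return loss

def train_step(classifier, generator, optimizer, x_new, y_new,
               fisher, old_params, old_label_fn, lam=100.0, replay_ratio=1.0):
    """One update on the current task's batch plus replayed pseudo-data.

    generator    : frozen generator trained on the previous tasks (assumed API)
    old_label_fn : a copy of the classifier frozen after the last task,
                   used to label the generated replay samples
    """
    classifier.train()
    optimizer.zero_grad()

    # Loss on the current task's real data.
    loss = F.cross_entropy(classifier(x_new), y_new)

    # Generative replay: sample pseudo-data for old tasks and match the
    # predictions of the previously consolidated classifier on them.
    n_replay = int(replay_ratio * x_new.size(0))
    if n_replay > 0:
        with torch.no_grad():
            z = torch.randn(n_replay, generator.latent_dim, device=x_new.device)
            x_old = generator(z)
            y_old = old_label_fn(x_old).argmax(dim=1)
        loss = loss + F.cross_entropy(classifier(x_old), y_old)

    # Weight consolidation complements whatever the replayed data misses.
    loss = loss + lam * consolidation_penalty(classifier, fisher, old_params)

    loss.backward()
    optimizer.step()
    return loss.item()
```

In this sketch, replay supplies old-task supervision through generated samples, while the quadratic penalty protects weights that the importance estimates mark as critical for earlier tasks; the abstract describes these two mechanisms as complementary.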

