Modular representation of layered neural networks
- PMID: 29096203
- DOI: 10.1016/j.neunet.2017.09.017
Abstract
Layered neural networks have greatly improved the performance of various applications, including image processing, speech recognition, natural language processing, and bioinformatics. However, it is still difficult to discover or interpret knowledge from the inference provided by a layered neural network, since its internal representation consists of many nonlinear and complex parameters embedded in hierarchical layers. It is therefore important to establish a methodology by which layered neural networks can be understood. In this paper, we propose a new method for extracting a global, simplified structure from a layered neural network. Based on network analysis, the proposed method detects communities, or clusters, of units with similar connection patterns. We show its effectiveness by applying it to three use cases. (1) Network decomposition: the method can decompose a trained neural network into multiple small independent networks, thereby dividing the problem and reducing computation time. (2) Training assessment: the appropriateness of a result trained with a given hyperparameter or randomly chosen initial parameters can be evaluated with a modularity index. (3) Data analysis: applied to practical data, the method reveals the community structure in the input, hidden, and output layers, which serves as a clue for discovering knowledge from a trained neural network.
Keywords: Community detection; Layered neural networks; Network analysis.
Copyright © 2017 Elsevier Ltd. All rights reserved.
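
Illustrative sketch (not from the paper): the abstract describes detecting communities of units with similar connection patterns and scoring a trained result with a modularity index. The following minimal Python sketch shows one way such an analysis could look, assuming NetworkX's greedy modularity maximization as a stand-in for the authors' community-detection method and using random weight matrices W1 and W2 as placeholders for trained parameters.

# Minimal sketch: community detection on a toy layered network's weights.
# Assumptions (not from the paper): greedy modularity maximization replaces
# the authors' method; W1 and W2 are random placeholders, not trained weights.
import numpy as np
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities, modularity

rng = np.random.default_rng(0)
# Toy "trained" weights: 6 input -> 4 hidden -> 2 output units.
W1 = rng.normal(size=(6, 4))
W2 = rng.normal(size=(4, 2))

# Build an undirected weighted graph over all units; edge weights are
# absolute connection strengths between units in adjacent layers.
G = nx.Graph()
layers = [[f"in{i}" for i in range(6)],
          [f"hid{i}" for i in range(4)],
          [f"out{i}" for i in range(2)]]
for W, (src, dst) in zip([W1, W2], zip(layers, layers[1:])):
    for i, u in enumerate(src):
        for j, v in enumerate(dst):
            G.add_edge(u, v, weight=abs(W[i, j]))

# Detect communities (clusters of units with similar connection patterns)
# and report the modularity of the resulting partition.
communities = greedy_modularity_communities(G, weight="weight")
Q = modularity(G, communities, weight="weight")
print("communities:", [sorted(c) for c in communities])
print("modularity Q = %.3f" % Q)

In the spirit of the paper's second use case, a modularity value computed this way could be compared across hyperparameter settings or random initializations to assess how appropriate a trained result is.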