{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2024,7,15]],"date-time":"2024-07-15T11:13:59Z","timestamp":1721042039734},"reference-count":0,"publisher":"Association for the Advancement of Artificial Intelligence (AAAI)","issue":"01","license":[{"start":{"date-parts":[[2019,7,17]],"date-time":"2019-07-17T00:00:00Z","timestamp":1563321600000},"content-version":"unspecified","delay-in-days":0,"URL":"https:\/\/www.aaai.org"}],"content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":["AAAI"],"abstract":"We explore the loss landscape of fully-connected and convolutional neural networks using random, low-dimensional hyperplanes and hyperspheres. Evaluating the Hessian, H, of the loss function on these hypersurfaces, we observe 1) an unusual excess of the number of positive eigenvalues of H, and 2) a large value of Tr(H)\/||H|| at a well defined range of configuration space radii, corresponding to a thick, hollow, spherical shell we refer to as the Goldilocks zone. We observe this effect for fully-connected neural networks over a range of network widths and depths on MNIST and CIFAR-10 datasets with the ReLU and tanh non-linearities, and a similar effect for convolutional networks. Using our observations, we demonstrate a close connection between the Goldilocks zone, measures of local convexity\/prevalence of positive curvature, and the suitability of a network initialization. We show that the high and stable accuracy reached when optimizing on random, low-dimensional hypersurfaces is directly related to the overlap between the hypersurface and the Goldilocks zone, and as a corollary demonstrate that the notion of intrinsic dimension is initialization-dependent. We note that common initialization techniques initialize neural networks in this particular region of unusually high convexity\/prevalence of positive curvature, and offer a geometric intuition for their success. Furthermore, we demonstrate that initializing a neural network at a number of points and selecting for high measures of local convexity such as Tr(H)\/||H||, number of positive eigenvalues of H, or low initial loss, leads to statistically significantly faster training on MNIST. 
Based on our observations, we hypothesize that the Goldilocks zone contains an unusually high density of suitable initialization configurations.<\/jats:p>","DOI":"10.1609\/aaai.v33i01.33013574","type":"journal-article","created":{"date-parts":[[2019,9,6]],"date-time":"2019-09-06T07:43:54Z","timestamp":1567755834000},"page":"3574-3581","source":"Crossref","is-referenced-by-count":5,"title":["The Goldilocks Zone: Towards Better Understanding of Neural Network Loss Landscapes"],"prefix":"10.1609","volume":"33","author":[{"given":"Stanislav","family":"Fort","sequence":"first","affiliation":[]},{"given":"Adam","family":"Scherlis","sequence":"additional","affiliation":[]}],"member":"9382","published-online":{"date-parts":[[2019,7,17]]},"container-title":["Proceedings of the AAAI Conference on Artificial Intelligence"],"original-title":[],"link":[{"URL":"https:\/\/ojs.aaai.org\/index.php\/AAAI\/article\/download\/4237\/4115","content-type":"application\/pdf","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/ojs.aaai.org\/index.php\/AAAI\/article\/download\/4237\/4115","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2022,11,7]],"date-time":"2022-11-07T06:35:44Z","timestamp":1667802944000},"score":1,"resource":{"primary":{"URL":"https:\/\/ojs.aaai.org\/index.php\/AAAI\/article\/view\/4237"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2019,7,17]]},"references-count":0,"journal-issue":{"issue":"01","published-online":{"date-parts":[[2019,7,23]]}},"URL":"https:\/\/doi.org\/10.1609\/aaai.v33i01.33013574","relation":{},"ISSN":["2374-3468","2159-5399"],"issn-type":[{"value":"2374-3468","type":"electronic"},{"value":"2159-5399","type":"print"}],"subject":[],"published":{"date-parts":[[2019,7,17]]}}}
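The two diagnostics the abstract leans on, the fraction of positive eigenvalues of H and the ratio Tr(H)/||H||, are computed after restricting the loss to a random low-dimensional hyperplane through a point at a chosen configuration-space radius. The sketch below is a minimal NumPy illustration of that procedure, not the authors' code: the toy loss, the finite-difference Hessian, and the use of the Frobenius norm for ||H|| are all assumptions made here for concreteness.

```python
# Minimal sketch (not the authors' code) of the two diagnostics described in
# the abstract: fraction of positive Hessian eigenvalues and Tr(H)/||H||,
# measured on the loss restricted to a random d-dimensional hyperplane
# through a point theta placed at a chosen radius in configuration space.
import numpy as np

def hessian_fd(f, x, eps=1e-3):
    """Central finite-difference Hessian of scalar f at x (illustrative stand-in
    for exact Hessian-vector products on a real network loss)."""
    n = x.size
    H = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            e_i = np.zeros(n); e_i[i] = eps
            e_j = np.zeros(n); e_j[j] = eps
            H[i, j] = (f(x + e_i + e_j) - f(x + e_i - e_j)
                       - f(x - e_i + e_j) + f(x - e_i - e_j)) / (4 * eps**2)
    return 0.5 * (H + H.T)  # symmetrize against finite-difference noise

def goldilocks_stats(f, theta, d, rng):
    """Restrict f to a random d-dim hyperplane through theta and return the
    positive-eigenvalue fraction and Tr(H)/||H|| of the restricted Hessian.
    ||H|| is taken to be the Frobenius norm, an assumption made here."""
    n = theta.size
    A = rng.standard_normal((n, d))
    A, _ = np.linalg.qr(A)              # orthonormal basis of the hyperplane
    g = lambda z: f(theta + A @ z)      # loss restricted to the hyperplane
    H = hessian_fd(g, np.zeros(d))
    eig = np.linalg.eigvalsh(H)
    frac_pos = float(np.mean(eig > 0))
    trace_ratio = eig.sum() / np.linalg.norm(H, "fro")
    return frac_pos, trace_ratio

rng = np.random.default_rng(0)
n, d = 50, 8
toy_loss = lambda w: np.log(1 + np.sum(np.tanh(w)**2))  # stand-in for a network loss
for radius in [0.1, 1.0, 10.0]:
    theta = rng.standard_normal(n)
    theta *= radius / np.linalg.norm(theta)  # place theta at this radius
    fp, tr = goldilocks_stats(toy_loss, theta, d, rng)
    print(f"radius {radius:5.1f}: positive-eig fraction {fp:.2f}, Tr(H)/||H|| {tr:+.3f}")
```

Sweeping the radius with this kind of probe is how one would look for a shell of unusually high positive curvature; on a real network, the finite-difference Hessian would be replaced by exact Hessian-vector products for efficiency.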
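The final experimental claim, that drawing several initializations and keeping the one with the highest local-convexity score trains faster, can be sketched on top of the helper above. The selection criterion shown (largest Tr(H)/||H||), the init scale, and the value of k are illustrative assumptions; the abstract also names the number of positive eigenvalues of H and low initial loss as alternative criteria.

```python
# Hypothetical best-of-k initialization selector in the spirit of the paper's
# experiment: sample k random parameter vectors and keep the one whose
# restricted Hessian has the largest Tr(H)/||H||. Reuses goldilocks_stats
# from the sketch above; init_scale and k are illustrative choices.
def select_init(f, n, d, k, rng, init_scale=0.1):
    best_theta, best_score = None, -np.inf
    for _ in range(k):
        theta = init_scale * rng.standard_normal(n)
        _, score = goldilocks_stats(f, theta, d, rng)
        if score > best_score:
            best_theta, best_score = theta, score
    return best_theta, best_score

theta0, score0 = select_init(toy_loss, n=50, d=8, k=16, rng=rng)
print(f"selected init with Tr(H)/||H|| = {score0:+.3f}")
```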