{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2024,9,7]],"date-time":"2024-09-07T06:34:46Z","timestamp":1725690886371},"reference-count":0,"publisher":"Association for the Advancement of Artificial Intelligence (AAAI)","issue":"9","content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":["AAAI"],"abstract":"Transformer models are widely used in AI applications such as Natural Language Processing (NLP), Computer Vision (CV), etc. However, enormous computation workload be-comes an obstacle to train large transformer models efficiently. Recently, some methods focus on reducing the computation workload during the training by skipping some layers. How-ever, these methods use simple probability distribution and coarse-grained probability calculation, which significantly affect the model accuracy. To address the issue, in this paper we propose a novel method to accelerate training\u2014Sensitivity-Based Layer Dropping (SBLD). SBLD uses lay-er-wise sensitivity data to switch on\/off transformer layers in proper order to keep high accuracy. Besides, we adjust the probability of skipping transformer layers with a scheduler to accelerate training speed and get faster convergence. Our results show that SBLD solves the accuracy drop issue com-pared with prior layer dropping methods. Our SBLD method can decrease end-to-end training time by 19.67% during training of GPT-3 Medium model, the same time increasing the accuracy by 1.65% w.r.t. baseline. Furthermore, for SwinV2-L model the obtained Top-1 and Top-5 accuracies are also higher vs. the baseline. Thus, the proposed method is efficient and practical to improve the large transformer model training.<\/jats:p>","DOI":"10.1609\/aaai.v37i9.26321","type":"journal-article","created":{"date-parts":[[2023,6,27]],"date-time":"2023-06-27T17:47:00Z","timestamp":1687888020000},"page":"11156-11163","source":"Crossref","is-referenced-by-count":1,"title":["Acceleration of Large Transformer Model Training by Sensitivity-Based Layer Dropping"],"prefix":"10.1609","volume":"37","author":[{"given":"Yujie","family":"Zeng","sequence":"first","affiliation":[]},{"given":"Wenlong","family":"He","sequence":"additional","affiliation":[]},{"given":"Ihor","family":"Vasyltsov","sequence":"additional","affiliation":[]},{"given":"Jiali","family":"Pang","sequence":"additional","affiliation":[]},{"given":"Lin","family":"Chen","sequence":"additional","affiliation":[]}],"member":"9382","published-online":{"date-parts":[[2023,6,26]]},"container-title":["Proceedings of the AAAI Conference on Artificial 
Intelligence"],"original-title":[],"link":[{"URL":"https:\/\/ojs.aaai.org\/index.php\/AAAI\/article\/download\/26321\/26093","content-type":"application\/pdf","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/ojs.aaai.org\/index.php\/AAAI\/article\/download\/26321\/26093","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2023,6,27]],"date-time":"2023-06-27T17:47:01Z","timestamp":1687888021000},"score":1,"resource":{"primary":{"URL":"https:\/\/ojs.aaai.org\/index.php\/AAAI\/article\/view\/26321"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2023,6,26]]},"references-count":0,"journal-issue":{"issue":"9","published-online":{"date-parts":[[2023,6,27]]}},"URL":"https:\/\/doi.org\/10.1609\/aaai.v37i9.26321","relation":{},"ISSN":["2374-3468","2159-5399"],"issn-type":[{"value":"2374-3468","type":"electronic"},{"value":"2159-5399","type":"print"}],"subject":[],"published":{"date-parts":[[2023,6,26]]}}}