{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2024,8,7]],"date-time":"2024-08-07T07:41:08Z","timestamp":1723016468822},"publisher-location":"California","reference-count":0,"publisher":"International Joint Conferences on Artificial Intelligence Organization","content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":[],"published-print":{"date-parts":[[2020,7]]},"abstract":"The inherent heavy computation of deep neural networks prevents their widespread applications. A widely used method for accelerating model inference is quantization, by replacing the input operands of a network using fixed-point values. Then the majority of computation costs focus on the integer matrix multiplication accumulation. In fact, high-bit accumulator leads to partially wasted computation and low-bit one typically suffers from numerical overflow. To address this problem, we propose an overflow aware quantization method by designing trainable adaptive fixed-point representation, to optimize the number of bits for each input tensor while prohibiting numeric overflow during the computation. With the proposed method, we are able to fully utilize the computing power to minimize the quantization loss and obtain optimized inference performance. To verify the effectiveness of our method, we conduct image classification, object detection, and semantic segmentation tasks on ImageNet, Pascal VOC, and COCO datasets, respectively. Experimental results demonstrate that the proposed method can achieve comparable performance with state-of-the-art quantization methods while accelerating the inference process by about 2 times.<\/jats:p>","DOI":"10.24963\/ijcai.2020\/121","type":"proceedings-article","created":{"date-parts":[[2020,7,8]],"date-time":"2020-07-08T08:12:10Z","timestamp":1594195930000},"page":"868-875","source":"Crossref","is-referenced-by-count":7,"title":["Overflow Aware Quantization: Accelerating Neural Network Inference by Low-bit Multiply-Accumulate Operations"],"prefix":"10.24963","author":[{"given":"Hongwei","family":"Xie","sequence":"first","affiliation":[{"name":"Alibaba Group"}]},{"given":"Yafei","family":"Song","sequence":"additional","affiliation":[{"name":"Alibaba Group"}]},{"given":"Ling","family":"Cai","sequence":"additional","affiliation":[{"name":"Alibaba Group"}]},{"given":"Mingyang","family":"Li","sequence":"additional","affiliation":[{"name":"Alibaba Group"}]}],"member":"10584","event":{"number":"28","sponsor":["International Joint Conferences on Artificial Intelligence Organization (IJCAI)"],"acronym":"IJCAI-PRICAI-2020","name":"Twenty-Ninth International Joint Conference on Artificial Intelligence and Seventeenth Pacific Rim International Conference on Artificial Intelligence {IJCAI-PRICAI-20}","start":{"date-parts":[[2020,7,11]]},"theme":"Artificial Intelligence","location":"Yokohama, Japan","end":{"date-parts":[[2020,7,17]]}},"container-title":["Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence"],"original-title":[],"deposited":{"date-parts":[[2020,7,8]],"date-time":"2020-07-08T22:13:24Z","timestamp":1594246404000},"score":1,"resource":{"primary":{"URL":"https:\/\/www.ijcai.org\/proceedings\/2020\/121"}},"subtitle":[],"proceedings-subject":"Artificial Intelligence Research Articles","short-title":[],"issued":{"date-parts":[[2020,7]]},"references-count":0,"URL":"https:\/\/doi.org\/10.24963\/ijcai.2020\/121","relation":{},"subject":[],"published":{"date-parts":[[2020,7]]}}}