A Study on Quantized Parameters for Protection of a Model and Its Inference Input
Journal of Information Processing
Online ISSN : 1882-6652
ISSN-L : 1882-6652
 
Hiromasa Kitai, Naoto Yanai, Kazuki Iwahana, Masataka Tatsumi, Jason Paul Cruz

2023 Volume 31 Pages 667-678

Abstract

Protecting a machine learning model and its inference inputs with secure computation is important for providing services built on a valuable model. In this paper, we discuss how quantizing a model's parameters helps protect both the model and its inference inputs. To this end, we present an investigational protocol called MOTUS, based on ternary neural networks, i.e., networks whose parameters are ternarized. Through extensive experiments with MOTUS, we obtained three key insights. First, ternary neural networks can avoid the accuracy deterioration caused by the modulo operations used in secure computation. Second, increasing the number of candidate values for model parameters improves accuracy more than an existing accuracy-improvement technique, i.e., batch normalization. Third, protecting both the model and the inference inputs reduces inference throughput by a factor of four to seven, at the same level of accuracy, compared with existing protocols that protect only inference inputs. We have released our source code via GitHub.
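As an illustration of the ternarization the abstract refers to, the sketch below maps real-valued weights to {-1, 0, +1} using the thresholding heuristic from Ternary Weight Networks (threshold Δ = 0.7 · mean(|W|)). This is an assumption for illustration only; the exact quantization scheme used by MOTUS may differ.

```python
import numpy as np

def ternarize(weights, delta_factor=0.7):
    """Map real-valued weights to {-1, 0, +1}.

    Uses the thresholding heuristic from Ternary Weight Networks
    (delta = delta_factor * mean(|W|)); the quantization scheme in
    the MOTUS paper itself may differ.
    """
    delta = delta_factor * np.mean(np.abs(weights))
    ternary = np.zeros_like(weights, dtype=np.int8)
    ternary[weights > delta] = 1   # large positive weights -> +1
    ternary[weights < -delta] = -1  # large negative weights -> -1
    return ternary                  # small weights stay 0

w = np.array([0.9, -0.05, -0.8, 0.1, 0.4])
print(ternarize(w))  # -> [ 1  0 -1  0  1]
```

Restricting parameters to three values keeps multiplications trivial (sign flips and zeroing), which is what makes the model compatible with secure computation over small moduli.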

© 2023 by the Information Processing Society of Japan