{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2024,11,19]],"date-time":"2024-11-19T19:15:02Z","timestamp":1732043702095,"version":"3.27.0"},"reference-count":0,"publisher":"IOS Press","isbn-type":[{"value":"9781643685489","type":"electronic"}],"license":[{"start":{"date-parts":[[2024,10,16]],"date-time":"2024-10-16T00:00:00Z","timestamp":1729036800000},"content-version":"unspecified","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by-nc\/4.0\/"}],"content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":[],"published-print":{"date-parts":[[2024,10,16]]},"abstract":"Federated Learning systems are increasingly subjected to a multitude of model poisoning attacks from clients. Among these, edge-case attacks that target a small fraction of the input space are nearly impossible to detect using existing defenses, leading to a high attack success rate. We propose an effective defense using an external defense dataset, which provides information about the attack target. The defense dataset contains a mix of poisoned and clean examples, with only a few known to be clean. The proposed method, DataDefense, uses this dataset to learn a poisoned data detector model which marks each example in the defense dataset as poisoned or clean. It also learns a client importance model that estimates the probability of a client update being malicious. The global model is then updated as a weighted average of the client models\u2019 updates. The poisoned data detector and the client importance model parameters are updated using an alternating minimization strategy over the Federated Learning rounds. Extensive experiments on standard attack scenarios demonstrate that DataDefense can defend against model poisoning attacks where other state-of-the-art defenses fail. In particular, DataDefense is able to reduce the attack success rate by at least \u223c 40% on standard attack setups and by more than 80% on some setups. 
Furthermore, DataDefense requires very few defense examples (as few as five) to achieve a near-optimal reduction in attack success rate.<\/jats:p>","DOI":"10.3233\/faia240736","type":"book-chapter","created":{"date-parts":[[2024,10,17]],"date-time":"2024-10-17T13:16:34Z","timestamp":1729170994000},"source":"Crossref","is-referenced-by-count":1,"title":["A Data-Driven Defense Against Edge-Case Model Poisoning Attacks on Federated Learning"],"prefix":"10.3233","author":[{"given":"Kiran","family":"Purohit","sequence":"first","affiliation":[{"name":"Indian Institute of Technology, Kharagpur"}]},{"given":"Soumi","family":"Das","sequence":"additional","affiliation":[{"name":"Indian Institute of Technology, Kharagpur"}]},{"given":"Sourangshu","family":"Bhattacharya","sequence":"additional","affiliation":[{"name":"Indian Institute of Technology, Kharagpur"}]},{"given":"Santu","family":"Rana","sequence":"additional","affiliation":[{"name":"Deakin University, Australia"}]}],"member":"7437","container-title":["Frontiers in Artificial Intelligence and Applications","ECAI 2024"],"original-title":[],"link":[{"URL":"https:\/\/ebooks.iospress.nl\/pdf\/doi\/10.3233\/FAIA240736","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2024,10,17]],"date-time":"2024-10-17T13:16:34Z","timestamp":1729170994000},"score":1,"resource":{"primary":{"URL":"https:\/\/ebooks.iospress.nl\/doi\/10.3233\/FAIA240736"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2024,10,16]]},"ISBN":["9781643685489"],"references-count":0,"URL":"https:\/\/doi.org\/10.3233\/faia240736","relation":{},"ISSN":["0922-6389","1879-8314"],"issn-type":[{"value":"0922-6389","type":"print"},{"value":"1879-8314","type":"electronic"}],"subject":[],"published":{"date-parts":[[2024,10,16]]}}}
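To make the aggregation step the abstract describes concrete, here is a minimal NumPy sketch of an importance-weighted average of client updates. It is not the paper's implementation: the function names, the sigmoid scoring rule, and the use of a coordinate-wise median as the reference direction are all illustrative assumptions (the paper learns its client importance model jointly with a poisoned-data detector on the defense dataset, via alternating minimization).

```python
# Sketch of importance-weighted aggregation: the global model becomes a
# weighted average of client updates, with weights from a (here, hypothetical)
# client importance model. All names and formulas below are assumptions for
# illustration, not the paper's code.
import numpy as np

def client_importance(update, reference, theta=1.0):
    # Hypothetical importance score: squash the distance from a reference
    # update through a sigmoid, so nearby (likely benign) updates score
    # close to 1 and outliers score close to 0.
    dist = np.linalg.norm(update - reference)
    return 1.0 / (1.0 + np.exp(theta * dist))

def aggregate(global_model, client_updates):
    # Reference direction: coordinate-wise median of the client updates,
    # a common robust baseline (an assumption; DataDefense instead derives
    # importances from its learned poisoned-data detector).
    updates = np.array(client_updates)
    reference = np.median(updates, axis=0)
    weights = np.array([client_importance(u, reference) for u in updates])
    weights /= weights.sum()  # normalize to a convex combination
    # Global update = importance-weighted average of client updates.
    return global_model + np.tensordot(weights, updates, axes=1)

# Toy usage: four benign clients near +0.1, one poisoned client at -2.0.
rng = np.random.default_rng(0)
w = np.zeros(10)
updates = [0.1 + 0.01 * rng.standard_normal(10) for _ in range(4)]
updates.append(-2.0 * np.ones(10))
print(aggregate(w, updates))  # the outlier receives a near-zero weight
```

In the toy run, the poisoned client's distance to the reference is large, so its sigmoid score (and hence its weight after normalization) is driven toward zero and the aggregate stays close to the benign clients' direction, which is the qualitative behavior the abstract's weighted-averaging defense relies on.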