MrTF: model refinery for transductive federated learning

Data Mining and Knowledge Discovery

Abstract

We consider a real-world scenario in which a newly established pilot project must make inferences on newly collected data with the help of other parties under privacy-protection policies. Current federated learning (FL) paradigms are devoted to solving the data heterogeneity problem without considering the to-be-inferred data. We propose a novel learning paradigm, named transductive federated learning, that simultaneously exploits the structural information of the to-be-inferred data. On the one hand, the server can use the pre-available test samples to refine the aggregated models for robust model fusion, which tackles the data heterogeneity problem in FL. On the other hand, the refinery process incorporates the test samples into training and can therefore generate better predictions in a transductive manner. We propose several techniques, including stabilized teachers, rectified distillation, and clustered label refinery, to facilitate the model refinery process. Extensive experimental studies verify the superiority of the proposed Model refinery framework for Transductive Federated learning (MrTF). The source code is available at https://github.com/lxcnju/MrTF.
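To make the refinery idea concrete, the sketch below shows one plausible server-side instantiation in PyTorch; it is an illustration, not the paper's implementation. The aggregated global model acts as a student and is distilled, on the unlabeled to-be-inferred data, towards a blend of the client-model ensemble and an exponential-moving-average copy of itself (one reading of a "stabilized teacher"). The function name `refine_global_model`, the equal blend weight, and all hyperparameters are assumptions; the rectified distillation and clustered label refinery techniques are omitted.

```python
import copy

import torch
import torch.nn.functional as F


def refine_global_model(global_model, client_models, unlabeled_loader,
                        refine_rounds=5, lr=1e-3, ema_momentum=0.99,
                        temperature=2.0):
    """Illustrative sketch: distil an ensemble of client models
    ("teachers") into the aggregated global model ("student") on the
    pre-available, unlabeled to-be-inferred data."""
    student = global_model
    student.train()
    for m in client_models:
        m.eval()

    # A temporally averaged ("stabilized") teacher: an exponential
    # moving average (EMA) of the student, frozen for gradients.
    ema_teacher = copy.deepcopy(student)
    ema_teacher.eval()
    for p in ema_teacher.parameters():
        p.requires_grad_(False)

    opt = torch.optim.SGD(student.parameters(), lr=lr, momentum=0.9)
    for _ in range(refine_rounds):
        for x in unlabeled_loader:  # batches of unlabeled test inputs
            with torch.no_grad():
                # Average the clients' logits on the test batch ...
                ensemble_logits = torch.stack(
                    [m(x) for m in client_models]).mean(0)
                # ... and blend the ensemble with the EMA teacher so the
                # distillation targets change smoothly across steps.
                teacher_probs = 0.5 * (
                    F.softmax(ensemble_logits / temperature, dim=1)
                    + F.softmax(ema_teacher(x) / temperature, dim=1)
                )
            student_log_probs = F.log_softmax(student(x) / temperature, dim=1)
            # Standard knowledge-distillation loss with the conventional
            # T^2 scaling of the soft-target gradient.
            loss = (temperature ** 2) * F.kl_div(
                student_log_probs, teacher_probs, reduction="batchmean")
            opt.zero_grad()
            loss.backward()
            opt.step()
            # Pull the stabilized teacher towards the updated student.
            with torch.no_grad():
                for pt, ps in zip(ema_teacher.parameters(),
                                  student.parameters()):
                    pt.mul_(ema_momentum).add_(ps, alpha=1 - ema_momentum)
    return student
```

In a complete pipeline this refinement would presumably run on the server after each aggregation round, with the refined model's predictions on the pre-available test samples serving as the final transductive output.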





Acknowledgements

This work is partially supported by the National Natural Science Foundation of China (Grant Nos. 61921006, 62006118, 62276131), the National Key R&D Program of China (Grant No. 2022YFF0712100), and the Fundamental Research Funds for the Central Universities (Nos. NJ2022028 and 30922010317). We also thank the Huawei Noah's Ark Lab NetMIND Research Team.

Author information


Corresponding author

Correspondence to Yang Yang.

Additional information

Responsible editor: Tania Cerquitelli.

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Li, XC., Yang, Y. & Zhan, DC. MrTF: model refinery for transductive federated learning. Data Min Knowl Disc 37, 2046–2069 (2023). https://doi.org/10.1007/s10618-023-00946-4

