Journal article

Towards Fair and Privacy-Preserving Federated Deep Models

L Lyu, J Yu, K Nandakumar, Y Li, X Ma, J Jin, H Yu, KS Ng

IEEE Transactions on Parallel and Distributed Systems | IEEE | Published: 2020


The current standalone deep learning framework tends to result in overfitting and low utility. This problem can be addressed either by a centralized framework that deploys a central server to train a global model on the joint data from all parties, or by a distributed framework that leverages a parameter server to aggregate local model updates. Server-based solutions are prone to the problem of a single point of failure. In this respect, collaborative learning frameworks, such as federated learning (FL), are more robust. However, existing federated learning frameworks overlook an important aspect of participation: fairness. All parties are given the same final model without regard to their contributions.
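To make the "aggregate local model updates" step concrete, below is a minimal sketch of FedAvg-style weighted averaging, the standard baseline aggregation in federated learning. The function name, the plain-list parameter representation, and the data-size weighting are illustrative assumptions for this sketch, not the fairness-aware method proposed in the paper.

```python
# Minimal sketch of parameter-server-style aggregation (FedAvg-like).
# Each party's model is a flat list of parameters; the server computes
# a weighted average, weighted by each party's local sample count.
# Illustrative only -- not the paper's fairness-aware aggregation.

def aggregate(local_models, sample_counts):
    """Return the weighted average of the parties' parameter vectors."""
    total = sum(sample_counts)
    n_params = len(local_models[0])
    global_model = [0.0] * n_params
    for params, count in zip(local_models, sample_counts):
        weight = count / total
        for i, p in enumerate(params):
            global_model[i] += weight * p
    return global_model

# Three parties with toy 2-parameter local models.
models = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]
counts = [10, 10, 20]
print(aggregate(models, counts))  # [3.5, 4.5]
```

Note that every party receives the same `global_model` regardless of its contribution, which is precisely the fairness gap the abstract highlights.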




Awarded by Australian Research Council

Funding Acknowledgements

This work was supported, in part, by an IBM PhD Fellowship; an ANU Translational Fellowship; a Nanyang Assistant Professorship (NAP); and NTU-WeBank JRI (NWJ-2019-007). The authors would like to thank Prof. Benjamin Rubinstein, Dr. Kumar Bhaskaran, and Prof. Marimuthu Palaniswami for their insightful discussions. This research was undertaken using the LIEF HPC-GPGPU Facility hosted at the University of Melbourne. This Facility was established with the assistance of LIEF Grant LE170100200.