Enhancing Robustness of Federated Learning via Server Learning
2026-04-03 • Machine Learning
Machine Learning · Artificial Intelligence
AI summary
The authors study how to make federated learning robust when some participating clients behave maliciously and the clients' data are highly heterogeneous (non-iid). They propose a method that adds auxiliary learning on the server side, filters out suspicious client updates, and aggregates the remaining updates with a geometric median to limit the influence of outliers. Their experiments show that this approach yields substantial accuracy improvements even when more than half of the clients are malicious and the server holds only a small, possibly synthetic, dataset whose distribution need not match the clients' aggregated data.
federated learning · malicious attacks · non-iid data · server learning · heuristic algorithm · client update filtering · geometric median aggregation · model robustness · synthetic dataset
Authors
Van Sy Mai, Kushal Chakrabarti, Richard J. La, Dipankar Maity
Abstract
This paper explores the use of server learning for enhancing the robustness of federated learning against malicious attacks, even when clients' training data are not independent and identically distributed. We propose a heuristic algorithm that uses server learning and client update filtering in combination with geometric median aggregation. We demonstrate via experiments that this approach can achieve significant improvement in model accuracy even when the fraction of malicious clients is high, exceeding $50\%$ in some cases, and when the dataset utilized by the server is small and possibly synthetic, with a distribution not necessarily close to that of the clients' aggregated data.
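The abstract's geometric median aggregation can be illustrated with a short sketch. The geometric median is commonly computed with Weiszfeld's iteration (an iteratively re-weighted mean); the code below is a minimal illustration of why this aggregator resists outlying client updates, not a reproduction of the paper's algorithm, and the example update vectors are invented for demonstration.

```python
import numpy as np

def geometric_median(points, eps=1e-6, max_iter=100):
    """Weiszfeld's algorithm: iterate a re-weighted mean that
    converges to the geometric median of a set of points."""
    points = np.asarray(points, dtype=float)
    median = points.mean(axis=0)  # start from the arithmetic mean
    for _ in range(max_iter):
        dists = np.linalg.norm(points - median, axis=1)
        dists = np.maximum(dists, eps)  # guard against division by zero
        weights = 1.0 / dists
        new_median = (weights[:, None] * points).sum(axis=0) / weights.sum()
        if np.linalg.norm(new_median - median) < eps:
            break
        median = new_median
    return median

# Hypothetical 2-D client updates: three honest clients near (1, 1)
# and one malicious outlier trying to drag the aggregate away.
updates = np.array([[1.0, 1.1], [0.9, 1.0], [1.1, 0.9], [100.0, -100.0]])
print(geometric_median(updates))  # stays near (1, 1), unlike the plain mean
```

The plain average of these updates lands near (25.8, -24.3), dominated by the single outlier, while the geometric median remains close to the honest cluster; this robustness to a minority (and, combined with filtering, even a majority) of corrupted updates is what the aggregation step exploits.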