Accelerating Federated Edge Learning
Published in IEEE Communications Letters, 2021
Recommended citation: Tuan Dung Nguyen, Amir R. Balef, Canh T. Dinh, Nguyen H. Tran, Duy T. Ngo, Tuan Anh Le, and Phuong L. Vo. 2021. Accelerating Federated Edge Learning. IEEE Communications Letters, 25(10):3282–3286.
[Paper]
Abstract: Transferring large models in federated learning (FL) networks is often hindered by clients’ limited bandwidth. We propose FedAA, an FL algorithm that achieves fast convergence by exploiting regularized Anderson acceleration (AA) at the global level. First, we demonstrate that FL can benefit from acceleration methods in numerical analysis. Second, FedAA improves the convergence rate for quadratic losses and the empirical performance for smooth and strongly convex objectives, compared to FedAvg, an FL algorithm using gradient descent (GD) local updates. Experimental results demonstrate that employing AA can significantly improve the performance of FedAvg, even when the objective is non-convex.
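To illustrate the core idea, the sketch below shows regularized Anderson acceleration applied to a generic fixed-point map, here gradient descent on a quadratic. This is an illustrative textbook form of AA, not the paper's FedAA implementation; the window size `m`, regularization weight `lam`, and all function names are assumptions for the example.

```python
import numpy as np

def anderson_accelerate(g, x0, m=5, lam=1e-8, iters=50):
    """Regularized Anderson acceleration for the fixed-point map g.

    Keeps a sliding window of recent iterates and residuals, solves a
    Tikhonov-regularized least-squares problem for the mixing
    coefficients, and extrapolates the next iterate.
    """
    x = x0.copy()
    X_hist, F_hist = [], []              # recent iterates and residuals
    for _ in range(iters):
        f = g(x) - x                     # fixed-point residual
        X_hist.append(x.copy())
        F_hist.append(f.copy())
        if len(F_hist) > m + 1:          # keep at most m differences
            X_hist.pop(0)
            F_hist.pop(0)
        if len(F_hist) == 1:
            x = x + f                    # plain fixed-point step at start
            continue
        # Columns are successive differences of residuals and iterates.
        dF = np.column_stack([F_hist[i + 1] - F_hist[i]
                              for i in range(len(F_hist) - 1)])
        dX = np.column_stack([X_hist[i + 1] - X_hist[i]
                              for i in range(len(X_hist) - 1)])
        # Regularized least squares: (dF^T dF + lam I) gamma = dF^T f.
        gamma = np.linalg.solve(dF.T @ dF + lam * np.eye(dF.shape[1]),
                                dF.T @ f)
        x = x + f - (dX + dF) @ gamma    # Anderson extrapolation
    return x

# Example: GD on the quadratic 0.5 x^T A x - b^T x as the fixed-point map.
A = np.diag([1.0, 10.0])
b = np.ones(2)
eta = 0.1                                # step size, within 2/L for stability
g = lambda x: x - eta * (A @ x - b)      # one GD step is the map g
x_star = np.linalg.solve(A, b)           # exact minimizer
x_aa = anderson_accelerate(g, np.zeros(2))
```

In the FL setting described above, the role of `g` is played by a round of local GD updates followed by server aggregation, and the server applies the extrapolation to the global model; the regularization term keeps the least-squares solve stable when residual differences become nearly collinear.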