Our first paper presentation: “Fast Server Learning Rate Tuning for Coded Federated Dropout”

We are excited to announce that in June 2022, Giacomo Verardo presented our first paper (in this project), titled “Fast Server Learning Rate Tuning for Coded Federated Dropout”, at the International Workshop on Trustworthy Federated Learning in Conjunction with IJCAI 2022 (FL-IJCAI’22).

This is joint work with Giacomo Verardo, Daniel Barreira, Marco Chiesa, Dejan Kostic, and Gerald Quentin Maguire Jr. The full abstract is below:

In cross-device Federated Learning (FL), clients with low computational power train a common machine learning model by exchanging parameter updates instead of potentially private data. Federated Dropout (FD) is a technique that improves the communication efficiency of an FL session by selecting a subset of model parameters to be updated in each training round. However, compared to standard FL, FD produces considerably lower accuracy and converges more slowly. In this paper, we leverage coding theory to enhance FD by allowing different sub-models to be used at each client. We also show that by carefully tuning the server learning rate hyper-parameter, we can speed up training while still reaching up to the same final accuracy as the no-dropout case. For the EMNIST dataset, our mechanism achieves 99.6% of the final accuracy of the no-dropout case while requiring 2.43x less bandwidth to reach that level of accuracy.
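To give a flavor of the idea, here is a minimal sketch of one Federated Dropout round with per-client sub-models and a server learning rate. This is not the paper's actual coded construction: the random mask assignment, the toy gradients, and all function names here are illustrative assumptions, not the published mechanism.

```python
import numpy as np

rng = np.random.default_rng(0)

MODEL_SIZE = 10   # total number of parameters (toy model)
KEEP = 6          # parameters each client trains per round (6/10 kept)
NUM_CLIENTS = 4
SERVER_LR = 1.0   # the server learning rate hyper-parameter tuned in the paper

def make_masks(num_clients, model_size, keep, rng):
    # Illustrative stand-in for a coded assignment: each client gets a
    # *different* random sub-model, rather than all clients sharing one.
    return [np.isin(np.arange(model_size),
                    rng.choice(model_size, size=keep, replace=False))
            for _ in range(num_clients)]

def client_update(global_model, mask, data_grad, client_lr=0.1):
    # A client trains (and uploads) only its sub-model: the masked entries.
    update = np.zeros_like(global_model)
    update[mask] = -client_lr * data_grad[mask]  # one toy SGD step
    return update

def server_aggregate(global_model, updates, masks, server_lr):
    # Average each parameter only over the clients that actually trained it,
    # then apply the server learning rate to the aggregated update.
    total = np.zeros_like(global_model)
    counts = np.zeros_like(global_model)
    for upd, mask in zip(updates, masks):
        total += upd
        counts += mask.astype(float)
    avg = np.divide(total, counts, out=np.zeros_like(total), where=counts > 0)
    return global_model + server_lr * avg

model = np.ones(MODEL_SIZE)
masks = make_masks(NUM_CLIENTS, MODEL_SIZE, KEEP, rng)
grads = [np.ones(MODEL_SIZE) for _ in range(NUM_CLIENTS)]  # placeholder grads
updates = [client_update(model, m, g) for m, g in zip(masks, grads)]
new_model = server_aggregate(model, updates, masks, SERVER_LR)

# Each client only communicates KEEP / MODEL_SIZE of the parameters.
print(f"bandwidth fraction per client: {KEEP / MODEL_SIZE:.2f}")
```

The bandwidth saving comes from each client exchanging only its sub-model, while distinct masks let the clients jointly cover more of the full model in one round.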