Publications

Towards sparsified federated neuroimaging models via weight pruning

Abstract

Federated training of large deep neural networks is often constrained by the cost of communicating model updates, which grows with model size. Various model pruning techniques have been designed in centralized settings to reduce inference times. Combining centralized pruning with federated training is thus an intuitive way to reduce communication costs: prune the model parameters right before the communication step. Moreover, progressively pruning the model during training can also reduce training times and costs. To this end, we propose FedSparsify, which performs model pruning during federated training. In our experiments in centralized and federated settings on the brain age prediction task (estimating a person's age from their brain MRI), we demonstrate that models can be pruned up to 95% sparsity without affecting performance, even in challenging federated …
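The core idea of pruning parameters just before the communication step can be illustrated with a minimal sketch of magnitude-based pruning. This is a generic illustration in NumPy, not FedSparsify's actual schedule or criteria; the function name and the fixed sparsity target are assumptions for the example.

```python
import numpy as np

def magnitude_prune(weights: np.ndarray, sparsity: float) -> np.ndarray:
    """Zero out the smallest-magnitude fraction of weights.

    A hypothetical helper for illustration: a client could apply this
    to its local update right before sending it to the server, so only
    the surviving (nonzero) parameters need to be communicated.
    """
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)  # number of weights to remove
    if k == 0:
        return weights.copy()
    # k-th smallest magnitude serves as the pruning threshold
    threshold = np.partition(flat, k - 1)[k - 1]
    mask = np.abs(weights) > threshold
    return weights * mask

# Example: prune half of a toy weight vector before "communication".
update = np.array([0.1, -0.5, 2.0, -3.0])
sparse_update = magnitude_prune(update, sparsity=0.5)
```

At 95% sparsity, only 5% of parameters are nonzero, so a sparse encoding of the update (indices plus values) can shrink the communicated payload substantially relative to the dense model.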

Date
September 18, 2022
Authors
Dimitris Stripelis, Umang Gupta, Nikhil Dhinagar, Greg Ver Steeg, Paul M Thompson, José Luis Ambite
Book
International Workshop on Distributed, Collaborative, and Federated Learning
Pages
141-151
Publisher
Springer Nature Switzerland