Pruning Neural Networks (ICIP 2021 & BMVC 2023)

Neural network pruning is an increasingly popular way of producing compact and efficient models, suitable for resource-limited environments, while preserving high performance. Although pruning can be performed through a multi-cycle train-then-fine-tune process, the recent trend is to integrate the sparsification process into the standard course of training. We focus on online (during training) and unstructured pruning; in other words, we aim to sparsify the weight tensors of deep neural networks.
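To make the setting concrete, the sketch below illustrates what online, unstructured sparsification of weight tensors can look like in PyTorch. It uses plain magnitude pruning interleaved with the optimisation steps; the function and parameter names (apply_unstructured_pruning, train_with_online_pruning, target_sparsity, the linear sparsity ramp) are illustrative assumptions and do not correspond to the specific methods of the two papers below.

import torch
import torch.nn as nn

def apply_unstructured_pruning(model: nn.Module, sparsity: float) -> None:
    """Zero out the smallest-magnitude weights of every Conv/Linear layer.

    Generic magnitude-pruning illustration (not the papers' methods);
    `sparsity` is the fraction of each layer's weights set to zero.
    """
    for module in model.modules():
        if isinstance(module, (nn.Conv2d, nn.Linear)):
            weight = module.weight.data
            k = int(sparsity * weight.numel())
            if k == 0:
                continue
            # Threshold = k-th smallest absolute value in this layer.
            threshold = weight.abs().flatten().kthvalue(k).values
            weight.mul_((weight.abs() > threshold).float())

def train_with_online_pruning(model, loader, optimizer, criterion,
                              target_sparsity=0.9, epochs=10):
    # Hypothetical training loop: pruning happens "online", interleaved
    # with the usual optimisation steps, not as a separate fine-tuning cycle.
    for epoch in range(epochs):
        # Gradually ramp the sparsity level towards the target (assumption).
        current_sparsity = target_sparsity * (epoch + 1) / epochs
        for inputs, labels in loader:
            optimizer.zero_grad()
            loss = criterion(model(inputs), labels)
            loss.backward()
            optimizer.step()
            apply_unstructured_pruning(model, current_sparsity)

The two papers below replace this naive hard-thresholding step with more principled mechanisms for deciding which weights to sparsify during training.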

Online Weight Pruning Via Adaptive Sparsity Loss

Feather: An Elegant Solution to Effective DNN Sparsification