Publications
Making models shallow again: Jointly learning to reduce non-linearity and depth for latency-efficient private inference
Abstract
The large number of ReLU and multiply-accumulate (MAC) operations in deep neural networks makes them ill-suited for latency- and compute-efficient private inference. In this paper, we present a model optimization method that allows a model to learn to be shallow. In particular, we leverage the ReLU sensitivity of a convolutional block to remove a ReLU layer and merge its preceding and succeeding convolution layers into a shallower block. Unlike existing ReLU reduction methods, our joint reduction method can yield models with improved reduction of both ReLUs and linear operations, by up to 1.73× and 1.47×, respectively, evaluated with ResNet18 on CIFAR-100 without any significant accuracy drop.
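The key observation behind the merge step is that once the intervening ReLU is removed, the two neighboring convolutions are both linear maps, so they compose into a single equivalent convolution. The sketch below illustrates this in a minimal 1-D, single-channel setting; it is not the paper's implementation, and the names (`x`, `w1`, `w2`, `w_merged`) are illustrative only.

```python
# Minimal sketch: removing a ReLU lets two stacked conv layers be merged
# into one shallower conv. 1-D, single-channel, 'valid' padding; NOT the
# paper's actual merging procedure.
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(16)   # input signal
w1 = rng.standard_normal(3)   # kernel of the first conv layer
w2 = rng.standard_normal(3)   # kernel of the second conv layer

# Deep block WITHOUT the intervening ReLU: two stacked cross-correlations.
y_deep = np.correlate(np.correlate(x, w1, mode='valid'), w2, mode='valid')

# Since both layers are linear, they compose into one kernel of size
# k1 + k2 - 1, i.e. a single shallower convolution.
w_merged = np.convolve(w1, w2)
y_shallow = np.correlate(x, w_merged, mode='valid')

assert np.allclose(y_deep, y_shallow)  # same outputs, one layer fewer
```

With a ReLU between the two layers, no such single kernel exists, which is why the method first uses ReLU sensitivity to decide which ReLU layers can be removed before merging.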
- Date
- November 30, 2025
- Authors
- Souvik Kundu, Yuke Zhang, Dake Chen, Peter A Beerel
- Conference
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition
- Pages
- 4685-4689