TY - GEN
T1 - Distributed and Rate-Adaptive Feature Compression
AU - Deshmukh, Aditya
AU - Veeravalli, Venugopal V.
AU - Verma, Gunjan
N1 - This research was supported by the Army Research Laboratory under Cooperative Agreement W911NF-17-2-0196 (IOBT CRA).
PY - 2024
Y1 - 2024
AB - We study the problem of distributed and rate-adaptive feature compression in a sensor network, wherein a set of distributed sensors observe disjoint multi-modal features, compress them, and send them to a fusion center containing a pretrained learning model for inference on a downstream task. To gain insight, we first analyze the case where the pretrained model is a linear regressor. We obtain the form of the optimal quantizers assuming knowledge of the underlying regressor data distribution. Under a practically reasonable approximation, we then propose a distributed compression scheme that works by quantizing a one-dimensional projection of the sensor data. We also propose a simple adaptive scheme for handling changes in communication constraints. For the case where the pretrained model is a general learning model, we propose a VQ-VAE-based compression scheme, motivated by the fact that VQ-VAE compression works by quantizing low-dimensional latent representations, which matches the strategy obtained for pretrained linear regressors. We further show that the adaptive strategy proposed for the case of linear regression can also be applied effectively to the VQ-VAE-based compression scheme. We demonstrate the effectiveness of the VQ-VAE-based distributed and adaptive compression scheme on the MNIST Audio+Video and CIFAR10 datasets.
UR - http://www.scopus.com/inward/record.url?scp=105002694544&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=105002694544&partnerID=8YFLogxK
DO - 10.1109/IEEECONF60004.2024.10943022
M3 - Conference contribution
AN - SCOPUS:105002694544
T3 - Conference Record - Asilomar Conference on Signals, Systems and Computers
SP - 1040
EP - 1044
BT - Conference Record of the 58th Asilomar Conference on Signals, Systems and Computers, ACSSC 2024
A2 - Matthews, Michael B.
PB - IEEE Computer Society
T2 - 58th Asilomar Conference on Signals, Systems and Computers, ACSSC 2024
Y2 - 27 October 2024 through 30 October 2024
ER -