There has been growing interest in deploying deep learning systems on resource-constrained platforms for fast and efficient inference. However, typical models are overwhelmingly complex, making such integration challenging and requiring compression mechanisms such as reduced-precision arithmetic. We present a layer-wise, granular precision analysis that allows us to efficiently quantize pre-trained deep neural networks with minimal accuracy degradation. Our results are consistent with recent findings that perturbations in earlier layers are the most destructive, and hence that earlier layers require more precision than later ones. Our approach enables significant complexity reduction, as demonstrated by numerical results on the MNIST and CIFAR-10 datasets. Indeed, for an equivalent level of accuracy, our fine-grained approach reduces the minimum precision in the network by up to 8 bits compared to a naive uniform assignment. Furthermore, we match the accuracy of a state-of-the-art binary network while requiring up to 3.5× lower complexity. Similarly, compared to a state-of-the-art fixed-point network, the complexity savings are even higher (up to 14×) with no loss in accuracy.
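To make the idea of layer-wise precision assignment concrete, the following is a minimal illustrative sketch (not the paper's actual method): each layer's pre-trained weights are uniformly quantized to a signed fixed-point format, with a hypothetical per-layer bit allocation that gives earlier layers more bits than later ones. The network, weights, and bit assignments here are all toy assumptions.

```python
import numpy as np

def quantize_layer(weights, bits):
    """Uniformly quantize a weight tensor to a signed fixed-point
    grid with `bits` bits of precision (sign bit included)."""
    # Scale so the largest weight magnitude maps to the largest code.
    scale = np.max(np.abs(weights)) / (2 ** (bits - 1) - 1)
    if scale == 0:
        return weights  # all-zero layer: nothing to quantize
    return np.round(weights / scale) * scale

# Hypothetical 3-layer network: earlier layers are assigned more
# precision than later ones, reflecting their greater sensitivity
# to quantization perturbations.
rng = np.random.default_rng(0)
layers = [rng.standard_normal((8, 8)) for _ in range(3)]
bits_per_layer = [8, 6, 4]  # illustrative per-layer bit assignment

quantized = [quantize_layer(w, b) for w, b in zip(layers, bits_per_layer)]
# Per-layer worst-case quantization error; bounded by half a
# quantization step, so it grows as the bit budget shrinks.
errors = [float(np.max(np.abs(w - q))) for w, q in zip(layers, quantized)]
```

With this scheme the maximum error in each layer is at most half of that layer's quantization step, so a fine-grained bit assignment directly trades per-layer precision against overall complexity.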