Convolutional neural networks (CNNs) have gained considerable interest due to their record-breaking performance in many recognition tasks. However, the computational complexity of CNNs precludes their deployment on power-constrained embedded platforms. In this paper, we propose predictive CNN (PredictiveNet), which predicts the sparse outputs of the non-linear layers, thereby bypassing a majority of computations. PredictiveNet skips a large fraction of convolutions in CNNs at runtime without modifying the CNN structure or requiring additional branch networks. Analysis supported by simulations is provided to justify the proposed technique in terms of its capability to preserve the mean square error (MSE) of the non-linear layer outputs. When applied to a CNN for handwritten digit recognition, simulation results show that PredictiveNet can reduce the computational cost by a factor of 2.9 compared to a state-of-the-art CNN, while incurring only marginal accuracy degradation.
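To make the core idea concrete, the following is a minimal illustrative sketch (not the paper's implementation; all names and the 1-D setting are assumptions for illustration): a cheap prediction using only the most-significant bits (MSBs) of the fixed-point weights estimates the sign of each pre-activation, and the full-precision convolution is skipped wherever the prediction is non-positive, since ReLU would zero that output anyway.

```python
# Hypothetical sketch of prediction-based convolution skipping.
# The function names, the 1-D convolution, and the fixed-point
# parameters are illustrative assumptions, not the paper's code.

def quantize_msb(w, lsb_bits=4):
    """Keep only the most-significant bits of an integer fixed-point weight."""
    step = 1 << lsb_bits
    return (int(w) // step) * step

def dot(xs, ws):
    """Inner product of two equal-length sequences."""
    return sum(x * w for x, w in zip(xs, ws))

def predictive_relu_conv(inputs, weights, lsb_bits=4):
    """1-D valid convolution followed by ReLU, with MSB-based skipping.

    Returns the output sequence and the number of full-precision
    convolutions that were skipped.
    """
    k = len(weights)
    msb_weights = [quantize_msb(w, lsb_bits) for w in weights]
    out, skipped = [], 0
    for i in range(len(inputs) - k + 1):
        window = inputs[i:i + k]
        # Cheap MSB-only prediction of the pre-activation sign.
        if dot(window, msb_weights) <= 0:
            out.append(0)   # predicted non-positive: ReLU output is 0
            skipped += 1    # full-precision convolution bypassed
        else:
            out.append(max(0, dot(window, weights)))
    return out, skipped
```

In this sketch the prediction can occasionally zero an output whose true pre-activation is slightly positive; the abstract's MSE analysis is what bounds the impact of such mispredictions on the non-linear layer outputs.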