Higher sensor throughput has increased the demand for cyberinfrastructure, forcing researchers unfamiliar with large-scale database management to acquire new skills or outsource the work. Some have called this shift away from sensor-limited data collection the "data deluge." We propose instead that the deluge results from sensor control software failing to keep pace with hardware capabilities: rather than exploiting powerful embedded operating systems to build intelligent sensor networks that harvest higher-quality data, the old paradigm (i.e., collect everything) remains dominant. To mitigate the deluge, we present an adaptive sampling algorithm based on the Nyquist-Shannon sampling theorem. We calibrate the algorithm for both data reduction and increased sampling during "hot moments," which we define as periods of elevated signal activity; this departs from previous work, which has emphasized adaptive sampling for data compression via minimization of signal reconstruction error. Under this feature-extraction concept, samples drawn from user-defined events carry greater importance, and effective control requires the researcher to describe the context of events as both an identification heuristic (for calibration) and a real-time sampling model. This event-driven approach is important when observation is focused on intermittent dynamics. In our case study, we develop a heuristic to identify hot moments in historical data and use it to train and evaluate the adaptive model in an offline analysis of soil moisture data. Results indicate the adaptive model is superior to uniform sampling, extracting 20% to 100% more samples during hot moments at equivalent levels of overall efficiency.
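To make the Nyquist-Shannon basis of such an approach concrete, the following is a minimal sketch (our own illustration, not the paper's implementation) of how a sensor node might choose its next sampling interval from the dominant frequency of a recent window of readings. The function name, parameters (`alpha`, `min_interval`, `max_interval`), and the FFT-based frequency estimate are all hypothetical choices for this example:

```python
import numpy as np

def adaptive_interval(window, dt, min_interval, max_interval, alpha=2.0):
    """Pick the next sampling interval from recent signal activity.

    window: recent uniformly spaced samples with spacing dt (seconds).
    Per the Nyquist-Shannon theorem, sampling at >= 2x the highest
    significant frequency preserves the signal; alpha >= 2 adds margin.
    Quiet signals fall back to the slowest allowed rate.
    """
    x = np.asarray(window, dtype=float)
    x = x - x.mean()                      # remove DC offset
    if np.allclose(x, 0.0):
        return float(max_interval)        # flat signal: sample slowly
    spectrum = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), d=dt)
    k = 1 + np.argmax(spectrum[1:])       # dominant nonzero-frequency bin
    f_dom = freqs[k]
    interval = 1.0 / (alpha * f_dom)      # Nyquist criterion with margin
    return float(np.clip(interval, min_interval, max_interval))
```

During a hot moment the window's dominant frequency rises, so the returned interval shrinks and the node samples faster; between events the interval relaxes toward `max_interval`, yielding the data reduction described above.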