Closing the gap between theoretical reservoir operation and real-world implementation remains a challenge in contemporary reservoir operations. Reservoirs generally operate sub-optimally because optimal operating rules are difficult to determine and implement. Since the classic work of Young (1967), research has focused on optimization algorithms. In this study, we take a novel direction by investigating historical release data from 79 reservoirs in California and the Great Plains, using a data-mining approach to explain operators' release decisions. We use information theory, specifically mutual information, to measure the quality of inference between a set of classic indicators and observed releases at monthly and weekly timescales. Several general trends emerge that explain which sources of hydrologic information drive release decisions under different conditions.
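To make the measurement concrete, the sketch below shows one common way to estimate mutual information between a hydrologic indicator and observed releases: a histogram-based plug-in estimator. This is an illustrative assumption, not the paper's actual methodology; the variable names and the synthetic inflow/release data are hypothetical.

```python
import numpy as np

def mutual_information(x, y, bins=10):
    """Estimate mutual information (in bits) between two samples
    using a 2-D histogram (plug-in estimator)."""
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = joint / joint.sum()                  # empirical joint distribution
    px = pxy.sum(axis=1, keepdims=True)        # marginal of x (column vector)
    py = pxy.sum(axis=0, keepdims=True)        # marginal of y (row vector)
    nz = pxy > 0                               # avoid log(0) on empty cells
    return float(np.sum(pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])))

# Synthetic example (hypothetical data): releases that partly track inflow
rng = np.random.default_rng(0)
inflow = rng.gamma(shape=2.0, scale=50.0, size=600)    # e.g. a monthly inflow indicator
release = 0.8 * inflow + rng.normal(0, 10, size=600)   # releases informed by inflow
noise = rng.normal(size=600)                           # an uninformative indicator

print(mutual_information(inflow, release))   # clearly positive: inflow is informative
print(mutual_information(noise, release))    # near zero, apart from finite-sample bias
```

An indicator that carries more bits about the release series supports better inference of operator behavior, which is the sense in which mutual information ranks candidate drivers of release decisions here. Note that the plug-in estimator is biased upward for small samples, so comparisons across indicators should use a common sample size and binning.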