Probabilistic modeling has been a dominant approach in machine learning research. As the field evolves, the problems of interest become increasingly challenging and complex. Making complex decisions in real-world problems often involves assigning values to sets of interdependent variables, where an expressive dependency structure can influence, or even dictate, which assignments are possible. This paper surveys Constrained Conditional Models (CCMs), a framework that augments probabilistic models with declarative constraints as a way to support decisions in an expressive output space while maintaining the modularity and tractability of training. We show that CCMs are a natural choice for modeling an information fusion system that aims to coherently predict multiple output variables from very little information. We present and discuss an information fusion scenario in detail and show how CCMs can be applied to it. Finally, we delineate several interesting connections that information fusion establishes between machine learning, sensor networks, and sampling theory.
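To make the core idea concrete, the following is a minimal toy sketch (not from the paper) of CCM-style inference: local scores for each output variable come from independently trained models, and a declarative constraint restricts which joint assignments are admissible. The variable names, scores, and the "at most one on" constraint are all hypothetical illustrations; in practice CCM inference is typically formulated as an integer linear program rather than the brute-force search shown here.

```python
from itertools import product

# Hypothetical local scores for three binary output variables y1, y2, y3,
# e.g. log-probabilities from three independently trained classifiers.
local_scores = [
    {0: 0.2, 1: 0.8},  # y1
    {0: 0.4, 1: 0.6},  # y2
    {0: 0.9, 1: 0.1},  # y3
]

def constraint(y):
    # Declarative constraint on the joint output: at most one variable is "on".
    return sum(y) <= 1

def ccm_inference(scores, constraint):
    """Brute-force argmax of the summed local scores over
    constraint-satisfying assignments (a stand-in for ILP inference)."""
    best, best_score = None, float("-inf")
    for y in product([0, 1], repeat=len(scores)):
        if not constraint(y):
            continue
        s = sum(scores[i][yi] for i, yi in enumerate(y))
        if s > best_score:
            best, best_score = y, s
    return best

# Unconstrained, each classifier's argmax gives (1, 1, 0), which violates
# the constraint; CCM inference instead selects the best feasible assignment.
print(ccm_inference(local_scores, constraint))
```

Here the constraint changes the prediction: the independently best assignment (1, 1, 0) is infeasible, so inference returns (1, 0, 0), the highest-scoring assignment that respects the declared dependency structure.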