Research in machine learning has concentrated on learning single concepts from examples. In this framework the learner attempts to recover a single hidden function from a collection of examples, assumed to be drawn independently from some unknown probability distribution. In many cases, however - as in most natural language and visual processing situations - decisions depend on the outcomes of several different but mutually dependent classifiers. The classifiers' outcomes must respect constraints that may arise from the sequential nature of the data or from other domain-specific conditions, thus requiring a level of inference on top of the predictions. We describe research and present challenges related to Inference with Classifiers - a paradigm that addresses the problem of using the outcomes of several different classifiers to make coherent inferences, namely those that respect constraints on the classifiers' outcomes. Examples are drawn from the natural language domain.
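The idea of inference on top of classifier predictions can be illustrated with a minimal sketch. The abstract does not commit to a particular inference procedure, so the following is only an assumed toy setup: two hypothetical per-token classifiers output confidence scores over phrase labels (B/I/O, with illustrative numbers), and a sequential constraint forbids an "inside" label from following an "outside" label. Exhaustive search then picks the highest-scoring joint assignment that satisfies the constraint, which may differ from the per-classifier argmax.

```python
from itertools import product

# Hypothetical confidence scores from two independently trained
# per-token classifiers (label sets and numbers are illustrative only).
scores = [
    {"B": 0.3, "I": 0.1, "O": 0.6},  # token 1
    {"B": 0.2, "I": 0.7, "O": 0.1},  # token 2
]

def valid(labels):
    # Sequential constraint: an "I" (inside a phrase) label may not
    # start the sequence or follow an "O" (outside) label.
    prev = "O"
    for lab in labels:
        if lab == "I" and prev == "O":
            return False
        prev = lab
    return True

def best_joint(scores):
    # Inference step: search over all joint label assignments that
    # respect the constraint, scored by the product of confidences.
    best, best_p = None, -1.0
    for labels in product(*(d.keys() for d in scores)):
        if not valid(labels):
            continue
        p = 1.0
        for lab, d in zip(labels, scores):
            p *= d[lab]
        if p > best_p:
            best, best_p = labels, p
    return best

print(best_joint(scores))  # ('B', 'I')
```

Taken independently, the classifiers prefer O for token 1 and I for token 2, but that joint assignment violates the constraint; the inference step instead returns (B, I), the best coherent outcome. Real systems replace the exhaustive search with efficient inference, but the principle is the same.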