This paper develops a theory for learning scenarios in which multiple learners co-exist but mutual coherency constraints are imposed on their outcomes. This is natural in cognitive learning situations, where "natural" constraints are imposed on the outcomes of classifiers so that a valid sentence, image, or other domain representation is produced. We formalize these learning situations, following a model suggested in , and study the generalization abilities of learning algorithms under these conditions in several frameworks. We show that the mere existence of coherency constraints, even without the learner's awareness of them, renders the learning problem easier than predicted by general theories and explains the ability to generalize well from a fairly small number of examples. In particular, we show that within this model one can develop an understanding of several realistic learning situations, such as highly biased training sets and low-dimensional data embedded in high-dimensional instance spaces.