The standard model of prediction with expert advice rests on the assumption that at least one expert in the pool performs well. In this paper, we show that this assumption fails to exploit situations where both the outcome and the experts' predictions depend on some input that the learner also observes. In particular, we exhibit a setting in which every individual expert performs badly yet the experts collectively perform well, and we show that traditional weighted-majority techniques perform poorly there. To capture this notion that ‘the whole is often greater than the sum of its parts’, we propose measuring the overall competency of a pool of experts with respect to a competency class (or structure): a set of decompositions of the instance space in which each expert is associated with a ‘competency region’ where it is assumed to be competent. Our goal is to perform nearly as well as a predictor that knows the best decomposition in the competency class, that is, one in which each expert performs reasonably well in its competency region. We present both positive and negative results in this model.
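The contrast described above can be illustrated with a minimal sketch (not taken from the paper; the setting, labels, and update rule below are illustrative assumptions). Two experts each err on half the instance space: expert A is competent only on the region x = 0 and expert B only on x = 1, so each is individually right about half the time, yet a predictor that knows this decomposition is always right, while a standard weighted-majority learner that ignores the input hovers near chance:

```python
import random

def simulate(T=1000, seed=0):
    """Toy setting (illustrative, not from the paper): expert A is
    competent only on the region x == 0, expert B only on x == 1, and
    outside its competency region each expert predicts the wrong label."""
    rng = random.Random(seed)
    wA, wB = 1.0, 1.0              # weighted-majority weights (beta = 1/2)
    wm_correct = region_correct = 0
    for _ in range(T):
        x = rng.randint(0, 1)      # input observed by the learner
        y = x                      # outcome depends on the input
        pA = y if x == 0 else 1 - y  # A is right exactly when x == 0
        pB = y if x == 1 else 1 - y  # B is right exactly when x == 1
        # Weighted-majority prediction: follow the heavier expert.
        wm_pred = pA if wA >= wB else pB
        wm_correct += (wm_pred == y)
        # Predictor that knows the decomposition {x==0 -> A, x==1 -> B}.
        region_pred = pA if x == 0 else pB
        region_correct += (region_pred == y)
        # Multiplicative update: halve the weight of each wrong expert.
        if pA != y:
            wA *= 0.5
        if pB != y:
            wB *= 0.5
    return wm_correct / T, region_correct / T
```

On this toy example the decomposition-aware predictor attains accuracy 1.0, whereas the weighted-majority learner's accuracy stays near 0.5, since the identity of the competent expert in each round is independent of the accumulated weights.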