Using a Knowledge Base Population (KBP) slot filling task as a case study, we describe a re-ranking framework in two experimental settings: (1) high transparency, where a few pipelines share similar resources that can provide the developer with detailed intermediate answer results; and (2) low transparency, where many systems use diverse resources and serve as black boxes, exposing no intermediate results. In both settings, our results show that statistical re-ranking can effectively combine automated systems, achieving better performance than both the best state-of-the-art individual system (a 6.6% absolute improvement in F-score) and alternative combination methods. Furthermore, creating labeled data for system development and assessment in information extraction often requires expensive human annotators to sift through the vast amounts of information contained in a large-scale corpus. We show that our learning-to-rank framework, which combines output from multiple slot filling systems to populate entity-attribute facts in a knowledge base, can also be used to create answer keys more efficiently and at a lower cost (a 63.5% reduction) than laborious human annotation.
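To make the combination idea concrete, the following is a minimal sketch, not the paper's implementation, of pointwise re-ranking over candidate slot fills proposed by multiple systems. The feature set (a bias term, a cross-system vote count usable even in the low-transparency black-box setting, and a system confidence score) and the weights are illustrative assumptions; in practice the weights would be learned from labeled data.

```python
def features(answer, votes, confidence):
    """Feature vector for one candidate slot fill.
    votes: number of systems that proposed this answer (works for black boxes).
    confidence: the proposing system's own score, when exposed."""
    return [1.0, float(votes), float(confidence)]

def score(weights, feats):
    """Linear scoring function: dot product of weights and features."""
    return sum(w * f for w, f in zip(weights, feats))

def rerank(candidates, weights):
    """Sort candidate (answer, votes, confidence) tuples by score, best first."""
    return sorted(candidates,
                  key=lambda c: score(weights, features(*c)),
                  reverse=True)

# Hypothetical weights standing in for a trained model.
W = [-0.5, 1.0, 2.0]
cands = [("Chicago", 1, 0.4), ("New York", 3, 0.9), ("Boston", 2, 0.3)]
ranked = rerank(cands, W)
# The answer proposed by the most systems with the highest confidence
# ("New York": -0.5 + 3*1.0 + 0.9*2.0 = 4.3) rises to the top.
```

Answers agreed upon by more systems and backed by higher confidence scores rank first, which is the intuition behind using re-ranked output both for system combination and for prioritizing candidates shown to annotators when building answer keys.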