It is rarely possible to cite every relevant work on a topic. When controversy exists in a field, authors are more likely to cite work that shares their own position (a form of homophily). As a result, readers may inadvertently select a non-representative sample of articles to read. Here, we begin to develop a method that guides better sampling of the scientific literature by designing and testing two new network metrics. The first metric, the ratio between real and expected citation counts, directs users to papers that were cited far fewer times than expected and may therefore represent marginalized findings. The second metric, the relative evidence coupling strength, directs users to papers that may present a unique view of the field. We test our metrics on a known case of citation bias: a network of 73 papers addressing whether stress is a risk factor for depression. Our metrics select a cross-section of 21 papers, and the intersection of the two metrics selects 3 papers that together represent all 3 positions in this claim network. In future work we will test our metrics on additional datasets, and we will partner with domain experts to verify whether our metrics indeed identify a diverse sample of research articles.
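To make the two metrics concrete, the sketch below shows one plausible way to compute them on a citation network. It is a minimal illustration, not the paper's exact method: the null model for expected citations (each citing paper distributes its in-network references uniformly over all earlier papers) and the Jaccard-style reading of "relative evidence coupling strength" are assumptions, and the function names `citation_ratio` and `relative_coupling` are hypothetical.

```python
import networkx as nx

def citation_ratio(G: nx.DiGraph, year: dict) -> dict:
    """Ratio of real to expected in-network citations per paper.

    G    : directed graph where edge (a, b) means paper a cites paper b
    year : mapping paper -> publication year

    Null model (an illustrative assumption, not the paper's definition):
    each citing paper spreads its in-network references uniformly over
    all papers published strictly earlier.
    """
    expected = {p: 0.0 for p in G}
    for citing in G:
        earlier = [p for p in G if year[p] < year[citing]]
        if not earlier:
            continue
        share = G.out_degree(citing) / len(earlier)
        for p in earlier:
            expected[p] += share
    # Low ratios flag papers cited far less often than expected.
    return {p: G.in_degree(p) / expected[p] for p in G if expected[p] > 0}

def relative_coupling(G: nx.DiGraph, a, b) -> float:
    """Jaccard overlap of two papers' reference lists -- one plausible
    reading of 'relative evidence coupling strength' (an assumption).
    Papers weakly coupled to the rest of the network may present a
    unique view of the field.
    """
    refs_a, refs_b = set(G.successors(a)), set(G.successors(b))
    union = refs_a | refs_b
    return len(refs_a & refs_b) / len(union) if union else 0.0
```

Under these assumptions, a sampling procedure could rank papers by ascending `citation_ratio` (candidates for marginalized findings), rank them by ascending mean `relative_coupling` against all other papers (candidates for unique views), and take the intersection of the two short lists.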