Using multiple versions of an exam is a common security technique for preventing cheating in a variety of contexts. While psychometric techniques are routinely used by large high-stakes testing companies to ensure equivalence between exam versions, such approaches are generally prohibitive in cost and effort for individual classrooms. As such, exam versioning in practice presents a tension between exam security (which versioning enhances) and fairness (which difficulty variation between versions undermines). In this work, we surveyed students about their perceptions of this trade-off between exam security and fairness on a versioned programming exam and found that substantial populations of students value each aspect over the other. Furthermore, we found that students' expression of concerns about unfairness was not correlated with whether they had received harder versions of the course's most recent exam, but was correlated with lower overall course performance.