While fluorescence image-guided surgery offers improved treatment outcomes for patients with cancer by permitting the identification of tumors during resection, its translation into clinical practice has been slow due to the lengthy and costly approval process for fluorescent molecular markers. Label-free approaches to image-guided surgery provide an alternative by discriminating between cancerous and noncancerous tissue based on differences in spectral reflectance and autofluorescence between the tumor microenvironment and the surrounding anatomy. Unfortunately, state-of-the-art hyperspectral imaging systems capable of monitoring spectral differences across the entire surgical site utilize complex optomechanical architectures that lead to low image resolutions, low frame rates, and co-registration error that cannot be removed through calibration, making these instruments impractical during demanding surgical workflows. To provide label-free surgical guidance while addressing the limitations of existing systems, we have developed a single-chip snapshot hyperspectral imaging system that provides 27 spectral bands from ∼450 nm to ∼750 nm. By monolithically integrating a stacked photodiode image sensor with pixelated interference filters, we have produced a highly compact imaging system that achieves a resolution of 1252-by-852 pixels at a rate of 17.2 frames per second while avoiding co-registration error. The system provides a signal-to-noise ratio of ∼55 dB and a dynamic range of ∼62 dB, and it enables spectral discrimination under standard broadband surgical light sources. Preclinical images of human prostate tumor implants in a murine model demonstrate that the imaging system can differentiate between cancerous and noncancerous tissue and can discriminate between distinct cancer types.
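For readers unfamiliar with the decibel figures quoted above, a minimal sketch of the standard image-sensor conventions is shown below. The electron counts used here are hypothetical illustrations, not values reported for this system; they are chosen only so the results land near the quoted ∼55 dB SNR and ∼62 dB dynamic range.

```python
import math

def snr_db(signal_electrons: float, noise_electrons: float) -> float:
    """Signal-to-noise ratio in dB, using the 20*log10 amplitude convention
    common in image-sensor characterization."""
    return 20 * math.log10(signal_electrons / noise_electrons)

def dynamic_range_db(full_well_electrons: float, noise_floor_electrons: float) -> float:
    """Dynamic range in dB: ratio of full-well capacity to the noise floor."""
    return 20 * math.log10(full_well_electrons / noise_floor_electrons)

# Hypothetical sensor figures (assumptions, not from the paper):
print(round(snr_db(20000, 36), 1))            # ~55 dB
print(round(dynamic_range_db(16000, 12.6), 1))  # ~62 dB
```

Under these conventions, a ∼62 dB dynamic range corresponds to a usable intensity ratio of roughly 1250:1 between the brightest and dimmest resolvable signals in a single exposure.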