Classifying images using features extracted from densely sampled local patches has enjoyed significant success in many detection and recognition tasks. It is also well known that more than one type of feature is generally needed to achieve robust classification performance. Previous works using multiple features have addressed this need either by simply concatenating the feature vectors or by combining feature-specific kernels at the classifier level. In this work, we introduce a novel approach that combines features at the feature level by projecting two types of features onto respective subspaces in which they are maximally correlated. We use this correlation as an augmented feature and, in a pedestrian detection framework, demonstrate improved classification accuracy over simple concatenation.