In this paper, we present an efficient discriminative method for human pose estimation that learns a direct mapping from visual observations to human body configurations. This framework requires visual features powerful enough to discriminate the subtle differences between similar human poses. We propose to describe the image using salient interest points represented by SIFT-like descriptors; each descriptor simultaneously encodes position, appearance, and local structural information. A bag-of-words representation is used to model the distribution of the feature space. Because the descriptor is computed on overlapping patches, it tolerates a range of illumination and position variations. We use Gaussian process regression to model the mapping from visual observations to human poses; this probabilistic regression algorithm is effective and robust for the pose estimation problem. We evaluate our approach on the HumanEva dataset, and experimental results demonstrate that it achieves state-of-the-art performance.
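As an illustrative sketch only (not the authors' implementation), the discriminative mapping via Gaussian process regression can be written as a minimal NumPy-only regressor. The RBF kernel, length scale, noise level, and the feature/pose dimensions below are assumed placeholders, not values from the paper.

```python
import numpy as np

def rbf_kernel(A, B, length_scale=1.0, sigma_f=1.0):
    # Squared-exponential kernel between the rows of A and B.
    d2 = (np.sum(A**2, axis=1)[:, None]
          + np.sum(B**2, axis=1)[None, :]
          - 2.0 * A @ B.T)
    return sigma_f**2 * np.exp(-0.5 * d2 / length_scale**2)

class GPRegressor:
    """Minimal GP regression: maps feature vectors to pose vectors."""

    def __init__(self, length_scale=1.0, noise=1e-4):
        self.length_scale = length_scale
        self.noise = noise  # observation-noise variance (assumed hyperparameter)

    def fit(self, X, Y):
        # Cholesky-factor the regularized kernel matrix once at training time.
        self.X = X
        K = rbf_kernel(X, X, self.length_scale) + self.noise * np.eye(len(X))
        self.L = np.linalg.cholesky(K)
        # alpha = (K + noise*I)^{-1} Y, solved via two triangular solves.
        self.alpha = np.linalg.solve(self.L.T, np.linalg.solve(self.L, Y))
        return self

    def predict(self, Xs):
        # Posterior mean and covariance at test inputs Xs.
        Ks = rbf_kernel(Xs, self.X, self.length_scale)
        mean = Ks @ self.alpha
        v = np.linalg.solve(self.L, Ks.T)
        cov = rbf_kernel(Xs, Xs, self.length_scale) - v.T @ v
        return mean, cov

# Hypothetical usage: stand-in "bag-of-words" features and pose targets.
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 8))   # placeholder feature vectors
Y = np.sin(X[:, :3])           # placeholder 3-D pose targets
gp = GPRegressor(length_scale=2.0, noise=1e-4).fit(X, Y)
mean, cov = gp.predict(X[:5])
```

With a small noise term, the posterior mean nearly interpolates the training targets, while the posterior covariance provides the uncertainty estimate that motivates a probabilistic regressor over a point-estimate one.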