In this paper we present a novel method for classifying relevant points in a sequence of images of a distant target in order to autonomously guide an underwater vehicle towards it. Feature points are classified using a measure called motion perceptibility, which relates the magnitudes of the rates of change between matched feature points across image frames taken at different distances, thereby inherently accounting for changes in each feature's position. This measure helps to detect which feature points are most likely to leave the camera's field of view, indicating that they do not belong to the target region. Using a visual attention model adapted to underwater images, relevant points are detected and then tracked with a visual servoing approach. Preliminary results from sea trials demonstrate the feasibility of our methodology.