Fixation of an active camera pair on a given target requires that the pan and tilt angles of the cameras be set to bring the target to the image centers. However, the calibration needed to achieve a specific configuration of real cameras involves tedious estimation of a number of imaging parameters. Fortunately, this exercise is not essential for fixation if images are acquired and used as feedback during the fixation process to continuously direct the cameras to the target. This paper defines a direct mapping from changes in the direction of target motion in the image plane to the changes in camera angles necessary to reduce the disparity between the image center and the target's image-plane location. The mapping captures camera calibration, as well as other effects, such as deviations from the assumed imaging model, that are difficult to characterize and capture through calibration. The mapping is formulated as a task in nonlinear function approximation and learnt from real data. For computational efficiency, learning is done at multiple resolutions using a PROBART network. Experimental results are presented using an active vision system.
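A minimal sketch of the image-based fixation loop described above. The learned nonlinear mapping (the paper's multi-resolution PROBART approximation) is replaced here by a simple proportional gain, and the imaging model is a toy linear one; all numerical values and function names are illustrative assumptions, not the paper's method.

```python
def image_offset(world_angles, camera_angles, px_per_deg=10.0):
    """Toy imaging model (assumed): the target's pixel offset from the
    image center is proportional to the angular pointing error."""
    return tuple(px_per_deg * (w - c) for w, c in zip(world_angles, camera_angles))


def learned_mapping(dx, dy, gain=0.05):
    """Stand-in for the learned disparity-to-angle mapping; the paper uses
    a PROBART network trained on real data, not a fixed gain."""
    return gain * dx, gain * dy


# Drive the camera so that the target's image-plane disparity shrinks.
world = (12.0, -8.0)      # target direction in degrees (assumed)
pan, tilt = 0.0, 0.0
for _ in range(50):
    dx, dy = image_offset(world, (pan, tilt))      # feedback from the image
    d_pan, d_tilt = learned_mapping(dx, dy)        # angle increments
    pan, tilt = pan + d_pan, tilt + d_tilt
```

After the loop, the pan and tilt angles have converged on the target direction, i.e. the target sits at the image center, without any explicit calibration of the imaging parameters.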