In this paper, we present an algorithm that integrates computer vision with machine learning to enable a humanoid robot to accurately fire at objects classified as targets. The robot must be calibrated to hold the gun and instructed in how to pull the trigger. Two algorithms are proposed; which one is executed depends on the dynamics of the target. If the target is stationary, a least mean square (LMS) approach is used to compute the error and adjust the gun muzzle accordingly. If the target is dynamic, a modified Q-learning algorithm is used to predict the object's position and velocity and to adjust the relevant parameters as necessary. The image processing uses the OpenCV library to detect the target and the point of impact of the bullets. The approach is evaluated on the 53-DOF humanoid robot iCub. This work is an example of fine motor control, which underpins much of natural language processing via spatial reasoning. It is one aspect of a long-term research effort on automatic language acquisition.
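The LMS correction for a stationary target can be sketched as an iterative update of the muzzle angles driven by the observed impact error. The sketch below is illustrative only: the linear impact model, its gain, and the step size `mu` are assumptions, not taken from the paper.

```python
import numpy as np

# Hypothetical linear model: the bullet's impact offset in the image is
# proportional to the muzzle's angular error relative to the (unknown)
# ideal aim. Both the "true" aim and the gain are stand-in values.
true_aim = np.array([0.25, -0.10])   # assumed ideal (pan, tilt) in radians
gain = 2.0                           # assumed optics/ballistics gain

def observe_impact_error(angles):
    """Offset between target centre and detected impact point
    (a stand-in for the OpenCV-based measurement)."""
    return gain * (angles - true_aim)

def lms_step(angles, error, mu=0.2):
    """One least-mean-square correction of the muzzle angles:
    move opposite the observed error, scaled by step size mu."""
    return angles - mu * error

angles = np.zeros(2)                 # start from a neutral pose
for _ in range(30):
    angles = lms_step(angles, observe_impact_error(angles))
```

With `mu * gain < 2` the per-step error contracts geometrically, so the muzzle angles converge toward the ideal aim; in a real loop the error would be re-measured from the detected impact after each shot rather than from a model.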