We go beyond existing work in learning from demonstration by asking the question: 'Can robots learn to guide?' That is, can a robot autonomously learn an instructional policy from expert demonstrations and use it to instruct humans in executing a complex task? As a solution, we propose learning an instructional policy π^I that maps the state to an instruction for a human. To learn π^I, we define action primitives that address the challenge of mapping continuous state-action trajectories to human-parseable instructions. Action primitives prove highly effective at automatically segmenting demonstration trajectories into a small number of repetitive and reusable segments, and the approach scales better than the existing state of the art. Finally, we construct a non-generic policy model as a generative model for instructional policies, generating instructions for an entire task. With few modifications, the proposed model is shown to autonomously execute a complex truck-loading task on a hydraulically actuated scaled excavator robot. The guidance approach is evaluated in a controlled group study involving 75 participants, who learn to perform the same task.