Recent work has shown that importance sampling can be avoided in the particle filter through an innovation error structure inspired by traditional nonlinear filtering, combined with Mean-Field Game formalisms. In this paper, the resulting feedback particle filter is applied to the optimal control of a partially observed diffusion process. The feedback particle filter is used to convert the partially observed problem into a fully observed one, and the associated dynamic programming equations are derived. The approach is illustrated by obtaining the HJB equation for the infinite-horizon discounted-cost optimal control problem. Two examples are presented. Future applications of the approach to approximate dynamic programming are briefly discussed.
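As a rough illustration of the innovation structure mentioned above, the following is a minimal sketch of a feedback particle filter for a scalar linear-Gaussian model, where the constant-gain approximation of the FPF gain reduces to the familiar Kalman gain. All model parameters and the Euler-Maruyama discretization here are illustrative assumptions, not details taken from the paper:

```python
# Hedged sketch: feedback particle filter (FPF) for a scalar
# linear-Gaussian model.  In this special case the constant-gain
# FPF gain reduces to the Kalman gain K = c * P / sw**2.
# Model and numerical parameters are illustrative choices.
import numpy as np

rng = np.random.default_rng(0)

# Model: dX = a*X dt + sb dB,   observation: dZ = c*X dt + sw dW
a, sb, c, sw = -0.5, 0.5, 1.0, 0.1
dt, T, N = 0.01, 5.0, 500
steps = int(T / dt)

x_true = 0.5                        # true (hidden) initial state
X = rng.normal(0.0, 0.5, N)         # particle ensemble

for _ in range(steps):
    # True state and observation increment (Euler-Maruyama)
    x_true += a * x_true * dt + sb * np.sqrt(dt) * rng.normal()
    dZ = c * x_true * dt + sw * np.sqrt(dt) * rng.normal()

    # Constant-gain approximation: Kalman gain from particle variance
    P = X.var()
    K = c * P / sw**2

    # Innovation error for each particle (no importance weights):
    # dI^i = dZ - c * (X^i + mean(X)) / 2 * dt
    dI = dZ - c * (X + X.mean()) / 2.0 * dt

    # Particle update: model propagation + gain times innovation
    X += a * X * dt + sb * np.sqrt(dt) * rng.normal(size=N) + K * dI

xhat, Pn = X.mean(), X.var()        # posterior mean/variance estimates
print(xhat, Pn)
```

In this linear-Gaussian setting the empirical particle variance should settle near the steady-state Riccati solution, which is what makes the example a convenient sanity check; the paper's interest, of course, is the general nonlinear case where no such closed form exists.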