This paper develops an online learning algorithm to find optimal control solutions for partially unknown continuous-time systems subject to input constraints. The input constraints are encoded into the optimal control problem through a nonquadratic performance functional. An online policy iteration algorithm that uses integral reinforcement learning is developed to learn the solution to the optimal control problem online without knowing the full dynamics model. The policy iteration algorithm is implemented on an actor-critic structure, where two neural network approximators are tuned online and simultaneously to generate the optimal control law. A novel technique based on experience replay is introduced to retain past data when updating the neural network weights: the recorded data are used concurrently with current data to adapt the critic neural network weights. This concurrent learning provides an easy-to-check real-time condition for persistence of excitation that is sufficient to guarantee convergence to a near-optimal control law. Stability of the proposed feedback control law is shown, and its performance is evaluated through simulations.
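To make the experience-replay idea concrete, the following is a minimal, hedged sketch of a critic-weight update that combines the current sample with a buffer of recorded samples (concurrent learning). It is a simplified, sampled-data illustration only: the feature map `phi`, the synthetic reward construction, the normalized-gradient step, and all names are assumptions for illustration, not the paper's actual continuous-time update laws or proofs.

```python
import numpy as np

def phi(x):
    # Hypothetical quadratic critic features for a 2-state system
    # (an assumption; the paper's basis functions are not specified here).
    x1, x2 = x
    return np.array([x1 * x1, x1 * x2, x2 * x2])

def critic_update(W, current, buffer, alpha=0.1):
    """One normalized-gradient step on the Bellman residual, summed over
    the current sample AND every recorded sample in the replay buffer,
    so past data is reused concurrently with new data."""
    grad = np.zeros_like(W)
    for x, x_next, reward in [current] + list(buffer):
        d = phi(x) - phi(x_next)              # regressor over the interval
        e = W @ d - reward                    # Bellman (temporal-difference) residual
        grad += e * d / (1.0 + d @ d) ** 2    # normalized gradient contribution
    return W - alpha * grad

# Toy usage: synthetic data generated so that a known weight vector
# W_true exactly satisfies the Bellman relation; replaying the recorded
# batch lets the critic weights recover it.
rng = np.random.default_rng(0)
W_true = np.array([2.0, 0.5, 1.0])
samples = []
for _ in range(20):
    x = rng.standard_normal(2)
    x_next = 0.9 * x                          # stand-in closed-loop transition
    reward = W_true @ (phi(x) - phi(x_next))  # consistent synthetic reward
    samples.append((x, x_next, reward))

W = np.zeros(3)
for _ in range(3000):
    W = critic_update(W, samples[0], samples[1:])
```

The replayed batch plays the role of the persistence-of-excitation condition: as long as the recorded regressors span the feature space, the summed gradient drives the weights toward the Bellman-consistent solution even when the current measurement alone is not exciting.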