In this paper, we consider discrete-time dynamic games of the mean-field type with a finite number, N, of agents subject to an infinite-horizon discounted-cost optimality criterion. The state space of each agent is a locally compact Polish space. At each time, the agents are coupled through the empirical distribution of their states, which affects both the agents' individual costs and their state transition probabilities. We introduce the solution concept of Markov-Nash equilibrium, under which a policy is player-by-player optimal in the class of all Markov policies. Under mild assumptions, we demonstrate the existence of a mean-field equilibrium in the infinite-population limit, N → ∞, and then show that the policy obtained from the mean-field equilibrium is approximately Markov-Nash when the number of agents N is sufficiently large.