We consider a scenario in which interacting agents cooperate through an iterative process of 1) forming empirical models of the behavior of other agents and 2) selfishly optimizing a local strategy based on these models. In each iteration, an agent revises its models of other agents. Selfish optimization with respect to these revised models alters each agent's behavior, which in turn prompts a new round of model revision. Convergence implies a consistency condition: each agent's behavior is consistent with how it is modeled by others, and each agent's local strategy is optimal with respect to its models of other agents. We consider a particular instance of this framework motivated by the "Roboflag drill" coordination scenario. This paper derives conditions for convergence, provides illustrative simulations, and establishes a connection to related work in evolutionary games.
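The model-revise-optimize loop described above can be sketched as an iterative best-response process. The following is a minimal illustration, not the paper's Roboflag model: two agents with assumed quadratic costs, where each agent's "empirical model" of the other is simply the other's most recently observed strategy. The coefficients `a1, b1, a2, b2` and the cost form are hypothetical choices made only to make the fixed-point structure concrete.

```python
# Hedged sketch of the iterative scheme in the abstract (assumed toy model,
# not the Roboflag drill): each agent models the other's behavior as its
# last observed strategy, then selfishly optimizes against that model.
# Convergence yields the consistency condition: each strategy is optimal
# against the other's actual behavior.

def best_response(a, b, model_of_other):
    # Minimizer of the assumed quadratic cost J(x) = (x - a*y - b)^2,
    # where y is this agent's current model of the other agent.
    return a * model_of_other + b

def iterate_to_consistency(a1, b1, a2, b2, tol=1e-9, max_iters=1000):
    x1, x2 = 0.0, 0.0  # arbitrary initial strategies
    for _ in range(max_iters):
        # Revise models (observe the other's current strategy) and
        # re-optimize each local strategy against the revised model.
        new_x1 = best_response(a1, b1, x2)
        new_x2 = best_response(a2, b2, x1)
        if abs(new_x1 - x1) < tol and abs(new_x2 - x2) < tol:
            return new_x1, new_x2  # consistent fixed point reached
        x1, x2 = new_x1, new_x2
    return x1, x2

# With |a1 * a2| < 1 the update is a contraction and the loop converges.
x1, x2 = iterate_to_consistency(a1=0.5, b1=1.0, a2=0.3, b2=2.0)
```

At the returned fixed point, each agent's model of the other is exact and each strategy is a best response to it, mirroring the consistency condition that convergence implies in the general framework.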