Research in AI is slowly maturing, and a body of accepted techniques for reasoning and for representing knowledge in simple, circumscribed domains now exists. But with the maturity of AI has come a growing awareness of the severe limitations of current techniques for constructing more complex problem solving or interpretation systems. We currently have inadequate means to gather, represent, store, organize, access, and manipulate the huge collections of knowledge required for complex problem solving. Existing systems cannot reconfigure themselves in changing situations, nor can they incrementally adjust to new knowledge or new techniques. Large scale problem solvers (e.g., factory automation systems) cannot in principle completely model the world in which they exist, and must face problems of inconsistency, asynchrony, control, geographic distribution, etc. - they will have to work in "open systems." Many solutions under consideration rely on concurrent computation, using either very fine grained "connectionist," "neural computing," or "data-parallel" approaches, or using larger grain collections of "objects," "agents," or "problem solving nodes" - techniques collectively termed "Distributed AI." In this paper we characterize the needs for concurrency and parallelism in AI, with special attention to building medium to large grain adaptive problem solvers in open systems. In these systems the overriding concern is organizing the problem solving system's behavior - the "coordination problem." Conventional distributed computing and parallel algorithms approaches allow a programmer to solve the coordination problem, and provide language constructs and concurrency control mechanisms with which a program can enact that solution. In Distributed AI, we attempt to improve adaptability by designing problem solvers which can both solve the coordination problem and enact the solution themselves.