This paper presents an approach to solving a class of distributed parametric learning problems in a multi-agent network. Each agent uses its private streaming data to build a local learning model. The goal is for every agent to converge to a common global learning model, defined as the average of all local models, by communicating only with its neighbors. Neighbor relationships are described by a time-varying undirected graph whose vertices are the agents and whose edges connect neighboring agents. It is shown that for any sequence of repeatedly jointly connected graphs, the approach drives all agents to asymptotically converge to the common global learning model, and the worst-case convergence rate is determined by the speed of local learning. A distributed linear regression problem and a distributed belief averaging problem are presented as illustrative examples.
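The averaging-based consensus idea described above can be illustrated with a minimal simulation. The sketch below is not the paper's algorithm; it is a generic distributed-averaging scheme with Metropolis weights, assuming static local values (no streaming updates) and a periodic sequence of two graphs that is jointly connected over each period. The function names `metropolis_weights` and `consensus` are illustrative.

```python
# Hypothetical sketch (not the paper's method): distributed averaging over a
# time-varying undirected graph, where each agent repeatedly mixes its value
# with those of its current neighbors using doubly stochastic weights.
import numpy as np

def metropolis_weights(edges, n):
    """Doubly stochastic mixing matrix for an undirected graph on n vertices."""
    deg = [0] * n
    for i, j in edges:
        deg[i] += 1
        deg[j] += 1
    W = np.zeros((n, n))
    for i, j in edges:
        w = 1.0 / (1 + max(deg[i], deg[j]))
        W[i, j] = W[j, i] = w
    for i in range(n):
        W[i, i] = 1.0 - W[i].sum()  # self-weight keeps rows summing to 1
    return W

def consensus(x0, graph_seq, steps):
    """Each agent averages with its neighbors under the current graph."""
    x = np.array(x0, dtype=float)
    n = len(x)
    for t in range(steps):
        W = metropolis_weights(graph_seq[t % len(graph_seq)], n)
        x = W @ x
    return x

# Three agents; neither graph alone is connected, but the sequence
# {(0,1)}, {(1,2)} is jointly connected over each two-step period.
x0 = [1.0, 5.0, 9.0]
graphs = [[(0, 1)], [(1, 2)]]
x = consensus(x0, graphs, 200)
# All agents approach the global average 5.0
```

Because each mixing matrix is doubly stochastic, the network-wide average is preserved at every step, so joint connectivity alone is enough for all agents to reach the average of the initial values.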