An application of reinforcement learning agents to traffic signal control was implemented for small traffic networks with volumes close to saturation. The state observed by each agent, and its reward, included information both from the intersection being controlled and from adjacent intersections. This communication between neighboring agents produced emergent coordination and ultimately better handling of traffic: lower average, maximum, and minimum delay values were found for the two tested networks compared with optimized pre-timed settings. Trends indicate that, when the minor intersecting streets are one-way, traffic organization and coordination improve as network size increases. Reinforcement learning agents show potential for traffic control applications because they can provide real-time control with flexible timing settings. Further research is being conducted with variable volumes, larger networks, and tuning of the agents' parameters.
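The neighbor-aware agent design described above can be illustrated with a minimal sketch. This is not the paper's implementation; it assumes tabular Q-learning, queue lengths as the observed quantities, and a reward equal to the negative total delay at the local intersection and its neighbors. All class and parameter names (`SignalAgent`, `alpha`, `gamma`, `epsilon`, the bin size of 5) are illustrative assumptions.

```python
import random

class SignalAgent:
    """Tabular Q-learning agent for one signalized intersection.

    Sketch only: state and reward include neighbor information,
    which is the mechanism the abstract credits for emergent
    coordination between adjacent agents.
    """

    def __init__(self, actions=(0, 1), alpha=0.1, gamma=0.9, epsilon=0.1):
        self.q = {}              # (state, action) -> estimated value
        self.actions = actions   # 0 = keep current phase, 1 = switch phase
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon

    def state(self, local_queues, neighbor_queues):
        # Discretize own and neighbors' queue lengths into coarse bins
        # (assumed bin width 5, capped at 3) so the Q-table stays small.
        bin_q = lambda q: min(q // 5, 3)
        return (tuple(bin_q(q) for q in local_queues)
                + tuple(bin_q(q) for q in neighbor_queues))

    def act(self, s):
        # Epsilon-greedy action selection over the two phase actions.
        if random.random() < self.epsilon:
            return random.choice(self.actions)
        return max(self.actions, key=lambda a: self.q.get((s, a), 0.0))

    def learn(self, s, a, reward, s_next):
        # Standard one-step Q-learning update; the reward is assumed to
        # be the negative sum of delays at this intersection and its
        # neighbors, so reducing neighbors' delay is also rewarded.
        best_next = max(self.q.get((s_next, b), 0.0) for b in self.actions)
        old = self.q.get((s, a), 0.0)
        self.q[(s, a)] = old + self.alpha * (reward + self.gamma * best_next - old)

# One illustrative step: observe, act, then update from delays.
agent = SignalAgent()
s = agent.state(local_queues=[12, 3], neighbor_queues=[7, 0])
a = agent.act(s)
s2 = agent.state(local_queues=[8, 4], neighbor_queues=[5, 1])
agent.learn(s, a, -(8 + 4 + 5 + 1), s2)  # reward = -(total queued vehicles)
```

In this sketch the only coupling between agents is through the shared state and reward terms; no explicit joint policy is computed, which matches the abstract's description of coordination as emergent rather than engineered.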