Embedded services and applications that interact with the real world often need, over time, to run on different kinds of hardware, from low-cost microcontrollers to powerful multicore processors. It is difficult to write one program that works reliably across such a wide range of devices. This is especially true when the application must be temporally predictable and robust, which is usually the case since the physical world operates in real time; any application interacting with it must therefore also work in real time. In this paper we introduce a representation of the temporal behavior of distributed real-time applications as colored graphs that capture the timing of temporally continuous sections of execution and the dependencies between them, forming a partial order. We then introduce a method for extracting such graphs from existing applications using a combination of analysis techniques. Once a graph has been created, we apply a number of graph transformations that extract "meaning" from it. The knowledge thus gained can be used for scheduling, for adjusting the level of parallelism to the specific hardware, and for identifying hot spots, false parallelism, or candidates for additional concurrency. The significance of these contributions is that such graphs can be sequentialized to our partiture model and then used as input for offline, online, or even distributed real-time scheduling. Finally, we present results from the analysis of a complete TCP/IP stack, in addition to smaller test applications, showing that our use of different analysis models reduces the complexity of the resulting graphs. An important outcome is that increasing the expression of concurrency can reduce the level of parallelism required, saving memory on deeply embedded platforms, while keeping the program parallelizable whenever complete serializability is not required.
We also show that applications previously considered too complex for characterization of their worst-case behavior become analyzable through the combination of analysis techniques we employ.
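To make the graph representation concrete, the following is a minimal, hypothetical sketch of a timing graph whose nodes are temporally continuous sections of execution and whose edges are dependencies forming a partial order; a topological sort then yields one valid sequentialization. All names (`Section`, `TimingGraph`, `wcet_us`) and the structure are illustrative assumptions, not the paper's actual model:

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Section:
    """A temporally continuous section of execution."""
    name: str
    wcet_us: int  # assumed worst-case execution time, microseconds

@dataclass
class TimingGraph:
    # adjacency map: section -> set of dependent (successor) sections
    edges: dict = field(default_factory=dict)

    def add_dependency(self, before: Section, after: Section) -> None:
        self.edges.setdefault(before, set()).add(after)
        self.edges.setdefault(after, set())

    def sequentialize(self) -> list:
        """Kahn's algorithm: one linearization of the partial order."""
        indegree = {n: 0 for n in self.edges}
        for succs in self.edges.values():
            for s in succs:
                indegree[s] += 1
        ready = [n for n, d in indegree.items() if d == 0]
        order = []
        while ready:
            n = ready.pop()
            order.append(n)
            for s in self.edges[n]:
                indegree[s] -= 1
                if indegree[s] == 0:
                    ready.append(s)
        if len(order) != len(self.edges):
            raise ValueError("dependency cycle: not a partial order")
        return order

g = TimingGraph()
read, filt, act = Section("read", 50), Section("filter", 120), Section("actuate", 30)
g.add_dependency(read, filt)
g.add_dependency(filt, act)
print([s.name for s in g.sequentialize()])  # ['read', 'filter', 'actuate']
```

Independent sections (those unordered by the dependency relation) could run in parallel or be serialized, which is what allows the degree of parallelism to be adjusted to the target hardware.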