TY - GEN
T1 - Benefits of cache-affinity scheduling in shared-memory multiprocessors
T2 - 1993 ACM SIGMETRICS Conference on Measurement and Modeling of Computer Systems, SIGMETRICS 1993
AU - Torrellas, Josep
AU - Tucker, Andrew
AU - Gupta, Anoop
N1 - Publisher Copyright:
© 1993 ACM.
PY - 1993/6/1
Y1 - 1993/6/1
N2 - An interesting and common class of workloads for shared-memory multiprocessors is multiprogrammed workloads. Because these workloads generally contain more processes than there are processors in the machine, there are two factors that increase the number of cache misses. First, several processes are forced to time-share the same cache, resulting in one process displacing the cache state previously built up by a second one. Consequently, when the second process runs again, it generates a stream of misses as it rebuilds its cache state. Second, since an idle processor simply selects the highest-priority runnable process, a given process often moves from one CPU to another. This frequent migration results in the process having to continuously reload its state into new caches, producing streams of cache misses. To reduce the number of misses in these workloads, processes should reuse their cached state more. One way to encourage this is to schedule each process based on its affinity to individual caches, that is, based on the amount of state that the process has accumulated in an individual cache. This technique is called cache affinity scheduling.
AB - An interesting and common class of workloads for shared-memory multiprocessors is multiprogrammed workloads. Because these workloads generally contain more processes than there are processors in the machine, there are two factors that increase the number of cache misses. First, several processes are forced to time-share the same cache, resulting in one process displacing the cache state previously built up by a second one. Consequently, when the second process runs again, it generates a stream of misses as it rebuilds its cache state. Second, since an idle processor simply selects the highest-priority runnable process, a given process often moves from one CPU to another. This frequent migration results in the process having to continuously reload its state into new caches, producing streams of cache misses. To reduce the number of misses in these workloads, processes should reuse their cached state more. One way to encourage this is to schedule each process based on its affinity to individual caches, that is, based on the amount of state that the process has accumulated in an individual cache. This technique is called cache affinity scheduling.
UR - http://www.scopus.com/inward/record.url?scp=84890447107&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=84890447107&partnerID=8YFLogxK
U2 - 10.1145/166955.167038
DO - 10.1145/166955.167038
M3 - Conference contribution
AN - SCOPUS:84890447107
T3 - Proceedings of the 1993 ACM SIGMETRICS Conference on Measurement and Modeling of Computer Systems, SIGMETRICS 1993
SP - 272
EP - 274
BT - Proceedings of the 1993 ACM SIGMETRICS Conference on Measurement and Modeling of Computer Systems, SIGMETRICS 1993
PB - Association for Computing Machinery
Y2 - 10 May 1993 through 14 May 1993
ER -