TY - GEN
T1 - Optimizing memory locality using a locality-aware page table
AU - Cruz, Eduardo H.M.
AU - Diener, Matthias
AU - Alves, Marco A.Z.
AU - Pilla, Laércio L.
AU - Navaux, Philippe O.A.
N1 - Publisher Copyright:
© 2014 IEEE.
PY - 2014/12/1
Y1 - 2014/12/1
N2 - One of the main challenges for modern parallel shared-memory architectures is access to main memory. In current systems, the performance and energy efficiency of memory accesses depend on their locality: accesses to remote caches and NUMA nodes are more expensive than accesses to local ones. Increasing locality requires knowledge about how the threads of a parallel application access memory pages. With this information, pages can be migrated to the NUMA nodes that access them (data mapping), and threads that access the same pages can be migrated to the same node so that locality is improved even further (thread mapping). In this paper, we propose LAPT, a mechanism that stores the memory access pattern of parallel applications in the page table and is updated by the hardware during TLB misses. The operating system uses this information to perform an optimized thread and data mapping during the execution of the parallel application. In contrast to previous work, LAPT does not require any prior information about the behavior of the applications, or changes to the applications or runtime libraries. Extensive experiments with the NAS Parallel Benchmarks (NPB) and PARSEC showed performance and energy efficiency improvements of up to 19.2% and 15.7%, respectively (6.7% and 5.3% on average).
AB - One of the main challenges for modern parallel shared-memory architectures is access to main memory. In current systems, the performance and energy efficiency of memory accesses depend on their locality: accesses to remote caches and NUMA nodes are more expensive than accesses to local ones. Increasing locality requires knowledge about how the threads of a parallel application access memory pages. With this information, pages can be migrated to the NUMA nodes that access them (data mapping), and threads that access the same pages can be migrated to the same node so that locality is improved even further (thread mapping). In this paper, we propose LAPT, a mechanism that stores the memory access pattern of parallel applications in the page table and is updated by the hardware during TLB misses. The operating system uses this information to perform an optimized thread and data mapping during the execution of the parallel application. In contrast to previous work, LAPT does not require any prior information about the behavior of the applications, or changes to the applications or runtime libraries. Extensive experiments with the NAS Parallel Benchmarks (NPB) and PARSEC showed performance and energy efficiency improvements of up to 19.2% and 15.7%, respectively (6.7% and 5.3% on average).
UR - http://www.scopus.com/inward/record.url?scp=84919430331&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=84919430331&partnerID=8YFLogxK
U2 - 10.1109/SBAC-PAD.2014.22
DO - 10.1109/SBAC-PAD.2014.22
M3 - Conference contribution
AN - SCOPUS:84919430331
T3 - Proceedings - Symposium on Computer Architecture and High Performance Computing
SP - 198
EP - 205
BT - Proceedings - IEEE 26th International Symposium on Computer Architecture and High Performance Computing, SBAC-PAD 2014
PB - IEEE Computer Society
T2 - 26th International Symposium on Computer Architecture and High Performance Computing, SBAC-PAD 2014
Y2 - 22 October 2014 through 24 October 2014
ER -