Abstract
The performance and energy efficiency of modern architectures depend on memory locality, which can be improved by thread and data mappings that take the memory access behavior of parallel applications into account. In this article, we propose intense pages mapping, a mechanism that analyzes the memory access behavior using information about how long each page's entry resides in the translation lookaside buffer (TLB). It provides accurate information with a very low overhead. We present experimental results on a simulator and on real machines, showing average performance improvements of 13.7% and energy savings of 4.4%, which stem from reductions in cache misses and interconnection traffic.
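The intensity estimation itself requires OS and hardware cooperation and is not part of this abstract; purely as an illustrative sketch, the C fragment below shows what the final data-mapping step could look like once per-page, per-node intensity estimates are available. It assumes Linux's move_pages(2) from libnuma; the NUM_NODES constant, the intensity array, and both helper functions are hypothetical names introduced here for illustration only.

```c
/*
 * Illustrative sketch only: migrate a page to the NUMA node whose threads
 * accessed it most intensely, given precomputed per-node intensity values
 * (e.g., derived from TLB residence times). Build on Linux with:
 *   gcc -O2 sketch.c -lnuma
 */
#include <numaif.h>   /* move_pages(), MPOL_MF_MOVE */
#include <stdio.h>

#define NUM_NODES 4   /* assumed NUMA node count for this sketch */

/* Pick the node with the highest intensity estimate for this page. */
static int preferred_node(const long intensity[NUM_NODES]) {
    int best = 0;
    for (int n = 1; n < NUM_NODES; n++)
        if (intensity[n] > intensity[best])
            best = n;
    return best;
}

/* Migrate one page (page_addr must be page-aligned) of the calling
 * process to its preferred node. Returns the resulting node number,
 * or a negative value on failure. */
static int map_page(void *page_addr, const long intensity[NUM_NODES]) {
    void *pages[1]  = { page_addr };
    int   nodes[1]  = { preferred_node(intensity) };
    int   status[1] = { 0 };

    /* move_pages(2): pid 0 means the calling process. */
    if (move_pages(0, 1, pages, nodes, status, MPOL_MF_MOVE) != 0) {
        perror("move_pages");
        return -1;
    }
    return status[0]; /* node the page now resides on, or -errno */
}
```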
| Original language | English (US) |
| --- | --- |
| Article number | 28 |
| Journal | ACM Transactions on Architecture and Code Optimization |
| Volume | 13 |
| Issue number | 3 |
| DOIs | |
| State | Published - Sep 2016 |
| Externally published | Yes |
Keywords
- Cache memory
- Communication
- Data mapping
- Data sharing
- NUMA
- Thread mapping
ASJC Scopus subject areas
- Software
- Information Systems
- Hardware and Architecture