
Exploring Cache Size and Core Count Tradeoffs in Systems with Reduced Memory Access Latency

  • Paulo C. Santos
  • Marco A. Z. Alves
  • Matthias Diener
  • Luigi Carro
  • Philippe O. A. Navaux

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

One of the main challenges for computer architects is hiding the high average memory access latency from the processor. In this context, Hybrid Memory Cubes (HMCs) can provide substantial energy and bandwidth improvements compared to traditional memory organizations. However, it is not clear how this reduced average memory access latency will impact the Last-Level Cache (LLC). For applications with high cache miss ratios, the latency of searching for data inside the cache memory negatively impacts performance, and the significance of this overhead depends on the memory access latency. In this paper, we evaluate the importance of the L3 cache in a high-performance processor using HMC, and explore chip-area tradeoffs between cache size and the number of processor cores. We show that the high bandwidth provided by HMC memories can eliminate the need for L3 caches, removing hardware and making room for more processing power. Our evaluations show that, compared to DDR3 memories, performance increases by 37% and EDP improves by 12% across a wide range of parallel applications while maintaining the original chip area.

Original language: English (US)
Title of host publication: Proceedings - 24th Euromicro International Conference on Parallel, Distributed, and Network-Based Processing, PDP 2016
Editors: Yiannis Cotronis, Masoud Daneshtalab, George Angelos Papadopoulos
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 388-392
Number of pages: 5
ISBN (Electronic): 9781467387750
DOIs
State: Published - Mar 31 2016
Externally published: Yes
Event: 24th Euromicro International Conference on Parallel, Distributed, and Network-Based Processing, PDP 2016 - Heraklion, Crete, Greece
Duration: Feb 17 2016 - Feb 19 2016

Publication series

Name: Proceedings - 24th Euromicro International Conference on Parallel, Distributed, and Network-Based Processing, PDP 2016

Conference

Conference: 24th Euromicro International Conference on Parallel, Distributed, and Network-Based Processing, PDP 2016
Country/Territory: Greece
City: Heraklion, Crete
Period: 2/17/16 - 2/19/16

Keywords

  • Cache memories
  • chip area tradeoff
  • HMC

ASJC Scopus subject areas

  • Computer Networks and Communications
  • Hardware and Architecture
  • Software
  • Control and Optimization
