Dramatic increases in the main-memory sizes of computers are allowing some applications to shift their primary data storage from disk to main memory and, as a result, to improve their performance. This trend is at work in some database systems, resulting in what are called memory-resident databases. However, because of the widening gap between processor and main-memory speeds, effective use of the cache hierarchy is crucial to high performance in these systems. Unfortunately, there has been relatively little work on building cache-friendly database systems. In this paper, we present several cache-oriented optimizations that enable effective exploitation of caches in memory-resident decision-support databases. The main optimization is a query optimizer that includes the cost of cache misses in its cost metrics; the others are sophisticated data blocking and software prefetching. These optimizations require no custom-designed hardware support and are effective for the more complicated TPC-D queries. In a simple database, these queries run about 13% faster with the cache-oriented optimizer and blocking, and a total of 31% faster when prefetching is added as well. The effectiveness of these optimizations is stable across a range of cache sizes, cache line sizes, and miss penalties.
Original language: English (US)
Number of pages: 9
Journal: Proceedings - IEEE International Conference on Computer Design: VLSI in Computers and Processors
State: Published - 1999
ASJC Scopus subject areas
- Hardware and Architecture
- Electrical and Electronic Engineering