An adaptive performance modeling tool for GPU architectures

This paper presents an analytical model to predict the performance of general-purpose applications on a GPU architecture. The model is designed to provide performance information to an auto-tuning compiler and assist it in narrowing down the search to the more promising implementations. It can also be incorporated into a tool to help programmers better assess the performance bottlenecks in their code. We analyze each GPU kernel and identify how the kernel exercises major GPU microarchitecture features. To identify the performance bottlenecks accurately, we introduce an abstract interpretation of a GPU kernel, the work flow graph, based on which we estimate the execution time of a GPU kernel. We validated our performance model on NVIDIA GPUs using CUDA (Compute Unified Device Architecture). For this purpose, we used data-parallel benchmarks that stress different GPU microarchitecture events such as uncoalesced memory accesses, scratch-pad memory bank conflicts, and control flow divergence, which must be modeled accurately but pose challenges to analytical performance models. The proposed model captures full system complexity and shows high accuracy in predicting the performance trends of different optimized kernel implementations. We also describe our approach to extracting the performance model automatically from the kernel code.
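To make the abstract's notion of modeling a microarchitecture event concrete, the sketch below shows a toy first-order estimate of memory transactions for coalesced versus uncoalesced global-memory access. This is an illustration only, not the paper's model: the function name, the 32-thread warp, 4-byte elements, and 128-byte memory segments are assumptions chosen to mirror common NVIDIA GPU parameters.

```python
# Hypothetical first-order estimate (illustrative, not the paper's model):
# count how many 128-byte memory segments one warp touches when each
# thread reads a single 4-byte element at a given stride.

def transactions_per_warp(stride, warp_size=32, elem_size=4, seg_size=128):
    """Number of memory transactions (distinct segments) for one warp."""
    addresses = [tid * stride * elem_size for tid in range(warp_size)]
    segments = {addr // seg_size for addr in addresses}
    return len(segments)

# Fully coalesced access: all 32 threads fall in one 128-byte segment.
print(transactions_per_warp(stride=1))   # 1
# Worst-case strided access: each thread hits its own segment.
print(transactions_per_warp(stride=32))  # 32
```

An analytical model of this kind lets a tool rank kernel variants (e.g. stride-1 versus strided layouts) without running them on hardware.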

Original language: English (US)
Pages (from-to): 105-114
Number of pages: 10
Journal: ACM SIGPLAN Notices
Issue number: 5
State: Published - May 2010


Keywords
  • Analytical model
  • GPU
  • Parallel programming
  • Performance estimation

ASJC Scopus subject areas

  • Computer Science(all)

