DeepStore: In-storage acceleration for intelligent queries

Vikram Sharma Mailthody, Zaid Qureshi, Weixin Liang, Ziyan Feng, Simon Garcia De Gonzalo, Youjie Li, Hubertus Franke, Jinjun Xiong, Jian Huang, Wen Mei Hwu

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

Recent advancements in deep learning techniques facilitate intelligent query support in diverse applications, such as content-based image retrieval and audio texturing. Unlike conventional key-based queries, these intelligent queries lack efficient indexing and require complex compute operations for feature matching. To achieve high-performance intelligent querying against massive datasets, modern computing systems employ GPUs in conjunction with solid-state drives (SSDs) for fast data access and parallel data processing. However, our characterization with various intelligent-query workloads developed with deep neural networks (DNNs) shows that the storage I/O bandwidth is still the major bottleneck, contributing 56%-90% of the query execution time. To this end, we present DeepStore, an in-storage accelerator architecture for intelligent queries. It consists of (1) energy-efficient in-storage accelerators designed specifically for supporting DNN-based intelligent queries under the resource constraints of modern SSD controllers; (2) a similarity-based in-storage query cache that exploits the temporal locality of user queries for further performance improvement; and (3) a lightweight in-storage runtime system working as the query engine, which provides a simple software abstraction to support different types of intelligent queries. DeepStore exploits SSD parallelism with design space exploration to achieve maximal energy efficiency for the in-storage accelerators. We validate the DeepStore design with an SSD simulator and evaluate it with a variety of vision, text, and audio based intelligent queries. Compared with the state-of-the-art GPU+SSD approach, DeepStore improves query performance by up to 17.7x and energy efficiency by up to 78.6x.
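The similarity-based query cache described in the abstract can be illustrated with a toy sketch: instead of requiring exact key matches, a lookup hits whenever a previously seen query's feature vector is similar enough to the new one. This is a minimal illustration only, not the paper's implementation; the class name, cosine-similarity metric, threshold value, and FIFO eviction policy are all assumptions chosen for brevity.

```python
import math

class SimilarityQueryCache:
    """Toy cache keyed by query feature vectors: a lookup hits when a
    stored query's vector is similar enough (cosine similarity at or
    above the threshold) to the incoming query's vector."""

    def __init__(self, threshold=0.95, capacity=128):
        self.threshold = threshold
        self.capacity = capacity
        self.entries = []  # list of (feature_vector, result), oldest first

    @staticmethod
    def _cosine(a, b):
        # Cosine similarity between two equal-length vectors.
        dot = sum(x * y for x, y in zip(a, b))
        na = math.sqrt(sum(x * x for x in a))
        nb = math.sqrt(sum(x * x for x in b))
        return dot / (na * nb) if na and nb else 0.0

    def lookup(self, query_vec):
        # Return the result of the most similar cached query that
        # clears the threshold; None means a cache miss.
        best_result, best_sim = None, self.threshold
        for vec, result in self.entries:
            sim = self._cosine(query_vec, vec)
            if sim >= best_sim:
                best_result, best_sim = result, sim
        return best_result

    def insert(self, query_vec, result):
        # Evict the oldest entry when full (FIFO, for simplicity).
        if len(self.entries) >= self.capacity:
            self.entries.pop(0)
        self.entries.append((query_vec, result))
```

A near-duplicate query (e.g. a slightly perturbed feature vector of an earlier image query) then returns the cached result without re-running DNN feature matching against the full dataset, which is the temporal-locality effect the abstract describes.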

Original language: English (US)
Title of host publication: MICRO 2019 - 52nd Annual IEEE/ACM International Symposium on Microarchitecture, Proceedings
Publisher: IEEE Computer Society
Pages: 224-238
Number of pages: 15
ISBN (Electronic): 9781450369381
DOIs: https://doi.org/10.1145/3352460.3358320
State: Published - Oct 12 2019
Event: 52nd Annual IEEE/ACM International Symposium on Microarchitecture, MICRO 2019 - Columbus, United States
Duration: Oct 12 2019 - Oct 16 2019

Publication series

Name: Proceedings of the Annual International Symposium on Microarchitecture, MICRO
ISSN (Print): 1072-4451

Conference

Conference: 52nd Annual IEEE/ACM International Symposium on Microarchitecture, MICRO 2019
Country: United States
City: Columbus
Period: 10/12/19 - 10/16/19

Keywords

  • Hardware accelerators
  • In-storage computing
  • Information retrieval
  • Intelligent query
  • Solid-state drive

ASJC Scopus subject areas

  • Hardware and Architecture

Cite this

Mailthody, V. S., Qureshi, Z., Liang, W., Feng, Z., Gonzalo, S. G. D., Li, Y., Franke, H., Xiong, J., Huang, J., & Hwu, W. M. (2019). DeepStore: In-storage acceleration for intelligent queries. In MICRO 2019 - 52nd Annual IEEE/ACM International Symposium on Microarchitecture, Proceedings (pp. 224-238). (Proceedings of the Annual International Symposium on Microarchitecture, MICRO). IEEE Computer Society. https://doi.org/10.1145/3352460.3358320