Widely used benchmarks, such as High Performance Linpack (HPL), do not always provide direct insight into the actual application performance of systems, and there are criticisms that the performance of simplified benchmarks such as HPL no longer strongly correlates with real application performance. In contrast, performance evaluations based on real or mini applications can give a direct estimation of application performance. The Sustained System Performance (SSP) metric, which evaluates systems based on the performance at scale of various applications, has been successfully adopted for procuring systems at the National Energy Research Scientific Computing Center (NERSC), the National Center for Supercomputing Applications (NCSA), and other facilities. However, significant effort is required to tune and optimize several mini applications for each system. In this paper, we propose a new performance metric, the Simplified Sustained System Performance (SSSP) metric, based on a suite of simple benchmarks, which enables performance projection onto a suite of mini applications. While the SSP metric is calculated over a set of applications, the SSSP metric applies the same methodology to a set of benchmarks. Weighting factors for the benchmarks are introduced so that the SSSP metric approximates the original SSP metric more accurately. To determine the weighting factors, we employ a simple learning algorithm. Our preliminary experiments show that even though our metric remains easy to measure, because it is based on a combination of simple benchmarks, it can provide projections of application performance.
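The weighting scheme and learning step mentioned above could be sketched as follows. This is an illustrative assumption, not the paper's exact formulation: here the SSSP score is modeled as a weighted geometric mean of per-benchmark speedups, and the weights are fitted by plain gradient descent on the squared error between the weighted sum of log benchmark speedups and a known log SSP-style application score for each training system. The function names (`sssp_score`, `fit_weights`) and the choice of geometric-mean aggregation are hypothetical.

```python
import math

def sssp_score(speedups, weights):
    # Assumed model (not the paper's exact formula): SSSP as a
    # weighted geometric mean, exp(sum_i w_i * log(speedup_i)).
    return math.exp(sum(w * math.log(s) for w, s in zip(weights, speedups)))

def fit_weights(bench_logs, target_logs, steps=5000, lr=0.01):
    """Fit benchmark weights by gradient descent.

    bench_logs  -- per-system lists of log benchmark speedups
    target_logs -- per-system log SSP-style application scores
    """
    m = len(bench_logs)
    n = len(bench_logs[0])
    w = [1.0 / n] * n  # start from uniform weights
    for _ in range(steps):
        grad = [0.0] * n
        for x, y in zip(bench_logs, target_logs):
            # residual of the linear model in log space
            err = sum(wi * xi for wi, xi in zip(w, x)) - y
            for i in range(n):
                grad[i] += 2.0 * err * x[i]
        w = [wi - lr * gi / m for wi, gi in zip(w, grad)]
    return w

# Synthetic example: four training systems, two benchmarks, where the
# true application score happens to weight the benchmarks 0.7 / 0.3.
bench_logs = [[0.0, 0.0], [1.0, 0.5], [2.0, 1.0], [0.5, 2.0]]
target_logs = [0.7 * x0 + 0.3 * x1 for x0, x1 in bench_logs]
weights = fit_weights(bench_logs, target_logs)
```

Because the fit is done in log space, the problem reduces to ordinary linear least squares, for which gradient descent on this convex objective recovers the underlying weights when the training data are consistent.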