Measuring what matters: A scalable framework for application-level quantum benchmarking

IonQ's benchmark white paper introduces a structured, application-centric framework for evaluating quantum computing systems.

The framework covers 13 benchmarks across optimization, quantum chemistry, machine learning, data loading, simulation, and foundational algorithms. Results are reported on IonQ hardware, but the framework is built to support evaluation across any quantum system using metrics that connect directly to the value of the solution obtained.

Inspired by MLPerf, the established standard for AI benchmarking, the framework uses two benchmark categories. Closed benchmarks fix the implementation so that cross-platform comparison is a fair test of the system, not the algorithm. Open benchmarks fix the success criterion and permit algorithmic innovation, allowing teams to demonstrate advances without disclosing proprietary methods.

The primary metrics are solution quality and Time-to-Solution (TTS). TTS measures total wall time from job submission to a result that meets a predefined quality threshold, covering the full pipeline from pre-processing through post-processing. Energy-to-Solution (ETS) and Cost-to-Solution (CTS) are defined within the framework as forward-looking metrics to be incorporated in future releases.
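The TTS definition above can be made concrete with a short sketch. This is an illustrative model only, not code from the benchmark repository: the stage names, function name, and threshold values are assumptions chosen to mirror the description of TTS as total wall time across the full pipeline, defined only for runs that meet the quality threshold.

```python
from dataclasses import dataclass

@dataclass
class StageTiming:
    """Wall time (seconds) for one stage of the benchmark pipeline.
    Stage names are illustrative, not taken from the white paper's code."""
    name: str
    seconds: float

def time_to_solution(stages, solution_quality, quality_threshold):
    """Return total wall time if the run meets the quality threshold, else None.

    TTS counts the full pipeline, pre-processing through post-processing,
    and is only defined for runs whose result clears the threshold.
    """
    if solution_quality < quality_threshold:
        return None  # run did not reach the "solved" criterion; TTS is undefined
    return sum(stage.seconds for stage in stages)

# A hypothetical run: pre-processing, queued execution, post-processing.
run = [
    StageTiming("pre-processing", 1.2),
    StageTiming("queue + execution", 40.0),
    StageTiming("post-processing", 0.8),
]
tts = time_to_solution(run, solution_quality=0.97, quality_threshold=0.95)
```

The point of the sketch is that TTS is an end-to-end quantity: a fast circuit execution behind a slow compiler or a long queue still yields a large TTS, and a run that misses the quality threshold yields no TTS at all.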

Key highlights:

  • Application-level measurement: Benchmarks evaluate the full system stack across workloads relevant to finance, pharmaceuticals, materials science, defense, and more. Component-level specifications describe individual parts; these benchmarks measure the complete system.
  • Independently validated: Comparative results across IonQ and non-IonQ systems were independently validated by Kearney.
  • Reproducible and open: All benchmark code is publicly available in Qiskit at github.com/ionq-publications/apps-benchmark, enabling independent verification by any partner, customer, or third party.
  • Built to be honest: The framework publishes a solved criterion for VQE that IonQ's own hardware has not yet reached. Results are presented as a function of problem size and circuit depth so the noise regime of each system is visible, not averaged away.

Read the full white paper for detailed benchmark descriptions, methodology, and system comparisons across all 13 benchmarks.

As quantum computing systems continue to mature, there is an increasing need for benchmarking methodologies that capture performance in terms of meaningful, application-level metrics. In this work, we present a scalable framework for application-level quantum benchmarking that is designed to support internal system evaluation and cross-platform comparison across technology providers.
