Abstract
Measurement errors are a significant obstacle to achieving scalable quantum computation. To counteract systematic readout errors, researchers have developed postprocessing techniques known as measurement error mitigation methods. However, these methods face a tradeoff between scalability and the guarantee of nonnegative probabilities. In this paper, we present a solution to overcome this challenge. Our approach focuses on iterative Bayesian unfolding, a standard mitigation technique used in high-energy physics experiments, and implements it in a scalable way. We demonstrate our method on experimental Greenberger-Horne-Zeilinger state preparation on up to 127 qubits and on the Bernstein-Vazirani algorithm on up to 26 qubits. Compared to state-of-the-art methods (such as M3), our implementation guarantees valid probability distributions, returns comparable or better-mitigated results, and does so without noticeable time or memory overhead.
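To make the central technique concrete, the following is a minimal textbook sketch of iterative Bayesian unfolding on a full response matrix. It is not the paper's scalable implementation (which avoids constructing the exponentially large matrix); the function name, the uniform prior, and the fixed iteration count are illustrative assumptions. `response[i, j]` is the probability of observing outcome `i` given true outcome `j`.

```python
import numpy as np

def iterative_bayesian_unfolding(response, measured, iterations=100):
    """Textbook iterative Bayesian unfolding (illustrative sketch).

    response : (n, n) array, response[i, j] = P(measure i | true j),
               columns summing to 1.
    measured : (n,) observed probability distribution.

    Returns an estimate of the true distribution that is nonnegative
    and normalized by construction, unlike matrix-inversion methods.
    """
    n = response.shape[1]
    t = np.full(n, 1.0 / n)  # uniform prior over true outcomes
    for _ in range(iterations):
        # Predicted observed distribution under the current estimate.
        pred = response @ t
        # Bayes update: redistribute observed probability mass back
        # to true outcomes in proportion to the current estimate.
        t = t * (response.T @ (measured / pred))
    return t
```

Because each update only multiplies nonnegative quantities and conserves total probability, the output is always a valid distribution; this is the property that matrix-inversion-based mitigation can violate.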
Received 17 March 2023; accepted 1 September 2023
DOI: https://doi.org/10.1103/PhysRevResearch.6.013187
Published by the American Physical Society under the terms of the Creative Commons Attribution 4.0 International license. Further distribution of this work must maintain attribution to the author(s) and the published article's title, journal citation, and DOI.