Abstract
Demonstrating quantum advantage requires experimental implementation of a computational task that is hard to achieve using state-of-the-art classical systems. One approach is to perform sampling from a probability distribution associated with a certain class of highly entangled many-body wave functions. It has been suggested that such a quantum advantage can be certified with the linear cross-entropy benchmark (XEB). We critically examine this notion. First, we consider a “benign” setting, where an honest implementation of a noisy quantum circuit is assumed, and characterize the conditions under which the XEB approximates the fidelity of quantum dynamics. Second, we assume an “adversarial” setting, where all possible classical algorithms are considered for comparisons, and show that achieving relatively high XEB values does not imply faithful simulation of quantum dynamics. Specifically, we present an efficient classical algorithm that achieves high XEB values, namely 5–12% of those obtained in the state-of-the-art experiments, within just a few seconds using a single GPU machine. This is made possible by identifying and exploiting several vulnerabilities of the XEB, which allow us to achieve high XEB values without simulating a full quantum circuit. Remarkably, our algorithm features better scaling with the system size than a noisy quantum device for commonly studied random circuit ensembles in various architectures. We quantitatively explain the success of our algorithm and the limitations of the XEB by using a theoretical framework, in which the dynamics of the average XEB and fidelity are mapped to classical statistical mechanics models. Using this framework, we illustrate the relation between the XEB and the fidelity for quantum circuits in various architectures, with different choices of gate sets, and in the presence of noise.
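For reference, the linear XEB discussed above is conventionally defined for an n-qubit circuit (this is the standard definition used in the random-circuit-sampling experiments, stated here for the reader's convenience) as

$$
\mathcal{F}_{\mathrm{XEB}} \;=\; 2^{n}\,\mathbb{E}_{x \sim q}\!\left[\,p(x)\,\right] \;-\; 1,
$$

where $p(x)$ is the ideal output probability of bitstring $x$ and $q$ is the distribution actually sampled. Sampling from the uniform distribution yields $\mathcal{F}_{\mathrm{XEB}} = 0$, while sampling exactly from the ideal (Porter-Thomas-like) output distribution of a deep random circuit yields $\mathcal{F}_{\mathrm{XEB}} \approx 1$.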
Taken together, our results demonstrate that the XEB’s utility as a proxy for fidelity hinges on several conditions, which should be independently checked in the benign setting but cannot be assumed in the general adversarial setting. Therefore, the XEB on its own has limited utility as a benchmark for quantum advantage. We briefly discuss potential ways to overcome these limitations.
Received 28 January 2022; Revised 24 July 2023; Accepted 20 November 2023
DOI: https://doi.org/10.1103/PRXQuantum.5.010334
Published by the American Physical Society under the terms of the Creative Commons Attribution 4.0 International license. Further distribution of this work must maintain attribution to the author(s) and the published article's title, journal citation, and DOI.
Popular Summary
Unequivocally demonstrating that a quantum computer can significantly outperform any existing classical computer will be a milestone in quantum science and technology. Recently, groups at Google and at the University of Science and Technology of China (USTC) announced that they have achieved such quantum computational advantages. The central quantity behind their claims is the linear cross-entropy benchmark (XEB), which has been used both to approximate the fidelity of their quantum experiments and to certify the correctness of their computational results. However, such claims rely on several assumptions, some of which are only implicit. Hence, it is critical to understand when and how the XEB can be used in quantum advantage experiments. By combining various tools from computer science, statistical physics, and quantum information, we critically examine the properties of the XEB and show that it bears several intrinsic vulnerabilities, limiting its utility as a benchmark for quantum advantage.
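To make the quantity concrete: the linear XEB is estimated from experimental samples by averaging the ideal circuit's output probabilities over the sampled bitstrings. The following is a minimal sketch of such an estimator (the function name and interface are illustrative, not from the paper):

```python
import numpy as np

def linear_xeb(ideal_probs, samples):
    """Estimate the linear cross-entropy benchmark (XEB).

    ideal_probs: length-2**n array of the ideal circuit's output
                 probabilities, indexed by bitstring-as-integer.
    samples:     sampled bitstrings, as integers in [0, 2**n).

    Returns 2**n * mean(ideal_probs[sample]) - 1, which is 0 for
    uniform sampling and ~1 for ideal sampling from a deep random circuit.
    """
    ideal_probs = np.asarray(ideal_probs, dtype=float)
    dim = ideal_probs.size  # 2**n
    sampled_p = ideal_probs[np.asarray(list(samples))]
    return dim * sampled_p.mean() - 1.0
```

For example, with a uniform ideal distribution over two qubits, any set of samples gives an XEB of exactly 0, while concentrating samples on high-probability bitstrings of a peaked distribution drives the XEB up; spoofing strategies exploit exactly this sensitivity to a few heavy outputs.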
Concretely, we introduce a novel framework to identify and exploit several vulnerabilities of the XEB, leading to an efficient classical algorithm that obtains XEB values comparable to those of Google’s and USTC’s quantum devices (2%–12% of theirs) with just one GPU within 2 s. Furthermore, its performance features better scaling with the system size than that of a noisy quantum device. We observe that this is made possible because the XEB can greatly overestimate the fidelity, which implies the existence of “shortcuts” to achieving high XEB values without simulating the system. This contradicts the intuition that achieving high XEB values should be hard for any classical algorithm.
Furthermore, our framework provides conditions under which these vulnerabilities of the XEB can be overcome, along with potential ways to do so, thus guiding the next generation of quantum advantage experiments. Our results also motivate the design of new figures of merit for benchmarking near-term quantum devices and certifying quantum advantage.