🌐
Google
quantumai.google › cirq › cross-entropy benchmarking theory
Cross-Entropy Benchmarking Theory | Cirq | Google Quantum AI
Cross-Entropy Benchmarking (XEB) requires sampled bitstrings from the device being benchmarked as well as the true probabilities from a noiseless simulation.

benchmarking protocol to show quantum supremacy, which runs a random 𝑛-qubit quantum circuit many times to collect samples 𝑥ᵢ; the quantity 2ⁿ⟨𝑃(𝑥ᵢ)⟩−1, where 𝑃(𝑥ᵢ) is the ideal probability of the bitstring 𝑥ᵢ, approaches 1 for an ideal quantum computer and 0 for uniform random sampling

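To make the estimator in these snippets concrete, here is a minimal sketch (mine, not taken from any of the results below; the function and variable names are illustrative) of computing the linear XEB fidelity 2ⁿ⟨𝑃(𝑥ᵢ)⟩−1 from device samples and ideal probabilities obtained by noiseless simulation:

```python
import numpy as np

def linear_xeb_fidelity(ideal_probs, sampled_bitstrings, n_qubits):
    """Estimate the linear XEB fidelity 2^n * <P(x_i)> - 1.

    ideal_probs: length-2^n array of bitstring probabilities from a
        noiseless classical simulation of the circuit.
    sampled_bitstrings: iterable of measured bitstrings, each encoded
        as an integer index into ideal_probs.
    """
    sampled_probs = np.asarray([ideal_probs[x] for x in sampled_bitstrings])
    return 2**n_qubits * sampled_probs.mean() - 1

# Toy usage: 3 qubits, a Porter-Thomas-like "ideal" distribution,
# and samples drawn uniformly at random (a maximally noisy device).
rng = np.random.default_rng(0)
n = 3
ideal = rng.exponential(size=2**n)
ideal /= ideal.sum()
uniform_samples = rng.integers(0, 2**n, size=100_000)
print(linear_xeb_fidelity(ideal, uniform_samples, n))  # close to 0
```

Sampling the bitstrings from `ideal` itself instead of uniformly would push the estimate toward its noiseless value.
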
Cross-entropy benchmarking (also referred to as XEB) is a quantum benchmarking protocol which can be used to demonstrate quantum supremacy. In XEB, a random quantum circuit is executed on a quantum computer … Wikipedia
🌐
Wikipedia
en.wikipedia.org › wiki › Cross-entropy_benchmarking
Cross-entropy benchmarking - Wikipedia
September 27, 2025 - Cross-entropy benchmarking (also referred to as XEB) is a quantum benchmarking protocol which can be used to demonstrate quantum supremacy.
🌐
arXiv
arxiv.org › pdf › 2206.08293
Linear Cross Entropy Benchmarking with Clifford Circuits
been developed, most notably linear cross-entropy benchmarking (linear XEB). Linear XEB was originally proposed for the “quantum supremacy” experiment [1], where it was used to characterize increasingly larger quantum circuits so as to extrapolate the error of the 20-cycle Sycamore circuit.
🌐
arXiv
arxiv.org › abs › 2005.02421
[2005.02421] Spoofing Linear Cross-Entropy Benchmarking in Shallow Quantum Circuits
May 5, 2020 - The linear cross-entropy benchmark (Linear XEB) has been used as a test for procedures simulating quantum circuits. Given a quantum circuit $C$ with $n$ inputs and outputs and purported simulator whose output is distributed according to a ...
🌐
arXiv
arxiv.org › abs › 2206.08293
[2206.08293] Linear Cross Entropy Benchmarking with Clifford Circuits
June 16, 2022 - Linear cross-entropy benchmarking (XEB) has been used extensively for systems with $50$ or more qubits but is fundamentally limited in scale due to the exponentially large computational resources required for classical simulation.
🌐
Stack Exchange
quantumcomputing.stackexchange.com › questions › 8427 › quantum-supremacy-some-questions-on-cross-entropy-benchmarking
experimental realization - Quantum Supremacy: Some questions on cross-entropy benchmarking - Quantum Computing Stack Exchange

After some further consideration I think it's quite clear that the only probability mass function evaluated in the computation of $\mathcal{F}_{\text{XEB}}$ is that of the classically computed ideal distribution, denoted $P(x_i)$ in the main paper.

This leads me to the conclusion that the phrasing of an excerpt from section IV.C of the Supplemental Information (and especially the part underlined in red in the original post) is a bit unfortunate/misleading.

Just because the empirically measured bitstrings are coming from the uniform distribution doesn't mean that $P(x_i)$ is suddenly $2^{-n}$ for all $x_i$. $P(x_i)$, as it goes into the calculation of $\mathcal{F}_{\text{XEB}}$, is still the probability of sampling bitstring $x_i$ from the classically computed ideal distribution. This is in general not $2^{-n}$.

The correct reasoning is that the fact that $\langle P(x_i) \rangle$ will be $2^{-n}$ (and $\mathcal{F}_{\text{XEB}} = 0$) when bitstrings are sampled from the uniform distribution follows from the definitions of expectation and probability mass function.

The definition of expected value gives the following sum: $$\langle P(x_i) \rangle = \sum_x p(x)\, P(x),$$ where $P(x)$ is the probability of bitstring $x$ being sampled from the classically computed ideal quantum circuit, $p(x)$ is the probability of $x$ being sampled from the non-ideal empirical distribution, and the sum runs over all $2^n$ possible bitstrings.

When bitstrings are coming from the uniform distribution, $p(x)$ is always $2^{-n}$ and can be broken out of the sum: $$\langle P(x_i) \rangle = 2^{-n} \sum_x P(x).$$ When you sum any probability mass function (of which $P(x)$ is one example) over all possible outcomes you by definition get 1, and thus $\langle P(x_i) \rangle = 2^{-n}$, which gives $\mathcal{F}_{\text{XEB}} = 2^n \cdot 2^{-n} - 1 = 0$.

Answer from Björn Smedman on quantumcomputing.stackexchange.com
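A quick numerical restatement of that argument (my own sketch, not part of the answer; all names are illustrative): compute the expectation $\langle P(x_i) \rangle$ as the exact sum $\sum_x p(x) P(x)$ and confirm that it collapses to $2^{-n}$ whenever $p$ is uniform, regardless of what the ideal distribution $P$ looks like.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 4          # number of qubits (illustrative)
dim = 2**n

# An arbitrary "classically computed ideal distribution" P(x).
# Any probability mass function works for this check.
P = rng.exponential(size=dim)
P /= P.sum()

# Empirical distribution p(x): uniform over all bitstrings.
p = np.full(dim, 1 / dim)

# <P(x_i)> = sum_x p(x) P(x); with p uniform this is 2^-n * sum_x P(x) = 2^-n.
expectation = np.sum(p * P)
f_xeb = dim * expectation - 1

print(np.isclose(expectation, 1 / dim))  # True: <P(x_i)> = 2^-n
print(np.isclose(f_xeb, 0.0))            # True: F_XEB = 0 for uniform sampling
```
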
🌐
arXiv
arxiv.org › abs › 2305.04954
[2305.04954] A sharp phase transition in linear cross-entropy benchmarking
May 8, 2023 - Demonstrations of quantum computational advantage and benchmarks of quantum processors via quantum random circuit sampling are based on evaluating the linear cross-entropy benchmark (XEB). A key question in the theory of XEB is whether it approximates the fidelity of the quantum state preparation.
🌐
arXiv
arxiv.org › abs › 1910.12085
[1910.12085] On the Classical Hardness of Spoofing Linear Cross-Entropy Benchmarking
February 6, 2020 - Abstract:Recently, Google announced the first demonstration of quantum computational supremacy with a programmable superconducting processor. Their demonstration is based on collecting samples from the output distribution of a noisy random quantum circuit, then applying a statistical test to those samples called Linear Cross-Entropy Benchmarking (Linear XEB).
🌐
Dagstuhl
drops.dagstuhl.de › opus › volltexte › 2021 › 13569 › pdf › LIPIcs-ITCS-2021-30.pdf
Spoofing Linear Cross-Entropy Benchmarking in Shallow Quantum ...
XEB). The computational hardness assumption underlying the experiment is that no efficient classical algorithm can achieve a similar score. In this paper we investigate this assumption, giving a new classical algorithm for “spoofing” this benchmark in certain regimes.
🌐
Stack Exchange
quantumcomputing.stackexchange.com › questions › 31507 › is-there-something-wrong-with-cross-entropy-benchmarking-or-is-it-still-conside
classical computing - Is there something wrong with cross-entropy benchmarking, or is it still considered as a reasonable path towards quantum supremacy? - Quantum Computing Stack Exchange

I think that the original rationale for using the linear cross-entropy (XEB) score as a metric to claim quantum computational supremacy was valid, but we may now be at a point where the continued use of linear XEB for random circuit sampling on transmon qubit architectures to score and claim quantum advantage is not as justified as it was in 2019, at least for two reasons:

  1. It was known from the beginning that classical verification of cross-entropy scores scales exponentially with the number of qubits. This is as true today as it was in 2019. We still have no way to efficiently verify a set of strings generated by a quantum computer (or, for that matter, by the algorithms of Aharonov et al.). But, with about 60 or so qubits, this was hoped and expected to be in the goldilocks zone of being neither too hard to verify nor too easy to spoof classically. If we use many more qubits than that (say, more than 100), we cannot even classically calculate the linear XEB score.

  2. Much of the work you mentioned, for example IBM's initial response, required exponential resources not just to verify the samples but even to generate them - whereas a quantum computer (even with a dilution refrigerator) uses exponentially fewer resources to generate them. But what Aharonov et al. showed was that a classical computer can generate noisy samples from random circuit sampling with resources that grow only polynomially - even though it still takes exponential resources to verify the samples and calculate the score.

There might be a handful of remaining loopholes to consider - for example, if we could keep the depth of our RCS circuits constant, the Aharonov et al. result might not carry through. I also don't know the implications of the recent work for Boson Sampling experiments.

Another frustration is that without cross-entropy benchmarking we don't have a good answer to the question of what other ways we have to prove that we've gone beyond classical computational resources in the NISQ era. Shor's algorithm is out, as it requires error correction. Some neat approaches of Kahanamoku-Meyer et al. might eventually be viable, but there's perhaps a long way to go.

I also like the new results of Chen et al. on the NISQ complexity class, suggesting that there likely still is an exponential advantage for some carefully chosen problems even in the presence of noise - but instantiating these problems seems a bit tough for now. For example, the Bernstein-Vazirani problem requires $O(1)$ quantum queries but $O(n)$ classical queries (using perfect qubits); with NISQ devices this becomes $O(\log n)$ queries - still an exponential separation.

Answer from Mark Spinelli on quantumcomputing.stackexchange.com
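To make the Bernstein-Vazirani point above concrete, here is a minimal sketch (mine, not the answerer's), written with Cirq since it appears in the first result: a single quantum oracle query recovers the whole hidden $n$-bit string $s$, whereas a classical algorithm needs one query per bit.

```python
import cirq

def bernstein_vazirani(secret: str) -> cirq.Circuit:
    """Circuit that recovers the hidden string s with one oracle query."""
    n = len(secret)
    qubits = cirq.LineQubit.range(n)
    ancilla = cirq.LineQubit(n)
    circuit = cirq.Circuit()
    # Prepare the ancilla in |-> and the inputs in uniform superposition.
    circuit.append([cirq.X(ancilla), cirq.H(ancilla)])
    circuit.append(cirq.H.on_each(*qubits))
    # Oracle for f(x) = s.x mod 2: one CNOT per 1-bit of s (a single query).
    for q, bit in zip(qubits, secret):
        if bit == "1":
            circuit.append(cirq.CNOT(q, ancilla))
    # Interfere and measure: the input register now holds s deterministically.
    circuit.append(cirq.H.on_each(*qubits))
    circuit.append(cirq.measure(*qubits, key="s"))
    return circuit

result = cirq.Simulator().run(bernstein_vazirani("1011"), repetitions=5)
print(result.measurements["s"])  # every row reads [1, 0, 1, 1]
```
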
🌐
American Physical Society
link.aps.org › doi › 10.1103 › PhysRevA.108.052613
Phys. Rev. A 108, 052613 (2023) - Linear cross-entropy benchmarking ...
November 20, 2023 - With the advent of quantum processors exceeding 100 qubits and the high engineering complexities involved, there is a need for holistically benchmarking the processor to have quality assurance. Linear cross-entropy benchmarking (XEB) has been used extensively for systems with 50 or more qubits but ...
🌐
American Physical Society
link.aps.org › doi › 10.1103 › PRXQuantum.5.010334
Limitations of Linear Cross-Entropy as a Measure for Quantum ...
February 29, 2024 - Recently, groups at Google and at the University of Science and Technology of China (USTC) announced that they have achieved such quantum computational advantages. The central quantity of interest behind their claims is the linear cross-entropy benchmark (XEB), which has been claimed and used to approximate the fidelity of their quantum experiments and to certify the correctness of their computation results.
🌐
arXiv
arxiv.org › abs › 2405.00789
[2405.00789] Classically Spoofing System Linear Cross Entropy Score Benchmarking
May 1, 2024 - A notable first claim by Google Quantum AI revolves around a metric called the Linear Cross Entropy Benchmarking (Linear XEB), which has been used in multiple quantum supremacy experiments since.
🌐
Theoryofcomputing
theoryofcomputing.org › articles › v016a011
On the Classical Hardness of Spoofing Linear Cross-Entropy Benchmarking: Theory of Computing: An Open Access Electronic Journal in Theoretical Computer Science
November 2, 2020 - Recently, Google announced the first demonstration of quantum computational supremacy with a programmable superconducting processor. Their demonstration is based on collecting samples from the output distribution of a noisy random quantum circuit, then applying a statistical test to those samples called Linear Cross-Entropy Benchmarking (Linear XEB).
🌐
arXiv
arxiv.org › html › 2502.09015v1
Generalized Cross-Entropy Benchmarking for Random Circuits with Ergodicity
February 13, 2025 - In particular, our framework recovers Google’s result on estimating the circuit fidelity via linear cross-entropy benchmarking (XEB), and we give rigorous criteria on the noise model characterizing when such estimation is valid, and thus contributes to the research of technical aspects of XEB [16, 17, 62, 56, 57, 58, 59].