🌐
Google
quantumai.google › cirq › cross-entropy benchmarking theory
Cross-Entropy Benchmarking Theory | Cirq | Google Quantum AI
Cross-Entropy Benchmarking (XEB) requires sampled bitstrings from the device being benchmarked as well as the true probabilities from a noiseless simulation.

benchmarking protocol to show quantum supremacy, which runs a random n-qubit quantum circuit many times with samples xᵢ; then 2ⁿ⟨P(xᵢ)⟩ − 1, where P(xᵢ) is the probability of the bitstring xᵢ, is 1 for an ideal (noiseless) quantum computer

Cross-entropy benchmarking (also referred to as XEB) is a quantum benchmarking protocol which can be used to demonstrate quantum supremacy. In XEB, a random quantum circuit is executed on a quantum computer … Wikipedia
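
Putting the two snippet definitions together, here is a minimal sketch of the estimator (illustrative code of my own, not Cirq's API), assuming you already have the device's sampled bitstrings and the true probabilities from a noiseless simulation:

```python
import numpy as np

def linear_xeb(samples, ideal_probs, n_qubits):
    """Linear XEB estimate: 2^n * <P(x_i)> - 1, averaged over the samples.

    samples: measured bitstrings, as integer indices into ideal_probs.
    ideal_probs: length-2^n output probabilities from a noiseless simulation.
    """
    mean_p = np.asarray(ideal_probs)[np.asarray(samples)].mean()
    return 2**n_qubits * mean_p - 1
```

For samples from an ideal device running a typical random circuit this converges to roughly 1 (Porter-Thomas statistics), while a uniform sampler scores 0, as worked out in the answers below.
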
🌐
Wikipedia
en.wikipedia.org › wiki › Cross-entropy_benchmarking
Cross-entropy benchmarking - Wikipedia
September 27, 2025 - Cross-entropy benchmarking (also referred to as XEB) is a quantum benchmarking protocol which can be used to demonstrate quantum supremacy.
Top answer
1 of 3

After some further consideration I think it's quite clear that the only probability mass function evaluated in the computation of $\mathcal{F}_{\mathrm{XEB}}$ is that of the classically computed ideal distribution, denoted $P(x_i)$ in the main paper.

This leads me to the conclusion that the phrasing of the following excerpt from section IV.C of the Supplemental Information (and especially the part underlined in red) is a bit unfortunate/misleading:

Just because the empirically measured bitstrings are coming from the uniform distribution doesn't mean that $P(x_i)$ is suddenly $1/2^n$ for all $x_i$. $P(x_i)$, as it goes into the calculation of $\mathcal{F}_{\mathrm{XEB}}$, is still the probability of sampling bitstring $x_i$ from the classically computed ideal distribution. This is in general not $1/2^n$.

The correct reasoning is that the fact that $\langle P(x_i)\rangle$ will be $1/2^n$ (and $\mathcal{F}_{\mathrm{XEB}}$ will be $0$) when bitstrings are sampled from the uniform distribution follows from the definitions of expectation and probability mass function:

The definition of expected value is the following sum: $$\langle P(x_i)\rangle = \sum_{x} q(x)\,P(x),$$ where $P(x)$ is the probability of bitstring $x$ being sampled from the classically computed ideal quantum circuit, $q(x)$ is the probability of $x$ being sampled from the non-ideal empirical distribution, and the sum runs over all $2^n$ possible bitstrings.

When bitstrings are coming from the uniform distribution, $q(x)$ will always be $1/2^n$ and so can be broken out of the sum: $$\langle P(x_i)\rangle = \frac{1}{2^n}\sum_x P(x).$$ When you sum any probability mass function (of which $P(x)$ is one example) over all the possible outcomes you by definition get 1, and thus: $$\langle P(x_i)\rangle = \frac{1}{2^n}, \qquad \mathcal{F}_{\mathrm{XEB}} = 2^n\cdot\frac{1}{2^n} - 1 = 0.$$
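
A quick numerical check of this derivation (a sketch; the distribution and sample count are arbitrary choices of mine):

```python
import numpy as np

n = 10
rng = np.random.default_rng(0)

# An arbitrary "classically computed ideal distribution" P(x).
p = rng.random(2**n)
p /= p.sum()

# Sample bitstrings from the uniform distribution, not from P.
samples = rng.integers(0, 2**n, size=200_000)

mean_p = p[samples].mean()      # <P(x_i)> under uniform sampling
print(mean_p, 1 / 2**n)         # both approximately 2^-n
print(2**n * mean_p - 1)        # linear XEB score, approximately 0
```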

2 of 3

That seems to restrict the output probability distributions of all quantum circuits to rather high entropy distributions.

The output of a typical randomly chosen quantum circuit is rather high entropy. That doesn't mean you can't construct circuits that have low entropy outputs (you can), it just means that picking random gates is a bad strategy for achieving that goal.
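
For example (a sketch using Cirq; this trivial circuit is my own illustration, not from the thread): a circuit made only of X gates puts all of the output probability on a single bitstring, so its output entropy is zero.

```python
import cirq

qubits = cirq.LineQubit.range(3)

# Deliberately low-entropy: the output is always 111.
circuit = cirq.Circuit(cirq.X.on_each(*qubits),
                       cirq.measure(*qubits, key='m'))

result = cirq.Simulator().run(circuit, repetitions=20)
print(result.histogram(key='m'))  # Counter({7: 20})
```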

How can $\langle P(x_i)\rangle$ equal $1/2^n$ when the bitstrings are sampled from the uniform distribution?

How could it equal anything else? The probabilities of the target distribution have to add up to one, and you're picking each element $1/2^n$ of the time. For example, if there was a single element with all the probability, you'd score $1 \cdot \frac{1}{2^n} = \frac{1}{2^n}$. You always score $1/2^n$ on average when picking randomly.
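
Spelling that out in the first answer's notation: with uniform sampling the empirical distribution is $q(x) = 2^{-n}$, so $$\langle P(x_i)\rangle = \sum_x q(x)\,P(x) = \frac{1}{2^n}\sum_x P(x) = \frac{1}{2^n},$$ no matter how the target probabilities $P(x)$ are distributed; in the single-element example the lone term $P(x^{*}) = 1$ still gets picked only $2^{-n}$ of the time.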

How can the value of $\mathcal{F}_{\mathrm{XEB}}$ correspond to "the probability that no error has occurred while running the circuit"?

When the paper says "the probability that no error occurs", what it means is: "In the systemwide depolarizing error model, which is a decent approximation to the real physical error model, at least for random circuits, the linear XEB score corresponds to the probability of sampling from the correct distribution instead of the uniform distribution."

Physically, it is obviously not the case that either a single error happens or no error happens. For example, every execution of the circuit is going to have some amount of over-rotation or under-rotation error due to imperfect control. But that's all very complicated. To keep things simple, you can model the performance of the system as if your errors came from simpler models, such as each gate having a probability of introducing a Pauli error, or such as each run either sampling from the correct distribution or from the uniform distribution.
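
A small simulation of that either/or toy model (a sketch; `f_model` and the Porter-Thomas-style distribution below are illustrative choices of mine, not the paper's): sampling from the correct distribution with probability $F$ and from the uniform distribution otherwise makes the measured linear XEB come out close to $F$.

```python
import numpy as np

n, shots, f_model = 8, 500_000, 0.3
rng = np.random.default_rng(1)

# A Porter-Thomas-like ideal distribution: exponentially distributed weights.
p = rng.exponential(size=2**n)
p /= p.sum()

# Each shot: with probability f_model sample from p, otherwise uniformly.
from_ideal = rng.random(shots) < f_model
samples = np.where(from_ideal,
                   rng.choice(2**n, size=shots, p=p),
                   rng.integers(0, 2**n, size=shots))

xeb = 2**n * p[samples].mean() - 1
print(xeb)  # ~0.3: for a Porter-Thomas p, 2^n * sum(p**2) - 1 is ~1,
            # so the linear XEB score estimates f_model itself.
```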

Simplified models actually do a decent job of predicting system performance, particularly on random circuits. For example, consider the way the fidelity decays as the number of qubits and number of layers are increased. The fidelity decay curve from the paper matches what you would predict if every operation had some fixed probability of introducing a Pauli error.
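
For instance, a back-of-the-envelope version of that prediction (a sketch; the per-operation error rates and gate counts below are illustrative numbers of roughly the magnitude reported for Sycamore, not fitted values):

```python
# Predicted fidelity under a simple digital error model: the run is
# "correct" only if no operation introduces a Pauli error.
def predicted_fidelity(n_qubits, n_cycles, e1=0.0016, e2=0.0062, e_m=0.038):
    g1 = n_qubits * n_cycles          # single-qubit gates: one per qubit per cycle
    g2 = (n_qubits // 2) * n_cycles   # two-qubit gates: ~n/2 per cycle
    return (1 - e1) ** g1 * (1 - e2) ** g2 * (1 - e_m) ** n_qubits

for cycles in (2, 6, 10, 14, 20):
    print(cycles, predicted_fidelity(53, cycles))
```

The printed fidelities fall off exponentially in the number of cycles (and shrink as qubits are added), which is the shape of decay curve the answer refers to.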

🌐
arXiv
arxiv.org › pdf › 2206.08293
Linear Cross Entropy Benchmarking with Clifford Circuits
been developed, most notably linear cross-entropy benchmarking (linear XEB). Linear XEB was originally proposed for the "quantum supremacy" experiment [1], where it was used to characterize increasingly larger quantum circuits so as to extrapolate the error of the 20-cycle Sycamore circuit.
🌐
ADS
ui.adsabs.harvard.edu › abs › 2020arXiv200502421B › abstract
Spoofing Linear Cross-Entropy Benchmarking in Shallow Quantum Circuits - ADS
The linear cross-entropy benchmark (Linear XEB) has been used as a test for procedures simulating quantum circuits. Given a quantum circuit $C$ with $n$ inputs and outputs and purported simulator whose output is distributed according to a ...
🌐
arXiv
arxiv.org › abs › 2206.08293
[2206.08293] Linear Cross Entropy Benchmarking with Clifford Circuits
June 16, 2022 - Linear cross-entropy benchmarking (XEB) has been used extensively for systems with $50$ or more qubits but is fundamentally limited in scale due to the exponentially large computational resources required for classical simulation.
🌐
arXiv
arxiv.org › html › 2502.09015v1
Generalized Cross-Entropy Benchmarking for Random Circuits with Ergodicity
February 13, 2025 - In particular, our framework recovers Google's result on estimating the circuit fidelity via linear cross-entropy benchmarking (XEB), and we give rigorous criteria on the noise model characterizing when such estimation is valid, and thus contributes to the research of technical aspects of XEB [16, 17, 62, 56, 57, 58, 59].
🌐
arXiv
arxiv.org › abs › 2005.02421
[2005.02421] Spoofing Linear Cross-Entropy Benchmarking in Shallow Quantum Circuits
May 5, 2020 - The linear cross-entropy benchmark (Linear XEB) has been used as a test for procedures simulating quantum circuits. Given a quantum circuit $C$ with $n$ inputs and outputs and purported simulator whose output is distributed according to a ...
🌐
arXiv
arxiv.org › abs › 2305.04954
[2305.04954] A sharp phase transition in linear cross-entropy benchmarking
May 8, 2023 - Demonstrations of quantum computational advantage and benchmarks of quantum processors via quantum random circuit sampling are based on evaluating the linear cross-entropy benchmark (XEB). A key question in the theory of XEB is whether it approximates the fidelity of the quantum state preparation.
🌐
Dagstuhl
drops.dagstuhl.de › opus › volltexte › 2021 › 13569 › pdf › LIPIcs-ITCS-2021-30.pdf
Spoofing Linear Cross-Entropy Benchmarking in Shallow Quantum ...
XEB). The computational hardness assumption underlying the experiment is that no efficient classical algorithm can achieve a similar score. In this paper we investigate this assumption, giving a new classical algorithm for "spoofing" this benchmark in certain regimes.
🌐
American Physical Society
link.aps.org › doi › 10.1103 › PRXQuantum.5.010334
Limitations of Linear Cross-Entropy as a Measure for Quantum ...
February 29, 2024 - If the difference between the classical and quantum resources needed to achieve a certain value of the benchmark scales exponentially with the system size, this demonstrates that quantum devices have an exponential computational advantage even in the regime where the gates are too noisy to allow for quantum error correction. A prominent example of such a benchmark is the linear cross-entropy benchmark (XEB) [5] defined as
🌐
arXiv
ar5iv.labs.arxiv.org › html › 2206.08293
[2206.08293] Linear Cross Entropy Benchmarking with Clifford Circuits
March 11, 2024 - It has been experimentally and numerically observed that this measure exponentially decays with the number of cycles for a noisy circuit, and this decay exponent is proposed as a measure of gate quality [1, 2, 5]. Although originally conceived to support the "quantum supremacy" claim, linear XEB has become a benchmarking scheme in its own right [1, 2, 6, 7]. Linear XEB has the advantage of requiring only a shallow circuit, which is easy to implement on current processors.
🌐
Emergent Mind
emergentmind.com › topics › cross-entropy-benchmark-fidelity
Cross-Entropy Benchmark Fidelity
Cross-Entropy Benchmark Fidelity is a metric that links cross-entropy estimators to true model performance in domains such as quantum circuit sampling, likelihood-free inference, and classification.
🌐
arXiv
arxiv.org › abs › 2405.00789
[2405.00789] Classically Spoofing System Linear Cross Entropy Score Benchmarking
May 1, 2024 - A notable first claim by Google Quantum AI revolves around a metric called the Linear Cross Entropy Benchmarking (Linear XEB), which has been used in multiple quantum supremacy experiments since.
🌐
arXiv
arxiv.org › abs › 2206.08293v1
[2206.08293v1] Linear Cross Entropy Benchmarking with Clifford Circuits
June 16, 2022 - Linear cross-entropy benchmarking (XEB) has been used extensively for systems with $50$ or more qubits but is fundamentally limited in scale due to the exponentially large computational resources required for classical simulation.
🌐
Theoryofcomputing
theoryofcomputing.org › articles › v016a011
On the Classical Hardness of Spoofing Linear Cross-Entropy Benchmarking: Theory of Computing: An Open Access Electronic Journal in Theoretical Computer Science
November 2, 2020 - Recently, Google announced the first demonstration of quantum computational supremacy with a programmable superconducting processor. Their demonstration is based on collecting samples from the output distribution of a noisy random quantum circuit, then applying a statistical test to those samples called Linear Cross-Entropy Benchmarking (Linear XEB).
🌐
ScienceDirect
sciencedirect.com › science › article › pii › S2709472325000012
Generalized cross-entropy benchmarking for random circuits with ergodicity - ScienceDirect
January 16, 2025 - For a quadratic postprocessing function, our framework recovered Google's result on estimating the circuit fidelity via linear cross-entropy benchmarking (XEB), and we gave a sufficient condition on the noise model characterizing when such estimation is valid.
🌐
EITCA
eitca.org › home › what is cross-entropy benchmarking, and how is it used to evaluate the performance of quantum gates on the sycamore processor?
What is cross-entropy benchmarking, and how is it used to evaluate the performance of quantum gates on the Sycamore processor? - EITCA Academy
June 11, 2024 - Cross-entropy benchmarking (XEB) is a critical technique employed to evaluate the performance of quantum gates, particularly on quantum processors such as Google's Sycamore processor. This benchmarking method is instrumental in the field of quantum computing, where it serves as a robust tool ...
Top answer
1 of 1

I think that the original rationale for using the linear cross-entropy (XEB) score as a metric to claim quantum computational supremacy was valid, but we may now be at a point where the continued use of linear XEB for random circuit sampling on transmon qubit architectures to score and claim quantum advantage is not as justified as it perhaps was in 2019, at least for two reasons:

  1. It was known from the beginning that classical verification of cross-entropy scores scales exponentially with the number of qubits. This is as true today as it was in 2019. We still have no way to efficiently verify the output from a set of strings generated by a quantum computer (or, for that matter, by the algorithms of Aharonov et al.). But, with about 60 or so qubits, this was hoped and expected to be in the goldilocks zone of being neither too hard to verify nor too easy to spoof classically. If we used many more qubits than that (say, more than 100), we could not even classically calculate the linear XEB score.

  2. Much of the work you mentioned, for example IBM's initial response, required exponential resources not just to verify the samples but even to generate them - whereas a quantum computer (even with a dilution refrigerator) would use exponentially fewer resources to generate the samples. But what Aharonov et al. showed was that a classical computer could generate noisy samples from random circuit sampling with resources that grow only polynomially - even though it still takes exponential resources to verify them and calculate the score.

There might be a handful of remaining loopholes to consider - for example, if we could keep the depth of our RCS circuits constant, the Aharonov et al. result might not carry through. I also don't know the implications of the recent work for Boson Sampling experiments.

Another frustration is that without cross-entropy benchmarking, we don't have a good answer to the question of what other ways we have to prove that we've gone beyond classical computational resources in the NISQ era. Shor's algorithm is out, as it requires error correction. Some neat approaches of Kahanamoku-Meyer et al. might eventually be viable, but there's perhaps a long way to go.

I also like the new results of Chen et al. on the NISQ complexity class, suggesting that there likely still is exponential advantage for some carefully chosen problems even in the presence of noise - but instantiating these problems seems a bit tough now. For example, the Bernstein-Vazirani problem requires $O(1)$ quantum, but $O(n)$ classical queries (using perfect qubits); this is changed to $O(\log n)$ NISQ queries - still an exponential separation.

🌐
Google Patents
patents.google.com › patent › US20230385679A1
US20230385679A1 - Polynomial-time linear cross-entropy benchmarking - Google Patents
Benchmarking can be used to determine the fidelity of a set of gates implemented on a quantum computational device. Conventional linear cross-entropy benchmarking (XEB) can be performed using shallow quantum circuits but is unsuitable for benchmarking more complicated quantum circuits.