
Data Qubits and Error Rate

December 11, 2023

A quantum computing qubit, the fundamental unit of quantum information, is extremely sensitive to its surrounding environment. This sensitivity, while potentially useful for applications in quantum sensing and quantum metrology, allows errors to occur during quantum computation. These errors can accumulate and cause the final results to be inaccurate. Consequently, high error rates greatly limit the commercial usefulness of Noisy Intermediate-Scale Quantum (NISQ) computers.

Reducing these error rates to tolerable levels is the goal of quantum error correction (QEC). QEC codes (QECCs) encode logical qubits out of large numbers of physical qubits. Within a logical qubit, the quantum information is distributed across many of the physical qubits, which are called data qubits. Other physical qubits within the logical qubit, called syndrome qubits, are auxiliary qubits used to detect errors on their neighboring data qubits. Because the measurement process is inherently destructive, measuring the syndrome qubits rather than the data qubits preserves the quantum information encoded on the data qubits.
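
As a concrete illustration, here is a minimal classical sketch of syndrome extraction in the simplest QEC code, the three-qubit bit-flip repetition code (an illustrative choice; the post does not name a specific code). The parity checks play the role of syndrome measurements: no data bit is ever read on its own.

```python
import random

# A logical 0 encoded across three data bits, a classical stand-in
# for the data qubits of the 3-qubit bit-flip repetition code.
data = [0, 0, 0]

# Inject a bit-flip error on a random data qubit.
flipped = random.randrange(3)
data[flipped] ^= 1

# Syndrome extraction: each syndrome records the parity of a pair of
# neighboring data qubits. The data bits are never read individually,
# the analogue of measuring syndrome qubits instead of data qubits.
s1 = data[0] ^ data[1]  # parity check on data qubits 0 and 1
s2 = data[1] ^ data[2]  # parity check on data qubits 1 and 2
print(f"syndrome = ({s1}, {s2})")  # e.g. (1, 0) points at data qubit 0
```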

When syndrome measurements detect and identify errors, classical decoding logic uses the structure of the QECC to determine which data qubits have been affected, and then applies the appropriate corrective gates to those data qubits. This not only corrects the errors locally but, more importantly, stops them from spreading throughout the system. Because each logical qubit preserves its quantum information locally, only correct quantum information propagates through transversal operations, the gate operations that act upon logical qubits.
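
Following on from the sketch above, the classical decoding step for the three-qubit code can be as simple as a syndrome lookup table (again a minimal illustration, not any provider's actual decoder):

```python
def decode_and_correct(data):
    """Measure parity syndromes, look up the likely error, and correct it."""
    s1 = data[0] ^ data[1]  # syndrome of data qubits 0 and 1
    s2 = data[1] ^ data[2]  # syndrome of data qubits 1 and 2
    # Each syndrome pattern points at the data qubit most likely in error.
    table = {(0, 0): None, (1, 0): 0, (1, 1): 1, (0, 1): 2}
    target = table[(s1, s2)]
    if target is not None:
        data[target] ^= 1  # apply the corrective bit flip
    return data

print(decode_and_correct([0, 1, 0]))  # -> [0, 0, 0]; the error is corrected
```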

Logical qubit data can still be corrupted, however: if enough data qubits experience errors, the logical qubit itself will be in error. The solution is to encode each logical qubit with more physical qubits. Increasing the number of data qubits increases the number of data qubits that must be in error before the logical qubit is in error, thus lowering the logical qubit's error rate. This also stresses the need to detect and correct errors quickly, before enough of them accumulate to cross that threshold and allow logical errors to spread.
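
As a rough illustration of why more data qubits help, consider a commonly quoted heuristic for surface-code-style codes (an assumption on our part; the post does not specify a code): below the threshold, the logical error rate falls exponentially with the code distance d, while the number of data qubits grows only as d squared.

```python
# Heuristic scaling p_L ~ a * (p / p_th) ** ((d + 1) / 2), often quoted
# for surface codes; a, p_th, and the exponent are illustrative here.
def logical_error_rate(p, d, p_th=1e-2, a=0.1):
    return a * (p / p_th) ** ((d + 1) // 2)

p = 1e-3  # physical error rate, below an assumed ~1% threshold
for d in (3, 5, 7, 11):
    n_data = d * d  # a distance-d surface code uses d**2 data qubits
    print(f"d={d:2d}  data qubits={n_data:4d}  p_L ~ {logical_error_rate(p, d):.0e}")
```

In this toy model, going from 9 to 121 data qubits buys four orders of magnitude in logical error rate, which is the essence of the trade-off described above.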

In summary, the relationship between data qubits and error rate is that higher data qubit counts within logical qubits, provided local errors are detected and corrected expeditiously, reduce the logical error rates of those logical qubits. It is these logical error rates that must fall below a certain threshold in order to usher in the age of commercially viable, large-scale, fault-tolerant quantum computing.

What Are Data Qubits?

Data qubits, whether independent or within logical qubits, encode the quantum information on which a quantum algorithm computes. All qubits share a few fundamental properties, illustrated in the sketch after this list:

  • A qubit, by any implementation, is a two-level system such that upon measurement the outcome is always either a 0 or a 1.
  • Before measurement, a qubit may be in a quantum superposition, during which time it has some probability of measuring 0 and some probability of measuring 1.
  • Qubit entanglement can result from the interactions of two or more qubits, after which the qubits can no longer be described independently and can only be described as a whole system.
  • The entanglement of qubits enables the exponential compression of classical data, as well as the potential for exponential computational speedups over classical algorithms.  
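
For readers who like to see numbers, the sketch below illustrates superposition, entanglement, and the exponential growth of the classical description, using nothing but numpy state vectors:

```python
import numpy as np

# A single qubit in equal superposition: measurement yields 0 or 1,
# each with probability |amplitude|**2 = 0.5.
plus = np.array([1, 1]) / np.sqrt(2)
print(np.abs(plus) ** 2)  # -> [0.5 0.5]

# Two entangled qubits (a Bell state): only the 00 and 11 outcomes
# are possible, so neither qubit can be described on its own.
bell = np.array([1, 0, 0, 1]) / np.sqrt(2)
print(np.abs(bell) ** 2)  # -> [0.5 0.  0.  0.5]

# The exponential compression: describing n qubits classically takes
# 2**n complex amplitudes.
n = 30
print(f"{n} qubits -> {2**n:,} amplitudes")
```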

Qubits themselves can be implemented in any of several different modalities. The common feature of all these modalities is that they exhibit quantum mechanical properties; beyond that, they can differ wildly. For example, neutral atoms are not limited to being qubits, which are two-level systems. Using higher atomic energy levels yields qudits that can measure as 0, 1, 2, or higher states. With the Rydberg states commonly used with Aquila, for instance, measuring 0 indicates that the valence electron is in the 5th orbital, while measuring the Rydberg state indicates that the valence electron is in the 70th orbital.

Error Rates in Quantum Computing

Quantum error rates vary greatly from modality to modality, and even from provider to provider within the same modality. With fabricated qubits, meaning qubits that are not found in nature, the situation is rather chaotic: the error rates of even adjacent qubits can vary wildly, as can the error rates of their connections. The error rate is calculated as the number of errors divided by the number of executions, and rates in the NISQ era are far too high to allow calculations for commercial applications. Assuming an average error rate of 10⁻³ across the industry, one estimated target is 10⁻¹⁵, which is several technological advances away. However, that is the ballpark error rate that will be necessary to enable practical quantum computation.
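
Some back-of-the-envelope arithmetic shows why that twelve-orders-of-magnitude gap matters. Assuming, for illustration, that every operation fails independently with probability p, a circuit of N operations succeeds with probability roughly (1 - p)^N:

```python
# Illustrative model: independent failures with probability p per operation.
def success_probability(p, n_ops):
    return (1 - p) ** n_ops

for p in (1e-3, 1e-15):  # today's ballpark rate vs. the estimated target
    for n_ops in (1_000, 1_000_000):
        print(f"p={p:.0e}  ops={n_ops:>9,}  success ~ {success_probability(p, n_ops):.3f}")
```

At 10⁻³, a million-operation circuit essentially never succeeds; at 10⁻¹⁵, it succeeds virtually every time.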

One of the challenges in reporting breakthroughs in error rates, important as they all are, is that they often involve very small numbers of qubits. Lower error rates, however, will have to apply to entire quantum processors as they scale in size. Fortunately, a research team from Harvard University, the Massachusetts Institute of Technology (MIT), and QuEra Computing has demonstrated 99.5% fidelity with two-qubit entangling gates on 60 neutral-atom qubits. This fidelity, which surpasses the error-correcting threshold for achieving fault tolerance, is an important step in this direction because 60 qubits are already beyond the capabilities of classical simulation. The paper is titled “High-fidelity parallel entangling gates on a neutral-atom quantum computer.”

An answer to a question on Stack Exchange explains the difference between error rates and fidelities succinctly: an error rate is the probability that an undesired change will occur, while fidelity measures how closely the actual outcome matches the ideal outcome. Two common fidelity measures are state fidelity, which pertains to quantum states, and gate fidelity, which pertains to the execution of quantum operations.
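
For pure states, state fidelity is just the squared overlap between the ideal and actual state vectors, which a few lines of numpy make concrete (a sketch of the definition, not any provider's benchmarking procedure):

```python
import numpy as np

# State fidelity for pure states: F = |<ideal|actual>|**2.
# (Gate fidelity averages a similar quantity over input states.)
def state_fidelity(ideal, actual):
    return np.abs(np.vdot(ideal, actual)) ** 2

ideal = np.array([1.0, 0.0])                          # the intended |0>
actual = np.array([np.sqrt(0.995), np.sqrt(0.005)])   # slightly disturbed
f = state_fidelity(ideal, actual)
print(f"fidelity = {f:.3f}, error probability ~ {1 - f:.3f}")  # 0.995 / 0.005
```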

Beyond low error rates, fault tolerance will be required. State preparation, gate operations, and measurements are just a few of the operations that can be faulty. Even if low-error logical qubits could be perfectly shielded from the environment, these operations would still need to execute properly in order for the calculations to be accurate.

Striding Towards Quantum Resilience: Future Perspectives

Quantum resilience will require the implementation of three distinct-but-complementary strategies. The first strategy is quantum error suppression, which aims to prevent errors from occurring in the first place. The second strategy is quantum error correction, which aims to detect and correct errors on the data qubits. This process preserves the correct quantum information within logical qubits, enabling fault-tolerant quantum computation through transversal gate operations on the logical qubits. The third strategy is quantum error mitigation, which aims to reduce the rate at which failure increases as quantum circuits expand in width and depth.
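
As one concrete example of the third strategy, zero-noise extrapolation (our illustrative choice; the post does not single out a technique) runs the same circuit at deliberately amplified noise levels and extrapolates the measured expectation value back to zero noise:

```python
import numpy as np

# Hypothetical expectation values measured at 1x, 2x, and 3x amplified noise.
noise_scales = np.array([1.0, 2.0, 3.0])
measured = np.array([0.82, 0.68, 0.55])  # illustrative numbers, not real data

# Fit a straight line through the points and evaluate it at zero noise.
slope, intercept = np.polyfit(noise_scales, measured, 1)
print(f"raw value at 1x noise: {measured[0]:.2f}")
print(f"zero-noise estimate:   {intercept:.2f}")  # ~0.95, closer to the ideal
```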

Progress is being made. A team of researchers from Harvard University, QuEra Computing Inc., the Massachusetts Institute of Technology (MIT), the National Institute of Standards and Technology (NIST), and the University of Maryland (UMD) has demonstrated the application of 200+ two-qubit transversal gates on 48 logical qubits. It is important to keep in mind that for such experiments to be successful, logical error rates must be sufficiently low. And for logical error rates to be that low, more data qubits must encode the correct quantum information than are in error.

