Noisy Intermediate-Scale Quantum (NISQ) computers suffer from very high error rates, which limits their usefulness. Bringing these error rates down to acceptable levels will require three distinct-but-complementary strategies: quantum error suppression, quantum error correction, and quantum error mitigation. Suppression aims to prevent as many errors as possible from occurring in the first place, correction aims to detect and correct the errors that occur during execution anyway, and mitigation aims to correct the errors that persist after execution has completed and after measurement results have been returned.
For each of these strategies, researchers have already developed multiple implementation techniques. For mitigation specifically, these techniques include:
- Learning-based methods, which substitute classically simulatable gates into circuit variations, generating training data for models that learn to compensate for errors
- Quasiprobability methods, which combine error models with extra “shots” to improve the accuracy of the adjustments that are made to the final measurements
- Noise modeling, which characterizes the noise on the target quantum computer during execution, and then makes adjustments based on this current noise model
- Leveraging problem symmetries, which involves checking results against symmetries the correct answer is known to obey, then discarding or correcting results that violate them.
It’s impossible to know every technique researchers might be investigating, but these can be considered the most widely known of them.
Understanding Quantum Errors
Quantum information is notoriously fragile. Not only can errors arise from a number of different sources, they can propagate through a circuit very quickly. These errors can accumulate until the final measurement results of the circuit are of little-to-no value.
Despite the prevalence of errors, they all reduce to combinations of just two types: the bit flip quantum error and the phase flip quantum error. A bit flip is often described as switching a 0 to a 1 and vice versa. More precisely, it swaps the probability of measuring 0 with the probability of measuring 1. A phase flip does the same, but in a different basis: it can be thought of as swapping + and -, and vice versa.
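The action of these two error types can be illustrated with the Pauli X and Z matrices acting on a state vector. The following NumPy sketch uses a made-up example state; it shows that a bit flip swaps the 0/1 measurement probabilities, while a phase flip leaves them untouched but acts as a flip in the +/- basis.

```python
import numpy as np

# Pauli matrices modeling the two fundamental single-qubit errors.
X = np.array([[0, 1], [1, 0]])   # bit flip
Z = np.array([[1, 0], [0, -1]])  # phase flip

# An example single-qubit state a|0> + b|1> (illustrative amplitudes).
a, b = 0.6, 0.8
state = np.array([a, b])

# A bit flip swaps the amplitudes of |0> and |1>,
# and therefore swaps the measurement probabilities.
flipped = X @ state
assert np.allclose(flipped, [b, a])

# A phase flip leaves the |0>/|1> probabilities unchanged...
phased = Z @ state
assert np.allclose(np.abs(phased) ** 2, np.abs(state) ** 2)

# ...but in the |+>/|-> basis it acts exactly like a bit flip.
plus = np.array([1, 1]) / np.sqrt(2)
minus = np.array([1, -1]) / np.sqrt(2)
assert np.allclose(Z @ plus, minus)
```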
Fault-tolerant quantum error detection and correction methods, known as Quantum Error Correction Codes (QECCs), must be able to detect and correct both types of errors. A QECC distributes quantum information across multiple physical qubits, which together encode a single logical qubit. This paradigm not only allows errors within a logical qubit to be detected and corrected, it also keeps those errors localized. In other words, the errors do not have a chance to cascade across the other logical qubits, and the correct quantum information is preserved.
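The encoding idea can be sketched with a classical toy model of the three-qubit bit-flip repetition code. This is only an analogy: it captures the majority-vote redundancy that lets a single bit flip be corrected, but not superposition, phase flips, or real syndrome measurement.

```python
import random

def encode(bit):
    """Encode one logical bit into three physical bits (repetition code)."""
    return [bit, bit, bit]

def apply_bit_flip(code, i):
    """Model a bit-flip error on physical bit i."""
    code = list(code)
    code[i] ^= 1
    return code

def decode(code):
    """Majority vote: any single bit flip is detected and corrected."""
    return int(sum(code) >= 2)

# A single error on a random physical bit never corrupts the logical bit.
logical = 1
noisy = apply_bit_flip(encode(logical), random.randrange(3))
assert decode(noisy) == logical
```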
It is important to note that the term “approximate quantum error correction” does not apply to quantum computation. The term instead applies to quantum communication, which also requires the detection and correction of errors.
Sources of Quantum Errors
Both bit flip and phase flip errors can result from any or all of the following:
- Random environmental forces, such as cosmic radiation and electromagnetism, which may cause certain qubit modalities to require physical shielding from such forces
- Decoherence, when the time needed to execute all the operations on a qubit exceeds the length of time that the qubit can maintain quantum information
- Crosstalk, when a gate operation that is intended for one specific qubit inadvertently affects one or more neighboring qubits
- Imprecision in the control systems' execution of quantum operations; one common issue, for example, is the precise timing of the operations
- Incorrect instructions sent to the control systems, whether due to poor algorithm design or due to classical bugs in the transpilation and/or compilation of the instructions
- Defects in the fabrication of qubits not found in nature, as well as in the manufacture of other system components, such as circuit boards, connections, wires, and so forth
It is important to note that different qubit modalities can be more or less prone to certain errors than others. For example, neutral atoms are not fabricated; every atom of a given isotope is identical, so neutral atoms do not experience errors related to fabrication imperfections. For another example, neutral atoms have naturally long coherence times, so many more operations can be performed before decoherence becomes an issue than with other modalities. And for one more example, neutral-atom systems can minimize the risk of crosstalk by physically shuttling qubits around.
Strategies in Quantum Error Mitigation
As previously noted, multiple mitigation techniques are actively being researched and experimentally demonstrated. Some of the specific error mitigation methods that have been proposed include:
- Measurement-error mitigation, which corrects the errors that can arise specifically during the process of taking final measurements
- Symmetry verification, which checks measurement outcomes against symmetries the correct result is known to obey, discarding or correcting outcomes that violate them
- Zero-Noise Extrapolation (ZNE), which executes circuits with multiple levels of intentionally-added noise so that the results with zero noise can be extrapolated
- Dual-state purification, which involves preparing two states that constructively interfere in such a way as to improve the accuracy of the final results
- Learning-based, which uses classical machine learning and variations of a circuit to compensate for errors in the original circuit
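Measurement-error mitigation, listed above, is often implemented by characterizing the readout with a confusion matrix and then inverting it. The following NumPy sketch uses illustrative single-qubit numbers, not real device calibration data.

```python
import numpy as np

# Confusion matrix A for one qubit: A[i, j] = probability of reading
# out i when the true state was j (illustrative calibration values).
A = np.array([[0.97, 0.05],
              [0.03, 0.95]])

# Raw measured distribution over {0, 1}, estimated from many shots.
raw = np.array([0.70, 0.30])

# Invert the readout model to estimate the true distribution, then
# clip and renormalize so the result remains a valid distribution.
mitigated = np.linalg.solve(A, raw)
mitigated = np.clip(mitigated, 0, None)
mitigated /= mitigated.sum()
print(mitigated)
```

In practice the clip-and-renormalize step matters: inverting a noisy confusion matrix can produce small negative quasi-probabilities that must be projected back onto valid distributions.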
It is important to note that using two or more mitigation techniques together can be more effective than using any one of these techniques by itself.
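Zero-Noise Extrapolation, also listed above, can be reduced to a simple curve fit once the noise-amplified expectation values are in hand. The measurement values below are made-up numbers standing in for device results.

```python
import numpy as np

# Expectation values measured at several noise amplification factors
# (1x = the device's native noise level). Illustrative data only.
noise_factors = np.array([1.0, 2.0, 3.0])
measured = np.array([0.82, 0.67, 0.54])

# Fit a low-order polynomial and extrapolate back to zero noise.
coeffs = np.polyfit(noise_factors, measured, deg=1)
zero_noise_estimate = np.polyval(coeffs, 0.0)
print(round(zero_noise_estimate, 3))  # ~0.957, above any measured value
```

The choice of fit model (linear, exponential, Richardson) is itself a research question; a poor model can over- or under-correct.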
For a deep dive into quantum error mitigation, be sure to read the NTT Technical Review paper titled “Quantum Error Mitigation and Its Progress” by Suguru Endo. This paper includes both mathematics and illustrations.
Future Directions in Quantum Error Mitigation
Error strategies can be combined, and therefore it is impossible to discuss the future of quantum error mitigation in the absence of quantum error suppression and quantum error correction. Research continues into all three of these strategies, with the ultimate goal being to reduce error rates below a threshold that would signal the advent of useful quantum computation.
No one can be aware of every error mitigation method currently under research, and novel methods will likely continue to emerge as existing ones are tested. Because these methods can be combined, they also need to be tested in different combinations.
In addition to all of these strategies and methods and ensembles of strategies and methods, application-specific techniques can help to further reduce errors. For example, researchers may use either Bloqade or Bloqade-Python to solve Maximum Independent Set (MIS) problems with neutral atoms. Classical algorithms may then be employed after measurements are taken to verify that the solution returned by Aquila, QuEra's neutral-atom computer, actually obeys the constraints of an MIS problem. In other words, we can take one more step to verify that the solution is not in error, and that we have a legitimate solution to the problem.
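That classical post-check is straightforward to implement: independence just means no two selected vertices share an edge. The graph and bitstrings below are made-up examples, not output from any real device.

```python
# Verify that a measured bitstring is an independent set of the graph.
# selection[v] == 1 means vertex v is in the candidate set.

def is_independent_set(edges, selection):
    """Return True if no two selected vertices share an edge."""
    return all(not (selection[u] and selection[v]) for u, v in edges)

edges = [(0, 1), (1, 2), (2, 3), (3, 0)]  # a 4-cycle

assert is_independent_set(edges, [1, 0, 1, 0])      # opposite corners: valid
assert not is_independent_set(edges, [1, 1, 0, 0])  # adjacent vertices: invalid
```

A full MIS check would also compare the set's size against other candidates, but even this independence test filters out erroneous measurements.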
Given the high error rates of NISQ devices, this is what it will take to someday realize the benefits of Fault-Tolerant Quantum Computing (FTQC): a holistic approach that considers all the possible sources of errors, as well as all the possible solutions. But with the demonstration of 48 logical qubits with 200+ two-qubit transversal gates by a team of researchers from Harvard University, QuEra Computing Inc., Massachusetts Institute of Technology (MIT), National Institute of Standards and Technology (NIST), and University of Maryland (UMD), this day is closer now than it ever has been.