Quantum computers have the potential to solve certain problems that classical computers cannot. The obstacle to solving these problems today is less about the size of the quantum computers we have available, which are relatively small but nonetheless large enough to run some of these problems, and more about the high error rates they suffer from. Those error rates are high enough that three distinct strategies have emerged for reducing them:

- Quantum error suppression refers to techniques intended to prevent errors from occurring in the first place; it is much easier to correct errors if there are as few of them as possible.
- Quantum error correction (QEC) is the most widely familiar of the three strategies, referring to techniques intended to detect and correct the bit-flip and phase-flip errors that occur throughout the computation despite error suppression.
- Quantum error mitigation is quickly gaining in familiarity and refers to techniques intended to further reduce the errors that occur despite using the best suppression and correction efforts.

A Q-CTRL article titled “Differentiating quantum error correction, suppression, and mitigation” lists some of the lesser-known sources of errors, such as electromagnetic signals and the Earth’s magnetic field. These are often referred to simply as environmental noise. More importantly, the article discusses a holistic approach to solving this error problem by leveraging all three strategies simultaneously to achieve greater result accuracy, a better outcome than is possible using any one of these strategies alone.

## What is Quantum Error Correction?

The term “quantum error correction,” or QEC for short, is often associated with the entire field of protecting quantum information, despite the aforementioned distinct strategies. This protection covers all sources of errors, from gate errors to decoherence to environmental noise. The term is also often used interchangeably with fault-tolerant quantum computation (FTQC), even though the two have somewhat distinct definitions.

From a user’s perspective, the appeal of the broader definition makes sense. Executing an algorithm produces inaccurate results, or errors, whereas the user would like useful results. It takes a relatively long time to say or write “suppression, detection, correction, and mitigation,” so QEC and FTQC are both shortcuts that convey the desired goal, which is useful quantum computation. Technically, QEC is only the middle part of that phrase, the detection and correction of errors, but the intent is understood.

The following articles are recommended for further reading:

- “Quantum error correction” discusses the encoding of logical qubits using physical qubits, as well as the preservation of quantum information.
- “Introduction to Quantum Error Correction” summarizes and links to an article from The Quantum Insider that discusses the types of quantum errors, error mitigation versus error correction, logical qubits, prototypical QEC codes, the current state of the art, scalability concerns, and the future prospects of QEC.

## Quantum Error Suppression Techniques

The first strategy is to suppress errors before they occur; the fewer errors there are, the more manageable their detection and correction become. This strategy interacts closely with the underlying hardware. A sampling of these techniques includes:

- **Dynamical Decoupling (DD)**, which involves the periodic application of sequences of control pulses intended to negate undesirable noise.
- **Derangement circuits**, which involve the preparation of multiple, identical copies of quantum states, protected by continuously rearranging the qubits.
- **Quantum feedback control**, which uses closed-loop classical feedback to control the dynamics of the quantum system.
- **Topological codes**, which protect quantum information by spreading it across qubits and then moving the qubits around.
- **Quantum error avoidance**, which involves codes that anticipate specific errors, essentially correcting them in advance.

Again, this is not an exhaustive list. Also, dynamical decoupling is not just one technique, but is an entire class of techniques that includes spin echoes. This list demonstrates that considerable research is going into quantum error suppression.
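The spin-echo idea at the heart of dynamical decoupling can be sketched numerically with a deliberately simple toy model (the function and parameter names here are illustrative, not from any real control library): a qubit idling with a constant, unknown frequency detuning accumulates a phase error, and a single "echo" pulse at the midpoint inverts the sign of further accumulation so the second half of the idle time cancels the first.

```python
import numpy as np

def accumulated_phase(detuning_hz, duration_s, echo=False):
    """Net phase error (radians) after idling, with or without a midpoint echo pulse."""
    phase_rate = 2 * np.pi * detuning_hz
    if not echo:
        return phase_rate * duration_s
    half = duration_s / 2
    # The first half accumulates +phase; the midpoint pulse flips the sign,
    # so the second half accumulates -phase and the two contributions cancel.
    return phase_rate * half - phase_rate * half

phase_bare = accumulated_phase(1e3, 1e-3)             # ~6.28 rad of phase error
phase_echo = accumulated_phase(1e3, 1e-3, echo=True)  # refocused to zero
```

Real DD sequences generalize this single echo into trains of pulses timed to cancel noise that varies slowly compared to the pulse spacing.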

## Quantum Error Correction

One of the foundational concepts of QEC, or more accurately QEDC for quantum error detection and correction, is the encoding of quantum information across multiple physical qubits. If quantum information is encoded onto only one physical qubit and that qubit experiences an error, that error can destructively cascade throughout the entire system. Even if the error does not spread, the final measurement results are still likely to be wrong.

Now imagine that the same quantum information is encoded across multiple highly entangled physical qubits. Together, they are referred to as a logical qubit. An error on any of the individual physical qubits that constitute the logical qubit can be detected and corrected, thus preserving and protecting the quantum information within the logical qubit. One of the earliest quantum error correction codes (QECC), the Shor Code, encodes a logical qubit with only nine physical qubits. Another popular code, the Steane Code, encodes a logical qubit with only seven physical qubits. Current estimates of the overhead needed to lower error rates enough to make quantum computing useful, however, often assume on the order of 1,000 physical qubits per logical qubit.
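The core idea of spreading one logical unit of information across several physical carriers can be sketched with the classical three-bit repetition code, a toy stand-in for real QEC codes (real codes such as Shor's must also handle phase flips, which this classical model cannot represent; the function names are illustrative only):

```python
def encode(logical_bit):
    # Spread one logical bit across three physical bits.
    return [logical_bit] * 3

def apply_bit_flip(codeword, position):
    # Simulate a single bit-flip error on one physical bit.
    flipped = codeword.copy()
    flipped[position] ^= 1
    return flipped

def decode(codeword):
    # Majority vote recovers the logical bit despite any single flip.
    return 1 if sum(codeword) >= 2 else 0

word = encode(1)                  # [1, 1, 1]
noisy = apply_bit_flip(word, 0)   # [0, 1, 1]
recovered = decode(noisy)         # 1, the single error is outvoted
```

Two simultaneous flips would defeat the majority vote, which is why real codes use more physical qubits as the target error rate drops.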

QEC is thought to be essential to realizing fault-tolerant quantum computing (FTQC). Errors can arise from faulty state preparation, faulty gates, faulty measurements, and even cosmic rays. In other words, there is no shortage of error sources, and therefore errors are all but inevitable without QEC. The selected code influences the full stack of the quantum computer, and vice versa: the physical layout of the qubits needs to be aligned with the code, and circuit optimization strategies at the software level have to account for that topology.

Peter Shor, although best known for his namesake factoring algorithm, is also credited with pioneering this field. His Shor Code was the first to encode a single logical qubit in a highly entangled state of multiple qubits. In the case of Shor’s code, nine physical qubits are used, which is sufficient to detect and correct both bit-flip and phase-flip errors.

One of the key techniques of quantum error correction is the syndrome measurement, which uses auxiliary qubits that may be called ancilla qubits or even syndrome qubits. Syndrome measurements detect errors by entangling the ancilla qubits with the physical qubits that constitute the logical qubit and then measuring only the ancillas. By not directly measuring the logical qubit’s constituent qubits, the quantum information encoded within the logical qubit is preserved. If an error is detected through the syndrome measurements, appropriate corrective gates can be applied to the affected physical qubits.

The use of syndrome measurements can be thought of as a type of self-correcting code. The quantum algorithm has a specific set of gates to execute. Conditional on the results of the measurements, however, additional operations might be applied during runtime to correct the errors that have been detected. The algorithm is essentially updating itself in real time to ensure accurate final measurements.
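The detect-then-correct loop above can be sketched for the same toy three-bit repetition code: we never read the data bits directly, only two parity checks (standing in for what the ancilla qubits would report), and a small lookup table maps each syndrome to the corrective flip to apply. The table and function names are illustrative, not from any real QEC library.

```python
# Syndrome (parity pattern) -> position of the bit to flip, or None.
SYNDROME_TABLE = {
    (0, 0): None,  # no error detected
    (1, 0): 0,     # flip bit 0
    (1, 1): 1,     # flip bit 1
    (0, 1): 2,     # flip bit 2
}

def measure_syndrome(bits):
    # Parity checks between neighboring data bits, analogous to
    # ancilla-assisted stabilizer measurements: the data is not read out.
    return (bits[0] ^ bits[1], bits[1] ^ bits[2])

def correct(bits):
    syndrome = measure_syndrome(bits)
    position = SYNDROME_TABLE[syndrome]
    corrected = bits.copy()
    if position is not None:
        corrected[position] ^= 1
    return corrected

assert correct([1, 0, 0]) == [0, 0, 0]  # single flip located and undone
assert correct([1, 1, 1]) == [1, 1, 1]  # clean codeword left untouched
```

The quantum versions of these parity checks are stabilizer measurements, and the classical lookup step generalizes into the decoding algorithms that run alongside the quantum hardware.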

## Quantum Error Mitigation

Out of the three strategies, quantum error mitigation is of most interest to quantum coders. Error suppression and error correction take place at the hardware level, whereas error mitigation is applied classically after a circuit has executed. Depending on the software in use, mitigation may be applied automatically after execution, or it may have to be deliberately applied after the measurement results have been returned.

One way to think about a common form of quantum error mitigation, known as measurement error mitigation, is that the software examines the error rates of the qubits and their connections and develops a noise model. When the measurement results are returned, the software applies the noise model in reverse. In other words, because certain errors are statistically expected, they are assumed to have occurred and are subsequently corrected. The results have been very promising.
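The "apply the noise model in reverse" step can be sketched for a single qubit under an assumed readout-noise model: calibration gives the probabilities of misreading each state, those probabilities form a small "confusion matrix," and applying its inverse to the observed distribution estimates the noise-free one. The probabilities and distribution below are hypothetical values chosen for illustration.

```python
import numpy as np

p01 = 0.05  # assumed prob. of reading 1 when the true state is 0
p10 = 0.10  # assumed prob. of reading 0 when the true state is 1

# Columns index the true state, rows the observed outcome.
confusion = np.array([[1 - p01, p10],
                      [p01, 1 - p10]])

true_probs = np.array([0.7, 0.3])            # hypothetical ideal distribution
observed = confusion @ true_probs            # what the noisy readout reports
mitigated = np.linalg.solve(confusion, observed)  # noise model applied in reverse
```

In practice the observed counts are finite samples rather than exact probabilities, so real implementations also have to handle statistical fluctuations (inverting the matrix can even yield small negative "probabilities" that must be clipped or regularized).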

One specific example of an error mitigation technique is Zero-Noise Extrapolation (ZNE), although it works a little differently than just described. ZNE purposely increases the noise in a quantum circuit and then works backward to estimate what the results would be without any noise at all. One proposed method for increasing the noise is the redundant application of gates: in subsequent circuits, all gates are repeated three, then five, then seven, then however many times. The growth of the noise is then extrapolated to infer what the results would be with each gate executed only once. Other implementations are actively being researched.
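The extrapolation step of ZNE can be sketched with a toy noise model (the exponential decay and its parameters are assumptions for illustration, not measured device behavior): an expectation value is sampled at noise scale factors 1, 3, and 5, a curve is fit, and the fit is evaluated at scale factor 0, the hypothetical noise-free point.

```python
import numpy as np

def noisy_expectation(scale, ideal=1.0, decay=0.2):
    # Toy noise model: the signal decays exponentially as the
    # noise scale factor (e.g., gate repetition count) grows.
    return ideal * np.exp(-decay * scale)

scales = np.array([1.0, 3.0, 5.0])      # 1x, 3x, 5x gate repetitions
values = noisy_expectation(scales)      # measured at amplified noise

# Fit log(value) = intercept - decay * scale, then evaluate at scale 0.
slope, intercept = np.polyfit(scales, np.log(values), 1)
zne_estimate = np.exp(intercept)        # extrapolated zero-noise value
```

With real hardware data the decay is rarely exactly exponential, so the choice of fit (linear, polynomial, exponential) is itself a design decision and an active research topic.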