Here is a list of all classical and quantum codes that have notable decoders.
Name | Decoder(s) |
---|---|
3D surface code | Flip decoder and its modification p-flip [1]. |
Alternant code | Variation of the Berlekamp-Welch algorithm [2]. Guruswami-Sudan list decoder [3]. |
Analog stabilizer code | Homodyne measurement of nullifiers yields real-valued syndromes, and recovery can be performed by displacements conditional on the syndromes. |
Analog surface code | Shift-based decoder [4]. |
Approximate quantum error-correcting code (AQECC) | Given an encoding, a decoder that yields the optimal entanglement fidelity can be obtained by solving a semi-definite program [5,6] (see also Ref. [7]). The Petz recovery map (a.k.a. the transpose map) [8,9], a quantum channel determined by the codespace and noise channel, recovers information perfectly for strictly correctable noise and yields an infidelity of recovery that is at most twice away from the infidelity of the best possible recovery [10]. The infidelity of a modified Petz recovery map can be bounded using relative entropies between uncorrupted and corrupted code states on countably infinite Hilbert spaces [11]. |
Approximate secret-sharing code | Decoding is analogous to reconstruction in a secret-sharing scheme and is done in polynomial time. The only required operations are verification of quantum authentication, which is a pair of polynomial-time quantum algorithms that check if the fidelity of the received state is close to 1, and erasure correction for a stabilizer code, which involves solving a system of linear equations. |
B-code | Efficient decoding algorithm against erasures [12]. |
BPSK c-q code | Linear-optical quantum receiver [13]. Kennedy receiver [14,15]. Photon-number resolving detector [16]. Non-Gaussian near-optimal receiver [15]. Multi-stage quantum receiver [17]. |
Bacon-Shor code | While check operators are few-body, stabilizer weights scale with the number of qubits, and stabilizer expectation values are obtained by taking products of gauge-operator expectation values. It is thus not clear how to extract stabilizer values in a fault-tolerant manner [18,19]. |
Balanced code | Efficient decoder [20–22]. |
Balanced product (BP) code | BP-OSD decoder [23]. |
Binary BCH code | Peterson decoder with runtime of order \(O(n^3)\) [24,25] (see exposition in Ref. [26]). Berlekamp-Massey decoder with runtime of order \(O(n^2)\) [27,28] and modification by Burton [29]; see also [30,31]. Sugiyama et al. modification of the extended Euclidean algorithm [32,33]. Guruswami-Sudan list decoder [3]. |
Binary Varshamov-Tenengolts (VT) code | Decoder based on checksums \(\sum_{i=1}^n i~x_i^{\prime}\) of corrupted codewords \(x_i^{\prime}\) [34,35]. |
Binary code | For few-bit codes (\(n\) is small), decoding can be based on a lookup table. For infinite code families, the size of such a table scales exponentially with \(n\), so approximate decoding algorithms scaling polynomially with \(n\) have to be used. The decoder determining the most likely error given a noise channel is called the maximum-likelihood decoder. Given a received string \(x\) and an error bound \(e\), a list decoder returns a list of all codewords that are at most \(e\) from \(x\) in Hamming distance. The number of codewords in a neighborhood of \(x\) has to be polynomial in \(n\) in order for this decoder to run in time polynomial in \(n\). |
Binary quantum Goppa code | Farran algorithm [36]. |
Binomial code | Photon loss and dephasing errors can be detected by measuring the phase-space rotation \(\exp\left(2\pi\mathrm{i} \hat{n} / (S+1)\right)\) and the check operator \(J_x/J\) in the spin-coherent state language, where \(J\) is the total angular momentum and \(J_x\) is the angular momentum in the \(x\) direction [37]. This type of error correction fails for errors that are products of photon loss/gain and dephasing errors. However, for certain \((N,S)\) instances of the binomial code, detection of these types of errors can be done. Recovery can be done via projective measurements and unitary operations [37,38]. |
Block code | Decoding an error-correcting code is equivalent to finding the ground state of some statistical mechanical model [39]. |
Bose–Chaudhuri–Hocquenghem (BCH) code | Berlekamp-Massey decoder with runtime of order \(O(n^2)\) [27,28,40] and modification by Burton [29]; see also [30,31]. Gorenstein-Peterson-Zierler decoder with runtime of order \(O(n^3)\) [24,41] (see exposition in Ref. [26]). Sugiyama et al. modification of the extended Euclidean algorithm [32,33]. Guruswami-Sudan list decoder [3] and modification by Koetter-Vardy for soft-decision decoding [42]. |
Bosonic c-q code | Joint-detection receiver that can attain channel capacity [43]. Various near-optimal receiver designs that can handle arbitrary constellations of coherent states with possible degeneracies [44]. |
Bosonic rotation code | One can distinguish (destructively) the codewords by performing a Fock-state number measurement. If a Fock state \(|n\rangle\) is measured, then one rounds to the nearest integer of the form \((kq+j)/N\) and deduces that the true state was \(|\overline{j}\rangle\). One can distinguish states in the dual basis by performing phase estimation on \(\mathrm{e}^{\mathrm{i} \theta \hat n}\). One then rounds the resulting \(\theta\) to the nearest number \(2\pi j / qN\) in order to determine which dual-basis state \(j \in \mathbb Z_q\) it came from. Autonomous quantum error correction schemes for \(S=1\) codes [45]. |
Braunstein five-mode code | Error correction can be done using linear-optical elements and feedback [46]. |
Cat code | Measuring the Fock-state number modulo \(2S\) can be used to determine if photon loss or excitation errors occurred. For \(S=1\), this is the occupation number parity. |
Chuang-Leung-Yamamoto (CLY) code | Destructive decoding with a photon number measurement on each mode. State can be decoded with a network of beamsplitters, phase shifters, and Kerr media. |
Classical Goppa code | Algebraic decoding algorithms [47]. If \(\text{deg} G(x) = 2t\), then there exists a \(t\)-correcting algebraic decoding algorithm for \( \Gamma(L,G) \). Sugiyama et al. modification of the extended Euclidean algorithm [32,33]. Guruswami-Sudan list decoder [3]. Binary Goppa codes can be decoded using an RS-based decoder [48]. |
Cluster-state code | MBQC syndrome extraction is performed by multiplying certain single-qubit \(X\)-type measurements, which yield syndrome values. |
Codeword stabilized (CWS) code | There is no known efficient algorithm to decode non-additive (non-stabilizer) CWS codes. |
Coherent FSK (CFSK) c-q code | Bondurant receiver [49]. Cyclic receiver [50]. Time-resolving receiver [51–53]. Bayesian inference [51]. |
Color code | Projection decoder [54]. Matching decoder gives low logical failure rate [55]. Integer-program-based decoder [56]. Restriction decoder [57]. Cellular-automaton decoder for the \(XYZ\) color code [58]. MaxSAT-based decoder [59]. |
Concatenated code | Generalized minimum-distance decoder [60]. |
Convolutional code | Decoders based on the Viterbi algorithm (trellis decoding) were developed first; they yield the most likely codeword for the encoded bits [61]. Other trellis decoders, such as the BCJR decoding algorithm [62], were developed later. |
Cyclic linear \(q\)-ary code | Meggitt decoder [63]. Information set decoding (ISD) [64], a probabilistic decoding strategy that essentially tries to guess \(k\) correct positions in the received word, where \(k\) is the size of the code. Then, an error vector is constructed to map the received word onto the nearest codeword, assuming the \(k\) positions are error free. When the Hamming weight of the error vector is low enough, that codeword is assumed to be the intended transmission. Permutation decoding [65]. |
Cyclic linear binary code | Meggitt decoder [63]. Information set decoding (ISD) [64], a probabilistic decoding strategy that essentially tries to guess \(k\) correct positions in the received word, where \(k\) is the size of the code. Then, an error vector is constructed to map the received word onto the nearest codeword, assuming the \(k\) positions are error free. When the Hamming weight of the error vector is low enough, that codeword is assumed to be the intended transmission. Permutation decoding [65]. |
Cyclic quantum code | Adapted from the Berlekamp decoding algorithm for classical BCH codes [66]. |
Dinur-Hsieh-Lin-Vidick (DHLV) code | Linear-time decoder utilizing the small set-flip decoder [67] for \(Z\) errors and a reconstruction procedure for \(X\) errors [68]. |
EA qubit stabilizer code | Optical implementation of a minimal code using hyper-entangled states [69]. |
Eigenstate thermalization hypothesis (ETH) code | An explicit universal recovery channel for the ETH code is given in [70]. |
Evaluation AG code | Generalization of plane-curve decoder [71,72]. Another decoder [73] was later shown to be equivalent in Ref. [74]. Application of several algorithms in parallel can be used to decode up to half the minimum distance [75,76]. A computational procedure implementing these decoders is based on an extension of the Berlekamp-Massey algorithm by Sakata [77–79]. Decoder based on majority voting of unknown syndromes [80] decodes up to half of the minimum distance [81]. List decoders generalizing Sudan's RS decoder by Shokrollahi-Wasserman [82] and Guruswami-Sudan [3]. |
Expander LP code | Linear-time decoder [83]. Logarithmic-time subroutine [84]. |
Expander code | Decoding can be done in \(O(n)\) runtime using a greedy flip decoder [85]. The algorithm consists of flipping a bit of the received word if it will result in a greater number of satisfied parity checks. This is repeated until a codeword is reached. |
Fiber-bundle code | A greedy algorithm can be used to efficiently decode \(X\) errors, but no efficient decoder for \(Z\) errors is known yet [18]. |
Fibonacci code | An efficient algorithm based on minimum-weight perfect matching [86], which can correct high-weight errors that span the lattice, with failure rate decaying super-exponentially with \(L\). |
Fibonacci string-net code | Clustering decoder (provides best known threshold for this code) [87–89]. Fusion-aware iterative minimum-weight perfect matching decoder. Note that ordinary MWPM decoders do not produce a threshold with this code [89]. Cellular automaton decoder [90]. |
Finite-dimensional error-correcting code (ECC) | Capacity-achieving Guessing Random Additive Noise Decoding (GRAND) [91] (see also [92]). |
Finite-dimensional quantum error-correcting code | The operation \(\cal{D}\) in the definition of this code is called the decoder. However, the term decoder can sometimes be used for the inverse of an encoder, which does not correct errors. Quantum machine-learning based decoders such as quantum convolutional neural networks [93] and quantum autoencoders [94]. |
Five-qubit perfect code | Combined dynamical decoupling and error correction protocol on individually-controlled qubits with always-on Ising couplings [95]. Symmetric decoder correcting all weight-one Pauli errors. The resulting logical error channel after coherent noise has been explicitly derived [96]. |
Folded RS (FRS) code | Guruswami and Rudra [97,98] achieved list-decoding up to \(1-\frac{k}{n}-\epsilon\) fraction of errors using the Parvaresh-Vardy algorithm [99]; see Ref. [100] for a randomized construction. Folded RS codes, concatenated with suitable inner codes, can be efficiently list-decoded up to the Blokh-Zyablov bound [97,101]. |
Folded quantum Reed-Solomon (FQRS) code | Quantum list decodable [102]. |
Fountain code | Invert the fragment generator matrix resulting from the continuous encoding process. If exactly \(K\) packets are received, then the probability of decoding correctly is \(0.289\). Extra packets increase this probability exponentially. The decoding runtime is dominated by the matrix inversion step, which takes order \(O(n^3)\) time. |
Frobenius code | Adapted from the Berlekamp decoding algorithm for classical BCH codes. There exists a polynomial time quantum algorithm to correct errors of weight at most \(\tau\), where \(\delta=2\tau+1\) is the BCH distance of the code [103]. |
GKP-stabilizer code | Syndromes can be read off using ancilla modes, yielding partial information about noise in the logical modes that can then be used in an efficient ML decoding procedure [104]. |
Gabidulin code | Fast decoder based on a transform-domain approach [105]. |
Galois-field \(q\)-ary code | For small \(n\), decoding can be based on a lookup table. For infinite code families, the size of such a table scales exponentially with \(n\), so approximate decoding algorithms scaling polynomially with \(n\) have to be used. The decoder determining the most likely error given a noise channel is called the maximum-likelihood decoder. Given a received string \(x\) and an error bound \(e\), a list decoder returns a list of all codewords that are at most \(e\) from \(x\). The number of codewords in a neighborhood of \(x\) has to be polynomial in \(n\) in order for this decoder to run in time polynomial in \(n\). |
Galois-qudit code | For few-qudit codes (\(n\) is small), decoding can be based on a lookup table. For infinite code families, the size of such a table scales exponentially with \(n\), so approximate decoding algorithms scaling polynomially with \(n\) have to be used. The decoder determining the most likely error given a noise channel is called the maximum-likelihood decoder. |
Galois-qudit non-stabilizer code | The decoding circuit involves the application of phase estimation. |
Generalized RS (GRS) code | The decoding process of GRS codes reduces to the solution of a polynomial congruence equation, usually referred to as the key equation. Decoding schemes are based on applications of the Euclidean algorithm to solve the key equation. Berlekamp-Massey decoder with runtime of order \(O(n^2)\) [27,28,40]. Guruswami-Sudan list decoder [3] and modification by Koetter-Vardy for soft-decision decoding [42]. |
Generalized bicycle (GB) code | BP-OSD decoder [23]. |
Generalized surface code | Improved BP-OSD decoder [106]. |
Golay code | Majority decoding for the extended Golay code [107]. Decoder for the extended Golay code using the hexacode [108]. Both Golay codes have a trellis representation and can thus be decoded using trellis decoding [109,110]. Bounded-distance decoder requiring at most 121 real operations [111]. |
Gold code | General decoding is done by building a sparse parity-check matrix and then applying an iterative message-passing algorithm [112]. |
Gottesman-Kitaev-Preskill (GKP) code | The MLD decoder for Gaussian displacement errors is realized by evaluating a lattice theta function; in general, the decision can be approximated by solving (or approximating) the closest vector problem (CVP) or by using other effective iterative schemes when, e.g., the lattice represents a concatenated GKP code [4,113–115]. Closest lattice point decoding [116]. |
Heavy-hexagon code | Any graph-based decoder can be used, such as MWPM and Union Find. However, edge weights must be dynamically renormalized using flag-qubit measurement outcomes after each syndrome measurement round. Machine-learning decoder [117]. |
Hermitian code | Unique decoding using syndromes and error-locator ideals for polynomial evaluations. Note that Hermitian codes are linear codes, so we can compute the syndrome of a received vector. Moreover, akin to the error-locator ideals found in decoding Reed-Solomon codes, in the multivariate case we must define an error-locator ideal \(\Lambda\) such that the variety of this ideal over \(\mathbb{F}^{2}_{q}\) is exactly the set of errors. The Sakata algorithm uses these two ingredients to obtain a unique decoding procedure [77]. |
Hexacode | Bounded-distance decoder requiring at most 34 real operations [111]. |
Hierarchical code | 2D geometrically local syndrome extraction circuits of depth \(O(\sqrt{n}/R)\) that utilize Clifford and SWAP gates of range \(R\) and that require order \(O(n)\) data and ancilla qubits. Such parameters are possible because the code parameters are such that previous bounds no longer apply [118]. |
High-dimensional expander (HDX) code | For 2D simplicial complexes, cycle code decoder admitting a polynomial-time decoding algorithm can be used [119]. |
Homological bosonic code | Decoding requires a different circuit for each possible erasure error, with no general circuit decoding any possible erasure error. Every circuit relies on a generalized conditional rotation, which Ref. [120] calls the QND Gate and which is defined as \(QND_c | x , y \rangle = |x + c y, y \rangle\). |
Homological product code | Union-find [121]. |
Honeycomb Floquet code | The ISG has a static subgroup for all time steps \(r\geq 3\) – that is, a subgroup which remains a subgroup of the ISG for all future times – given by so-called plaquette stabilizers. These are stabilizers consisting of products of check operators around homologically trivial paths. The syndrome bits correspond to the eigenvalues of the plaquette stabilizers. Because of the structure of the check operators, only one-third of all plaquettes are measured each round. The syndrome bits must therefore be represented by a lattice in spacetime, to reflect when and where the outcome was obtained. |
Hypergraph product (HGP) code | ReShape decoder that uses minimum weight decoders for the classical codes used in the hypergraph construction [122]. 2D geometrically local syndrome extraction circuits with depth order \(O(\sqrt{n})\) using order \(O(n)\) ancilla qubits [118]. Improved BP-OSD decoder [106]. Erasure-correction can be implemented approximately with \(O(n^2)\) operations with quantum generalizations [123] of the peeling and pruned peeling decoders [124], with a probabilistic version running in \(O(n^{1.5})\) operations. |
Interleaved RS (IRS) code | Decoder that corrects up to \(1-\frac{2k+n}{3n}\) fraction of random errors [125]. Decoder that corrects up to \(1-(\frac{k}{n})^{2/3}\) fraction of random errors [126]. |
Irregular repeat-accumulate (IRA) code | Linear-time decoder [127]. |
Justesen code | Generalized minimum distance decoding [128]. |
Kitaev surface code | Degenerate maximum-likelihood (ML) [129], which takes time of order \(O(n^2)\) under independent \(X,Z\) noise for the surface code [130]. Minimum weight perfect-matching (MWPM) [129,131] (based on work by Edmonds on finding a matching in a graph [132,133]), which takes time up to polynomial in \(n\) for the surface code. For the case of the surface code, minimum-weight decoding reduces to MWPM [129,132,134]. Modified MWPM decoders: pipeline MWPM (accounting for correlations between events) [135,136], parity blossom MWPM and fusion blossom MWPM [137], and a modification utilizing the continuous signal obtained in the physical implementation of the stabilizer measurement (as opposed to discretizing the signal into a syndrome bit) [138]. Belief perfect matching is a combination of belief-propagation and MWPM [139]. Renormalization group (RG) [140–142]. Markov-chain Monte Carlo [143]. Tensor network [130]. Cellular automaton [144,145]. Neural network [146–149] and reinforcement learning [150]. Union-find [151]. A subsequent modification utilizes the continuous signal obtained in the physical implementation of the stabilizer measurement (as opposed to discretizing the signal into a syndrome bit) [138]. Belief union find is a combination of belief-propagation and union-find [139]. Decoders can be augmented with a pre-decoder [152,153], which can allow for some processing to be done inside the cryogenic environment of the quantum system [154]. Sliding-window [155,156] and parallel-window [155] parallelizable decoders can be combined with many inner decoders, such as MWPM or union-find. Generalized belief propagation (GBP) [157] based on a classical version [158]. |
Linear \(q\)-ary code | Maximum likelihood (ML) decoding, which decodes a received word to the most likely sent codeword. ML decoding of reduced complexity is possible for virtually all \(q\)-ary linear codes [159]. Optimal symbol-by-symbol decoding rule [160]. Information set decoding (ISD) [161], a probabilistic decoding strategy that essentially tries to guess \(k\) correct positions in the received word, where \(k\) is the size of the code. Then, an error vector is constructed to map the received word onto the nearest codeword, assuming the \(k\) positions are error free. When the Hamming weight of the error vector is low enough, that codeword is assumed to be the intended transmission. Generalized minimum-distance decoder [60]. Soft-decision maximum-likelihood trellis-based decoder [162]. Random linear codes over large fields are list-recoverable and list-decodable up to near-optimal rates [163]. Extensions of algebraic-geometry decoders to linear codes [164,165]. |
Linear binary code | Decoding an arbitrary linear binary code is \(NP\)-complete [166]. Slepian's standard-array decoding [167]. Recursive maximum likelihood decoding [168]. Transformer neural net for soft decoding [169]. |
Locally decodable code (LDC) | LDCs admit local decoders, i.e., decoders whose runtime scales polylogarithmically with \(n\). |
Low-density parity-check (LDPC) code | Message-passing algorithm called belief propagation (BP) [170–173]. Soft-decision Sum-Product Algorithm (SPA) [170,171,174] and its simplification, the Min-Sum Algorithm (MSA) [175]. Linear programming [176–178]. Iterative LDPC decoders can get stuck at stopping sets of their Tanner graphs [179], with decoder performance improving with the size of the smallest stopping set; see [180; Sec. 21.3.1] for more details. The smallest stopping set size can reach the minimum distance of the code [181]. Ensembles of random LDPC codes under iterative decoders are subject to the concentration theorem [171,182]; see [180; Thm. 21.7.1] for the case of the BEC. Reinforcement learning [183]. |
Low-rank parity-check (LRPC) code | Efficient probabilistic decoder [184]. Mixed decoder [185]. |
Luby transform (LT) code | Sum-Product Algorithm (SPA), often called a peeling decoder [186,187], similar to belief propagation [188]. |
MacKay-Neal LDPC (MN-LDPC) code | Free-energy minimization and a BP decoder [189]. |
Matrix-product code | Decoder up to half of the minimum distance for NSC codes [190]. |
Melas code | Algebraic decoder [191]. |
Modular-qudit code | For few-qudit codes (\(n\) is small), decoding can be based on a lookup table. For infinite code families, the size of such a table scales exponentially with \(n\), so approximate decoding algorithms scaling polynomially with \(n\) have to be used. The decoder determining the most likely error given a noise channel is called the maximum-likelihood decoder. |
Modular-qudit stabilizer code | The structure of stabilizer codes allows for syndrome-based decoding, where errors are corrected based on the results of stabilizer measurements (syndromes). Trellis decoder for prime-dimensional qudits, which builds a compact representation of the algebraic structure of the normalizer \(\mathsf{N(S)}\) [192]. |
Modular-qudit surface code | Renormalization-group decoder [193]. |
Monitored random-circuit code | The recovery operation is the reverse unitary transformation with access to the measurement record (for dynamically generated codes with a strong purification transition) [194]. |
NTRU-GKP code | Efficient decoder against stochastic displacement noise because the decoding problem is equivalent to decrypting the NTRU cryptosystem. |
Newman-Moore code | Efficient decoder [195]. |
Niset-Andersen-Cerf code | Optical decoder using three beam splitters, electronic gain detectors, and two phase-insensitive amplifiers as described in Ref. [196]. |
Number-phase code | Modular phase measurement done in the logical \(X\), or dual, basis has zero uncertainty in the case of ideal number-phase codes. This is equivalent to a quantum measurement of the spectrum of the phase operator [197]. Approximate number-phase codes are characterized by vanishing phase uncertainty. Such measurements can be utilized for Knill error correction (a.k.a. telecorrection [198]), which is based on teleportation [199,200]. This type of error correction avoids the complicated correction procedures typical in Fock-state codes, but requires a supply of clean codewords [201]. Performance of this method was analyzed in Ref. [202]. Number measurement can be done by extracting modular number information using a CROT gate \(\mathrm{e}^{(2\pi \mathrm{i} / NM) \hat n \otimes \hat n}\) and performing phase measurements [203,204] on an ancillary mode. See Section 4.B.1 of Ref. [201]. |
On-off keyed (OOK) c-q code | Dolinar receiver [205]. Superconducting transition edge sensor (TES) photon-number resolving detector [206]. |
Orthogonal Spacetime Block Code (OSTBC) | Maximum-likelihood decoding can be achieved with only linear processing [207]. |
PPM c-q code | Conditional pulse nulling (CPN) receiver [208]. |
PSK c-q code | Multi-stage quantum receivers [209–214]. Bayesian inference [51]. |
Pair-cat code | Lindbladian-based dissipative encoding utilizing two-mode two-photon absorption [215]. Encoding passively protects against cavity dephasing, suppressing dephasing noise exponentially with \(\gamma^2\). |
Parvaresh-Vardy (PV) code | PV codes can be list-decoded up to \(1-(t k/n)^{1/(t+1)}\) fraction of errors. This result improves over the Guruswami-Sudan algorithm for ordinary RS codes, which list-decodes up to \(1-\sqrt{k/n}\) fraction of errors. |
Pastawski-Yoshida-Harlow-Preskill (HaPPY) code | Greedy algorithm for decoding specified in Ref. [216]. |
Permutation spherical code | Efficient maximum-likelihood decoder determining the Voronoi region of an error word. |
Permutation-invariant code | For a family of codes, decoding using projection, probability-amplitude rebalancing, and gate teleportation can be done in \(O(N^2)\) time [217]. |
Plane-curve code | Generalization of the Peterson algorithm for BCH codes [218]. |
Polar c-q code | Quantum-limited successive-cancellation (SC) joint-detection receiver [219]. |
Polar code | Successive cancellation (SC) decoder [220]. Successive cancellation list (SCL) decoder [221] and a modification utilizing sequence repetition (SR-List) [222]. Soft cancellation (SCAN) decoder [223,224]. Belief propagation (BP) decoder [225]. Noisy quantum gate-based decoder [226]. |
Quantum Lego code | The decoder is constructed by building a decoding quantum circuit with dangling legs replaced with input/output wires and tensors converted to unitary gates. Maximum likelihood decoding can be used when the tensors are stabilizer codes. Tensor-network decoder when the tensor network is contractible via stabilizer isometries [227]. Tensor-network-based decoder when the encoding unitary is known [228]. |
Quantum Tanner code | Linear-time decoder similar to the small-set-flip decoder for quantum expander codes [67]. Linear-time decoder [83]. Logarithmic-time decoder [84]. |
Quantum convolutional code | ML decoder [229]. |
Quantum expander code | Small set-flip linear-time decoder, which corrects \(\Omega(n^{1/2})\) adversarial errors [230]. Log-time decoder [231]. Constant-time decoder [232]. 2D geometrically local syndrome extraction circuits acting on a patch of \(N\) physical qubits have to be of depth at least \(\Omega(n/\sqrt{N})\) [118]. |
Quantum low-density parity-check (QLDPC) code | Belief-propagation (BP) decoder [233] and neural BP decoder [234] for qubit codes. Non-binary decoding algorithm for CSS-type QLDPC codes [235]. BP-OSD decoder adds a post-processing step based on ordered statistics decoding (OSD) to the belief-propagation (BP) decoder [23]. 2D geometrically local syndrome extraction circuits with bounded depth using order \(O(n^2)\) ancilla qubits [118]. Soft (i.e., analog) syndrome iterative belief propagation for CSS-type QLDPC codes, utilizing the continuous signal obtained in the physical implementation of the stabilizer measurement (as opposed to discretizing the signal into a syndrome bit) [236]. Message-passing decoder utilizing stabilizer inactivation (MP-SI) for CSS-type QLDPC qubit codes [237]. Extension of the union-find decoder for qubit QLDPC codes, as well as a related heuristic decoder [238]. |
Quantum polar code | Quantum successive-cancellation list decoder (SCL-E) for quantum polar codes that do not need entanglement assistance [239]. |
Quantum repetition code | Automaton-like decoders for the repetition code on a 2D lattice, otherwise known as the classical 2D Ising model, were developed by Toom [240,241]. An automaton by Gacs yields a decoder for a 1D lattice [242]. Machine learning algorithm to implement continuous error-correction for the three-qubit quantum repetition code [243]. |
Quantum-double code | For any solvable group \(G\), topological charge measurements can be done with an adaptive constant-depth circuit with geometrically local gates and measurements throughout [244]. |
Qubit CSS code | Coherent decoders allow for measurement-free error correction [245]. One method is table/multi-control decoding [246], which scales exponentially with the number of ancillas used in syndrome measurement. Another method, the Ising-based decoder, utilizes the mapping of the effect of the noise to a statistical mechanical model [129,247] such that the decoding problem maps to preparation of the ground state of an Ising model. Decoders based on neural networks [248]. |
Qubit code | The decoder determining the most likely error given a noise channel is called the maximum-likelihood decoder. For few-qubit codes (\(n\) is small), maximum-likelihood decoding can be based on a lookup table. For infinite code families, the size of such a table scales exponentially with \(n\), so approximate decoding algorithms scaling polynomially with \(n\) have to be used. |
Qubit stabilizer code | The structure of stabilizer codes allows for syndrome-based decoding, where errors are corrected based on the results of stabilizer measurements (syndromes). The size of the circuit extracting the syndrome depends on the weight of its corresponding stabilizer generator. Maximum-likelihood (ML) decoding, i.e., the process of finding the most likely error, is \(NP\)-complete in general [249,250]. If the noise model is such that the most likely error is the lowest-weight error, then ML decoding is called minimum-weight decoding. Degenerate maximum-likelihood decoding, i.e., the process of finding the most likely error class (up to degeneracy of errors), is \(\#P\)-complete in general [251]. Trellis decoder, which builds a compact representation of the algebraic structure of the normalizer \(\mathsf{N(S)}\) [252]. Quantum extension of GRAND decoder [253]. Deep neural-network probabilistic decoder [254]. Generalized belief propagation (GBP) [157] based on a classical version [158]. |
Qudit-into-oscillator code | Given an encoding of a finite-dimensional code, a decoder that yields the optimal entanglement fidelity can be obtained by solving a semi-definite program [5,6] (see also Ref. [7]). This approximate QEC technique can be adapted to bosonic codes as long as they are restricted to a finite-dimensional subspace of the oscillator Hilbert space [37]. |
Random code | Ball-collision decoding [255]. Information set decoding (ISD) [256] and Finiasz and Sendrier (FS-ISD) decoding [257]. |
Rank-metric code | Polynomial-reconstruction Berlekamp-Welch based decoder [258]. Berlekamp-Massey based decoder [259]. |
Raptor (RAPid TORnado) code | Raptor codes can be decoded using inactivation decoding [260], a combination of belief-propagation and Gaussian elimination decoding. |
Raussendorf-Bravyi-Harrington (RBH) cluster-state code | MBQC syndrome extraction consists of single-qubit measurements and classical post-processing. The six \(X\)-measurements of qubits on the faces of a cube of the bcc lattice multiply to the product of the six cluster-state stabilizers whose vertices are on the faces of the cube. Such measurements, if done on a 2D slice, also yield \(Z\)-type syndromes on the next slice. Minimum weight perfect-matching (MWPM) [129,131] (based on work by Edmonds on finding a matching in a graph [132,133]). |
Reed-Muller (RM) code | Reed decoder with \((r+1)\)-step majority decoding corrects \(\frac{1}{2}(2^{m-r}-1)\) errors [261] (see also Ch. 13 of Ref. [262]).Sequential code-reduction decoding [263].First-order (\(r=1\)) RM codes admit specialized decoders [264]. |
Reed-Solomon (RS) code | Although the inverse FFT has a finite-field counterpart, the inverse number-theoretic transform (NTT), decoding is usually done by standard polynomial interpolation in \(O(n\log^2 n)\) operations. However, in erasure decoding, encoded values are erased at only \(r\) points, a special case of polynomial interpolation that can be done in \(O(n\log n)\) operations by computing the product of the received polynomial and an erasure-locator polynomial and using long division to recover the original polynomial. The long-division step can be omitted for further speedup by instead dividing the derivative of the product polynomial by the derivative of the erasure-locator polynomial evaluated at the erasure locations.Berlekamp-Massey decoder with runtime of order \(O(n^2)\) [27,28].Gorenstein-Peterson-Zierler decoder with runtime of order \(O(n^3)\) [24,41] (see exposition in Ref. [26]).Berlekamp-Welch decoder with runtime of order \(O(n^3)\) [265] (see exposition in Ref. [266]), assuming that \(t \geq (n+k)/2\).Gao decoder using the extended Euclidean algorithm [267].Fast-Fourier-transform decoder with runtime of order \(O(n \text{polylog}n)\) [268].List decoders try to find a low-degree bivariate polynomial \(Q(x,y)\) such that the evaluation of \(Q\) at \((\alpha_i,y_i)\) is zero. By choosing proper degrees, such a polynomial can be shown to exist by drawing an analogy between the evaluation of \(Q(\alpha_i,y_i)\) and solving a homogeneous linear equation (interpolation). Once this is done, one lists the roots of \(y\) that agree at \(\geq t\) points. The breakthrough Sudan list-decoding algorithm corrects up to a \(1-\sqrt{2R}\) fraction of errors [269]. Roth and Ruckenstein proposed a modified key equation that allows for correction of more than \(\left\lfloor (n-k)/2 \right\rfloor\) errors [270]. The Guruswami-Sudan algorithm improved the Sudan algorithm to \(1-\sqrt{R}\) [3]; see Ref. [271] for bounds. A further modification by Koetter and Vardy is used for soft-decision decoding [42] (see also Ref. [272]). |
Regular binary Tanner code | Parallel decoding algorithm corrects a fraction \(\delta_0^2/48\) of errors for Tanner codes [85]. A modification of said algorithm improves the fraction to \(\delta_0^2/4\) with no extra cost to complexity [273]. |
Repetition code | Compute the Hamming weight \(w\) of the received word. If \(w\leq \frac{n-1}{2}\), decode to 0; if \(w\geq \frac{n+1}{2}\), decode to 1.Automaton-like decoders for the repetition code on a 2D lattice, otherwise known as the classical 2D Ising model, were developed by Toom [240,241]. An automaton by Gacs yields a decoder for a 1D lattice [242]. |
Rotated surface code | Local neural-network using 3D convolutions, combined with a separate global decoder [274]. |
Simplex code | Due to its small size, the code can be decoded via maximum-likelihood decoding.Some faster decoders for the \(q=2\) case: [275,276].A quantum decoder for the \(q=2\) case: [277]. |
Single parity-check (SPC) code | If the receiver finds that the parity information of a codeword disagrees with the parity bit, then the receiver will discard the information and request a resend.Wagner's rule yields a procedure that is linear in \(n\) [278] (see [279; Sec. 29.7.2] for a description). |
Singleton-bound approaching AQECC | Quantum list decodable [102]. |
Skew-cyclic code | Decoders are only given for skew-BCH codes, adapted from standard BCH decoders. |
Spacetime circuit code | Efficient decoders can be constructed for some circuits [280]. |
Sphere packing | Each signal point is assigned its own Voronoi cell, and a received point is mapped to the center of the Voronoi cell in which it lies upon reception. |
Square-lattice GKP code | Syndrome measurement can be done by applying a controlled displacement conditioned on an ancilla qubit; the syndrome information is obtained by measuring the ancilla qubit after the controlled-displacement operation (see Sec. 2D of Ref. [281]).Pauli \(X\), \(Y\), and \(Z\) measurements can be performed by measuring \(-\hat{p}\), \(\hat{x}-\hat{p}\), and \(\hat{x}\), respectively. If the measurement outcome is close to an even multiple of \(\sqrt{\pi}\), the outcome is \(+1\); if it is close to an odd multiple of \(\sqrt{\pi}\), the outcome is \(-1\) (see Sec. 2D of Ref. [281]).Reinforcement learning decoder that uses only one ancilla qubit [282]. |
String-net code | Fusing nonabelian anyons cannot be done in one step [283].Syndrome measurement circuits analyzed in Ref. [284].Clustering decoder [88]. |
Subsystem QPC | In an \([[n_1 n_2, k_1 k_2, \min(d_1, d_2)]]\) QPC, error correction is achieved by measuring \((n_1-k_1)n_2+(n_2-k_2)\) stabilizer generators [285]. The subsystem QPC achieves the same degree of correctability but requires only \((n_1-k_1)k_2+k_1(n_2-k_2)\) stabilizer measurements. |
Subsystem modular-qudit stabilizer code | Syndrome measurements are obtained by first measuring gauge operators of the code and taking their products, which give the stabilizer measurement outcomes. The order in which gauge operators are measured is important since they do not commute. There is a sufficient condition for inferring the stabilizer syndrome from the measurements of the gauge generators [286; Appendix]. |
Surface-17 code | Lookup table [287]. |
Ta-Shma zigzag code | Unique and list decoders [288]. |
Tanner code | Min-sum and sum-product iterative decoders for binary Tanner codes [289,290]; see also [33,291]. These decoders can be improved using a probabilistic message-passing schedule [292].Any code can be put into normal form without significantly altering the underlying graph or the decoding complexity [293]. For an algebraic viewpoint on decoding, see [294].Iterative decoding is optimal for Tanner graphs that are free of cycles [290]. However, codes that admit cycle-free representations have bounds on their distances [295,296]; see [180,297]. |
Tensor-product code | The simple decoding algorithm (first decode all columns with \(C_1\), then all rows with \(C_2\)) corrects up to \((d_A d_B-1)/4 \) errors.Algorithms such as generalized minimum-distance decoding [60] or the min-sum algorithm can decode all errors of weight up to \((d_A d_B-1)/2\). Error location may be coupled with Viterbi decoding for every faulty sub-block [298]. |
Ternary Golay code | Decoder for the extended ternary Golay code using the tetracode [108]. |
Tetron Majorana code | Qubit readout can be done by charge sensing [299–302]. |
Three-qutrit code | The quantum information (the secret) can be recovered from a unitary transformation acting on only two qutrits, \( U_{ij} \otimes I \), where \(U_{ij}\) acts on qutrits \(i,j\) and \(I\) is the identity on the remaining qutrit. By the cyclic structure of the codewords, this unitary transformation performs a permutation that recovers the information and stores it in one of the two qutrits involved in recovery. |
Tornado code | Linear-time peeling decoder [124]. This decoder terminates either when it has removed a given erasure pattern or when it gets stuck in a stopping set. |
Torus-layer spherical code (TLSC) | Efficiently decodable [303]. |
Translationally invariant stabilizer code | Clustering decoder [144,304]. |
Two-component cat code | All-optical decoder [305] based on Knill error correction (a.k.a. telecorrection [198]), which is based on teleportation [199,200]. |
Two-dimensional hyperbolic surface code | Due to the symmetries of hyperbolic surface codes, optimal measurement schedules of the stabilizers can be found [306]. |
Very small logical qubit (VSLQ) code | Logical qubit can be measured with physical qubit measurements along \(X\). Can be implemented by engineering a coupling of one of the qubits to a readout cavity via the interaction \(\sigma_x (a+a^\dagger)\) [307]. This results in an \(X\)-dependent shift of the readout cavity resonance which can be measured.Star-code autonomous correction scheme [308]. |
Wasilewski-Banaszek code | Destructive decoding via photon-number measurement of each mode. |
X-cube model code | Parallelized matching decoder [309]. |
XYZ\(^2\) hexagonal stabilizer code | Maximum-likelihood decoding using the EWD decoder [310]. |
XZZX surface code | Minimum-weight perfect-matching decoder, which can be used for \(X\) and \(Z\) noise. For \(Y\) noise, a variant of the matching decoder can be used, as is done for the XY code in Ref. [311]. |
Zetterberg code | Kallquist first described an algebraic decoding algorithm [312]. A faster version was later provided in Ref. [313] and further modified in Ref. [314]. |
\([[2^r-1, 2^r-2r-1, 3]]\) Hamming-based CSS code | Efficient decoder [315]. |
\([[9,1,3]]\) Shor code | Bit- and phase-flip circuits utilize CNOT and Hadamard gates ([316], Fig. 10.6). |
\(\chi^{(2)}\) code | Linear optics and \(\chi^{(2)}\) interactions. |
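The weight-threshold rule in the repetition-code row above amounts to a majority vote. A minimal Python sketch (the function name is illustrative, not from any reference):

```python
def decode_repetition(bits):
    """Majority-vote decoder for the length-n binary repetition code (n odd).

    Computes the Hamming weight w of the received word: if w <= (n-1)/2,
    decode to 0; if w >= (n+1)/2, decode to 1.
    """
    n = len(bits)
    w = sum(bits)  # Hamming weight of the received word
    return 0 if w <= (n - 1) // 2 else 1
```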
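Wagner's rule for the single parity-check code, cited in the SPC row, can be sketched as follows. This assumes a soft-decision convention in which the sign of each received value gives the hard decision (negative means bit 1) and its magnitude gives the reliability; the function name is illustrative:

```python
def wagner_decode(received):
    """Wagner-rule decoder for the even-parity single parity-check code.

    Take hard decisions from the signs of the soft values; if their overall
    parity is odd, flip the least reliable (smallest-magnitude) position.
    Runs in time linear in n.
    """
    bits = [1 if r < 0 else 0 for r in received]  # hard decisions
    if sum(bits) % 2 == 1:  # parity check fails
        i = min(range(len(received)), key=lambda j: abs(received[j]))
        bits[i] ^= 1  # flip the least reliable bit
    return bits
```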
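The binning step in the square-lattice GKP row (rounding a homodyne outcome to the nearest multiple of \(\sqrt{\pi}\) and reading off its parity) can be sketched in Python; the function name is illustrative:

```python
import math

SQRT_PI = math.sqrt(math.pi)

def gkp_pauli_outcome(q):
    """Bin a homodyne outcome q for a square-lattice GKP Pauli measurement.

    Rounds q to the nearest integer multiple of sqrt(pi); an even multiple
    is interpreted as outcome +1, an odd multiple as outcome -1.
    """
    m = round(q / SQRT_PI)  # index of the nearest multiple of sqrt(pi)
    return +1 if m % 2 == 0 else -1
```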
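The interpolation step underlying RS erasure decoding can be illustrated with a toy example: given \(k\) error-free evaluations of the message polynomial over a prime field, Lagrange interpolation recovers its coefficients. This quadratic-time textbook sketch (the function name is illustrative) replaces the fast \(O(n\log n)\) erasure-locator method described in the RS row:

```python
def rs_erasure_decode(points, p):
    """Recover an RS message polynomial from k error-free evaluations.

    `points` is a list of k pairs (x_i, y_i) over GF(p) (the unerased
    positions, with distinct x_i); returns the k coefficients (low to high)
    of the unique degree-<k polynomial through them via Lagrange interpolation.
    """
    k = len(points)
    coeffs = [0] * k
    for i, (xi, yi) in enumerate(points):
        num = [1]  # numerator polynomial prod_{j != i} (x - x_j), low-to-high
        denom = 1
        for j, (xj, _) in enumerate(points):
            if j == i:
                continue
            # multiply num by (x - xj)
            num = [(a - xj * b) % p for a, b in zip([0] + num, num + [0])]
            denom = (denom * (xi - xj)) % p
        scale = (yi * pow(denom, p - 2, p)) % p  # Fermat inverse in GF(p)
        for d in range(k):
            coeffs[d] = (coeffs[d] + scale * num[d]) % p
    return coeffs
```

For example, the polynomial \(3 + 2x + x^2\) over GF(7) evaluates to 3, 6, 4 at \(x = 0, 1, 2\), and those three pairs suffice to recover its coefficients.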