Here is a list of classical codes that have notable decoders.

Name | Decoder(s) |
---|---|

Alternant code | Variation of the Berlekamp-Welch algorithm [1]. Euclidean algorithm; see [2; Ch. 12] for more details. Guruswami-Sudan list decoder [3,4]. |

B-code | Efficient decoding algorithm against erasures [5]. |

Balanced code | Efficient decoder [6–8]. |

Binary BCH code | Peterson decoder with runtime of order \(O(n^3)\) [9,10] (see exposition in Ref. [11]). Berlekamp-Massey decoder with runtime of order \(O(n^2)\) [12,13] and modification by Burton [14]; see also [15,16]. Sugiyama et al. modification of the extended Euclidean algorithm [17,18]. Guruswami-Sudan list decoder [3,4]. |
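The Berlekamp-Massey step is easiest to see in its LFSR-synthesis form. Below is a minimal sketch over GF(2) (the function name `berlekamp_massey_gf2` is a hypothetical illustration); a full BCH decoder would run the same recursion over \(\mathbb{F}_{2^m}\) on the syndrome sequence to obtain the error-locator polynomial.

```python
def berlekamp_massey_gf2(s):
    """Return (L, C): the length L and connection polynomial C (C[0] = 1)
    of the shortest LFSR generating the binary sequence s, over GF(2)."""
    n = len(s)
    C = [1] + [0] * n  # current connection polynomial
    B = [1] + [0] * n  # copy from before the last length change
    L, m = 0, 1
    for i in range(n):
        # discrepancy between s[i] and the LFSR's prediction
        d = s[i]
        for j in range(1, L + 1):
            d ^= C[j] & s[i - j]
        if d == 0:
            m += 1
        elif 2 * L <= i:
            T = C[:]
            for j in range(n - m + 1):
                C[j + m] ^= B[j]
            L, B, m = i + 1 - L, T, 1
        else:
            for j in range(n - m + 1):
                C[j + m] ^= B[j]
            m += 1
    return L, C[:L + 1]
```

For the alternating sequence \(0,1,0,1,\dots\) this returns the length-2 recurrence \(s_i = s_{i-2}\), i.e. the connection polynomial \(1 + x^2\).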

Binary code | For few-bit codes (\(n\) is small), decoding can be based on a lookup table. For infinite code families, the size of such a table scales exponentially with \(n\), so approximate decoding algorithms scaling polynomially with \(n\) have to be used. The decoder determining the most likely error given a noise channel is called the maximum-likelihood decoder. Given a received string \(x\) and an error bound \(e\), a list decoder returns a list of all codewords that are at most \(e\) from \(x\) in Hamming distance. The number of codewords in a neighborhood of \(x\) has to be polynomial in \(n\) in order for this decoder to run in time polynomial in \(n\). |
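For concreteness, the lookup-table approach might look as follows for a toy code; `build_lookup_decoder` and the \([3,1]\) repetition example are illustrative assumptions, not a prescribed construction.

```python
from itertools import product

def build_lookup_decoder(codewords):
    """Map every possible received n-bit word to its nearest codeword.
    The table has 2^n entries, so this is only feasible for small n."""
    n = len(codewords[0])
    def hamming(a, b):
        return sum(x != y for x, y in zip(a, b))
    return {w: min(codewords, key=lambda c: hamming(w, c))
            for w in product((0, 1), repeat=n)}

# toy [3,1] repetition code
decode = build_lookup_decoder([(0, 0, 0), (1, 1, 1)])
```

Decoding is then a single dictionary lookup, which is exactly why the approach stops scaling once \(2^n\) entries no longer fit.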

Binary quadratic-residue (QR) code | Algebraic decoder [19]. |

Block code | Decoding an error-correcting code is equivalent to finding the ground state of some statistical mechanical model [20]. |

Bose–Chaudhuri–Hocquenghem (BCH) code | Berlekamp-Massey decoder with runtime of order \(O(n^2)\) [12,13,21] and modification by Burton [14]; see also [15,16]. Gorenstein-Peterson-Zierler decoder with runtime of order \(O(n^3)\) [9,22] (see exposition in Ref. [11]). Sugiyama et al. modification of the extended Euclidean algorithm [17,18]. |

Concatenated code | Generalized minimum-distance decoder [23]. |

Convolutional code | Decoders based on the Viterbi algorithm (trellis decoding) were developed first and return the most likely codeword for the encoded bits [24]. BCJR decoder, also a trellis-based decoder [25]. |

Cyclic linear \(q\)-ary code | Meggitt decoder [26]. Information set decoding (ISD) [27], a probabilistic decoding strategy that essentially tries to guess \(k\) correct positions in the received word, where \(k\) is the dimension of the code. Then, an error vector is constructed to map the received word onto the nearest codeword, assuming the \(k\) positions are error free. When the Hamming weight of the error vector is low enough, that codeword is assumed to be the intended transmission. Permutation decoding [28]. |

Cyclic linear binary code | Meggitt decoder [26]. Information set decoding (ISD) [27], a probabilistic decoding strategy that essentially tries to guess \(k\) correct positions in the received word, where \(k\) is the dimension of the code. Then, an error vector is constructed to map the received word onto the nearest codeword, assuming the \(k\) positions are error free. When the Hamming weight of the error vector is low enough, that codeword is assumed to be the intended transmission. Permutation decoding [28]. |
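The ISD procedure described above can be sketched in the style of Prange's original algorithm, assuming the code is given by a generator matrix over GF(2); all names and the \([7,4]\) Hamming-code example are illustrative.

```python
import random

def solve_gf2(A, b):
    """Solve A x = b over GF(2) by Gaussian elimination; return x or None."""
    k = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(k):
        piv = next((r for r in range(col, k) if M[r][col]), None)
        if piv is None:
            return None
        M[col], M[piv] = M[piv], M[col]
        for r in range(k):
            if r != col and M[r][col]:
                M[r] = [x ^ y for x, y in zip(M[r], M[col])]
    return [M[i][k] for i in range(k)]

def prange_isd(G, y, t, trials=500, seed=0):
    """Guess k error-free coordinates (an information set), solve for the
    message, and accept if the induced codeword is within distance t of y."""
    rng = random.Random(seed)
    k, n = len(G), len(G[0])
    for _ in range(trials):
        S = rng.sample(range(n), k)                   # candidate info set
        A = [[G[i][j] for i in range(k)] for j in S]  # restricted system
        m = solve_gf2(A, [y[j] for j in S])
        if m is None:
            continue
        c = [sum(m[i] & G[i][j] for i in range(k)) % 2 for j in range(n)]
        if sum(a ^ b for a, b in zip(c, y)) <= t:
            return m
    return None

# toy example: [7,4] Hamming code generator matrix (assumed for illustration)
G = [[1, 0, 0, 0, 1, 1, 0],
     [0, 1, 0, 0, 1, 0, 1],
     [0, 0, 1, 0, 0, 1, 1],
     [0, 0, 0, 1, 1, 1, 1]]
y = [1, 0, 1, 1, 0, 0, 0]  # codeword of m = [1,0,1,1] with one bit flipped
```

Each trial succeeds when the sampled positions avoid all error locations and the restricted system is invertible, which is why the method is probabilistic.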

Cyclic redundancy check (CRC) code | GRAND [29]. |

Delsarte-Goethals (DG) code | Since the equivalent \(\mathbb{Z}_4\) codes are extended cyclic codes, efficient encoding and decoding is possible [30]. |

Determinant code | For exact repair, the interior points of the storage-bandwidth trade-off curve can be shown to be the convex hull of \(k\) corner points described by \((\alpha_m,\beta_m)= (\binom{k}{m},\binom{k-1}{m-1})\) for \(m\in\{1,2,\cdots,k\}\). |

EVENODD code | Efficient decoding algorithm against two erasures [31]. |

Error-correcting output code (ECOC) | Standard Hamming-distance decoding [32]. Inverse Hamming decoding [33]. Euclidean-distance decoding or attenuated Euclidean decoding [34]. Loss-based decoding [35]. Probabilistic-based decoding [36]. |

Evaluation AG code | Generalization of the plane-curve decoder [37]. Another decoder [38] was later shown to be equivalent in Ref. [39]. Application of several algorithms in parallel can be used to decode up to half the minimum distance [40,41]. A computational procedure implementing these decoders is based on an extension of the Berlekamp-Massey algorithm by Sakata [42–44]. A decoder based on majority voting of unknown syndromes [45] decodes up to half of the minimum distance [46]. List decoders generalizing Sudan's RS decoder by Shokrollahi-Wasserman [47] and Guruswami-Sudan [4]. |

Expander code | Decoding can be done in \(O(n)\) runtime using a greedy flip decoder [48] (see also [49]). The algorithm flips a bit of the received word whenever doing so increases the number of satisfied parity checks, repeating until a codeword is reached. 'Find erasures and decode', a.k.a. Viderman's algorithm, corrects order \(\Omega(n)\) errors in order \(O(n)\) time [50]. |
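The flip rule can be sketched as follows for any parity-check matrix (a toy illustration with hypothetical names; the linear-time guarantee and the correctable-error fraction rely on the expansion of the underlying graph, which this sketch does not check).

```python
def flip_decode(H, y, max_iters=100):
    """Greedy bit flipping: flip any bit that strictly decreases the number
    of unsatisfied parity checks; stop at a codeword or when stuck."""
    y = list(y)
    m, n = len(H), len(H[0])
    def unsatisfied():
        return sum(sum(H[r][j] & y[j] for j in range(n)) % 2 for r in range(m))
    u = unsatisfied()
    for _ in range(max_iters):
        if u == 0:
            return y                  # reached a codeword
        for j in range(n):
            y[j] ^= 1
            u2 = unsatisfied()
            if u2 < u:
                u = u2
                break                 # keep the flip
            y[j] ^= 1                 # revert
        else:
            return None               # no single flip helps: stuck
    return y if u == 0 else None

# toy example: parity-check matrix of the [7,4] Hamming code (for illustration)
H = [[1, 1, 0, 1, 1, 0, 0],
     [1, 0, 1, 1, 0, 1, 0],
     [0, 1, 1, 1, 0, 0, 1]]
```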

Fibonacci code | An efficient algorithm based on minimum-weight perfect matching [51], which can correct high-weight errors that span rows and columns of the 2D lattice, with failure rate decaying super-exponentially with \(L\). |

Finite-dimensional error-correcting code (ECC) | Capacity-achieving Guessing Random Additive Noise Decoding (GRAND) [52] (see also [53]). |

Folded RS (FRS) code | Guruswami and Rudra [54,55] achieved list-decoding up to \(1-\frac{k}{n}-\epsilon\) fraction of errors using the Parvaresh-Vardy algorithm [56]; see Ref. [57] for a randomized construction. List-decoding works up to the Johnson bound using the Guruswami-Sudan algorithm [58]. Folded RS codes, concatenated with suitable inner codes, can be efficiently list-decoded up to the Blokh-Zyablov bound [54,59]. |

Fountain code | Invert the fragment generator matrix resulting from the continuous encoding process. If exactly \(K\) packets are received, the probability of decoding correctly is about \(0.289\). Extra packets increase this probability exponentially. The decoding runtime is dominated by the matrix inversion step, which takes order \(O(n^3)\) time. |
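The \(0.289\) figure is the large-\(K\) limit of \(\prod_{i=1}^{K}(1-2^{-i})\), the probability that a uniformly random \(K\times K\) matrix over GF(2) is invertible; a quick numerical check (function name illustrative):

```python
def invertibility_prob(K):
    """P(random K x K matrix over GF(2) is invertible) = prod_{i=1}^{K} (1 - 2^-i)."""
    p = 1.0
    for i in range(1, K + 1):
        p *= 1.0 - 2.0 ** -i
    return p
```

Already at modest \(K\) the product is essentially at its limit of about \(0.2888\).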

Gabidulin code | Fast decoder based on a transform-domain approach [60]. Algebraic list decoder that decodes up to the Singleton bound [61]. |

Generalized RS (GRS) code | The decoding process of GRS codes reduces to the solution of a polynomial congruence equation, usually referred to as the key equation. Decoding schemes are based on applications of the Euclidean algorithm to solve the key equation. Berlekamp-Massey decoder with runtime of order \(O(n^2)\) [12,13,21]. Guruswami-Sudan list decoder [3,4] and modification by Koetter-Vardy for soft-decision decoding [62]. Hard-decision decoder for errors within the Singleton bound [63]. |

Golay code | Majority decoding for the extended Golay code [64]. Decoder for the extended Golay code using the hexacode [65]. Both Golay codes have a trellis representation and can thus be decoded using trellis decoding [66,67]. Bounded-distance decoder requiring at most 121 real operations [68]. |

Gold code | General decoding is done by building a sparse parity-check matrix, followed by applying an iterative message-passing algorithm [69]. |

Goppa code | Algebraic decoding algorithms [70]. If \(\deg G(x) = 2t\), then there exists a \(t\)-correcting algebraic decoding algorithm for \(\Gamma(L,G)\). Sugiyama et al. modification of the extended Euclidean algorithm [17,18]. Binary Goppa codes can be decoded using an RS-based decoder [71]. List decoder for binary Goppa codes [72]. |

Hergert code | Since the equivalent \(\mathbb{Z}_4\) codes are extended cyclic codes, efficient encoding and decoding is possible [30,73]. |

Hermitian code | Unique decoding using syndromes and error locator ideals for polynomial evaluations. Note that Hermitian codes are linear codes so we can compute the syndrome of a received vector. Moreover, akin to the error-locator ideals found in decoding RS codes, for the multivariate case we must define an error locator ideal \(\Lambda \) such that the variety of this ideal over \(\mathbb{F}^{2}_q\) is exactly the set of errors. The Sakata algorithm uses these two ingredients to get a unique decoding procedure [42]. |

Hexacode | Bounded-distance decoder requiring at most 34 real operations [68]. |

Interleaved RS (IRS) code | Decoder that corrects up to \(1-\frac{2k+n}{3n}\) fraction of random errors [74]. Decoder that corrects up to \(1-(\frac{k}{n})^{2/3}\) fraction of random errors [75]. |

Irregular repeat-accumulate (IRA) code | Linear-time decoder [76]. |

Justesen code | Generalized minimum distance decoding [77]. |

Kerdock code | Soft-decision decoding involves extending the Fast Hadamard Transform decoding algorithm for the first-order RM code to Kerdock codes [30]. The soft-decision decoding algorithm requires \(4^m\) multiplications and \(m4^m\) additions [30,78]. |

Lattice-based code | Spherical decoder [79,80]. |

Linear STC | Sphere decoder [81–83]. |

Linear \(q\)-ary code | Maximum-likelihood (ML) decoding, which decodes a received word to the most likely transmitted codeword. ML decoding of reduced complexity is possible for virtually all \(q\)-ary linear codes [84]. Optimal symbol-by-symbol decoding rule [85]. Information set decoding (ISD) [86], a probabilistic decoding strategy that essentially tries to guess \(k\) correct positions in the received word, where \(k\) is the dimension of the code. Then, an error vector is constructed to map the received word onto the nearest codeword, assuming the \(k\) positions are error free. When the Hamming weight of the error vector is low enough, that codeword is assumed to be the intended transmission. Generalized minimum-distance decoder [23]. Soft-decision maximum-likelihood trellis-based decoder [87]. Random linear codes over large fields are list-recoverable and list-decodable up to near-optimal rates [88]. Extensions of algebraic-geometry decoders to linear codes [89,90]. |

Linear binary code | Decoding an arbitrary linear binary code is \(NP\)-complete [91]. Slepian's standard-array decoding [92]. Recursive maximum-likelihood decoding [93]. Deep learning [94] and a transformer graph neural net (GNN) for soft decoding [95]. Chase decoding, which uses channel measurement information [96]. |

Linear code with complementary dual (LCD) | The decoding problem reduces to finding the nearest codeword in \(C\) given a word in \(C^{\perp}\) [97]. |

Linearized RS code | Berlekamp-Welch-type decoder [98] and its sum-rank version [99]. |

Locally decodable code (LDC) | LDCs admit local decoders, i.e., decoders whose runtime scales polylogarithmically with \(n\). |

Low-density parity-check (LDPC) code | Message-passing algorithm called belief propagation (BP) [100–102] (see also [103–105]). Soft-decision Sum-Product Algorithm (SPA) [100,103,106] and its simplification, the Min-Sum Algorithm (MSA) [107]. Linear programming [108–110]. Iterative LDPC decoders can get stuck at stopping sets of their Tanner graphs [111], with decoder performance improving with the size of the smallest stopping set; see [112; Sec. 21.3.1] for more details. The smallest stopping set size can reach the minimum distance of the code [113]. Ensembles of random LDPC codes under iterative decoders are subject to the concentration theorem [103,114]; see [112; Thm. 21.7.1] for the case of the BEC. Reinforcement learning [115]. |

Low-rank parity-check (LRPC) code | Efficient probabilistic decoder [116]. Mixed decoder [117]. |

Luby transform (LT) code | Sum-Product Algorithm (SPA), often called a peeling decoder [118,119], similar to belief propagation [120]. |
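A peeling decoder on the erasure channel can be sketched as follows, assuming each received packet carries the XOR of a known subset of source symbols (all names illustrative).

```python
def xor_all(values):
    """XOR of an iterable of ints."""
    out = 0
    for v in values:
        out ^= v
    return out

def peel_decode(packets, k):
    """Peeling decoder: packets are (indices, value) pairs, where value is
    the XOR of the source symbols at those indices. Resolve degree-1
    packets and substitute until all k symbols are known, or stall."""
    packets = [(set(s), v) for s, v in packets]
    known = {}
    while len(known) < k:
        progress = False
        for s, v in packets:
            if len(s) == 1:
                (i,) = s
                if i not in known:
                    known[i] = v
                    progress = True
        if not progress:
            return None  # no degree-1 packet left: stuck in a stopping set
        # substitute recovered symbols into the remaining packets
        packets = [({i for i in s if i not in known},
                    v ^ xor_all(known[i] for i in s if i in known))
                   for s, v in packets]
    return known
```

Decoding stalls exactly when no degree-1 packet remains, i.e. when the remaining packets form a stopping set.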

MacKay-Neal LDPC (MN-LDPC) code | Free-energy minimization and a BP decoder [121]. |

Matrix-product code | Decoder up to half of the minimum distance for NSC codes [122]. |

Melas code | Algebraic decoder [123]. |

Multiplicity code | Multivariate multiplicity codes can be decoded up to half of the minimum distance in polynomial time [124,125]. Univariate [126] and multivariate [124] multiplicity codes can be list-decoded up to the Johnson bound. Certain univariate code families achieve the list-decoding capacity for sufficiently large field characteristic [124,127]. |

Newman-Moore code | Efficient decoder [128]. |

Orthogonal Spacetime Block Code (OSTBC) | Maximum-likelihood decoding can be achieved with only linear processing [129]. |

Parvaresh-Vardy (PV) code | PV codes can be list-decoded up to \(1-(t^t k^t/n^t)^{1/(t+1)}\) fraction of errors. This result improves over the Guruswami-Sudan algorithm for ordinary RS codes, which list-decodes up to \(1-\sqrt{k/n}\) fraction of errors. |
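A quick numeric comparison of the two radii, written in terms of the rate \(R = k/n\) (helper names illustrative):

```python
def pv_radius(R, t):
    """Parvaresh-Vardy list-decoding radius 1 - (t^t * R^t)^(1/(t+1))."""
    return 1.0 - (t ** t * R ** t) ** (1.0 / (t + 1))

def gs_radius(R):
    """Guruswami-Sudan radius 1 - sqrt(R) for plain RS codes."""
    return 1.0 - R ** 0.5
```

For \(t = 1\) the two coincide; at low rates the PV radius is strictly larger, e.g. at \(R = 0.01\) and \(t = 3\) it is roughly \(0.93\) versus \(0.90\).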

Permutation spherical code | Efficient maximum-likelihood decoder determining the Voronoi region of an error word. |

Plane-curve code | Generalization of the Peterson algorithm for BCH codes [37,130,131]. |

Polar code | Successive cancellation (SC) decoder [132]. Successive cancellation list (SCL) decoder [133] and a modification utilizing sequence repetition (SR-List) [134]. Soft cancellation (SCAN) decoder [135,136]. Belief propagation (BP) decoder [137]. Noisy quantum gate-based decoder [138]. |

Preparata code | Preparata codes can be decoded using a syndrome-based algorithm that corrects all error patterns of Lee weight at most 2, detects all error patterns of Lee weight 3, and detects some error patterns of Lee weight 4 [30,78]. |

Random code | Ball-collision decoding [139]. Information set decoding (ISD) [140] and Finiasz and Sendrier (FS-ISD) decoding [141]. |

Rank-metric code | Polynomial-reconstruction Berlekamp-Welch based decoder [142]. Berlekamp-Massey based decoder [143]. |

Raptor (RAPid TORnado) code | Raptor codes can be decoded using inactivation decoding [144], a combination of belief-propagation and Gaussian-elimination decoding. |

Reed-Muller (RM) code | Reed decoder with \((r+1)\)-step majority decoding corrects \(\frac{1}{2}(2^{m-r}-1)\) errors [145] (see also Ch. 13 of Ref. [2]). Sequential code-reduction decoding [146]. Matrix factorization can be used to decode an RM\((n,n-3)\) code [147]; see [148]. |

Reed-Solomon (RS) code | Decoding general RS codes is \(NP\)-hard [149]. Although the inverse FFT has a finite-field counterpart (the inverse number-theoretic transform), decoding is usually done via standard polynomial interpolation in \(O(n\log^2 n)\) time. However, in erasure decoding, encoded values are erased at only \(r\) known points, a special case of polynomial interpolation that can be done in \(O(n\log n)\) time by computing the product of the received polynomial and an erasure-locator polynomial and using long division to find the original polynomial. The long division step can be omitted to increase speed further by dividing only the derivative of the product polynomial by the derivative of the erasure-locator polynomial evaluated at the erasure locations. Berlekamp-Massey decoder with runtime of order \(O(n^2)\) [12,13]. Gorenstein-Peterson-Zierler decoder with runtime of order \(O(n^3)\) [9,22] (see exposition in Ref. [11]). Berlekamp-Welch decoder with runtime of order \(O(n^3)\) [150] (see exposition in Ref. [151]), assuming that \(t \geq (n+k)/2\). Gao decoder using the extended Euclidean algorithm [152]. Fast-Fourier-transform decoder with runtime of order \(O(n\,\text{polylog}\,n)\) [153]. List decoders try to find a low-degree bivariate polynomial \(Q(x,y)\) such that the evaluation of \(Q\) at \((\alpha_i,y_i)\) is zero. By choosing proper degrees, such a polynomial can be shown to exist by drawing an analogy between the evaluation of \(Q(\alpha_i,y_i)\) and solving a homogeneous linear system (interpolation). Once this is done, one lists roots of \(y\) that agree at \(\geq t\) points. The breakthrough Sudan list-decoding algorithm corrects up to \(1-\sqrt{2R}\) fraction of errors asymptotically in \(n\) [154]. Roth and Ruckenstein proposed a modified key equation that allows for correction of more than \(\left\lfloor (n-k)/2 \right\rfloor\) errors [155]. The Guruswami-Sudan algorithm improved the Sudan algorithm to \(1-\sqrt{R}\) [4], meaning that RS codes are list-decodable up to the Johnson bound; see Ref. [156] for bounds. It was later shown that generic RS codes achieve list-decoding capacity [157]. A modification of the Guruswami-Sudan algorithm by Koetter and Vardy is used for soft-decision decoding [62] (see also Ref. [158]). Subcodes of RS codes whose evaluation points lie in a subfield can be decoded up to the \(1-R\) Singleton bound [61]. The ubiquity of RS codes has yielded off-the-shelf VLSI integrated-circuit decoding hardware [159] (see also Ref. [160], Ch. 5 and 10). |
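The erasure-decoding-as-interpolation view can be made concrete with plain Lagrange interpolation over a prime field; the fast product/derivative tricks are omitted here, and the \([5,2]\) toy code over GF(7) plus all names are illustrative assumptions.

```python
def lagrange_interpolate(points, p):
    """Given k points (x_i, y_i) of a degree-(k-1) polynomial over GF(p),
    return a function evaluating that polynomial anywhere in GF(p)."""
    def eval_at(x):
        total = 0
        for i, (xi, yi) in enumerate(points):
            num, den = 1, 1
            for j, (xj, _) in enumerate(points):
                if i != j:
                    num = num * (x - xj) % p
                    den = den * (xi - xj) % p
            # pow(den, -1, p) is the modular inverse (Python 3.8+)
            total = (total + yi * num * pow(den, -1, p)) % p
        return total
    return eval_at

# toy [5,2] RS code over GF(7): f(x) = 3 + 2x, codeword (f(0), ..., f(4)) =
# (3, 5, 0, 2, 4); any two surviving symbols recover the erased ones
f = lagrange_interpolate([(1, 5), (3, 2)], 7)
```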

Regenerating code (RGC) | If the recovered symbols are exactly equal to the erased symbols, the repair is called an exact repair. If the recovered symbols are not exactly equal to the erased symbols but still preserve the code properties, the repair is called a functional repair. |

Regular binary Tanner code | Parallel decoding algorithm corrects a fraction \(\delta_0^2/48\) of errors for Tanner codes [48]. A modification of said algorithm improves the fraction to \(\delta_0^2/4\) with no extra cost to complexity [161]. Soft-decision linear-time decoder correcting errors almost up to half of the Blokh-Zyablov bound [162]. |

Repetition code | Calculate the Hamming weight \(d_H\) of the received word. If \(d_H\leq \frac{n-1}{2}\), decode to 0. If \(d_H\geq \frac{n+1}{2}\), decode to 1. Automaton-like decoders for the repetition code on a 2D lattice, otherwise known as the classical 2D Ising model, were developed by Toom [163,164]. An automaton by Gacs yields a decoder for a 1D lattice [165]. |
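The majority rule above is one line of code (function name illustrative):

```python
def decode_repetition(bits):
    """Majority-vote decoding for an odd-length repetition codeword."""
    return 0 if sum(bits) <= (len(bits) - 1) // 2 else 1
```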

Single parity-check (SPC) code | If the receiver finds that the parity information of a codeword disagrees with the parity bit, then the receiver will discard the information and request a resend. Wagner's rule yields a procedure that is linear in \(n\) [166] (see [167; Sec. 29.7.2] for a description). |
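Wagner's rule admits a short soft-decision sketch, assuming even-parity codewords and per-bit log-likelihood ratios (LLRs) where a negative LLR favors 1 (names illustrative):

```python
def wagner_decode(llrs):
    """Hard-decide each bit from its LLR; if the overall parity check
    fails, flip the least reliable bit (smallest |LLR|)."""
    bits = [1 if l < 0 else 0 for l in llrs]
    if sum(bits) % 2 == 1:  # parity violated
        j = min(range(len(llrs)), key=lambda i: abs(llrs[i]))
        bits[j] ^= 1
    return bits
```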

Sphere packing | Each signal point is assigned its own Voronoi cell, and a received point is decoded to the center of the Voronoi cell in which it lies. |

Subspace code | List decoding up to the Singleton bound [61]. |

Ta-Shma zigzag code | Unique and list decoders [168]. |

Tamo-Barg code | Polynomial evaluation over \(r\) points [169]. |

Tanner code | Min-sum and sum-product iterative decoders for binary Tanner codes [170,171]; see also [18,172]. These decoders can be improved using a probabilistic message-passing schedule [173]. Any code can be put into normal form without significantly altering the underlying graph or the decoding complexity [174]. For an algebraic viewpoint on decoding, see [175]. Iterative decoding is optimal for Tanner graphs that are free of cycles [171]. However, codes that admit cycle-free representations have bounds on their distances [176,177]; see [112,178]. |

Tensor-product code | The simple decoding algorithm (first decode all columns with \(C_1\), then all rows with \(C_2\)) corrects up to \((d_A d_B-1)/4\) errors. Algorithms such as generalized minimum-distance decoding [23] or the min-sum algorithm can decode all errors of weight up to \((d_A d_B-1)/2\). Error location may be coupled with Viterbi decoding for every faulty sub-block [179]. |

Ternary Golay code | Decoder for the extended ternary Golay code using the tetracode [65]. |

Tornado code | Linear-time peeling decoder [180]. This decoder either terminates when it has removed a given erasure pattern or when it is stuck in a stopping set. |

Torus-layer spherical code (TLSC) | Efficiently decodable [181]. |

Turbo code | Turbo decoder [182], an instance of BP decoding [183]. Maximum a posteriori (MAP) decoder [184] and a soft-output derivative [185]. The use of soft outputs can improve code performance [186]. List decoding [187]. VLSI integrated-circuit decoding hardware [188]. Autoencoder [189]. |

Varshamov-Tenengolts (VT) code | Decoder based on checksums \(\sum_{i=1}^n i~x_i^{\prime}\) of corrupted codewords \(x_i^{\prime}\) [190,191]. |
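The checksum itself is a one-liner; a minimal sketch (name illustrative), using the convention that codewords of \(VT_a(n)\) satisfy \(\sum_{i=1}^n i\,x_i \equiv a \pmod{n+1}\):

```python
def vt_syndrome(x):
    """VT checksum sum_i i * x_i (1-indexed) modulo n + 1."""
    n = len(x)
    return sum(i * xi for i, xi in enumerate(x, start=1)) % (n + 1)
```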

Zetterberg code | Kallquist first described an algebraic decoding theorem [192]. A faster version was later provided in Ref. [193] and further modified in Ref. [194]. |

\([2^m,m+1,2^{m-1}]\) First-order RM code | First-order RM codes admit specialized decoders, such as the Fast Hadamard Transform decoder [195]. |

\([2^m-1,m,2^{m-1}]\) simplex code | Serial orthogonal decoder [196,197]. Quantum decoder [198]. |

\(q\)-ary code | For small \(n\), decoding can be based on a lookup table. For infinite code families, the size of such a table scales exponentially with \(n\), so approximate decoding algorithms scaling polynomially with \(n\) have to be used. The decoder determining the most likely error given a noise channel is called the maximum-likelihood decoder. Given a received string \(x\) and an error bound \(e\), a list decoder returns a list of all codewords that are at most \(e\) from \(x\) in Hamming distance. The number of codewords in a neighborhood of \(x\) has to be polynomial in \(n\) in order for this decoder to run in time polynomial in \(n\). |

\(q\)-ary simplex code | Permutation decoder for simplex [199] and MacDonald [200] codes. |

## References

- [1]
- H. Helgert, “Decoding of alternant codes (Corresp.)”, IEEE Transactions on Information Theory 23, 513 (1977) DOI
- [2]
- F. J. MacWilliams and N. J. A. Sloane. The theory of error correcting codes. Elsevier, 1977.
- [3]
- V. Guruswami and M. Sudan, “Improved decoding of Reed-Solomon and algebraic-geometry codes”, IEEE Transactions on Information Theory 45, 1757 (1999) DOI
- [4]
- V. Guruswami and M. Sudan, “Improved decoding of Reed-Solomon and algebraic-geometric codes”, Proceedings 39th Annual Symposium on Foundations of Computer Science (Cat. No.98CB36280) DOI
- [5]
- M. Blaum and R. M. Roth, “New array codes for multiple phased burst correction”, IEEE Transactions on Information Theory 39, 66 (1993) DOI
- [6]
- D. Knuth, “Efficient balanced codes”, IEEE Transactions on Information Theory 32, 51 (1986) DOI
- [7]
- S. Al-Bassam and B. Bose, “On balanced codes”, IEEE Transactions on Information Theory 36, 406 (1990) DOI
- [8]
- K. A. Schouhamer Immink and J. H. Weber, “Very Efficient Balanced Codes”, IEEE Journal on Selected Areas in Communications 28, 188 (2010) DOI
- [9]
- W. Peterson, “Encoding and error-correction procedures for the Bose-Chaudhuri codes”, IEEE Transactions on Information Theory 6, 459 (1960) DOI
- [10]
- S. Arimoto, "Encoding and decoding of p-ary group codes and the correction system," Information Processing in Japan (in Japanese), vol. 2, pp. 320-325, Nov. 1961.
- [11]
- R.E. Blahut, Theory and practice of error-control codes, Addison-Wesley 1983.
- [12]
- J. Massey, “Shift-register synthesis and BCH decoding”, IEEE Transactions on Information Theory 15, 122 (1969) DOI
- [13]
- E. R. Berlekamp, Algebraic Coding Theory, McGraw-Hill, 1968
- [14]
- H. Burton, “Inversionless decoding of binary BCH codes”, IEEE Transactions on Information Theory 17, 464 (1971) DOI
- [15]
- W. W. Peterson and E. J. Weldon, Error-correcting codes. MIT press 1972.
- [16]
- R. Gallager, Information Theory and Reliable Communication (Springer Vienna, 1972) DOI
- [17]
- Y. Sugiyama et al., “A method for solving key equation for decoding goppa codes”, Information and Control 27, 87 (1975) DOI
- [18]
- R. McEliece, The Theory of Information and Coding (Cambridge University Press, 2002) DOI
- [19]
- Chen, Y. H., Truong, T. K., Chang, Y., Lee, C. D., & Chen, S. H. (2007). Algebraic decoding of quadratic residue codes using Berlekamp-Massey algorithm. Journal of information science and engineering, 23(1), 127-145.
- [20]
- N. Sourlas, “Spin-glass models as error-correcting codes”, Nature 339, 693 (1989) DOI
- [21]
- E. Berlekamp, “Nonbinary BCH decoding (Abstr.)”, IEEE Transactions on Information Theory 14, 242 (1968) DOI
- [22]
- D. Gorenstein and N. Zierler, “A Class of Error-Correcting Codes in \(p^m \) Symbols”, Journal of the Society for Industrial and Applied Mathematics 9, 207 (1961) DOI
- [23]
- G. Forney, “Generalized minimum distance decoding”, IEEE Transactions on Information Theory 12, 125 (1966) DOI
- [24]
- A. Viterbi, “Error bounds for convolutional codes and an asymptotically optimum decoding algorithm”, IEEE Transactions on Information Theory 13, 260 (1967) DOI
- [25]
- L. Bahl et al., “Optimal decoding of linear codes for minimizing symbol error rate (Corresp.)”, IEEE Transactions on Information Theory 20, 284 (1974) DOI
- [26]
- J. Meggitt, “Error correcting codes and their implementation for data transmission systems”, IEEE Transactions on Information Theory 7, 234 (1961) DOI
- [27]
- E. Prange, “The use of information sets in decoding cyclic codes”, IEEE Transactions on Information Theory 8, 5 (1962) DOI
- [28]
- J. Macwilliams, “Permutation Decoding of Systematic Codes”, Bell System Technical Journal 43, 485 (1964) DOI
- [29]
- W. An, M. Medard, and K. R. Duffy, “CRC Codes as Error Correction Codes”, ICC 2021 - IEEE International Conference on Communications (2021) arXiv:2104.13663 DOI
- [30]
- A. R. Hammons et al., “The Z_4-linearity of Kerdock, Preparata, Goethals, and related codes”, IEEE Transactions on Information Theory 40, 301 (1994) DOI
- [31]
- M. Blaum et al., “EVENODD: an efficient scheme for tolerating double disk failures in RAID architectures”, IEEE Transactions on Computers 44, 192 (1995) DOI
- [32]
- Nilsson, Nils J. "Learning machines." (1965).
- [33]
- T. Windeatt and R. Ghaderi, “Coding and decoding strategies for multi-class learning problems”, Information Fusion 4, 11 (2003) DOI
- [34]
- O. Pujol, S. Escalera, and P. Radeva, “An incremental node embedding technique for error correcting output codes”, Pattern Recognition 41, 713 (2008) DOI
- [35]
- Allwein, Erin L., Robert E. Schapire, and Yoram Singer. "Reducing multiclass to binary: A unifying approach for margin classifiers." Journal of machine learning research 1.Dec (2000): 113-141.
- [36]
- A. Passerini, M. Pontil, and P. Frasconi, “New Results on Error Correcting Output Codes of Kernel Machines”, IEEE Transactions on Neural Networks 15, 45 (2004) DOI
- [37]
- J. Justesen et al., “Construction and decoding of a class of algebraic geometry codes”, IEEE Transactions on Information Theory 35, 811 (1989) DOI
- [38]
- S. C. Porter, B.-Z. Shen, and R. Pellikaan, “Decoding geometric Goppa codes using an extra place”, IEEE Transactions on Information Theory 38, 1663 (1992) DOI
- [39]
- D. Ehrhard, “Decoding Algebraic-Geometric Codes by solving a key equation”, Lecture Notes in Mathematics 18 (1992) DOI
- [40]
- R. Pellikaan, “On a decoding algorithm for codes on maximal curves”, IEEE Transactions on Information Theory 35, 1228 (1989) DOI
- [41]
- S. Vladut, “On the decoding of algebraic-geometric codes over F_q for q ≤ 16”, IEEE Transactions on Information Theory 36, 1461 (1990) DOI
- [42]
- S. Sakata, “Finding a minimal set of linear recurring relations capable of generating a given finite two-dimensional array”, Journal of Symbolic Computation 5, 321 (1988) DOI
- [43]
- S. Sakata, “Extension of the Berlekamp-Massey algorithm to N dimensions”, Information and Computation 84, 207 (1990) DOI
- [44]
- S. Sakata, “Decoding binary 2-D cyclic codes by the 2-D Berlekamp-Massey algorithm”, IEEE Transactions on Information Theory 37, 1200 (1991) DOI
- [45]
- G.-L. Feng and T. R. N. Rao, “Decoding algebraic-geometric codes up to the designed minimum distance”, IEEE Transactions on Information Theory 39, 37 (1993) DOI
- [46]
- D. Ehrhard, “Achieving the designed error capacity in decoding algebraic-geometric codes”, IEEE Transactions on Information Theory 39, 743 (1993) DOI
- [47]
- M. A. Shokrollahi and H. Wasserman, “List decoding of algebraic-geometric codes”, IEEE Transactions on Information Theory 45, 432 (1999) DOI
- [48]
- M. Sipser and D. A. Spielman, “Expander codes”, IEEE Transactions on Information Theory 42, 1710 (1996) DOI
- [49]
- J. Feldman et al., “LP Decoding Corrects a Constant Fraction of Errors”, IEEE Transactions on Information Theory 53, 82 (2007) DOI
- [50]
- M. Viderman, “Linear-time decoding of regular expander codes”, ACM Transactions on Computation Theory 5, 1 (2013) DOI
- [51]
- G. M. Nixon and B. J. Brown, “Correcting Spanning Errors With a Fractal Code”, IEEE Transactions on Information Theory 67, 4504 (2021) arXiv:2002.11738 DOI
- [52]
- K. R. Duffy, J. Li, and M. Medard, “Capacity-Achieving Guessing Random Additive Noise Decoding”, IEEE Transactions on Information Theory 65, 4023 (2019) arXiv:1802.07010 DOI
- [53]
- K. R. Duffy, J. Li, and M. Medard, “Guessing noise, not code-words”, 2018 IEEE International Symposium on Information Theory (ISIT) (2018) DOI
- [54]
- V. Guruswami and A. Rudra, “Explicit Codes Achieving List Decoding Capacity: Error-correction with Optimal Redundancy”, (2007) arXiv:cs/0511072
- [55]
- Atri Rudra. List Decoding and Property Testing of Error Correcting Codes. PhD thesis, University of Washington, 8 2007.
- [56]
- F. Parvaresh and A. Vardy, “Correcting Errors Beyond the Guruswami-Sudan Radius in Polynomial Time”, 46th Annual IEEE Symposium on Foundations of Computer Science (FOCS’05) DOI
- [57]
- V. Guruswami, “Linear-Algebraic List Decoding of Folded Reed-Solomon Codes”, 2011 IEEE 26th Annual Conference on Computational Complexity (2011) arXiv:1106.0436 DOI
- [58]
- S. Bhandari et al., “Ideal-Theoretic Explanation of Capacity-Achieving Decoding”, (2021) arXiv:2103.07930 DOI
- [59]
- V. Guruswami and A. Rudra, “Better Binary List Decodable Codes Via Multilevel Concatenation”, IEEE Transactions on Information Theory 55, 19 (2009) DOI
- [60]
- D. Silva and F. R. Kschischang, “Fast encoding and decoding of Gabidulin codes”, 2009 IEEE International Symposium on Information Theory (2009) arXiv:0901.2483 DOI
- [61]
- V. Guruswami and C. Xing, “List decoding reed-solomon, algebraic-geometric, and gabidulin subcodes up to the singleton bound”, Proceedings of the forty-fifth annual ACM symposium on Theory of Computing (2013) DOI
- [62]
- R. Koetter and A. Vardy, “Algebraic soft-decision decoding of reed-solomon codes”, IEEE Transactions on Information Theory 49, 2809 (2003) DOI
- [63]
- Berman, A., Dor, A., Shany, Y., Shapir, I., and Doubchak, A. (2023). U.S. Patent No. 11,855,658. Washington, DC: U.S. Patent and Trademark Office.
- [64]
- J.-M. Goethals, “On the Golay perfect binary code”, Journal of Combinatorial Theory, Series A 11, 178 (1971) DOI
- [65]
- V. Pless, “Decoding the Golay codes”, IEEE Transactions on Information Theory 32, 561 (1986) DOI
- [66]
- A. J. Viterbi, “Error Bounds for Convolutional Codes and an Asymptotically Optimum Decoding Algorithm”, The Foundations of the Digital Wireless World 41 (2009) DOI
- [67]
- B. Honary and G. Markarian, “New simple encoder and trellis decoder for Golay codes”, Electronics Letters 29, 2170 (1993) DOI
- [68]
- A. Vardy, “Even more efficient bounded-distance decoding of the hexacode, the Golay code, and the Leech lattice”, IEEE Transactions on Information Theory 41, 1495 (1995) DOI
- [69]
- O. W. Yeung and K. M. Chugg, “An Iterative Algorithm and Low Complexity Hardware Architecture for Fast Acquisition of Long PN Codes in UWB Systems”, Journal of VLSI signal processing systems for signal, image and video technology 43, 25 (2006) DOI
- [70]
- N. Patterson, “The algebraic decoding of Goppa codes”, IEEE Transactions on Information Theory 21, 203 (1975) DOI
- [71]
- Daniel J. Bernstein, "Understanding binary-Goppa decoding." Cryptology ePrint Archive (2022).
- [72]
- P. Beelen et al., “On Rational Interpolation-Based List-Decoding and List-Decoding Binary Goppa Codes”, IEEE Transactions on Information Theory 59, 3269 (2013) DOI
- [73]
- T. Helleseth and P. V. Kumar, “The algebraic decoding of the Z_4-linear Goethals code”, IEEE Transactions on Information Theory 41, 2040 (1995) DOI
- [74]
- D. Bleichenbacher, A. Kiayias, and M. Yung, “Decoding interleaved Reed–Solomon codes over noisy channels”, Theoretical Computer Science 379, 348 (2007) DOI
- [75]
- D. Coppersmith and M. Sudan, “Reconstructing curves in three (and higher) dimensional space from noisy data”, Proceedings of the thirty-fifth annual ACM symposium on Theory of computing (2003) DOI
- [76]
- H. Jin, A. Khandekar, and R. J. McEliece, “Irregular repeat-accumulate codes”, Proc. 2nd Int. Symp. on Turbo Codes and Related Topics (2000).
- [77]
- J. Justesen, “Class of constructive asymptotically good algebraic codes”, IEEE Transactions on Information Theory 18, 652 (1972) DOI
- [78]
- A. R. Hammons Jr. et al., “The Z_4-Linearity of Kerdock, Preparata, Goethals and Related Codes”, (2002) arXiv:math/0207208
- [79]
- U. Fincke and M. Pohst, “Improved methods for calculating vectors of short length in a lattice, including a complexity analysis”, Mathematics of Computation 44, 463 (1985) DOI
- [80]
- C. P. Schnorr and M. Euchner, “Lattice basis reduction: Improved practical algorithms and solving subset sum problems”, Mathematical Programming 66, 181 (1994) DOI
- [81]
- O. Damen, A. Chkeif, and J.-C. Belfiore, “Lattice code decoder for space-time codes”, IEEE Communications Letters 4, 161 (2000) DOI
- [82]
- B. Hassibi and H. Vikalo, “On the sphere-decoding algorithm I. Expected complexity”, IEEE Transactions on Signal Processing 53, 2806 (2005) DOI
- [83]
- E. Viterbo and J. Boutros, “A universal lattice code decoder for fading channels”, IEEE Transactions on Information Theory 45, 1639 (1999) DOI
- [84]
- I. Dumer, “Maximum likelihood decoding with reduced complexity”, Proceedings of IEEE International Symposium on Information Theory DOI
- [85]
- C. Hartmann and L. Rudolph, “An optimum symbol-by-symbol decoding rule for linear codes”, IEEE Transactions on Information Theory 22, 514 (1976) DOI
- [86]
- C. Peters, “Information-Set Decoding for Linear Codes over \(F_q\)”, Post-Quantum Cryptography 81 (2010) DOI
- [87]
- J. Wolf, “Efficient maximum likelihood decoding of linear block codes using a trellis”, IEEE Transactions on Information Theory 24, 76 (1978) DOI
- [88]
- A. Rudra and M. Wootters, “Average-radius list-recovery of random linear codes: it really ties the room together”, (2017) arXiv:1704.02420
- [89]
- R. Kotter, “A unified description of an error locating procedure for linear codes”, in Proc. 3rd International Workshop on Algebraic and Combinatorial Coding Theory, ed. D. Yorgov (Hermes, Voneshta Voda, Bulgaria, 1992), pp. 113–117.
- [90]
- R. Pellikaan, “On decoding by error location and dependent sets of error positions”, Discrete Mathematics 106–107, 369 (1992) DOI
- [91]
- E. Berlekamp, R. McEliece, and H. van Tilborg, “On the inherent intractability of certain coding problems (Corresp.)”, IEEE Transactions on Information Theory 24, 384 (1978) DOI
- [92]
- D. Slepian, “Some Further Theory of Group Codes”, Bell System Technical Journal 39, 1219 (1960) DOI
- [93]
- Y. S. Han et al., “Maximum-likelihood Soft-decision Decoding for Binary Linear Block Codes Based on Their Supercodes”, (2014) arXiv:1408.1310
- [94]
- E. Nachmani, Y. Beery, and D. Burshtein, “Learning to Decode Linear Codes Using Deep Learning”, (2016) arXiv:1607.04793
- [95]
- Y. Choukroun and L. Wolf, “Error Correction Code Transformer”, (2022) arXiv:2203.14966
- [96]
- D. Chase, “Class of algorithms for decoding block codes with channel measurement information”, IEEE Transactions on Information Theory 18, 170 (1972) DOI
- [97]
- J. L. Massey, “Linear codes with complementary duals”, Discrete Mathematics 106–107, 337 (1992) DOI
- [98]
- S. Liu, F. Manganiello, and F. R. Kschischang, “Construction and decoding of generalized skew-evaluation codes”, 2015 IEEE 14th Canadian Workshop on Information Theory (CWIT) (2015) DOI
- [99]
- U. Martinez-Penas and F. R. Kschischang, “Reliable and Secure Multishot Network Coding Using Linearized Reed-Solomon Codes”, IEEE Transactions on Information Theory 65, 4785 (2019) arXiv:1805.03789 DOI
- [100]
- R. Gallager, Low-Density Parity-Check Codes, PhD thesis, MIT, Cambridge, MA (1963).
- [101]
- J. Pearl, “Reverend Bayes on Inference Engines: A Distributed Hierarchical Approach”, Probabilistic and Causal Inference 129 (2022).
- [102]
- J. Pearl, Probabilistic Reasoning in Intelligent Systems (Elsevier, 1988) DOI
- [103]
- T. J. Richardson and R. L. Urbanke, “The capacity of low-density parity-check codes under message-passing decoding”, IEEE Transactions on Information Theory 47, 599 (2001) DOI
- [104]
- S. Lin and D. J. Costello, Error Control Coding, 2nd ed. Englewood Cliffs, NJ: Prentice-Hall, 2004.
- [105]
- K. Shimizu et al., “A parallel LSI architecture for LDPC decoder improving message-passing schedule”, 2006 IEEE International Symposium on Circuits and Systems DOI
- [106]
- F. R. Kschischang, B. J. Frey, and H.-A. Loeliger, “Factor graphs and the sum-product algorithm”, IEEE Transactions on Information Theory 47, 498 (2001) DOI
- [107]
- J. Chen et al., “Reduced-Complexity Decoding of LDPC Codes”, IEEE Transactions on Communications 53, 1288 (2005) DOI
- [108]
- J. Feldman. Decoding Error-Correcting Codes via Linear Programming. PhD thesis, Massachusetts Institute of Technology, 2003.
- [109]
- J. Feldman, M. J. Wainwright, and D. R. Karger, “Using Linear Programming to Decode Binary Linear Codes”, IEEE Transactions on Information Theory 51, 954 (2005) DOI
- [110]
- J. Feldman, “LP Decoding”, Encyclopedia of Algorithms 1177 (2016) DOI
- [111]
- C. Di et al., “Finite-length analysis of low-density parity-check codes on the binary erasure channel”, IEEE Transactions on Information Theory 48, 1570 (2002) DOI
- [112]
- C. A. Kelley, "Codes over Graphs." Concise Encyclopedia of Coding Theory (Chapman and Hall/CRC, 2021) DOI
- [113]
- M. Schwartz and A. Vardy, “On the stopping distance and the stopping redundancy of codes”, IEEE Transactions on Information Theory 52, 922 (2006) DOI
- [114]
- T. J. Richardson, M. A. Shokrollahi, and R. L. Urbanke, “Design of capacity-approaching irregular low-density parity-check codes”, IEEE Transactions on Information Theory 47, 619 (2001) DOI
- [115]
- S. Habib, A. Beemer, and J. Kliewer, “RELDEC: Reinforcement Learning-Based Decoding of Moderate Length LDPC Codes”, (2023) arXiv:2112.13934
- [116]
- P. Gaborit, G. Murat, O. Ruatta, and G. Zemor, “Low rank parity check codes and their application to cryptography”, Proc. Workshop on Coding and Cryptography (WCC) (2013).
- [117]
- P. Gaborit et al., “RankSign: An Efficient Signature Algorithm Based on the Rank Metric”, Post-Quantum Cryptography 88 (2014) DOI
- [118]
- T. Richardson and R. Urbanke, Modern Coding Theory (Cambridge University Press, 2008) DOI
- [119]
- D. J. C. MacKay, Information Theory, Inference & Learning Algorithms (Cambridge University Press, 2002).
- [120]
- J. Pearl, “Reverend Bayes on Inference Engines: A Distributed Hierarchical Approach”, Probabilistic and Causal Inference 129 (2022) DOI
- [121]
- D. J. C. MacKay and R. M. Neal, “Good codes based on very sparse matrices”, Cryptography and Coding 100 (1995) DOI
- [122]
- F. Hernando, K. Lally, and D. Ruano, “Construction and decoding of matrix-product codes from nested codes”, Applicable Algebra in Engineering, Communication and Computing 20, 497 (2009) DOI
- [123]
- A. Alahmadi et al., “On the lifted Melas code”, Cryptography and Communications 8, 7 (2015) DOI
- [124]
- S. Kopparty, “List-Decoding Multiplicity Codes”, Theory of Computing 11, 149 (2015) DOI
- [125]
- S. Bhandari et al., “Decoding multivariate multiplicity codes on product sets”, Proceedings of the 53rd Annual ACM SIGACT Symposium on Theory of Computing (2021) arXiv:2012.01530 DOI
- [126]
- R. R. Nielsen, List Decoding of Linear Block Codes, PhD thesis, Technical University of Denmark (2001).
- [127]
- V. Guruswami and C. Wang, “Optimal rate list decoding via derivative codes”, (2011) arXiv:1106.3951
- [128]
- D. R. Chowdhury et al., “Design of CAECC - cellular automata based error correcting code”, IEEE Transactions on Computers 43, 759 (1994) DOI
- [129]
- V. Tarokh, H. Jafarkhani, and A. R. Calderbank, “Space-time block coding for wireless communications: performance results”, IEEE Journal on Selected Areas in Communications 17, 451 (1999) DOI
- [130]
- A. N. Skorobogatov and S. G. Vladut, “On the decoding of algebraic-geometric codes”, IEEE Transactions on Information Theory 36, 1051 (1990) DOI
- [131]
- V. Yu. Krachkovskii, "Decoding of codes on algebraic curves," (in Russian), Conference Odessa, 1988.
- [132]
- E. Arikan, “Channel Polarization: A Method for Constructing Capacity-Achieving Codes for Symmetric Binary-Input Memoryless Channels”, IEEE Transactions on Information Theory 55, 3051 (2009) DOI
- [133]
- I. Tal and A. Vardy, “List Decoding of Polar Codes”, IEEE Transactions on Information Theory 61, 2213 (2015) DOI
- [134]
- Y. Ren et al., “A Sequence Repetition Node-Based Successive Cancellation List Decoder for 5G Polar Codes: Algorithm and Implementation”, IEEE Transactions on Signal Processing 70, 5592 (2022) arXiv:2205.08857 DOI
- [135]
- U. U. Fayyaz and J. R. Barry, “Low-Complexity Soft-Output Decoding of Polar Codes”, IEEE Journal on Selected Areas in Communications 32, 958 (2014) DOI
- [136]
- U. U. Fayyaz and J. R. Barry, “Polar codes for partial response channels”, 2013 IEEE International Conference on Communications (ICC) (2013) DOI
- [137]
- E. Arikan, “A performance comparison of polar codes and Reed-Muller codes”, IEEE Communications Letters 12, 447 (2008) DOI
- [138]
- S. Kasi et al., “Decoding Polar Codes via Noisy Quantum Gates: Quantum Circuits and Insights”, (2022) arXiv:2210.10854
- [139]
- D. J. Bernstein, T. Lange, and C. Peters, “Smaller Decoding Exponents: Ball-Collision Decoding”, Advances in Cryptology – CRYPTO 2011 743 (2011) DOI
- [140]
- A. Becker et al., “Decoding Random Binary Linear Codes in \(2^{n/20}\): How 1 + 1 = 0 Improves Information Set Decoding”, Advances in Cryptology – EUROCRYPT 2012 520 (2012) DOI
- [141]
- M. Finiasz and N. Sendrier, “Security Bounds for the Design of Code-Based Cryptosystems”, Advances in Cryptology – ASIACRYPT 2009 88 (2009) DOI
- [142]
- P. Loidreau, “A Welch–Berlekamp Like Algorithm for Decoding Gabidulin Codes”, Coding and Cryptography 36 (2006) DOI
- [143]
- G. Richter and S. Plass, “Fast decoding of rank-codes with rank errors and column erasures”, International Symposium on Information Theory, 2004. ISIT 2004. Proceedings. DOI
- [144]
- F. Lazaro, G. Liva, and G. Bauch, “Inactivation Decoding of LT and Raptor Codes: Analysis and Code Design”, IEEE Transactions on Communications 1 (2017) arXiv:1706.05814 DOI
- [145]
- D. E. Muller, “Application of Boolean algebra to switching circuit design and to error detection”, Transactions of the I.R.E. Professional Group on Electronic Computers EC-3, 6 (1954) DOI
- [146]
- L. Rudolph and C. Hartmann, “Decoding by sequential code reduction”, IEEE Transactions on Information Theory 19, 549 (1973) DOI
- [147]
- G. Seroussi and A. Lempel, “Factorization of Symmetric Matrices and Trace-Orthogonal Bases in Finite Fields”, SIAM Journal on Computing 9, 758 (1980) DOI
- [148]
- E. T. Campbell and M. Howard, “Unified framework for magic state distillation and multiqubit gate synthesis with reduced resource cost”, Physical Review A 95, (2017) arXiv:1606.01904 DOI
- [149]
- V. Guruswami and A. Vardy, “Maximum-likelihood decoding of Reed-Solomon Codes is NP-hard”, (2004) arXiv:cs/0405005
- [150]
- E. R. Berlekamp and L. Welch, Error Correction of Algebraic Block Codes, U.S. Patent No. 4,633,470 (1986).
- [151]
- P. Gemmell and M. Sudan, “Highly resilient correctors for polynomials”, Information Processing Letters 43, 169 (1992) DOI
- [152]
- S. Gao, “A New Algorithm for Decoding Reed-Solomon Codes”, Communications, Information and Network Security 55 (2003) DOI
- [153]
- I. Reed et al., “The fast decoding of Reed-Solomon codes using Fermat theoretic transforms and continued fractions”, IEEE Transactions on Information Theory 24, 100 (1978) DOI
- [154]
- M. Sudan, “Decoding of Reed Solomon Codes beyond the Error-Correction Bound”, Journal of Complexity 13, 180 (1997) DOI
- [155]
- R. M. Roth and G. Ruckenstein, “Efficient decoding of Reed-Solomon codes beyond half the minimum distance”, IEEE Transactions on Information Theory 46, 246 (2000) DOI
- [156]
- V. Guruswami and A. Rudra, “Limits to List Decoding Reed–Solomon Codes”, IEEE Transactions on Information Theory 52, 3642 (2006) DOI
- [157]
- J. Brakensiek, S. Gopi, and V. Makam, “Generic Reed-Solomon Codes Achieve List-decoding Capacity”, (2024) arXiv:2206.05256
- [158]
- A. Vardy and Y. Be’ery, “Bit-level soft-decision decoding of Reed-Solomon codes”, IEEE Transactions on Communications 39, 440 (1991) DOI
- [159]
- D. V. Sarwate and N. R. Shanbhag, “High-speed architectures for Reed-Solomon decoders”, IEEE Transactions on Very Large Scale Integration (VLSI) Systems 9, 641 (2001) DOI
- [160]
- S. B. Wicker and V. K. Bhargava, Reed-Solomon Codes and Their Applications (IEEE, 1999) DOI
- [161]
- G. Zemor, “On expander codes”, IEEE Transactions on Information Theory 47, 835 (2001) DOI
- [162]
- A. Barg and G. Zemor, “Concatenated Codes: Serial and Parallel”, IEEE Transactions on Information Theory 51, 1625 (2005) DOI
- [163]
- A. L. Toom, “Nonergodic Multidimensional System of Automata”, Probl. Peredachi Inf., 10:3 (1974), 70–79; Problems Inform. Transmission, 10:3 (1974), 239–246
- [164]
- L. F. Gray, “Toom’s Stability Theorem in Continuous Time”, Perplexing Problems in Probability 331 (1999) DOI
- [165]
- P. Gács, “Reliable Cellular Automata with Self-Organization”, Journal of Statistical Physics 103, 45 (2001) DOI
- [166]
- R. Silverman and M. Balser, “Coding for constant-data-rate systems”, Transactions of the IRE Professional Group on Information Theory 4, 50 (1954) DOI
- [167]
- A. Lapidoth, A Foundation in Digital Communication (Cambridge University Press, 2017) DOI
- [168]
- F. G. Jeronimo et al., “Unique Decoding of Explicit \(\epsilon\)-balanced Codes Near the Gilbert-Varshamov Bound”, (2020) arXiv:2011.05500
- [169]
- I. Tamo and A. Barg, “A Family of Optimal Locally Recoverable Codes”, IEEE Transactions on Information Theory 60, 4661 (2014) arXiv:1311.3284 DOI
- [170]
- N. Wiberg, H. Loeliger, and R. Kotter, “Codes and iterative decoding on general graphs”, European Transactions on Telecommunications 6, 513 (1995) DOI
- [171]
- N. Wiberg, Codes and Decoding on General Graphs, PhD thesis, Linköping University, Linköping, Sweden (1996).
- [172]
- B. J. Frey, Graphical Models for Machine Learning and Digital Communication (MIT Press, 1998).
- [173]
- Y. Mao and A. H. Banihashemi, “Decoding low-density parity-check codes with probabilistic schedule”, 2001 IEEE Pacific Rim Conference on Communications, Computers and Signal Processing (IEEE Cat. No.01CH37233) DOI
- [174]
- G. D. Forney, “Codes on graphs: normal realizations”, 2000 IEEE International Symposium on Information Theory (Cat. No.00CH37060) DOI
- [175]
- E. Soljanin and E. Offer, “LDPC codes: a group algebra formulation”, Electronic Notes in Discrete Mathematics 6, 148 (2001) DOI
- [176]
- T. Etzion, A. Trachtenberg, and A. Vardy, “Which codes have cycle-free Tanner graphs?”, IEEE Transactions on Information Theory 45, 2173 (1999) DOI
- [177]
- R. Tanner, “A recursive approach to low complexity codes”, IEEE Transactions on Information Theory 27, 533 (1981) DOI
- [178]
- D. K. Kythe and P. K. Kythe, “Algebraic and Stochastic Coding Theory”, (2017) DOI
- [179]
- P. Chaichanavong and P. H. Siegel, “Tensor-product parity code for magnetic recording”, IEEE Transactions on Magnetics 42, 350 (2006) DOI
- [180]
- M. G. Luby et al., “Efficient erasure correcting codes”, IEEE Transactions on Information Theory 47, 569 (2001) DOI
- [181]
- C. Torezzan, S. I. R. Costa, and V. A. Vaishampayan, “Spherical codes on torus layers”, 2009 IEEE International Symposium on Information Theory (2009) DOI
- [182]
- C. Berrou, A. Glavieux, and P. Thitimajshima, “Near Shannon limit error-correcting coding and decoding: Turbo-codes. 1”, Proceedings of ICC ’93 - IEEE International Conference on Communications DOI
- [183]
- R. J. McEliece, D. J. C. MacKay, and J.-F. Cheng, “Turbo decoding as an instance of Pearl’s “belief propagation” algorithm”, IEEE Journal on Selected Areas in Communications 16, 140 (1998) DOI
- [184]
- H. R. Sadjadpour, “Maximum a posteriori decoding algorithms for turbo codes”, SPIE Proceedings (2000) DOI
- [185]
- J. D. Kene and K. D. Kulat, “Soft Output Decoding Algorithm for Turbo Codes Implementation in Mobile Wi-Max Environment”, Procedia Technology 6, 666 (2012) DOI
- [186]
- B. Sklar, “A primer on turbo code concepts”, IEEE Communications Magazine 35, 94 (1997) DOI
- [187]
- K. R. Narayanan and G. L. Stuber, “List decoding of turbo codes”, IEEE Transactions on Communications 46, 754 (1998) DOI
- [188]
- G. Masera et al., “VLSI architectures for turbo codes”, IEEE Transactions on Very Large Scale Integration (VLSI) Systems 7, 369 (1999) DOI
- [189]
- E. Balevi and J. G. Andrews, “Autoencoder-Based Error Correction Coding for One-Bit Quantization”, (2019) arXiv:1909.12120
- [190]
- V. I. Levenshtein, Binary codes capable of correcting deletions, insertions and reversals (translated to English), Soviet Physics Dokl., 10(8), 707-710 (1966).
- [191]
- V. I. Levenshtein, Binary codes capable of correcting spurious insertions and deletions of one (translated to English), Prob. Inf. Transmission, 1(1), 8-17 (1965).
- [192]
- P. Kallquist, "Decoding of Zetterberg codes," in Proc. Fourth Joint Swedish-Soviet Workshop on Inform. Theory, Gotland, Sweden, Aug. 27-Sept. 1, 1989, p. 305-300
- [193]
- S. M. Dodunekov and J. E. M. Nilsson, “Algebraic decoding of the Zetterberg codes”, IEEE Transactions on Information Theory 38, 1570 (1992) DOI
- [194]
- M.-H. Jing et al., “A Result on Zetterberg Codes”, IEEE Communications Letters 14, 662 (2010) DOI
- [195]
- E. C. Posner, “Combinatorial Structures in Planetary Reconnaissance”, in Error Correcting Codes, ed. H. B. Mann (Wiley, NY, 1968).
- [196]
- R. R. Green, "A serial orthogonal decoder," JPL Space Programs Summary, vol. 37–39-IV, pp. 247–253, 1966.
- [197]
- A. Ashikhmin and S. Litsyn, “Simple MAP decoding of first order Reed-Muller and Hamming codes”, Proceedings 2003 IEEE Information Theory Workshop (Cat. No.03EX674) DOI
- [198]
- A. Barg and S. Zhou, “A quantum decoding algorithm for the simplex code”, in Proceedings of the 36th Annual Allerton Conference on Communication, Control and Computing, Monticello, IL, 23–25 September 1998 (UIUC 1998) 359–365
- [199]
- W. Fish et al., “Partial permutation decoding for simplex codes”, Advances in Mathematics of Communications 6, 505 (2012) DOI
- [200]
- J. D. Key and P. Seneviratne, “Partial permutation decoding for MacDonald codes”, Applicable Algebra in Engineering, Communication and Computing 27, 399 (2016) DOI