Name | Rate |
---|---|

Alamouti code | The only OSTBC with full rate, \(k = mt = 2\). |

Algebraic-geometry (AG) code | Several sequences of linear AG codes beat the Gilbert-Varshamov bound and/or are asymptotically good [1,2] (see Ref. [3] for details). The rate of any linear AG code satisfies \begin{align} \frac{k}{n} \geq 1 - \frac{d}{n} - \frac{1}{\sqrt{q}-1}~, \tag*{(1)}\end{align} which comes from the Drinfeld-Vladut bound [4]. Nonlinear AG codes can outperform this bound. |
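As a quick numeric sketch of bound (1) (the helper name below is illustrative, not from any library):

```python
import math

def ag_rate_lower_bound(delta: float, q: int) -> float:
    """Lower bound (1) on the rate k/n of a linear AG code over GF(q)
    with relative distance delta = d/n."""
    return 1 - delta - 1 / (math.sqrt(q) - 1)

# Over GF(49) we have 1/(sqrt(q) - 1) = 1/6, so linear AG codes with
# relative distance 1/2 exist with rate at least 1 - 1/2 - 1/6 = 1/3.
print(ag_rate_lower_bound(0.5, 49))
```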

Binary PSK (BPSK) code | Achieves the capacity of the AWGN channel in the low signal-to-noise regime [5] (see also [6]). BPSK concatenated with quantum-classical polar codes achieves the Holevo capacity of the pure-loss channel [7]. |

Binary Varshamov-Tenengolts (VT) code | Asymptotic rate-\(1\) code, with \(\log n+1\) bits of redundancy when \(a=0\); nearly optimal. |

Binary code | The rate of a binary code is usually defined as \(R=\frac{1}{n}\log_{2}K\) bits per symbol. The maximum rate of a binary code with linear distance is upper bounded by the McEliece, Rodemich, Rumsey and Welch (MRRW) bound [8] (see Refs. [9–12] for other proofs). |
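For concreteness, the rate definition can be evaluated directly (the function name is illustrative):

```python
import math

def binary_code_rate(n: int, K: int) -> float:
    """Rate R = (1/n) * log2(K) in bits per symbol for a length-n
    binary code with K codewords."""
    return math.log2(K) / n

# A [7,4] Hamming code has K = 2**4 = 16 codewords of length 7,
# giving rate 4/7.
print(binary_code_rate(7, 16))
```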

Bose–Chaudhuri–Hocquenghem (BCH) code | Primitive BCH codes are asymptotically bad [13; pg. 269]. |

Cartier code | Cartier codes have asymptotic properties similar to those of subfield subcodes of residue AG codes; both families admit sequences of codes that attain the Gilbert-Varshamov bound. |

Cycle LDPC code | Binary cycle LDPC codes are not asymptotically good [14]. |

Expander code | The rate is \(1 - m/n\) where \(n\) is the number of left nodes and \(m\) is the number of right nodes in the bipartite expander graph. |

Fountain code | Random linear fountain codes approach the Shannon limit as the file size \(K\) increases. However, they do not have a fixed encoding rate. |

Gallager (GL) code | GL codes nearly achieve the Shannon capacity of the binary-input additive white Gaussian noise (AWGN) channel using iterative decoding [15,16]. GL codes can outperform RS codes at short block lengths [17]. |

Galois-field \(q\)-ary code | The rate of a \(q\)-ary code is usually defined as \(R=\frac{1}{n}\log_q K\) dits per symbol. |

Generalized RM (GRM) code | GRM codes achieve capacity on sufficiently symmetric non-binary channels [18]. |

Goethals code | The rate is \((2^m -3m +1)/2^m\), going to 1 as block length goes to infinity. |

Hamming code | Asymptotic rate \(k/n = 1-\frac{\log n}{n} \to 1\) and normalized distance \(d/n \to 0\). |
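This scaling can be sketched for the Hamming family \([2^r-1,\,2^r-1-r,\,3]\) (the helper name is illustrative):

```python
def hamming_params(r: int) -> tuple[int, int, int]:
    """Parameters [n, k, d] of the binary Hamming code with r >= 2
    parity checks: n = 2**r - 1, k = n - r, d = 3."""
    n = 2**r - 1
    return n, n - r, 3

# Rate k/n -> 1 and normalized distance d/n -> 0 as r grows.
for r in (3, 5, 10):
    n, k, d = hamming_params(r)
    print(f"[{n},{k},{d}] rate={k/n:.4f} d/n={d/n:.4f}")
```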

Hermitian code | For polynomial evaluations up to degree \(D\): if \(D < q + 1 \), \(k = \frac{(D+1)(D+2)}{2}\), and if \(D \geq q + 1 \), \(k = (q+1)D - \frac{q(q-1)}{2} + 1 \). |
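The piecewise dimension formula can be checked numerically (the function name is illustrative):

```python
def hermitian_dimension(D: int, q: int) -> int:
    """Dimension k of a Hermitian code from polynomial evaluations
    up to degree D, per the piecewise formula in the table row."""
    if D < q + 1:
        return (D + 1) * (D + 2) // 2
    return (q + 1) * D - q * (q - 1) // 2 + 1

print(hermitian_dimension(3, 4))  # (3+1)(3+2)/2 = 10
print(hermitian_dimension(6, 4))  # 5*6 - 6 + 1 = 25
```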

Hsu-Anastasopoulos LDPC (HA-LDPC) code | HA-LDPC codes can achieve the capacity of the binary erasure channel (BEC) under BP decoding [19] as well as the capacity of memoryless binary-input symmetric-output channels under MAP decoding [20]. HA-LDPC codes can achieve the GV bound with asymptotically high probability [19]. |

Irregular LDPC code | Nearly achieve capacity of the binary-input additive white Gaussian noise (AWGN) channel using iterative decoding [21,22]. Such sequences have distance growing sublinearly in block length [23]. |

Irregular repeat-accumulate (IRA) code | IRA codes nearly achieve the Shannon capacity of the binary erasure channel using iterative decoding [24]. Puncturing lessens the decoding complexity while still allowing sequences of codes to achieve capacity [25]. |

Justesen code | The first asymptotically good codes. The rate is \(R_m=k/n=K/2N\geq R\) and the relative minimum distance satisfies \(\delta_m=d_m/n\geq 0.11(1-2R)\), where \(K=\left\lceil 2NR\right\rceil\) for asymptotic rate \(0<R<1/2\); see pg. 311 of Ref. [13]. The code can be improved, extending the range of \(R\) from \((0,1/2)\) to \((0,1)\), by puncturing, i.e., by erasing \(s\) digits from each inner codeword. This yields a code of length \(n=(2m-s)N\) and rate \(R=mK/(2m-s)N\). The lower bound on the distance of the punctured code approaches \(d_m/n=(1-R/r)h^{-1}(1-r)\) as \(m\) goes to infinity, where \(r\) is the maximum of \(1/2\) and the solution to \(R=r^2/(1+\log(1-h^{-1}(1-r)))\), and \(h\) is the binary entropy function. |

Kerdock code | The transmission rate is \(2m/2^m\), which tends to 0 as \(m\) becomes large; hence these codes are asymptotically poor. |
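A minimal sketch of this decay (the helper name is illustrative):

```python
def kerdock_rate(m: int) -> float:
    """Transmission rate 2m / 2**m of the Kerdock code."""
    return 2 * m / 2**m

# The rate decays to zero: 0.5 at m = 4, under 0.02 by m = 10.
for m in (4, 6, 8, 10):
    print(m, kerdock_rate(m))
```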

Lattice-based code | Lattice codes with minimal-distance decoding achieve the capacity of the AWGN channel [26–29]. |

Linear binary code | A family of linear codes \(C_i = [n_i,k_i,d_i]\) is asymptotically good if the asymptotic rate \(\lim_{i\to\infty} k_i/n_i\) and asymptotic distance \(\lim_{i\to\infty} d_i/n_i\) are both positive. |

Locally recoverable code (LRC) | The rate \(r\) of an \((n,k,r)\) LRC code satisfies \begin{align} \frac{k}{n}\leq\frac{r}{r+1}~. \tag*{(2)}\end{align} |
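Bound (2) depends only on the locality and is easy to evaluate (the function name is illustrative):

```python
def lrc_rate_bound(r: int) -> float:
    """Upper bound (2) on the rate k/n of an (n, k, r) locally
    recoverable code with locality r."""
    return r / (r + 1)

# With locality r = 4, at most 4 information symbols per 5 code symbols.
print(lrc_rate_bound(4))
```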

Low-density generator-matrix (LDGM) code | Certain LDGM codes come close to achieving Shannon capacity [30]. |

Low-density parity-check (LDPC) code | Some LDPC codes achieve the Shannon capacity of the binary symmetric channel under maximum-likelihood decoding [16,31,32]. Other LDPC codes achieve capacity for smaller block lengths under belief-propagation decoding [33]. Random LDPC codes achieve list-decoding capacity [34]. |

MacKay-Neal LDPC (MN-LDPC) code | Certain sequences of optimally decoded codes can nearly achieve the Shannon capacity [16,35]. A sequence of codes achieves the capacity of memoryless binary-input symmetric-output channels under MAP decoding [20]. |

Nonlinear AG code | Certain nonlinear code sequences beat the Tsfasman-Vladut-Zink bound, outperforming linear AG codes. |

Orthogonal spacetime block code (OSTBC) | The maximum rate that can be achieved is \(\frac{n_0+1}{2n_0}\), where either \(n=2n_0\) or \(n=2n_0-1\) [36]. |

Permutation spherical code | The number of codewords cannot increase exponentially with the dimension \(n\) [37]. |

Phase-shift keying (PSK) code | Nearly achieves the Shannon AWGN capacity for one-dimensional constellations in the limit of infinite signal-to-noise ratio [38; Fig. 11.7]. |

Polar code | Supports reliable transmission at rates \(K/N\) approaching the Shannon capacity of the channel under the successive cancellation decoder [39,40]; see also Refs. [41,42]. |

Preparata code | The rate is \(\frac{2^{m+1}-2m-2}{2^{m+1}-1}\). |
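A quick numeric sketch of this rate (the helper name is illustrative):

```python
def preparata_rate(m: int) -> float:
    """Rate (2**(m+1) - 2m - 2) / (2**(m+1) - 1) of the Preparata code."""
    return (2**(m + 1) - 2 * m - 2) / (2**(m + 1) - 1)

# The rate tends to 1 as m grows: 8/15 at m = 3, about 0.981 at m = 9.
print(preparata_rate(3))
print(preparata_rate(9))
```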

Quadrature-amplitude modulation (QAM) code | Nearly achieves the Shannon AWGN capacity for two-dimensional constellations in the limit of infinite signal-to-noise ratio [38; Fig. 11.8]. |

Rank-modulation Gray code (RMGC) | Rank modulation codes with code distance \(d=\Theta(n^{1+\epsilon})\) for \(\epsilon\in[0,1]\) achieve a rate of \(1-\epsilon\) [43]. |

Reed-Muller (RM) code | Achieves capacity of the binary erasure channel [44], the binary memoryless symmetric (BMS) channel under bitwise maximum-a-posteriori decoding [45] (see also Ref. [46]), and the binary symmetric channel (BSC), solving a long-standing conjecture [47]. |

Regular binary Tanner code | For a short code \(C_0\) with rate \(R_0\), the Tanner code has rate \(R \geq 2R_0-1\). If \(C_0\) satisfies the Gilbert-Varshamov bound, the rate \(R \geq \delta = 1-2h(\delta_0)\), where \(\delta\) (\(\delta_0\)) is the relative distance of the Tanner code (\(C_0\)), and \(h\) is the binary entropy function. |

Repeat-accumulate (RA) code | RA codes are not asymptotically good [48]. |

Repeat-accumulate-accumulate (RAA) code | Some sequences of non-deterministic RAA codes are asymptotically good [48,49]. |

Repetition code | Code rate is \(\frac{1}{n}\), code distance is \(n\). |
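A minimal encode/decode sketch; since \(d=n\), majority voting corrects up to \(\lfloor (n-1)/2 \rfloor\) flips (function names are illustrative):

```python
def rep_encode(bit: int, n: int) -> list[int]:
    """Encode one bit with the length-n repetition code (rate 1/n)."""
    return [bit] * n

def rep_decode(word: list[int]) -> int:
    """Majority-vote decoding: corrects up to (n-1)//2 bit flips."""
    return int(sum(word) > len(word) / 2)

# One flip in a length-3 codeword of the bit 1 is still decoded to 1.
print(rep_decode([1, 0, 1]))
```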

Single parity-check (SPC) code | The code rate is \(\frac{n}{n+1}\to 1\) as \(n\to\infty\). |

Spacetime block code (STBC) | A spacetime block code is said to be full rate when \(k = m t\), where \(k\) is the number of information symbols encoded in its codewords. |

Spacetime code (STC) | Shannon capacity of various multiple-input multiple-output (MIMO) channels has been determined [50–52]. |

Spatially coupled LDPC (SC-LDPC) code | Spatial coupling of LDPC codes can increase the achievable rate against BEC, coming close to the capacity [53–55]. SC-LDPC codes achieve capacity of the binary memoryless symmetric (BMS) channel [33,56]. |

Sphere packing | The rate of a code consisting of \(M\) codewords that represent a signal of bandwidth \(W\) and duration \(T\) is defined as \begin{align} R=\frac{1}{T}\log_{2}M\quad\quad\text{bits/s}~. \tag*{(3)}\end{align} The Shannon capacity of the AWGN channel for a signal whose power is \(P\) and noise power is \(\sigma^{2}\) is \begin{align} C=W\log_{2}\left(1+\frac{P}{\sigma^{2}}\right)\,. \tag*{(4)}\end{align} Random sphere packings achieve this capacity [57]; see the book [58] for more details. Tradeoffs between various parameters have been analyzed [59]. Deterministic sets of constellations from quadrature rules can also achieve capacity [5]. |
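Formula (4) can be evaluated directly (the function name is illustrative):

```python
import math

def awgn_capacity(W: float, P: float, sigma2: float) -> float:
    """Shannon capacity (4) in bits/s of the AWGN channel with
    bandwidth W (Hz), signal power P, and noise power sigma2."""
    return W * math.log2(1 + P / sigma2)

# A 1 kHz channel at SNR P/sigma2 = 15 supports up to
# 1000 * log2(16) = 4000 bits/s.
print(awgn_capacity(1000.0, 15.0, 1.0))
```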

Spherical code | The Kabatiansky-Levenshtein bound [60] is the best known upper bound on the rate of a spherical code with fixed Euclidean distance. |

Tamo-Barg-Vladut code | Tamo-Barg-Vladut codes on asymptotically maximal curves improve upon the asymptotic LRC Gilbert-Varshamov bound [61]. |

Tensor-product code | Rate of the tensor-product code \(C_A \otimes C_B\) is a product of the rates of the codes \(C_A\) and \(C_B\). |
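A quick sketch of the rate-product identity (the function name is illustrative):

```python
def tensor_rate(nA: int, kA: int, nB: int, kB: int) -> float:
    """Rate of C_A tensor C_B: the product code has length nA*nB and
    dimension kA*kB, so its rate is (kA/nA) * (kB/nB)."""
    return (kA * kB) / (nA * nB)

# Tensoring two [7,4] Hamming codes gives a [49,16] code of rate 16/49.
print(tensor_rate(7, 4, 7, 4))
```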

Tornado code | Come arbitrarily close to the capacity of the binary erasure channel. |

Tsfasman-Vladut-Zink (TVZ) code | TVZ codes exceed the asymptotic Gilbert-Varshamov (GV) bound [62] (see also Ref. [63]). Roughly speaking, this breakthrough result implies that AG codes can outperform random codes. Such families of codes attain the Drinfeld-Vladut bound and are optimal in that sense. |

Wrapped spherical code | Asymptotically maximal spherical coding density is obtained with the densest possible sphere packing. |

Zetterberg code | The rate is \(1-\frac{4s}{n}\), which tends to 1 as the block length grows, while the minimum distance remains 5; hence the family is not asymptotically good. |

## References

- [1]
- A. Garcia and H. Stichtenoth, “A tower of Artin-Schreier extensions of function fields attaining the Drinfeld-Vladut bound”, Inventiones Mathematicae 121, 211 (1995) DOI
- [2]
- A. Garcia and H. Stichtenoth, “On the Asymptotic Behaviour of Some Towers of Function Fields over Finite Fields”, Journal of Number Theory 61, 248 (1996) DOI
- [3]
- T. Høholdt, J.H. Van Lint, and R. Pellikaan, 1998. Algebraic geometry codes. Handbook of coding theory, 1 (Part 1), pp.871-961.
- [4]
- S. G. Vlăduţ, V. G. Drinfeld, “Number of points of an algebraic curve”, Funktsional. Anal. i Prilozhen., 17:1 (1983), 68–69; Funct. Anal. Appl., 17:1 (1983), 53–54
- [5]
- Y. Wu and S. Verdu, “The impact of constellation cardinality on Gaussian channel capacity”, 2010 48th Annual Allerton Conference on Communication, Control, and Computing (Allerton) (2010) DOI
- [6]
- C. E. Shannon, “A Mathematical Theory of Communication”, Bell System Technical Journal 27, 623 (1948) DOI
- [7]
- S. Guha and M. M. Wilde, “Polar coding to achieve the Holevo capacity of a pure-loss optical channel”, 2012 IEEE International Symposium on Information Theory Proceedings (2012) arXiv:1202.0533 DOI
- [8]
- R. McEliece et al., “New upper bounds on the rate of a code via the Delsarte-MacWilliams inequalities”, IEEE Transactions on Information Theory 23, 157 (1977) DOI
- [9]
- M. Navon and A. Samorodnitsky, “Linear Programming Bounds for Codes via a Covering Argument”, Discrete & Computational Geometry 41, 199 (2008) DOI
- [10]
- J. Friedman and J.-P. Tillich, “Generalized Alon--Boppana Theorems and Error-Correcting Codes”, SIAM Journal on Discrete Mathematics 19, 700 (2005) DOI
- [11]
- A. Samorodnitsky, “One more proof of the first linear programming bound for binary codes and two conjectures”, (2021) arXiv:2104.14587
- [12]
- N. Linial and E. Loyfer, “An Elementary Proof of the First LP Bound on the Rate of Binary Codes”, (2023) arXiv:2303.16619
- [13]
- F. J. MacWilliams and N. J. A. Sloane. The theory of error correcting codes. Elsevier, 1977.
- [14]
- L. Decreusefond and G. Zemor, “On the error-correcting capabilities of cycle codes of graphs”, Proceedings of 1994 IEEE International Symposium on Information Theory DOI
- [15]
- D. J. C. MacKay and R. M. Neal, “Near Shannon limit performance of low density parity check codes”, Electronics Letters 33, 457 (1997) DOI
- [16]
- D. J. C. MacKay, “Good error-correcting codes based on very sparse matrices”, IEEE Transactions on Information Theory 45, 399 (1999) DOI
- [17]
- D. J. C. MacKay and M. C. Davey, “Evaluation of Gallager Codes for Short Block Length and High Rate Applications”, Codes, Systems, and Graphical Models 113 (2001) DOI
- [18]
- G. Reeves and H. D. Pfister, “Achieving Capacity on Non-Binary Channels with Generalized Reed-Muller Codes”, (2023) arXiv:2305.07779
- [19]
- C.-H. Hsu and A. Anastasopoulos, “Capacity-Achieving Codes with Bounded Graphical Complexity on Noisy Channels”, (2005) arXiv:cs/0509062
- [20]
- K. Kasai and K. Sakaniwa, “Spatially-Coupled MacKay-Neal Codes and Hsu-Anastasopoulos Codes”, IEICE Transactions on Fundamentals of Electronics, Communications and Computer Sciences E94-A, 2161 (2011) arXiv:1102.4612 DOI
- [21]
- T. J. Richardson, M. A. Shokrollahi, and R. L. Urbanke, “Design of capacity-approaching irregular low-density parity-check codes”, IEEE Transactions on Information Theory 47, 619 (2001) DOI
- [22]
- Sae-Young Chung et al., “On the design of low-density parity-check codes within 0.0045 dB of the Shannon limit”, IEEE Communications Letters 5, 58 (2001) DOI
- [23]
- C. Di, T. J. Richardson, and R. L. Urbanke, “Weight Distribution of Low-Density Parity-Check Codes”, IEEE Transactions on Information Theory 52, 4839 (2006) DOI
- [24]
- Jin, Hui, Aamod Khandekar, and Robert McEliece. "Irregular repeat-accumulate codes." Proc. 2nd Int. Symp. Turbo codes and related topics. 2000.
- [25]
- H. D. Pfister, I. Sason, and R. Urbanke, “Capacity-Achieving Ensembles for the Binary Erasure Channel With Bounded Complexity”, IEEE Transactions on Information Theory 51, 2352 (2005) DOI
- [26]
- R. Urbanke and B. Rimoldi, “Lattice codes can achieve capacity on the AWGN channel”, IEEE Transactions on Information Theory 44, 273 (1998) DOI
- [27]
- R. de Buda, “The upper error bound of a new near-optimal code”, IEEE Transactions on Information Theory 21, 441 (1975) DOI
- [28]
- R. de Buda and W. Kassem, “About lattices and the random coding theorem”, in Abstracts of Papers, IEEE Inter. Symp. Info. Theory 1981, IEEE Press, NY 1981, p. 145
- [29]
- W. Kassem, Optimal Lattice Codes for the Gaussian Channel, Ph.D. Thesis, McMaster Univ., Hamilton, Ontario, 1981
- [30]
- J. Garcia-Frias and Wei Zhong, “Approaching Shannon performance by iterative decoding of linear codes with low-density generator matrix”, IEEE Communications Letters 7, 266 (2003) DOI
- [31]
- R. Gallager, “Low-density parity-check codes”, IEEE Transactions on Information Theory 8, 21 (1962) DOI
- [32]
- V. Guruswami, “Iterative Decoding of Low-Density Parity Check Codes (A Survey)”, (2006) arXiv:cs/0610022
- [33]
- S. Kudekar, T. Richardson, and R. Urbanke, “Spatially Coupled Ensembles Universally Achieve Capacity under Belief Propagation”, (2012) arXiv:1201.2999
- [34]
- J. Mosheiff et al., “LDPC Codes Achieve List Decoding Capacity”, (2021) arXiv:1909.06430
- [35]
- D. J. C. MacKay and R. M. Neal, “Good codes based on very sparse matrices”, Cryptography and Coding 100 (1995) DOI
- [36]
- Xue-Bin Liang, “Orthogonal designs with maximal rates”, IEEE Transactions on Information Theory 49, 2468 (2003) DOI
- [37]
- H. Landau, “How does a porcupine separate its quills?”, IEEE Transactions on Information Theory 17, 157 (1971) DOI
- [38]
- R. E. Blahut, Modem Theory (Cambridge University Press, 2009) DOI
- [39]
- E. Arikan, “Channel Polarization: A Method for Constructing Capacity-Achieving Codes for Symmetric Binary-Input Memoryless Channels”, IEEE Transactions on Information Theory 55, 3051 (2009) DOI
- [40]
- N. Hussami, S. B. Korada, and R. Urbanke, “Performance of polar codes for channel and source coding”, 2009 IEEE International Symposium on Information Theory (2009) DOI
- [41]
- M. Mondelli, S. H. Hassani, and R. L. Urbanke, “Unified Scaling of Polar Codes: Error Exponent, Scaling Exponent, Moderate Deviations, and Error Floors”, IEEE Transactions on Information Theory 62, 6698 (2016) DOI
- [42]
- S. B. Korada et al., “An empirical scaling law for polar codes”, 2010 IEEE International Symposium on Information Theory (2010) DOI
- [43]
- A. Barg and A. Mazumdar, “Codes in permutations and error correction for rank modulation”, 2010 IEEE International Symposium on Information Theory (2010) DOI
- [44]
- S. Kudekar et al., “Reed–Muller Codes Achieve Capacity on Erasure Channels”, IEEE Transactions on Information Theory 63, 4298 (2017) DOI
- [45]
- G. Reeves and H. D. Pfister, “Reed–Muller Codes on BMS Channels Achieve Vanishing Bit-Error Probability for All Rates Below Capacity”, IEEE Transactions on Information Theory 1 (2023) arXiv:2110.14631 DOI
- [46]
- E. Abbe, A. Shpilka, and A. Wigderson, “Reed-Muller codes for random erasures and errors”, (2014) arXiv:1411.4590
- [47]
- E. Abbe and C. Sandon, “A proof that Reed-Muller codes achieve Shannon capacity on symmetric channels”, (2023) arXiv:2304.02509
- [48]
- L. Bazzi, M. Mahdian, and D. A. Spielman, “The Minimum Distance of Turbo-Like Codes”, IEEE Transactions on Information Theory 55, 6 (2009) DOI
- [49]
- V. Guruswami and W. Machmouchi, “Explicit interleavers for a Repeat Accumulate Accumulate (RAA) code construction”, 2008 IEEE International Symposium on Information Theory (2008) DOI
- [50]
- E. Telatar, “Capacity of Multi‐antenna Gaussian Channels”, European Transactions on Telecommunications 10, 585 (1999) DOI
- [51]
- G. J. Foschini and M. J. Gans, Wireless Personal Communications 6, 311 (1998) DOI
- [52]
- A. Goldsmith et al., “Capacity limits of MIMO channels”, IEEE Journal on Selected Areas in Communications 21, 684 (2003) DOI
- [53]
- A. Jimenez Felstrom and K. S. Zigangirov, “Time-varying periodic convolutional codes with low-density parity-check matrix”, IEEE Transactions on Information Theory 45, 2181 (1999) DOI
- [54]
- S. Kudekar, T. J. Richardson, and R. L. Urbanke, “Threshold Saturation via Spatial Coupling: Why Convolutional LDPC Ensembles Perform So Well over the BEC”, IEEE Transactions on Information Theory 57, 803 (2011) DOI
- [55]
- M. Lentmaier et al., “Terminated LDPC convolutional codes with thresholds close to capacity”, Proceedings. International Symposium on Information Theory, 2005. ISIT 2005. (2005) arXiv:cs/0508030 DOI
- [56]
- S. Kumar et al., “Threshold Saturation for Spatially Coupled LDPC and LDGM Codes on BMS Channels”, IEEE Transactions on Information Theory 60, 7389 (2014) arXiv:1309.7543 DOI
- [57]
- C. E. Shannon, “Probability of Error for Optimal Codes in a Gaussian Channel”, Bell System Technical Journal 38, 611 (1959) DOI
- [58]
- J. H. Conway and N. J. A. Sloane, Sphere Packings, Lattices and Groups (Springer New York, 1999) DOI
- [59]
- D. Slepian, “Bounds on Communication”, Bell System Technical Journal 42, 681 (1963) DOI
- [60]
- G. A. Kabatiansky, V. I. Levenshtein, “On Bounds for Packings on a Sphere and in Space”, Probl. Peredachi Inf., 14:1 (1978), 3–25; Problems Inform. Transmission, 14:1 (1978), 1–17
- [61]
- A. Barg, I. Tamo, and S. Vladut, “Locally recoverable codes on algebraic curves”, 2015 IEEE International Symposium on Information Theory (ISIT) (2015) arXiv:1603.08876 DOI
- [62]
- M. A. Tsfasman, S. G. Vlăduţ, and Th. Zink, “Modular curves, Shimura curves, and Goppa codes, better than Varshamov‐Gilbert bound”, Mathematische Nachrichten 109, 21 (1982) DOI
- [63]
- Y. Ihara, “Some remarks on the number of rational points of algebraic curves over finite fields”, J. Fac. Sci. Univ. Tokyo Sect. IA Math. 28, 721–724 (1981)