Name | Rate |
---|---|

Alamouti code | The only OSTBC with full rate, \(k = mt = 2\). |

Algebraic-geometry (AG) code | Several sequences of linear AG codes beat the GV bound and/or are asymptotically good [1–3] (see Ref. [4] for details). The rate of any linear AG code satisfies \begin{align} \frac{k}{n} \geq 1 - \frac{d}{n} - \frac{1}{\sqrt{q}-1}~, \tag*{(1)}\end{align} which comes from the Drinfeld-Vladut bound [5]. Nonlinear AG codes can outperform this bound. |

Binary PSK (BPSK) code | Achieves the capacity of the AWGN channel in the low signal-to-noise regime [6] (see also [7]). BPSK concatenated with quantum-classical polar codes achieves the Holevo capacity for the AD channel [8]. |

Binary code | The rate of a binary code is usually defined as \(R=\frac{1}{n}\log_{2}K\) bits per symbol. The maximum rate of a binary code with linear distance is upper bounded by the McEliece, Rodemich, Rumsey and Welch (MRRW) bound [9] (see Refs. [10–13] for other proofs). |
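As a concrete illustration (not from the source), the rate definition above can be evaluated directly; the \([7,4]\) Hamming code is used here only as an example:

```python
from math import log2

def binary_rate(n: int, K: int) -> float:
    """Rate R = (1/n) * log2(K) in bits per symbol, for K codewords of length n."""
    return log2(K) / n

# Example: a [7,4] binary code has K = 2^4 = 16 codewords of length 7.
print(binary_rate(7, 16))  # 4/7 ≈ 0.571
```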

Block code | Ideal decoding error is suppressed exponentially with the number of subsystems \(n\), and the exponent has been studied in Refs. [14–16]. |

Bose–Chaudhuri–Hocquenghem (BCH) code | Primitive BCH codes are asymptotically bad [17; pg. 269]. |

Cartier code | Cartier codes share similar asymptotic properties as subfield subcodes of residue AG codes, with both families admitting sequences of codes that achieve the GV bound. |

Concatenated code | There exist bounds on distance and rate of concatenated codes with a fixed outer and random inner code [18,19]. |

Constantin-Rao (CR) code | CR codes for particular groups have higher rates than distance-one codes under the binary asymmetric channel for all lengths except \(n = 2^r - 1\), in which case CR codes reduce to Hamming codes [20]; see Ref. [21]. Size analysis is presented in Refs. [22,23]. |

Convolutional code | The rate depends on the polynomials used. Puncturing [24] can increase the rate from \(\frac{1}{t}\) to \(\frac{s}{t}\), where \(t\) is the number of output bits and \(s\) depends on the puncturing pattern. Puncturing deletes some pieces of the encoder output in such a way that maximum-likelihood decoding remains effective. |

Cycle LDPC code | Cycle codes are not asymptotically good [25]. |

Error-correcting code (ECC) | The Shannon channel capacity (the maximum of the mutual information over input and output distributions) is the highest rate of information transmission through a classical (i.e., non-quantum) channel with arbitrarily small error rate [26]. Corrections to the capacity and tradeoff between decoding error, code rate and code length are determined using small [27–29], moderate [30–32] and large [33–36] deviation analysis. Sometimes the difference from the asymptotic rate at finite block length can be characterized by the channel dispersion [29,37]. |

Expander code | The rate is \(1 - m/n\) where \(n\) is the number of left nodes and \(m\) is the number of right nodes in the bipartite expander graph. |

Fountain code | Random linear fountain codes approach the Shannon limit as the file size \(K\) increases. However, they do not have a fixed encoding rate. |

Frameproof (FP) code | FP codes tend to have large minimum distance and low rate [38]. Specifically, for any positive integers \(n\) and \(c\), if \(l = 16c^{2}\log n\), then there exists a \(c\)-FP \( (l,n) \)-code of rate \(n/l\) [38]. See Ref. [39] for other bounds on FP codes. |

Gallager (GL) code | GL codes nearly achieve the Shannon capacity of the binary-input additive white Gaussian noise channel using iterative decoding [40,41]. GL codes can outperform RS codes at short block length [42]. |

Generalized RM (GRM) code | GRM codes achieve capacity on sufficiently symmetric non-binary channels [43]. |

Generalized RS (GRS) code | Concatenations of GRS codes with random linear codes almost surely attain the GV bound [44]. |

Goethals code | The rate is \((2^m - 3m + 1)/2^m\), going to 1 as the block length goes to infinity. |

Hermitian code | For polynomial evaluations up to degree \(D\): if \(D < q + 1 \), \(k = \frac{(D+1)(D+2)}{2}\), and if \(D \geq q + 1 \), \(k = (q+1)D - \frac{q(q-1)}{2} + 1 \). |
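The piecewise dimension formula above can be sketched in code (a minimal illustration; the function name and example values are ours):

```python
def hermitian_dimension(D: int, q: int) -> int:
    """Dimension k of evaluations of polynomials up to degree D,
    piecewise in whether D is below or at least q + 1."""
    if D < q + 1:
        return (D + 1) * (D + 2) // 2
    return (q + 1) * D - q * (q - 1) // 2 + 1

# Example with q = 4: below the threshold D = q + 1 ...
print(hermitian_dimension(3, 4))  # (3+1)(3+2)/2 = 10
# ... and at or above it:
print(hermitian_dimension(6, 4))  # 5*6 - 4*3/2 + 1 = 25
```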

Hsu-Anastasopoulos LDPC (HA-LDPC) code | HA-LDPC codes achieve capacity on the BEC channel under BP decoding [45] and the memoryless binary-input output-symmetric (MBIOS) channels under ML decoding [45] and under MAP decoding [46]. They also achieve the GV bound with asymptotically high probability when the concatenation is with a rate-one LDGM code [45]. |

Identifiable parent property (IPP) code | A hypergraph approach can be used to show that, for any \(t \leq q\), there exists a sequence of IPP codes with asymptotically non-vanishing rate [47]. |

Irregular LDPC code | Irregular LDPC codes nearly achieve the capacity of the binary-input additive white Gaussian noise channel using iterative decoding [48,49]. Such code sequences have sublinearly growing distance per block length [50]. |

Irregular repeat-accumulate (IRA) code | IRA codes nearly achieve the Shannon capacity of the binary erasure channel using iterative decoding [51]. Puncturing lessens the decoding complexity while still allowing sequences of codes to achieve capacity [52]. |

Justesen code | The first asymptotically good codes. The rate is \(R_m=k/n=K/2N\geq R\) and the relative minimum distance satisfies \(\delta_m=d_m/n\geq 0.11(1-2R)\), where \(K=\left\lceil 2NR\right\rceil\) for asymptotic rate \(0<R<1/2\); see pg. 311 of Ref. [17]. The code can be improved, and the range of \(R\) extended from 0 to 1, by puncturing, i.e., by erasing \(s\) digits from each inner codeword. This yields a code of length \(n=(2m-s)N\) and rate \(R=mk/(2m-s)N\). The lower bound on the distance of the punctured code approaches \(d_m/n=(1-R/r)h^{-1}(1-r)\) as \(m\) goes to infinity, where \(r\) is the maximum of 1/2 and the solution to \(R=r^2/(1+\log(1-h^{-1}(1-r)))\), and \(h\) is the binary entropy function. |
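The basic (unpunctured) rate and distance relations quoted above can be evaluated numerically; this is a sketch, with function names and example values of ours:

```python
from math import ceil

def justesen_rate(N: int, R: float) -> float:
    """Rate R_m = K/(2N) with K = ceil(2*N*R), for asymptotic rate 0 < R < 1/2."""
    K = ceil(2 * N * R)
    return K / (2 * N)

def distance_lower_bound(R: float) -> float:
    """Asymptotic relative-distance bound delta_m >= 0.11*(1 - 2R)."""
    return 0.11 * (1 - 2 * R)

print(justesen_rate(100, 0.3))    # 0.3
print(distance_lower_bound(0.3))  # ≈ 0.044
```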

Kerdock code | The transmission rate is \(2m/2^m\) which tends to 0 as \(m\) becomes large, hence these codes are asymptotically poor. |

Lattice-based code | Lattice codes with minimal-distance decoding achieve the capacity of the AWGN channel [53–56]. |

Linear \(q\)-ary code | Any code admitting a two-transitive automorphism group achieves capacity under the binary erasure channel [57–59]. |

Linear binary code | A family of linear codes \(C_i = [n_i,k_i,d_i]\) is asymptotically good if the asymptotic rate \(\lim_{i\to\infty} k_i/n_i\) and asymptotic distance \(\lim_{i\to\infty} d_i/n_i\) are both positive. Nearly all good linear binary codes for the asymmetric channel are also good for the symmetric channel [60]; this is not the case for non-binary codes [21]. |

Linear code with complementary dual (LCD) | Asymptotically good LCD codes exist [61]. |

Locally decodable code (LDC) | Families of LDCs with query complexity \(r=2\) need \(n\) to scale exponentially with \(k\) [62,63]. |

Locally recoverable code (LRC) | The rate of an \((n,k,r)\) LRC satisfies \begin{align} \frac{k}{n}\leq\frac{r}{r+1}~. \tag*{(2)}\end{align} Various asymptotic lower [64,65] and upper bounds [66] exist. |
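Eq. (2) can be checked in exact integer arithmetic; a minimal sketch, with the function name and example parameters ours:

```python
def lrc_rate_bound_holds(n: int, k: int, r: int) -> bool:
    """Check the locality bound k/n <= r/(r+1) for an (n,k,r) LRC,
    written in integer form to avoid floating-point comparison."""
    return k * (r + 1) <= n * r

# Example: (n, k, r) = (12, 8, 2) saturates the bound, since 8/12 = 2/3 = r/(r+1).
print(lrc_rate_bound_holds(12, 8, 2))  # True
print(lrc_rate_bound_holds(12, 9, 2))  # False: rate 3/4 exceeds 2/3
```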

Low-density generator-matrix (LDGM) code | Certain LDGM codes come close to achieving Shannon capacity [67]. |

Low-density parity-check (LDPC) code | Some LDPC codes achieve the Shannon capacity of the binary symmetric channel under maximum-likelihood decoding [41,68,69]. Other LDPC codes achieve capacity for smaller block lengths under belief-propagation decoding [70]. Random LDPC codes achieve list-decoding capacity [71]. |

MacKay-Neal LDPC (MN-LDPC) code | Certain sequences of optimally decoded codes can nearly achieve the Shannon capacity [41,72]. A sequence of codes achieves the capacity of memoryless binary-input symmetric-output channels under MAP decoding [46]. |

Nonlinear AG code | Certain nonlinear code sequences beat the Tsfasman-Vladut-Zink bound, outperforming linear AG codes. |

Orthogonal Spacetime Block Code (OSTBC) | The greatest rate that can be achieved is \(\frac{n_0+1}{2n_0}\), where either \(n=2n_0\) or \(n=2n_0-1\) [73]. |

Permutation spherical code | The number of codewords cannot increase exponentially with the dimension \(n\) [74]. |

Phase-shift keying (PSK) code | Nearly achieves the Shannon AWGN capacity for one-dimensional constellations in the limit of infinite signal-to-noise ratio [75; Fig. 11.7]. |

Pless symmetry code | Achieve capacity of the binary erasure channel; see Ref. [59]. |

Polar code | Supports reliable transmission at rates \(K/N\) approaching the Shannon capacity of the channel under the successive cancellation decoder [76,77]; see also Refs. [78,79]. |

Preparata code | The rate is \(\frac{2^{m+1}-2m-2}{2^{m+1}-1}\). |

Quadrature-amplitude modulation (QAM) code | Nearly achieves the Shannon AWGN capacity for two-dimensional constellations in the limit of infinite signal-to-noise ratio [75; Fig. 11.8]. |

Random code | Typical random codes (TRC) or typical random linear codes (TLC) refer to codes in the respective ensemble that satisfy a certain minimum distance. The relative fraction of typical codes in the ensemble approaches one as \(N\) goes to infinity [26] (see also Ref. [80]). Asymptotically, given distance \(d\), the maximum rate for a TRC is given by \(R=\frac{1}{2}R_{GV}(\delta)\) where \(R_{GV}\) is the Gilbert–Varshamov (GV) bound \(R_{GV}=1-h(\delta)\), and \(h(\delta)=h(\frac{d}{n})\) is the binary entropy function. The maximum rate for a TLC is given by \(R=R_{GV}(d)\), meaning that TLCs achieve the asymptotic GV bound. |
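The GV bound and the typical-code rates quoted above can be computed directly; a sketch, with function names of ours and an illustrative choice of \(\delta\):

```python
from math import log2

def h2(x: float) -> float:
    """Binary entropy function h(x)."""
    if x <= 0.0 or x >= 1.0:
        return 0.0
    return -x * log2(x) - (1 - x) * log2(1 - x)

def gv_rate(delta: float) -> float:
    """Asymptotic GV bound R_GV(delta) = 1 - h(delta), for 0 <= delta <= 1/2."""
    return 1 - h2(delta)

delta = 0.11
print(gv_rate(delta))        # rate of a typical random *linear* code (TLC)
print(0.5 * gv_rate(delta))  # rate of a typical random (unrestricted) code (TRC)
```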

Rank-modulation code | Rank modulation codes with code distance of order \(d=\Theta(n^{1+\epsilon})\) for \(\epsilon\in[0,1]\) achieve a rate of \(1-\epsilon\) [81]. |

Reed-Muller (RM) code | Achieves capacity of the binary erasure channel [57,58], the binary memoryless symmetric (BMS) channel under bitwise maximum-a-posteriori decoding [82] (see also Ref. [83]), and the binary symmetric channel (BSC), solving a long-standing conjecture [84]. |

Regular binary Tanner code | For a short code \(C_0\) with rate \(R_0\), the Tanner code has rate \(R \geq 2R_0-1\). If \(C_0\) lies on the GV bound, then \(R \geq 1-2h(\delta_0)\), where \(\delta_0\) is the relative distance of \(C_0\) and \(h\) is the binary entropy function. |

Repeat-accumulate (RA) code | RA codes are not asymptotically good [85]. |

Repeat-accumulate-accumulate (RAA) code | Some sequences of non-deterministic RAA codes are asymptotically good [85,86]. |

Repetition code | Code rate is \(\frac{1}{n}\), code distance is \(n\). |

Single parity-check (SPC) code | The code rate is \(\frac{n}{n+1}\to 1\) as \(n\to\infty\). |

Spacetime block code (STBC) | A spacetime block code is said to be full rate when \(k = m t\), where \(k\) is the number of information symbols encoded in its codewords. |

Spacetime code (STC) | Shannon capacity of various multiple-input multiple-output (MIMO) channels has been determined [87–89]. |

Spatially coupled LDPC (SC-LDPC) code | Spatial coupling of LDPC codes can increase the achievable rate against BEC, coming close to the capacity [90–92]. SC-LDPC codes achieve capacity of the binary memoryless symmetric (BMS) channel [70,93]. |

Sphere packing | The rate of a code consisting of \(M\) codewords that represent a signal of bandwidth \(W\) and duration \(T\) is defined as \begin{align} R=\frac{1}{T}\log_{2}M\quad\quad\text{bits/s}~. \tag*{(3)}\end{align} The Shannon capacity of the AWGN channel for a signal whose power is \(P\) is \begin{align} C=W\log_{2}\left(1+\frac{P}{\sigma^{2}}\right)\,. \tag*{(4)}\end{align} Random sphere packings achieve this capacity [14]; see the book [94] for more details. Tradeoffs between various parameters have been analyzed [95]. Deterministic sets of constellations from quadrature rules can also achieve capacity [6]. |
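Eqs. (3) and (4) translate directly into code; a minimal illustration (function names and example numbers are ours):

```python
from math import log2

def code_rate(M: int, T: float) -> float:
    """Eq. (3): rate R = (1/T) * log2(M) in bits/s for M codewords of duration T."""
    return log2(M) / T

def awgn_capacity(W: float, P: float, sigma2: float) -> float:
    """Eq. (4): Shannon capacity C = W * log2(1 + P/sigma^2) in bits/s."""
    return W * log2(1 + P / sigma2)

# Example: bandwidth 1 Hz and SNR P/sigma^2 = 3 give C = log2(4) = 2 bits/s.
print(awgn_capacity(1.0, 3.0, 1.0))  # 2.0
print(code_rate(16, 2.0))            # 2.0 bits/s
```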

Spherical code | The Kabatiansky-Levenshtein bound [96] is the best known upper bound on the rate of a spherical code with fixed Euclidean distance. |

Tamo-Barg-Vladut code | Tamo-Barg-Vladut codes on asymptotically maximal curves improve upon the asymptotic LRC GV bound [64]. |

Tensor-product code | Rate of the tensor-product code \(C_A \otimes C_B\) is a product of the rates of the codes \(C_A\) and \(C_B\). |

Tornado code | Come arbitrarily close to the capacity of the binary erasure channel. |

Traceability code | Suppose \(n\) is the number of users, \(k\) is the number of users known by the pirates, and \(p\) is the probability that the pirates cannot be traced. An open (public) resilient scheme using a hash function has the personal keys of the users consisting of \(O(k^{2}\log n)\) decryption keys, which is the amount of decryptions needed to reveal the information. The amount of data redundancy overhead is about \(O(k^{4}\log n)\) [97]. A secret resilient scheme using a hash function has the personal keys of the users consisting of \(O(k \log(n/p))\) decryption keys, which is the amount of decryptions needed to reveal the information. The amount of data redundancy overhead is about \(O(k^{2} \log(n/p))\) [97]. A threshold (secret) scheme using a hash function that is successful against pirates that decrypt with probability \(> q\) has the personal keys of the users consisting of \((4k/3q)\log(n/p)\) decryption keys (note that this is the same as in the secret resilient scheme above). These types of schemes only need \(O(1)\) decryption operations performed by users to decrypt the information successfully. Finally, the amount of data redundancy overhead is \(4k\) encrypted keys, a large improvement compared to the above [97]. |

Tsfasman-Vladut-Zink (TVZ) code | TVZ codes, either in the residue AG or evaluation AG constructions, exceed the asymptotic GV bound [1] (see also Ref. [98]). Roughly speaking, this breakthrough result implies that AG codes can outperform random codes. |

Turbo code | Turbo codes nearly achieve the Shannon capacity [99]. |

Varshamov-Tenengolts (VT) code | Rate-\(1\) code, with \(\log n+1\) bits of redundancy when \(a=0\). Nearly optimal. |

Wozencraft ensemble code | Meets the GV bound for most choices of \(\alpha\). Puncturing the code yields a higher rate while still meeting the GV bound; see Ref. [100]. These codes can be used to asymptotically improve over the GV bound [101]. |

Wrapped spherical code | Asymptotically maximal spherical coding density is obtained with the densest possible sphere packing. |

Zetterberg code | The rate is given by \(1-\frac{4s}{n}\), which goes to 1 asymptotically; the minimum distance is 5. |

\([2^r-1,2^r-r-1,3]\) Hamming code | Asymptotic rate \(k/n = 1-\frac{\log n}{n} \to 1\) and normalized distance \(d/n \to 0\). |
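The asymptotic behavior of the Hamming family above can be seen by tabulating its parameters; a small sketch (the function name is ours):

```python
def hamming_params(r: int):
    """Parameters [n, k, d] = [2^r - 1, 2^r - r - 1, 3] of the binary Hamming code."""
    n = 2**r - 1
    return n, n - r, 3

# Rate k/n -> 1 while relative distance d/n -> 0 as r grows:
for r in (3, 4, 10):
    n, k, d = hamming_params(r)
    print(r, k / n, d / n)
```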

\(q\)-ary code | The rate of a \(q\)-ary code is usually defined as \(R=\frac{1}{n}\log_q K\) dits per symbol. The maximum rate of a \(q\)-ary code with linear distance is lower bounded by the asymptotic GV bound and upper bounded by the \(q\)-ary version of the MRRW LP bound [102]. |

\(q\)-ary quadratic-residue (QR) code | Achieve capacity of the binary erasure channel; see Ref. [59]. |

## References

- [1]
- M. A. Tsfasman, S. G. Vlăduţ, and Th. Zink, “Modular curves, Shimura curves, and Goppa codes, better than Varshamov-Gilbert bound”, Mathematische Nachrichten 109, 21 (1982) DOI
- [2]
- A. Garcia and H. Stichtenoth, “A tower of Artin-Schreier extensions of function fields attaining the Drinfeld-Vladut bound”, Inventiones Mathematicae 121, 211 (1995) DOI
- [3]
- A. Garcia and H. Stichtenoth, “On the Asymptotic Behaviour of Some Towers of Function Fields over Finite Fields”, Journal of Number Theory 61, 248 (1996) DOI
- [4]
- T. Høholdt, J.H. Van Lint, and R. Pellikaan, 1998. Algebraic geometry codes. Handbook of coding theory, 1 (Part 1), pp.871-961.
- [5]
- S. G. Vlăduţ, V. G. Drinfeld, “Number of points of an algebraic curve”, Funktsional. Anal. i Prilozhen., 17:1 (1983), 68–69; Funct. Anal. Appl., 17:1 (1983), 53–54
- [6]
- Y. Wu and S. Verdu, “The impact of constellation cardinality on Gaussian channel capacity”, 2010 48th Annual Allerton Conference on Communication, Control, and Computing (Allerton) (2010) DOI
- [7]
- C. E. Shannon, “A Mathematical Theory of Communication”, Bell System Technical Journal 27, 623 (1948) DOI
- [8]
- S. Guha and M. M. Wilde, “Polar coding to achieve the Holevo capacity of a pure-loss optical channel”, 2012 IEEE International Symposium on Information Theory Proceedings (2012) arXiv:1202.0533 DOI
- [9]
- R. McEliece et al., “New upper bounds on the rate of a code via the Delsarte-MacWilliams inequalities”, IEEE Transactions on Information Theory 23, 157 (1977) DOI
- [10]
- M. Navon and A. Samorodnitsky, “Linear Programming Bounds for Codes via a Covering Argument”, Discrete & Computational Geometry 41, 199 (2008) DOI
- [11]
- J. Friedman and J.-P. Tillich, “Generalized Alon--Boppana Theorems and Error-Correcting Codes”, SIAM Journal on Discrete Mathematics 19, 700 (2005) DOI
- [12]
- A. Samorodnitsky, “One more proof of the first linear programming bound for binary codes and two conjectures”, (2021) arXiv:2104.14587
- [13]
- N. Linial and E. Loyfer, “An Elementary Proof of the First LP Bound on the Rate of Binary Codes”, (2023) arXiv:2303.16619
- [14]
- C. E. Shannon, “Probability of Error for Optimal Codes in a Gaussian Channel”, Bell System Technical Journal 38, 611 (1959) DOI
- [15]
- C. E. Shannon, R. G. Gallager, and E. R. Berlekamp, “Lower bounds to error probability for coding on discrete memoryless channels. I”, Information and Control 10, 65 (1967) DOI
- [16]
- Fano, Robert M. The transmission of information. Vol. 65. Cambridge, MA, USA: Massachusetts Institute of Technology, Research Laboratory of Electronics, 1949.
- [17]
- F. J. MacWilliams and N. J. A. Sloane. The theory of error correcting codes. Elsevier, 1977.
- [18]
- A. Barg, J. Justesen, and C. Thommesen, “Concatenated codes with fixed inner code and random outer code”, IEEE Transactions on Information Theory 47, 361 (2001) DOI
- [19]
- D. Doron, J. Mosheiff, and M. Wootters, “When Do Low-Rate Concatenated Codes Approach The Gilbert-Varshamov Bound?”, (2024) arXiv:2405.08584
- [20]
- Kløve, Torleiv. Error correcting codes for the asymmetric channel. Department of Pure Mathematics, University of Bergen, 1981.
- [21]
- M. Grassl et al., “New Constructions of Codes for Asymmetric Channels via Concatenation”, IEEE Transactions on Information Theory 61, 1879 (2015) arXiv:1310.7536 DOI
- [22]
- R. J. McEliece and E. R. Rodemich, “The Constantin-Rao construction for binary asymmetric error-correcting codes”, Information and Control 44, 187 (1980) DOI
- [23]
- K. A. S. Abdel-Ghaffar and H. C. Ferreira, “Systematic encoding of the Varshamov-Tenengol’ts codes and the Constantin-Rao codes”, IEEE Transactions on Information Theory 44, 340 (1998) DOI
- [24]
- L. Sari, “Effects of Puncturing Patterns on Punctured Convolutional Codes”, TELKOMNIKA (Telecommunication, Computing, Electronics and Control) 10, (2012) DOI
- [25]
- L. Decreusefond and G. Zemor, “On the error-correcting capabilities of cycle codes of graphs”, Proceedings of 1994 IEEE International Symposium on Information Theory DOI
- [26]
- C. E. Shannon, “A Mathematical Theory of Communication”, Bell System Technical Journal 27, 379 (1948) DOI
- [27]
- V. Strassen, “Asymptotische Abschätzungen in Shannons Informationstheorie,” Trans. Third Prague Conference on Information Theory, Prague, 689–723, (1962)
- [28]
- M. Hayashi, “Information Spectrum Approach to Second-Order Coding Rate in Channel Coding”, IEEE Transactions on Information Theory 55, 4947 (2009) arXiv:0801.2242 DOI
- [29]
- Y. Polyanskiy, H. V. Poor, and S. Verdu, “Channel Coding Rate in the Finite Blocklength Regime”, IEEE Transactions on Information Theory 56, 2307 (2010) DOI
- [30]
- Y. Altug and A. B. Wagner, “Moderate Deviations in Channel Coding”, (2012) arXiv:1208.1924
- [31]
- Y. Polyanskiy and S. Verdu, “Channel dispersion and moderate deviations limits for memoryless channels”, 2010 48th Annual Allerton Conference on Communication, Control, and Computing (Allerton) (2010) DOI
- [32]
- C. T. Chubb, V. Y. F. Tan, and M. Tomamichel, “Moderate Deviation Analysis for Classical Communication over Quantum Channels”, Communications in Mathematical Physics 355, 1283 (2017) arXiv:1701.03114 DOI
- [33]
- R. Gallager, Information Theory and Reliable Communication (Springer Vienna, 1972) DOI
- [34]
- I. Csiszár and J. Körner, Information Theory (Cambridge University Press, 2011) DOI
- [35]
- S. Arimoto, “On the converse to the coding theorem for discrete memoryless channels (Corresp.)”, IEEE Transactions on Information Theory 19, 357 (1973) DOI
- [36]
- G. Dueck and J. Korner, “Reliability function of a discrete memoryless channel at rates above capacity (Corresp.)”, IEEE Transactions on Information Theory 25, 82 (1979) DOI
- [37]
- S. H. Hassani, K. Alishahi, and R. L. Urbanke, “Finite-Length Scaling for Polar Codes”, IEEE Transactions on Information Theory 60, 5875 (2014) DOI
- [38]
- D. Boneh and J. Shaw, “Collusion-Secure Fingerprinting for Digital Data”, Advances in Cryptology — CRYPT0’ 95 452 (1995) DOI
- [39]
- C. Shangguan et al., “New Bounds For Frameproof Codes”, (2014) arXiv:1411.5782
- [40]
- D. J. C. MacKay and R. M. Neal, “Near Shannon limit performance of low density parity check codes”, Electronics Letters 33, 457 (1997) DOI
- [41]
- D. J. C. MacKay, “Good error-correcting codes based on very sparse matrices”, IEEE Transactions on Information Theory 45, 399 (1999) DOI
- [42]
- D. J. C. MacKay and M. C. Davey, “Evaluation of Gallager Codes for Short Block Length and High Rate Applications”, Codes, Systems, and Graphical Models 113 (2001) DOI
- [43]
- G. Reeves and H. D. Pfister, “Achieving Capacity on Non-Binary Channels with Generalized Reed-Muller Codes”, (2023) arXiv:2305.07779
- [44]
- C. Thommesen, “The existence of binary linear concatenated codes with Reed - Solomon outer codes which asymptotically meet the Gilbert- Varshamov bound”, IEEE Transactions on Information Theory 29, 850 (1983) DOI
- [45]
- C.-H. Hsu and A. Anastasopoulos, “Capacity-Achieving Codes with Bounded Graphical Complexity on Noisy Channels”, (2005) arXiv:cs/0509062
- [46]
- K. Kasai and K. Sakaniwa, “Spatially-Coupled MacKay-Neal Codes and Hsu-Anastasopoulos Codes”, IEICE Transactions on Fundamentals of Electronics, Communications and Computer Sciences E94-A, 2161 (2011) arXiv:1102.4612 DOI
- [47]
- A. Barg et al., “A Hypergraph Approach to the Identifying Parent Property: The Case of Multiple Parents”, SIAM Journal on Discrete Mathematics 14, 423 (2001) DOI
- [48]
- T. J. Richardson, M. A. Shokrollahi, and R. L. Urbanke, “Design of capacity-approaching irregular low-density parity-check codes”, IEEE Transactions on Information Theory 47, 619 (2001) DOI
- [49]
- Sae-Young Chung et al., “On the design of low-density parity-check codes within 0.0045 dB of the Shannon limit”, IEEE Communications Letters 5, 58 (2001) DOI
- [50]
- C. Di, T. J. Richardson, and R. L. Urbanke, “Weight Distribution of Low-Density Parity-Check Codes”, IEEE Transactions on Information Theory 52, 4839 (2006) DOI
- [51]
- Jin, Hui, Aamod Khandekar, and Robert McEliece. "Irregular repeat-accumulate codes." Proc. 2nd Int. Symp. Turbo codes and related topics. 2000.
- [52]
- H. D. Pfister, I. Sason, and R. Urbanke, “Capacity-Achieving Ensembles for the Binary Erasure Channel With Bounded Complexity”, IEEE Transactions on Information Theory 51, 2352 (2005) DOI
- [53]
- R. Urbanke and B. Rimoldi, “Lattice codes can achieve capacity on the AWGN channel”, IEEE Transactions on Information Theory 44, 273 (1998) DOI
- [54]
- R. de Buda, “The upper error bound of a new near-optimal code”, IEEE Transactions on Information Theory 21, 441 (1975) DOI
- [55]
- R. de Buda and W. Kassem, About lattices and the random coding theorem, in Abstracts of Papers, IEEE Inter. Symp. Info. Theory 1981, IEEE Press, NY 1981, p. 145
- [56]
- W. Kassem, Optimal Lattice Codes for the Gaussian Channel, Ph.D. Thesis, McMaster Univ., Hamilton, Ontario, 1981, doi:10.1109/18.651040
- [57]
- S. Kumar and H. D. Pfister, “Reed-Muller Codes Achieve Capacity on Erasure Channels”, (2015) arXiv:1505.05123
- [58]
- S. Kudekar et al., “Reed-Muller Codes Achieve Capacity on Erasure Channels”, (2016) arXiv:1601.04689
- [59]
- K. Ivanov and R. L. Urbanke, “Capacity-achieving codes: a review on double transitivity”, (2020) arXiv:2010.15453
- [60]
- Varshamov, R. R. "Some features of linear codes that correct asymmetric errors." Soviet Physics Doklady. Vol. 9. 1965.
- [61]
- J. L. Massey, “Linear codes with complementary duals”, Discrete Mathematics 106–107, 337 (1992) DOI
- [62]
- A. Ben-Aroya, O. Regev, and R. de Wolf, “A Hypercontractive Inequality for Matrix-Valued Functions with Applications to Quantum Computing and LDCs”, 2008 49th Annual IEEE Symposium on Foundations of Computer Science (2008) arXiv:0705.3806 DOI
- [63]
- O. Goldreich et al., “Lower bounds for linear locally decodable codes and private information retrieval”, Proceedings 17th IEEE Annual Conference on Computational Complexity DOI
- [64]
- A. Barg, I. Tamo, and S. Vladut, “Locally recoverable codes on algebraic curves”, 2015 IEEE International Symposium on Information Theory (ISIT) (2015) arXiv:1603.08876 DOI
- [65]
- V. R. Cadambe and A. Mazumdar, “Bounds on the Size of Locally Recoverable Codes”, IEEE Transactions on Information Theory 61, 5787 (2015) DOI
- [66]
- A. Agarwal et al., “Combinatorial Alphabet-Dependent Bounds for Locally Recoverable Codes”, IEEE Transactions on Information Theory 64, 3481 (2018) arXiv:1702.02685 DOI
- [67]
- J. Garcia-Frias and Wei Zhong, “Approaching Shannon performance by iterative decoding of linear codes with low-density generator matrix”, IEEE Communications Letters 7, 266 (2003) DOI
- [68]
- R. Gallager, “Low-density parity-check codes”, IEEE Transactions on Information Theory 8, 21 (1962) DOI
- [69]
- V. Guruswami, “Iterative Decoding of Low-Density Parity Check Codes (A Survey)”, (2006) arXiv:cs/0610022
- [70]
- S. Kudekar, T. Richardson, and R. Urbanke, “Spatially Coupled Ensembles Universally Achieve Capacity under Belief Propagation”, (2012) arXiv:1201.2999
- [71]
- J. Mosheiff et al., “Low-Density Parity-Check Codes Achieve List-Decoding Capacity”, SIAM Journal on Computing FOCS20 (2021) arXiv:1909.06430 DOI
- [72]
- D. J. C. MacKay and R. M. Neal, “Good codes based on very sparse matrices”, Cryptography and Coding 100 (1995) DOI
- [73]
- Xue-Bin Liang, “Orthogonal designs with maximal rates”, IEEE Transactions on Information Theory 49, 2468 (2003) DOI
- [74]
- H. Landau, “How does a porcupine separate its quills?”, IEEE Transactions on Information Theory 17, 157 (1971) DOI
- [75]
- R. E. Blahut, Modem Theory (Cambridge University Press, 2009) DOI
- [76]
- E. Arikan, “Channel Polarization: A Method for Constructing Capacity-Achieving Codes for Symmetric Binary-Input Memoryless Channels”, IEEE Transactions on Information Theory 55, 3051 (2009) DOI
- [77]
- N. Hussami, S. B. Korada, and R. Urbanke, “Performance of polar codes for channel and source coding”, 2009 IEEE International Symposium on Information Theory (2009) DOI
- [78]
- M. Mondelli, S. H. Hassani, and R. L. Urbanke, “Unified Scaling of Polar Codes: Error Exponent, Scaling Exponent, Moderate Deviations, and Error Floors”, IEEE Transactions on Information Theory 62, 6698 (2016) DOI
- [79]
- S. B. Korada et al., “An empirical scaling law for polar codes”, 2010 IEEE International Symposium on Information Theory (2010) DOI
- [80]
- A. Barg and G. D. Forney, “Random codes: minimum distances and error exponents”, IEEE Transactions on Information Theory 48, 2568 (2002) DOI
- [81]
- A. Barg and A. Mazumdar, “Codes in permutations and error correction for rank modulation”, 2010 IEEE International Symposium on Information Theory (2010) DOI
- [82]
- G. Reeves and H. D. Pfister, “Reed–Muller Codes on BMS Channels Achieve Vanishing Bit-Error Probability for all Rates Below Capacity”, IEEE Transactions on Information Theory 70, 920 (2024) arXiv:2110.14631 DOI
- [83]
- E. Abbe, A. Shpilka, and A. Wigderson, “Reed-Muller codes for random erasures and errors”, (2014) arXiv:1411.4590
- [84]
- E. Abbe and C. Sandon, “A proof that Reed-Muller codes achieve Shannon capacity on symmetric channels”, (2023) arXiv:2304.02509
- [85]
- L. Bazzi, M. Mahdian, and D. A. Spielman, “The Minimum Distance of Turbo-Like Codes”, IEEE Transactions on Information Theory 55, 6 (2009) DOI
- [86]
- V. Guruswami and W. Machmouchi, “Explicit interleavers for a Repeat Accumulate Accumulate (RAA) code construction”, 2008 IEEE International Symposium on Information Theory (2008) DOI
- [87]
- E. Telatar, “Capacity of Multi‐antenna Gaussian Channels”, European Transactions on Telecommunications 10, 585 (1999) DOI
- [88]
- G. J. Foschini and M. J. Gans, Wireless Personal Communications 6, 311 (1998) DOI
- [89]
- A. Goldsmith et al., “Capacity limits of MIMO channels”, IEEE Journal on Selected Areas in Communications 21, 684 (2003) DOI
- [90]
- A. Jimenez Felstrom and K. S. Zigangirov, “Time-varying periodic convolutional codes with low-density parity-check matrix”, IEEE Transactions on Information Theory 45, 2181 (1999) DOI
- [91]
- S. Kudekar, T. J. Richardson, and R. L. Urbanke, “Threshold Saturation via Spatial Coupling: Why Convolutional LDPC Ensembles Perform So Well over the BEC”, IEEE Transactions on Information Theory 57, 803 (2011) DOI
- [92]
- M. Lentmaier et al., “Terminated LDPC convolutional codes with thresholds close to capacity”, Proceedings. International Symposium on Information Theory, 2005. ISIT 2005. (2005) arXiv:cs/0508030 DOI
- [93]
- S. Kumar et al., “Threshold Saturation for Spatially Coupled LDPC and LDGM Codes on BMS Channels”, IEEE Transactions on Information Theory 60, 7389 (2014) arXiv:1309.7543 DOI
- [94]
- J. H. Conway and N. J. A. Sloane, Sphere Packings, Lattices and Groups (Springer New York, 1999) DOI
- [95]
- D. Slepian, “Bounds on Communication”, Bell System Technical Journal 42, 681 (1963) DOI
- [96]
- G. A. Kabatiansky, V. I. Levenshtein, “On Bounds for Packings on a Sphere and in Space”, Probl. Peredachi Inf., 14:1 (1978), 3–25; Problems Inform. Transmission, 14:1 (1978), 1–17
- [97]
- B. Chor et al., “Tracing traitors”, IEEE Transactions on Information Theory 46, 893 (2000) DOI
- [98]
- Y. Ihara. "Some remarks on the number of rational points of algebraic curves over finite fields." J. Fac. Sci. Univ. Tokyo Sect. IA Math., 28:721-724 (1982),1981.
- [99]
- C. Berrou, A. Glavieux, and P. Thitimajshima, “Near Shannon limit error-correcting coding and decoding: Turbo-codes. 1”, Proceedings of ICC ’93 - IEEE International Conference on Communications DOI
- [100]
- V. Guruswami and S. Li, “A Deterministic Construction of a Large Distance Code from the Wozencraft Ensemble”, (2023) arXiv:2305.02484
- [101]
- P. Gaborit and G. Zemor, “Asymptotic Improvement of the Gilbert–Varshamov Bound for Linear Codes”, IEEE Transactions on Information Theory 54, 3865 (2008) arXiv:0708.4164 DOI
- [102]
- M. Aaltonen, “A new upper bound on nonbinary block codes”, Discrete Mathematics 83, 139 (1990) DOI