Name | Rate |
---|---|
Alamouti code | The only OSTBC with full rate, \(k = mt = 2\). |
Algebraic-geometry (AG) code | Several sequences of linear AG codes beat the GV bound and/or are asymptotically good [1–3] (see Ref. [4] for details). The rate of any linear AG code satisfies \begin{align} \frac{k}{n} \geq 1 - \frac{d}{n} - \frac{1}{\sqrt{q}-1}~, \tag*{(1)}\end{align} which comes from the Drinfeld-Vladut bound [5]. Nonlinear AG codes can outperform this bound. |
Binary PSK (BPSK) code | BPSK codes achieve the capacity of the AWGN channel in the low signal-to-noise regime [6] (see also [7]). BPSK concatenated with quantum-classical polar codes achieves the Holevo capacity for the AD channel [8]. |
Binary code | The rate of a binary code is usually defined as \(R=\frac{1}{n}\log_{2}K\) bits per symbol. The maximum rate of a binary code with linear distance is upper bounded by the McEliece, Rodemich, Rumsey and Welch (MRRW) bound [9] (see Refs. [10–13] for other proofs). |
Bose–Chaudhuri–Hocquenghem (BCH) code | Primitive BCH codes are asymptotically bad [14; pg. 269]. |
Cartier code | Cartier codes share asymptotic properties similar to those of subfield subcodes of residue AG codes, with both families admitting sequences of codes that achieve the GV bound. |
Constantin-Rao (CR) code | CR codes for particular groups have higher rates than distance-one codes under the binary asymmetric channel for all lengths except \(n = 2^r - 1\), in which case CR codes reduce to Hamming codes [15]; see Ref. [16]. Size analysis is presented in Refs. [17,18]. |
Cycle LDPC code | Cycle codes are not asymptotically good [19]. |
Expander code | The rate is \(1 - m/n\) where \(n\) is the number of left nodes and \(m\) is the number of right nodes in the bipartite expander graph. |
Fountain code | Random linear fountain codes approach the Shannon limit as the file size \(K\) increases. However, they do not have a fixed encoding rate. |
Gallager (GL) code | GL codes nearly achieve Shannon capacity against binary-input additive white Gaussian noise using iterative decoding [20,21]. GL codes can outperform RS codes at short block length [22]. |
Galois-field \(q\)-ary code | The rate of a \(q\)-ary code is usually defined as \(R=\frac{1}{n}\log_q K\) dits per symbol. The maximum rate of a \(q\)-ary code with linear distance is lower bounded by the asymptotic GV bound and upper bounded by the \(q\)-ary version of the MRRW LP bound [23]. |
Generalized RM (GRM) code | GRM codes achieve capacity on sufficiently symmetric non-binary channels [24]. |
Generalized RS (GRS) code | Concatenations of GRS codes with random linear codes almost surely attain the GV bound [25]. |
Goethals code | The rate is \(\frac{2^m-3m+1}{2^m}\), going to 1 as the block length goes to infinity. |
Hermitian code | For polynomial evaluations up to degree \(D\): if \(D < q + 1 \), \(k = \frac{(D+1)(D+2)}{2}\), and if \(D \geq q + 1 \), \(k = (q+1)D - \frac{q(q-1)}{2} + 1 \). |
Hsu-Anastasopoulos LDPC (HA-LDPC) code | HA-LDPC codes achieve capacity on the BEC under BP decoding [26] and on memoryless binary-input output-symmetric (MBIOS) channels under ML decoding [26] and under MAP decoding [27]. They also achieve the GV bound with asymptotically high probability when the concatenation is with a rate-one LDGM code [26]. |
Irregular LDPC code | Irregular LDPC codes nearly achieve capacity against binary-input additive white Gaussian noise using iterative decoding [28,29]. Such sequences have sublinearly growing distance per block length [30]. |
Irregular repeat-accumulate (IRA) code | IRA codes nearly achieve the Shannon capacity of the binary erasure channel using iterative decoding [31]. Puncturing lessens the decoding complexity while still allowing sequences of codes to achieve capacity [32]. |
Justesen code | The first asymptotically good codes. The rate is \(R_m=k/n=K/2N\geq R\), and the relative minimum distance satisfies \(\delta_m=d_m/n\geq 0.11(1-2R)\), where \(K=\left\lceil 2NR\right\rceil\) for asymptotic rate \(0<R<1/2\); see pg. 311 of Ref. [14]. The code can be improved, extending the range of \(R\) from 0 to 1, by puncturing, i.e., by erasing \(s\) digits from each inner codeword. This yields a code of length \(n=(2m-s)N\) and rate \(R=mk/(2m-s)N\). The lower bound on the distance of the punctured code approaches \(d_m/n=(1-R/r)h^{-1}(1-r)\) as \(m\) goes to infinity, where \(r\) is the maximum of 1/2 and the solution to \(R=r^2/(1+\log(1-h^{-1}(1-r)))\), and \(h\) is the binary entropy function. |
Kerdock code | The transmission rate is \(\frac{2m}{2^m}\), which tends to 0 as \(m\) becomes large, hence these codes are asymptotically poor. |
Lattice-based code | Lattice codes with minimum-distance decoding achieve the capacity of the AWGN channel [33–36]. |
Linear \(q\)-ary code | Any code admitting a two-transitive automorphism group achieves capacity under the binary erasure channel [37–39]. |
Linear binary code | A family of linear codes \(C_i = [n_i,k_i,d_i]\) is asymptotically good if the asymptotic rate \(\lim_{i\to\infty} k_i/n_i\) and asymptotic distance \(\lim_{i\to\infty} d_i/n_i\) are both positive. Nearly all good linear binary codes for the asymmetric channel are also good for the symmetric channel [40]; this is not the case for non-binary codes [16]. |
Linear code with complementary dual (LCD) | Asymptotically good LCD codes exist [41]. |
Low-density generator-matrix (LDGM) code | Certain LDGM codes come close to achieving Shannon capacity [42]. |
Low-density parity-check (LDPC) code | Some LDPC codes achieve the Shannon capacity of the binary symmetric channel under maximum-likelihood decoding [21,43,44]. Other LDPC codes achieve capacity for smaller block lengths under belief-propagation decoding [45]. Random LDPC codes achieve list-decoding capacity [46]. |
MacKay-Neal LDPC (MN-LDPC) code | Certain sequences of optimally decoded codes can nearly achieve the Shannon capacity [21,47]. A sequence of codes achieves the capacity of memoryless binary-input symmetric-output channels under MAP decoding [27]. |
Nonlinear AG code | Certain nonlinear code sequences beat the Tsfasman-Vladut-Zink bound, outperforming linear AG codes. |
Orthogonal Spacetime Block Code (OSTBC) | The greatest rate which can be achieved is \(\frac{n_0+1}{2n_0}\), where either \(n=2n_0\) or \(n=2n_0-1\) [48]. |
Permutation spherical code | The number of codewords cannot increase exponentially with the dimension \(n\) [49]. |
Phase-shift keying (PSK) code | Nearly achieves Shannon AWGN capacity for one-dimensional constellations in the limit of infinite signal to noise [50; Fig. 11.7]. |
Pless symmetry code | Achieve capacity of the binary erasure channel; see Ref. [39]. |
Polar code | Supports reliable transmission at rates \(K/N\) approaching the Shannon capacity of the channel under the successive cancellation decoder [51,52]; see also Refs. [53,54]. |
Preparata code | The rate is \(\frac{2^{m+1}-2m-2}{2^{m+1}-1}\). |
Quadrature-amplitude modulation (QAM) code | Nearly achieves Shannon AWGN capacity for two-dimensional constellations in the limit of infinite signal to noise [50; Fig. 11.8]. |
Rank-modulation Gray code (RMGC) | Rank modulation codes with code distance \(d=\Theta(n^{1+\epsilon})\) for \(\epsilon\in[0,1]\) achieve a rate of \(1-\epsilon\) [55]. |
Reed-Muller (RM) code | Achieves capacity of the binary erasure channel [37,38], the binary memoryless symmetric (BMS) channel under bitwise maximum-a-posteriori decoding [56] (see also Ref. [57]), and the binary symmetric channel (BSC), solving a long-standing conjecture [58]. |
Regular binary Tanner code | For a short code \(C_0\) with rate \(R_0\), the Tanner code has rate \(R \geq 2R_0-1\). If \(C_0\) attains the GV bound, then \(R \geq 1-2h(\delta_0)\), where \(\delta_0\) is the relative distance of \(C_0\) and \(h\) is the binary entropy function. |
Repeat-accumulate (RA) code | RA codes are not asymptotically good [59]. |
Repeat-accumulate-accumulate (RAA) code | Some sequences of non-deterministic RAA codes are asymptotically good [59,60]. |
Repetition code | Code rate is \(\frac{1}{n}\), code distance is \(n\). |
Single parity-check (SPC) code | The code rate is \(\frac{n}{n+1}\to 1\) as \(n\to\infty\). |
Spacetime block code (STBC) | A spacetime block code is said to be full rate when \(k = m t\), where \(k\) is the number of information symbols encoded in its codewords. |
Spacetime code (STC) | Shannon capacity of various multiple-input multiple-output (MIMO) channels has been determined [61–63]. |
Spatially coupled LDPC (SC-LDPC) code | Spatial coupling of LDPC codes can increase the achievable rate against BEC, coming close to the capacity [64–66]. SC-LDPC codes achieve capacity of the binary memoryless symmetric (BMS) channel [45,67]. |
Sphere packing | The rate of a code consisting of \(M\) codewords that represent a signal of bandwidth \(W\) and duration \(T\) is defined as \begin{align} R=\frac{1}{T}\log_{2}M\quad\quad\text{bits/s}~. \tag*{(2)}\end{align} The Shannon capacity of the AWGN channel for a signal whose power is \(P\) is \begin{align} C=W\log_{2}\left(1+\frac{P}{\sigma^{2}}\right)\,. \tag*{(3)}\end{align} Random sphere packings achieve this capacity [68]; see the book [69] for more details. Tradeoffs between various parameters have been analyzed [70]. Deterministic sets of constellations from quadrature rules can also achieve capacity [6]. |
Spherical code | The Kabatiansky-Levenshtein bound [71] is the best known upper bound on the rate of a spherical code with fixed Euclidean distance. |
Tamo-Barg-Vladut code | Tamo-Barg-Vladut codes on asymptotically maximal curves improve upon the asymptotic LRC GV bound [72]. |
Tensor-product code | Rate of the tensor-product code \(C_A \otimes C_B\) is a product of the rates of the codes \(C_A\) and \(C_B\). |
Tornado code | Come arbitrarily close to the capacity of the binary erasure channel. |
Tsfasman-Vladut-Zink (TVZ) code | TVZ codes, either in the residue AG or evaluation AG constructions, exceed the asymptotic GV bound [1] (see also Ref. [73]). Roughly speaking, this breakthrough result implies that AG codes can outperform random codes. |
Varshamov-Tenengolts (VT) code | Asymptotically rate-\(1\) code, with \(\log_2(n+1)\) bits of redundancy when \(a=0\); nearly optimal. |
Wozencraft ensemble code | Meets the GV bound for most choices of \(\alpha\). Puncturing the code yields a higher rate while still meeting the GV bound; see Ref. [74]. These codes can be used to asymptotically improve over the GV bound [75]. |
Wrapped spherical code | Asymptotically maximal spherical coding density is obtained with the densest possible sphere packing. |
Zetterberg code | The rate is \(1-\frac{4s}{n}\), which tends to 1 as the block length grows, with a minimum distance of 5. |
\([2^r-1,2^r-r-1,3]\) Hamming code | Asymptotic rate \(k/n = 1-\frac{\log_2(n+1)}{n} \to 1\) and normalized distance \(d/n \to 0\). |
\(q\)-ary quadratic-residue (QR) code | Achieve capacity of the binary erasure channel; see Ref. [39]. |
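The Shannon capacity formula (3) for the AWGN channel can be checked numerically. The following Python sketch (function and parameter names are illustrative, not from the source) computes \(C=W\log_2(1+P/\sigma^2)\):

```python
import math

def awgn_capacity(bandwidth_hz, signal_power, noise_power):
    """Shannon AWGN capacity C = W * log2(1 + P / sigma^2), in bits/s (Eq. (3))."""
    return bandwidth_hz * math.log2(1 + signal_power / noise_power)

# Doubling the bandwidth at fixed SNR doubles the capacity,
# while doubling the SNR only adds W * log2-type gains.
c1 = awgn_capacity(1e6, 10.0, 1.0)  # 1 MHz bandwidth, SNR = 10
c2 = awgn_capacity(2e6, 10.0, 1.0)  # 2 MHz bandwidth, same SNR
```

The logarithmic dependence on SNR versus the linear dependence on bandwidth is visible directly from the formula.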
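Several entries above compare rates against the asymptotic GV bound \(R = 1 - h(\delta)\), with \(h\) the binary entropy function. A minimal sketch (names are illustrative) evaluating both:

```python
import math

def binary_entropy(p):
    """Binary entropy h(p) = -p log2(p) - (1-p) log2(1-p)."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def gv_rate(delta):
    """Asymptotic binary GV rate 1 - h(delta), for relative distance delta <= 1/2."""
    return 1.0 - binary_entropy(delta)

# The achievable rate shrinks to zero as the relative distance approaches 1/2.
rates = [gv_rate(d) for d in (0.1, 0.25, 0.5)]
```

Families meeting this bound have both positive rate and positive relative distance, i.e., they are asymptotically good.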
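The TVZ entry states that AG codes beat the \(q\)-ary GV bound for large enough \(q\); Eq. (1) makes this a one-line comparison. A hedged numerical check (helper names are illustrative), using the \(q\)-ary entropy \(h_q(\delta)=\delta\log_q(q-1)-\delta\log_q\delta-(1-\delta)\log_q(1-\delta)\):

```python
import math

def q_ary_entropy(delta, q):
    """q-ary entropy function h_q(delta) for 0 <= delta < 1."""
    if delta == 0.0:
        return 0.0
    return (delta * math.log(q - 1, q)
            - delta * math.log(delta, q)
            - (1 - delta) * math.log(1 - delta, q))

def gv_rate_q(delta, q):
    """Asymptotic q-ary GV rate 1 - h_q(delta)."""
    return 1.0 - q_ary_entropy(delta, q)

def tvz_rate(delta, q):
    """TVZ rate 1 - delta - 1/(sqrt(q) - 1) from Eq. (1)."""
    return 1.0 - delta - 1.0 / (math.sqrt(q) - 1.0)

# For q = 64 (a square prime power), the TVZ line lies above the GV curve
# over a range of relative distances delta.
```

For \(q=64\) and \(\delta=1/2\), the TVZ rate exceeds the GV rate, illustrating that these AG codes outperform random codes.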
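The Hamming-code entry illustrates a family whose rate tends to 1 while its normalized distance vanishes. A small sketch (function name is illustrative) of the \([2^r-1,\,2^r-r-1,\,3]\) parameters:

```python
def hamming_params(r):
    """Parameters [n, k, d] of the binary Hamming code with r parity checks (r >= 2)."""
    n = 2**r - 1
    k = n - r
    return n, k, 3

# As r grows, k/n -> 1 (high rate) while d/n = 3/n -> 0 (vanishing distance).
n, k, d = hamming_params(10)
```

This is the prototypical example of a high-rate family that is not asymptotically good.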
References
- [1]
- M. A. Tsfasman, S. G. Vlăduţ, and Th. Zink, “Modular curves, Shimura curves, and Goppa codes, better than Varshamov‐Gilbert bound”, Mathematische Nachrichten 109, 21 (1982) DOI
- [2]
- A. Garcia and H. Stichtenoth, “A tower of Artin-Schreier extensions of function fields attaining the Drinfeld-Vladut bound”, Inventiones Mathematicae 121, 211 (1995) DOI
- [3]
- A. Garcia and H. Stichtenoth, “On the Asymptotic Behaviour of Some Towers of Function Fields over Finite Fields”, Journal of Number Theory 61, 248 (1996) DOI
- [4]
- T. Høholdt, J.H. Van Lint, and R. Pellikaan, 1998. Algebraic geometry codes. Handbook of coding theory, 1 (Part 1), pp.871-961.
- [5]
- S. G. Vlăduţ, V. G. Drinfeld, “Number of points of an algebraic curve”, Funktsional. Anal. i Prilozhen., 17:1 (1983), 68–69; Funct. Anal. Appl., 17:1 (1983), 53–54
- [6]
- Y. Wu and S. Verdu, “The impact of constellation cardinality on Gaussian channel capacity”, 2010 48th Annual Allerton Conference on Communication, Control, and Computing (Allerton) (2010) DOI
- [7]
- C. E. Shannon, “A Mathematical Theory of Communication”, Bell System Technical Journal 27, 623 (1948) DOI
- [8]
- S. Guha and M. M. Wilde, “Polar coding to achieve the Holevo capacity of a pure-loss optical channel”, 2012 IEEE International Symposium on Information Theory Proceedings (2012) arXiv:1202.0533 DOI
- [9]
- R. McEliece et al., “New upper bounds on the rate of a code via the Delsarte-MacWilliams inequalities”, IEEE Transactions on Information Theory 23, 157 (1977) DOI
- [10]
- M. Navon and A. Samorodnitsky, “Linear Programming Bounds for Codes via a Covering Argument”, Discrete & Computational Geometry 41, 199 (2008) DOI
- [11]
- J. Friedman and J.-P. Tillich, “Generalized Alon--Boppana Theorems and Error-Correcting Codes”, SIAM Journal on Discrete Mathematics 19, 700 (2005) DOI
- [12]
- A. Samorodnitsky, “One more proof of the first linear programming bound for binary codes and two conjectures”, (2021) arXiv:2104.14587
- [13]
- N. Linial and E. Loyfer, “An Elementary Proof of the First LP Bound on the Rate of Binary Codes”, (2023) arXiv:2303.16619
- [14]
- F. J. MacWilliams and N. J. A. Sloane. The theory of error correcting codes. Elsevier, 1977.
- [15]
- Kløve, Torleiv. Error correcting codes for the asymmetric channel. Department of Pure Mathematics, University of Bergen, 1981.
- [16]
- M. Grassl et al., “New Constructions of Codes for Asymmetric Channels via Concatenation”, IEEE Transactions on Information Theory 61, 1879 (2015) arXiv:1310.7536 DOI
- [17]
- R. J. McEliece and E. R. Rodemich, “The Constantin-Rao construction for binary asymmetric error-correcting codes”, Information and Control 44, 187 (1980) DOI
- [18]
- K. A. S. Abdel-Ghaffar and H. C. Ferreira, “Systematic encoding of the Varshamov-Tenengol’ts codes and the Constantin-Rao codes”, IEEE Transactions on Information Theory 44, 340 (1998) DOI
- [19]
- L. Decreusefond and G. Zemor, “On the error-correcting capabilities of cycle codes of graphs”, Proceedings of 1994 IEEE International Symposium on Information Theory DOI
- [20]
- D. J. C. MacKay and R. M. Neal, “Near Shannon limit performance of low density parity check codes”, Electronics Letters 33, 457 (1997) DOI
- [21]
- D. J. C. MacKay, “Good error-correcting codes based on very sparse matrices”, IEEE Transactions on Information Theory 45, 399 (1999) DOI
- [22]
- D. J. C. MacKay and M. C. Davey, “Evaluation of Gallager Codes for Short Block Length and High Rate Applications”, Codes, Systems, and Graphical Models 113 (2001) DOI
- [23]
- M. Aaltonen, “A new upper bound on nonbinary block codes”, Discrete Mathematics 83, 139 (1990) DOI
- [24]
- G. Reeves and H. D. Pfister, “Achieving Capacity on Non-Binary Channels with Generalized Reed-Muller Codes”, (2023) arXiv:2305.07779
- [25]
- C. Thommesen, “The existence of binary linear concatenated codes with Reed - Solomon outer codes which asymptotically meet the Gilbert- Varshamov bound”, IEEE Transactions on Information Theory 29, 850 (1983) DOI
- [26]
- C.-H. Hsu and A. Anastasopoulos, “Capacity-Achieving Codes with Bounded Graphical Complexity on Noisy Channels”, (2005) arXiv:cs/0509062
- [27]
- K. Kasai and K. Sakaniwa, “Spatially-Coupled MacKay-Neal Codes and Hsu-Anastasopoulos Codes”, IEICE Transactions on Fundamentals of Electronics, Communications and Computer Sciences E94-A, 2161 (2011) arXiv:1102.4612 DOI
- [28]
- T. J. Richardson, M. A. Shokrollahi, and R. L. Urbanke, “Design of capacity-approaching irregular low-density parity-check codes”, IEEE Transactions on Information Theory 47, 619 (2001) DOI
- [29]
- Sae-Young Chung et al., “On the design of low-density parity-check codes within 0.0045 dB of the Shannon limit”, IEEE Communications Letters 5, 58 (2001) DOI
- [30]
- C. Di, T. J. Richardson, and R. L. Urbanke, “Weight Distribution of Low-Density Parity-Check Codes”, IEEE Transactions on Information Theory 52, 4839 (2006) DOI
- [31]
- Jin, Hui, Aamod Khandekar, and Robert McEliece. "Irregular repeat-accumulate codes." Proc. 2nd Int. Symp. Turbo codes and related topics. 2000.
- [32]
- H. D. Pfister, I. Sason, and R. Urbanke, “Capacity-Achieving Ensembles for the Binary Erasure Channel With Bounded Complexity”, IEEE Transactions on Information Theory 51, 2352 (2005) DOI
- [33]
- R. Urbanke and B. Rimoldi, “Lattice codes can achieve capacity on the AWGN channel”, IEEE Transactions on Information Theory 44, 273 (1998) DOI
- [34]
- R. de Buda, “The upper error bound of a new near-optimal code”, IEEE Transactions on Information Theory 21, 441 (1975) DOI
- [35]
- R. de Buda and W. Kassem, About lattices and the random coding theorem, in Abstracts of Papers, IEEE Inter. Symp. Info. Theory 1981, IEEE Press, NY 1981, p. 145
- [36]
- W. Kassem, Optimal Lattice Codes for the Gaussian Channel, Ph.D. Thesis, McMaster Univ., Hamilton, Ontario, 1981, doi:10.1109/18.651040
- [37]
- S. Kumar and H. D. Pfister, “Reed-Muller Codes Achieve Capacity on Erasure Channels”, (2015) arXiv:1505.05123
- [38]
- S. Kudekar et al., “Reed-Muller Codes Achieve Capacity on Erasure Channels”, (2016) arXiv:1601.04689
- [39]
- K. Ivanov and R. L. Urbanke, “Capacity-achieving codes: a review on double transitivity”, (2020) arXiv:2010.15453
- [40]
- Varshamov, R. R. "Some features of linear codes that correct asymmetric errors." Soviet Physics Doklady. Vol. 9. 1965.
- [41]
- J. L. Massey, “Linear codes with complementary duals”, Discrete Mathematics 106–107, 337 (1992) DOI
- [42]
- J. Garcia-Frias and Wei Zhong, “Approaching Shannon performance by iterative decoding of linear codes with low-density generator matrix”, IEEE Communications Letters 7, 266 (2003) DOI
- [43]
- R. Gallager, “Low-density parity-check codes”, IEEE Transactions on Information Theory 8, 21 (1962) DOI
- [44]
- V. Guruswami, “Iterative Decoding of Low-Density Parity Check Codes (A Survey)”, (2006) arXiv:cs/0610022
- [45]
- S. Kudekar, T. Richardson, and R. Urbanke, “Spatially Coupled Ensembles Universally Achieve Capacity under Belief Propagation”, (2012) arXiv:1201.2999
- [46]
- J. Mosheiff et al., “Low-Density Parity-Check Codes Achieve List-Decoding Capacity”, SIAM Journal on Computing FOCS20 (2021) arXiv:1909.06430 DOI
- [47]
- D. J. C. MacKay and R. M. Neal, “Good codes based on very sparse matrices”, Cryptography and Coding 100 (1995) DOI
- [48]
- Xue-Bin Liang, “Orthogonal designs with maximal rates”, IEEE Transactions on Information Theory 49, 2468 (2003) DOI
- [49]
- H. Landau, “How does a porcupine separate its quills?”, IEEE Transactions on Information Theory 17, 157 (1971) DOI
- [50]
- R. E. Blahut, Modem Theory (Cambridge University Press, 2009) DOI
- [51]
- E. Arikan, “Channel Polarization: A Method for Constructing Capacity-Achieving Codes for Symmetric Binary-Input Memoryless Channels”, IEEE Transactions on Information Theory 55, 3051 (2009) DOI
- [52]
- N. Hussami, S. B. Korada, and R. Urbanke, “Performance of polar codes for channel and source coding”, 2009 IEEE International Symposium on Information Theory (2009) DOI
- [53]
- M. Mondelli, S. H. Hassani, and R. L. Urbanke, “Unified Scaling of Polar Codes: Error Exponent, Scaling Exponent, Moderate Deviations, and Error Floors”, IEEE Transactions on Information Theory 62, 6698 (2016) DOI
- [54]
- S. B. Korada et al., “An empirical scaling law for polar codes”, 2010 IEEE International Symposium on Information Theory (2010) DOI
- [55]
- A. Barg and A. Mazumdar, “Codes in permutations and error correction for rank modulation”, 2010 IEEE International Symposium on Information Theory (2010) DOI
- [56]
- G. Reeves and H. D. Pfister, “Reed–Muller Codes on BMS Channels Achieve Vanishing Bit-Error Probability for all Rates Below Capacity”, IEEE Transactions on Information Theory 70, 920 (2024) arXiv:2110.14631 DOI
- [57]
- E. Abbe, A. Shpilka, and A. Wigderson, “Reed-Muller codes for random erasures and errors”, (2014) arXiv:1411.4590
- [58]
- E. Abbe and C. Sandon, “A proof that Reed-Muller codes achieve Shannon capacity on symmetric channels”, (2023) arXiv:2304.02509
- [59]
- L. Bazzi, M. Mahdian, and D. A. Spielman, “The Minimum Distance of Turbo-Like Codes”, IEEE Transactions on Information Theory 55, 6 (2009) DOI
- [60]
- V. Guruswami and W. Machmouchi, “Explicit interleavers for a Repeat Accumulate Accumulate (RAA) code construction”, 2008 IEEE International Symposium on Information Theory (2008) DOI
- [61]
- E. Telatar, “Capacity of Multi‐antenna Gaussian Channels”, European Transactions on Telecommunications 10, 585 (1999) DOI
- [62]
- G. J. Foschini and M. J. Gans, Wireless Personal Communications 6, 311 (1998) DOI
- [63]
- A. Goldsmith et al., “Capacity limits of MIMO channels”, IEEE Journal on Selected Areas in Communications 21, 684 (2003) DOI
- [64]
- A. Jimenez Felstrom and K. S. Zigangirov, “Time-varying periodic convolutional codes with low-density parity-check matrix”, IEEE Transactions on Information Theory 45, 2181 (1999) DOI
- [65]
- S. Kudekar, T. J. Richardson, and R. L. Urbanke, “Threshold Saturation via Spatial Coupling: Why Convolutional LDPC Ensembles Perform So Well over the BEC”, IEEE Transactions on Information Theory 57, 803 (2011) DOI
- [66]
- M. Lentmaier et al., “Terminated LDPC convolutional codes with thresholds close to capacity”, Proceedings. International Symposium on Information Theory, 2005. ISIT 2005. (2005) arXiv:cs/0508030 DOI
- [67]
- S. Kumar et al., “Threshold Saturation for Spatially Coupled LDPC and LDGM Codes on BMS Channels”, IEEE Transactions on Information Theory 60, 7389 (2014) arXiv:1309.7543 DOI
- [68]
- C. E. Shannon, “Probability of Error for Optimal Codes in a Gaussian Channel”, Bell System Technical Journal 38, 611 (1959) DOI
- [69]
- J. H. Conway and N. J. A. Sloane, Sphere Packings, Lattices and Groups (Springer New York, 1999) DOI
- [70]
- D. Slepian, “Bounds on Communication”, Bell System Technical Journal 42, 681 (1963) DOI
- [71]
- G. A. Kabatiansky, V. I. Levenshtein, “On Bounds for Packings on a Sphere and in Space”, Probl. Peredachi Inf., 14:1 (1978), 3–25; Problems Inform. Transmission, 14:1 (1978), 1–17
- [72]
- A. Barg, I. Tamo, and S. Vladut, “Locally recoverable codes on algebraic curves”, 2015 IEEE International Symposium on Information Theory (ISIT) (2015) arXiv:1603.08876 DOI
- [73]
- Y. Ihara. "Some remarks on the number of rational points of algebraic curves over finite fields." J. Fac. Sci. Univ. Tokyo Sect. IA Math., 28:721-724 (1982),1981.
- [74]
- V. Guruswami and S. Li, “A Deterministic Construction of a Large Distance Code from the Wozencraft Ensemble”, (2023) arXiv:2305.02484
- [75]
- P. Gaborit and G. Zemor, “Asymptotic Improvement of the Gilbert–Varshamov Bound for Linear Codes”, IEEE Transactions on Information Theory 54, 3865 (2008) arXiv:0708.4164 DOI