Here is a list of all codes that specify the rate they achieve.
Name Rate
Alamouti code The only OSTBC with unity rate.
Algebraic-geometry (AG) code Several sequences of linear AG codes beat the Gilbert-Varshamov bound and/or are asymptotically good [1][2] (see Ref. [3] for details). The rate of any linear AG code satisfies \begin{align} \frac{k}{n} \geq 1 - \frac{d}{n} - \frac{1}{\sqrt{q}-1}~, \end{align} which comes from the Drinfeld-Vladut bound [4]. Nonlinear AG codes can outperform this bound.
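As a quick numerical sanity check, the bound above can be evaluated directly; this is a minimal sketch (the function name is ours, not from the literature):

```python
import math

def ag_rate_lower_bound(q: int, n: int, d: int) -> float:
    """Rate lower bound k/n >= 1 - d/n - 1/(sqrt(q) - 1) for a linear AG
    code of length n and distance d over GF(q), via the Drinfeld-Vladut bound."""
    return 1 - d / n - 1 / (math.sqrt(q) - 1)

# Over GF(49) the penalty term is 1/(7-1) = 1/6, so at relative distance
# 1/2 a rate of at least 1/3 is guaranteed.
print(ag_rate_lower_bound(q=49, n=1000, d=500))
```

Larger field sizes shrink the penalty term $$1/(\sqrt{q}-1)$$, which is why AG codes over large fields can beat the GV bound.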
Bacon-Shor code A non-LDPC family of Bacon-Shor codes achieves a distance of $$\Omega(n^{1-\epsilon})$$ with sparse gauge operators.
Balanced product code A notable family of balanced product codes encode $$k \in \Theta(n^{4/5})$$ logical qubits with distance $$d \in \Omega(n^{3/5})$$ for any number of physical qubits $$n$$. Additionally, it is known that the code constructed from the balanced product of two good classical LDPC codes over groups of order $$\Theta(n)$$ has a constant encoding rate [5].
Binary PSK (BPSK) code Achieves the capacity of the AWGN channel in the low signal-to-noise regime [6] (see also [7]). BPSK concatenated with quantum-classical polar codes achieves the Holevo capacity for the pure-loss channel [8].
Binary Varshamov-Tenengolts (VT) code Rate-$$1$$ code, with $$\log n+1$$ bits of redundancy when $$a=0$$. Nearly optimal.
Bose–Chaudhuri–Hocquenghem (BCH) code Primitive BCH codes are asymptotically bad [9; pg. 269].
Bosonic c-q code The Holevo (also known as Gordon) capacity has been calculated for various bosonic quantum channels such as the AWGN channel [10] (see Ref. [11] for a review).
Bosonic code The quantum capacity of the pure-loss channel [12] and the dephasing noise channel [13] are both known. The capacity of the displacement noise channel, the quantum analogue of AWGN, has been bounded using GKP codes [14][15].
Calderbank-Shor-Steane (CSS) stabilizer code For a depolarizing channel with probability $$p$$, CSS codes allowing for arbitrarily accurate recovery exist with asymptotic rate $$1-2h(p)$$, where $$h$$ is the binary entropy function [16].
Cartier code Cartier codes share asymptotic properties similar to those of subfield subcodes of residue AG codes, with both families admitting sequences of codes that achieve the Gilbert-Varshamov bound.
Chuang-Leung-Yamamoto (CLY) code Code rate is $$\frac{k}{n \log_2(N+1)}$$. To correct the loss of up to $$t$$ excitations with $$K+1$$ codewords, a code exists with scaling $$N \sim t^3 K/2$$.
Classical-quantum (c-q) code The Holevo channel capacity is the highest rate of classical information transmission through a quantum channel with arbitrarily small error rate [17][18][19]. Corrections to the Holevo capacity and tradeoff between decoding error, code rate and code length are determined in quantum generalizations of small [20], moderate [21][22], and large [23] deviation analysis.
Coherent-state constellation code Coherent-state constellation codes consisting of points from a Gaussian quadrature rule can be concatenated with quantum polar codes to achieve the quantum capacity of the thermal noise channel [24][25].
Color code For general 2D manifolds, $$kd^2 \leq c(\log k)^2 n$$ for some constant $$c$$ [26], meaning that color codes with finite rate can only achieve an asymptotic minimum distance that is logarithmic in $$n$$.
Constant-excitation (CE) code Fock-state CE codes can be used in a protocol that achieves the two-way quantum capacity of the pure-loss Gaussian channel [27].
Convolutional code Depends on the polynomials used. Using puncturing [28], the rate can be increased from $$\frac{1}{t}$$ to $$\frac{s}{t}$$, where $$t$$ is the number of output bits and $$s$$ depends on the puncturing pattern. Puncturing deletes some pieces of the encoder output such that maximum-likelihood decoding remains effective.
Dinur-Hsieh-Lin-Vidick (DHLV) code Asymptotically good QLDPC codes.
Error-correcting code (ECC) The Shannon channel capacity is the highest rate of information transmission through a classical (i.e., non-quantum) channel with arbitrarily small error rate [29]. Corrections to the capacity and tradeoff between decoding error, code rate and code length are determined using small [30][31][32], moderate [33][34][21] and large [35][36][37][38] deviation analysis.
Expander code The rate is $$1 - m/n$$ where $$n$$ is the number of left nodes and $$m$$ is the number of right nodes in the bipartite expander graph.
Expander lifted-product code Expander lifted-product codes include the first examples [39] of (asymptotically) good QLDPC codes, i.e., codes with asymptotically constant rate and linear distance. The existence of such codes proves the QLDPC conjecture [40]. Another notable family encodes $$k \in \Theta(n^\alpha \log n)$$ logical qubits with distance $$d \in \Omega(n^{1 - \alpha} / \log n)$$ for any number of physical qubits $$n$$ and any real parameter $$0 \leq \alpha < 1$$ [41].
Fiber-bundle code Rate $$k/n = \Omega(n^{-2/5}/\text{polylog}(n))$$, distance $$d=\Omega(n^{3/5}/\text{polylog}(n))$$. This is the first QLDPC code to achieve a distance scaling better than $$\sqrt{n}~\text{polylog}(n)$$.
Fountain code Random linear fountain codes approach the Shannon limit as the file size $$K$$ increases. However, they do not have a fixed encoding rate.
Freedman-Meyer-Luo code These codes held a 20-year record for the best lower bound on the asymptotic scaling of the minimum code distance, $$d=\Omega(\sqrt{n \sqrt{\log n}})$$, which was eventually broken by Ramanujan tensor-product codes.
Haar-random code The rate of the code is equal to the coherent information of the channel (i.e. the quantum channel capacity).
Hamming code Asymptotic rate $$k/n = 1-\frac{\log n}{n} \to 1$$ and normalized distance $$d/n \to 0$$.
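The quoted asymptotics follow from the $$[2^r-1,\,2^r-r-1,\,3]$$ parameter family and can be checked numerically; a small sketch (helper name ours):

```python
def hamming_rate(r: int) -> float:
    """Rate k/n of the [2^r - 1, 2^r - r - 1, 3] Hamming code."""
    n = 2**r - 1
    return (n - r) / n

# Rate tends to 1 while the normalized distance 3/n tends to 0.
for r in (3, 5, 10, 20):
    n = 2**r - 1
    print(r, hamming_rate(r), 3 / n)
```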
Heavy-hexagon code $$1/n$$ for a distance-$$d$$ heavy-hexagon code on $$n = (5d^2-2d-1)/2$$ qubits.
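The qubit-count formula above is easy to tabulate; a minimal sketch (function name ours), assuming the code encodes a single logical qubit as stated:

```python
def heavy_hexagon_qubits(d: int) -> int:
    """Total qubit count n = (5d^2 - 2d - 1)/2 for a distance-d heavy-hexagon code."""
    return (5 * d * d - 2 * d - 1) // 2

# One logical qubit, so the rate 1/n falls off quadratically in d.
for d in (3, 5, 7):
    n = heavy_hexagon_qubits(d)
    print(d, n, 1 / n)
```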
Hermitian code For polynomial evaluations up to degree $$D$$: if $$D < q + 1$$, $$k = \frac{(D+1)(D+2)}{2}$$, and if $$D \geq q + 1$$, $$k = (q+1)D - \frac{q(q-1)}{2} + 1$$.
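The two-regime dimension formula can be expressed directly in code; this is a sketch under the stated formulas (function name ours):

```python
def hermitian_dimension(q: int, D: int) -> int:
    """Dimension k of a Hermitian code from polynomial evaluations up to
    degree D, with the two regimes split at D = q + 1."""
    if D < q + 1:
        return (D + 1) * (D + 2) // 2
    return (q + 1) * D - q * (q - 1) // 2 + 1

print(hermitian_dimension(q=4, D=3))  # low-degree regime
print(hermitian_dimension(q=4, D=6))  # high-degree regime
```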
Justesen code The first asymptotically good codes. The rate is $$R_m=k/n=K/2N\geq R$$ and the relative minimum distance satisfies $$\delta_m=d_m/n\geq 0.11(1-2R)$$, where $$K=\left\lceil 2NR\right\rceil$$ for asymptotic rate $$0<R<1/2$$; see pg. 311 of Ref. [9]. The code can be improved, and the range of $$R$$ extended from 0 to 1, by puncturing, i.e., by erasing $$s$$ digits from each inner codeword. This yields a code of length $$n=(2m-s)N$$ and rate $$R=mk/(2m-s)N$$. The lower bound on the distance of the punctured code approaches $$d_m/n=(1-R/r)h^{-1}(1-r)$$ as $$m$$ goes to infinity, where $$r$$ is the maximum of 1/2 and the solution to $$R=r^2/(1+\log(1-h^{-1}(1-r)))$$, and $$h$$ is the binary entropy function.

Kitaev surface code Rate depends on the underlying cellulation and manifold. For general 2D manifolds, $$kd^2\leq c(\log k)^2 n$$ for some constant $$c$$ [26], meaning that (1) 2D surface codes with bounded geometry have distance scaling at most as $$O(\sqrt{n})$$ [42][43], and (2) surface codes with finite rate can only achieve an asymptotic minimum distance that is logarithmic in $$n$$. Higher-dimensional hyperbolic manifolds (see code children below) yield distances scaling more favorably. Loewner's theorem provides an upper bound for any bounded-geometry surface code [44].
Lattice-based code Lattice codes with minimal-distance decoding achieve the capacity of the AWGN channel [45].
Lifted-product (LP) code There is no known simple way to compute the logical dimension $$k$$ in the general case [41].
Linear binary code A family of linear codes $$C_i = [n_i,k_i,d_i]$$ is asymptotically good if the asymptotic rate $$\lim_{i\to\infty} k_i/n_i$$ and asymptotic distance $$\lim_{i\to\infty} d_i/n_i$$ are both positive.
Locally recoverable code (LRC) The rate $$r$$ of an $$(n,k,r)$$ LRC code satisfies \begin{align} \frac{k}{n}\leq\frac{r}{r+1}~. \end{align}
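The rate bound above can be checked without floating-point rounding by cross-multiplying; a minimal sketch (function name ours):

```python
def satisfies_lrc_bound(n: int, k: int, r: int) -> bool:
    """Check the rate bound k/n <= r/(r+1) for an (n, k, r) locally
    recoverable code, using integer arithmetic to avoid rounding."""
    return k * (r + 1) <= r * n

print(satisfies_lrc_bound(n=10, k=6, r=2))  # 6/10 <= 2/3, so admissible
print(satisfies_lrc_bound(n=10, k=7, r=2))  # 7/10 >  2/3, so ruled out
```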
Low-density generator-matrix (LDGM) code Certain LDGM codes come close to achieving Shannon capacity [46].
Low-density parity-check (LDPC) code Achieve capacity on the binary symmetric channel under maximum-likelihood decoding [47][48][49]. Some LDPC codes achieve capacity for smaller block lengths under belief-propagation decoding [50]. Random LDPC codes achieve list-decoding capacity [51].
Monitored random-circuit code Rate can be finite [52], depending on the family of random codes generated by the circuit.
Movassagh-Ouyang Hamiltonian code The rate depends on the classical code, but distance can scale linearly with $$n$$.
Multi-mode GKP code Transmission schemes with multimode GKP codes achieve, up to a constant-factor offset, the capacity of displacement-noise and thermal-noise Gaussian loss channels [14][53][54][15]. Particular lattice families of multimode GKP codes achieve the hashing bound of the displacement noise channel [14].
Nonlinear AG code Certain nonlinear code sequences beat the Tsfasman-Vladut-Zink bound, outperforming linear AG codes.
Orthogonal Spacetime Block Code (OSTBC) The greatest rate which can be achieved is $$\frac{n_0+1}{2n_0}$$, where either $$n=2n_0$$ or $$n=2n_0-1$$ [55].
Pastawski-Yoshida-Harlow-Preskill (HaPPY) code The pentagon HaPPY code has an asymptotic rate $$\frac{1}{\sqrt{5}} \approx 0.447$$. The pentagon/hexagon HaPPY code, with alternating layers of pentagons and hexagons in the tiling, has a rate of $$0.299$$ if the last layer is a pentagon layer and a rate of $$0.088$$ if the last layer is a hexagon layer.
Perfect quantum code $$k/n\to 1$$ asymptotically with $$n$$.
Permutation spherical code Number of codewords cannot increase exponentially with dimension $$n$$ [56].
Phase-shift keying (PSK) code Nearly achieves Shannon AWGN capacity for one-dimensional constellations in the limit of infinite signal to noise [57; Fig. 11.7].
Polar c-q code Codes achieve the symmetric Holevo information for sending classical information over any quantum channel.
Polar code Supports reliable transmission at rates $$K/N$$ approaching the Shannon capacity of the channel under the successive cancellation decoder [58].
Projective-plane surface code The rate is $$1/n$$, where $$n$$ is the number of edges of the particular cellulation.
Quadrature-amplitude modulation (QAM) code Nearly achieves Shannon AWGN capacity for two-dimensional constellations in the limit of infinite signal to noise [57; Fig. 11.8].
Quantum Reed-Muller code $$\frac{k}{n}$$, where $$k = 2^r - {r \choose t} + 2 \sum_{i=0}^{t-1} {r \choose i}$$. Additionally, CSS codes formed from binary Reed-Muller codes achieve channel capacity on erasure channels [59].
Quantum Tanner code Asymptotically good QLDPC codes.
Quantum error-correcting code (QECC) The quantum channel capacity is the highest rate of quantum information transmission through a quantum channel with arbitrarily small error rate; see [60; Ch. 24] for definitions and a history.
Quantum expander code $$[[n,k=\Theta(n),d=O(\sqrt{n})]]$$ code with asymptotically constant rate.
Quantum polar code The rate approaches the symmetric coherent information of the quantum noise channel [61].
Ramanujan-complex product code For 2D Ramanujan complexes, the rate is $$\Omega(\sqrt{ \frac{1}{n \log n} })$$, with minimum distance $$d = \Omega(\sqrt{n \log n})$$. For 3D, the rate is $$\Omega(\frac{1}{\sqrt{n}\log n})$$ with minimum distance $$d \geq \sqrt{n} \log n$$.
Random code Typical random codes (TRC) or typical random linear codes (TLC) refer to codes in the respective ensemble that satisfy a certain minimum distance. The relative fraction of typical codes in the ensemble approaches one as $$N$$ goes to infinity [29] (see also Ref. [62]). Asymptotically, given relative distance $$\delta=d/n$$, the maximum rate for a TRC is $$R=\frac{1}{2}R_{GV}(\delta)$$, where $$R_{GV}(\delta)=1-h(\delta)$$ is the Gilbert–Varshamov (GV) bound and $$h$$ is the binary entropy function. The maximum rate for a TLC is $$R=R_{GV}(\delta)$$, meaning that TLCs achieve the asymptotic GV bound.
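The GV-bound rates above are straightforward to evaluate; a minimal sketch (function names ours):

```python
import math

def binary_entropy(x: float) -> float:
    """Binary entropy h(x) in bits."""
    if x in (0.0, 1.0):
        return 0.0
    return -x * math.log2(x) - (1 - x) * math.log2(1 - x)

def gv_rate(delta: float) -> float:
    """Asymptotic Gilbert-Varshamov rate R_GV(delta) = 1 - h(delta)."""
    return 1 - binary_entropy(delta)

delta = 0.11
print(gv_rate(delta))      # max rate of a typical random linear code
print(gv_rate(delta) / 2)  # a typical random (nonlinear) code achieves half
```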
Rank-modulation Gray code (RMGC) Rank modulation codes with code distance $$d=\Theta(n^{1+\epsilon})$$ for $$\epsilon\in[0,1]$$ achieve a rate of $$1-\epsilon$$ [63].
Reed-Muller (RM) code Achieves capacity of the binary erasure channel [64] and the binary memoryless symmetric (BMS) channel under bitwise maximum-a-posteriori decoding [65]; see also Ref. [66].
Reed-Solomon (RS) code Generic Reed-Solomon codes achieve list-decoding capacity [67].
Repetition code Code rate is $$\frac{1}{n}$$, code distance is $$n$$.
Single parity-check (SPC) code The code rate is $$\frac{n}{n+1}\to 1$$ as $$n\to\infty$$.
Singleton-bound approaching AQECC Given rate $$R$$, tolerate adversarial errors nearly saturating the quantum Singleton bound of $$(1-R)/2$$.
Sphere packing Random sphere packings can achieve the capacity of the additive white Gaussian noise (AWGN) channel [68]; see the book [69] for more details. Tradeoffs between various parameters have been analyzed [70]. Deterministic sets of constellations from quadrature rules can also achieve capacity [6].
Tanner code For a short code $$C_0$$ with rate $$R_0$$, the Tanner code has rate $$R \geq 2R_0-1$$. If $$C_0$$ satisfies the Gilbert-Varshamov bound, the rate $$R \geq \delta = 1-2h(\delta_0)$$, where $$\delta$$ ($$\delta_0$$) is the relative distance of the Tanner code ($$C_0$$), and $$h$$ is the binary entropy function.
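The first rate bound above is a one-liner; a minimal sketch with a hypothetical inner code (function name and example parameters ours):

```python
def tanner_rate_lower_bound(r0: float) -> float:
    """Rate lower bound R >= 2*R0 - 1 for a Tanner code whose inner
    code C0 has rate R0."""
    return 2 * r0 - 1

# A hypothetical inner [8,6] code (R0 = 0.75) guarantees rate at least 0.5.
print(tanner_rate_lower_bound(6 / 8))
```

Note that the bound is vacuous for $$R_0 \leq 1/2$$, so Tanner constructions need a high-rate inner code.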
Tensor-product code Rate of the tensor-product code $$C_A \otimes C_B$$ is a product of the rates of the codes $$C_A$$ and $$C_B$$.
Tornado code Come arbitrarily close to the capacity of the binary erasure channel.
Tsfasman-Vladut-Zink (TVZ) code TVZ codes exceed the asymptotic Gilbert-Varshamov (GV) bound [71] (see also Ref. [72]). Roughly speaking, this breakthrough result implies that AG codes can outperform random codes. Such families of codes are optimal.
Two-dimensional hyperbolic surface code Two-dimensional hyperbolic surface codes have an asymptotically constant encoding rate $$k/n$$ with a distance scaling logarithmically with $$n$$ when the surface is closed. The encoding rate depends on the tiling $$\{r,s\}$$ and is given by $$k/n = (1-2/r - 2/s) + 2/n$$, which approaches a constant value as the number of physical qubits grows. The weight of the stabilizers is $$r$$ for $$Z$$-checks and $$s$$ for $$X$$-checks. For open boundary conditions, the code reduces to constant distance.
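The rate formula above can be evaluated for a concrete tiling; a minimal sketch (function name ours), using the $$\{5,4\}$$ tiling as an illustrative choice:

```python
def hyperbolic_surface_rate(r: int, s: int, n: int) -> float:
    """Encoding rate k/n = (1 - 2/r - 2/s) + 2/n for a closed surface
    tiled by the {r,s} tessellation on n physical qubits."""
    return (1 - 2 / r - 2 / s) + 2 / n

# {5,4} tiling: the rate approaches 1 - 2/5 - 2/4 = 1/10 as n grows.
for n in (100, 10_000, 1_000_000):
    print(n, hyperbolic_surface_rate(5, 4, n))
```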
Wrapped spherical code Asymptotically maximal spherical coding density is obtained with the densest possible sphere packing.
XYZ product code Not much has been proven about the relationship between XYZ-product codes and other codes. The logical dimension depends on properties of the input classical codes, specifically similarity invariants from abstract algebra. It is conjectured that specific instances of XYZ-product codes have a constant encoding rate and a minimum distance of $$d \in \Theta(n^{2/3})$$ [73].
Zetterberg code The rate is given by $$1-\frac{4s}{n}$$, which approaches unity as $$n$$ grows, with a minimum distance of 5.
$$[[k+4,k,2]]$$ H code The H codes are dense, i.e., the rate $$\frac{k}{k+4}\rightarrow 1$$ as $$k \rightarrow \infty$$. The distance is 2. However an $$r$$-level concatenation of H codes gives a distance of $$2^r$$.

## References

[1]
A. Garcia and H. Stichtenoth, “A tower of Artin-Schreier extensions of function fields attaining the Drinfeld-Vladut bound”, Inventiones Mathematicae 121, 211 (1995). DOI
[2]
A. Garcia and H. Stichtenoth, “On the Asymptotic Behaviour of Some Towers of Function Fields over Finite Fields”, Journal of Number Theory 61, 248 (1996). DOI
[3]
T. Høholdt, J.H. Van Lint, and R. Pellikaan, 1998. Algebraic geometry codes. Handbook of coding theory, 1 (Part 1), pp.871-961.
[4]
S. G. Vlăduţ, V. G. Drinfeld, “Number of points of an algebraic curve”, Funktsional. Anal. i Prilozhen., 17:1 (1983), 68–69; Funct. Anal. Appl., 17:1 (1983), 53–54
[5]
N. P. Breuckmann and J. N. Eberhardt, “Balanced Product Quantum Codes”, IEEE Transactions on Information Theory 67, 6653 (2021). DOI; 2012.09271
[6]
Y. Wu and S. Verdu, “The impact of constellation cardinality on Gaussian channel capacity”, 2010 48th Annual Allerton Conference on Communication, Control, and Computing (Allerton) (2010). DOI
[7]
C. E. Shannon, “A Mathematical Theory of Communication”, Bell System Technical Journal 27, 623 (1948). DOI
[8]
S. Guha and M. M. Wilde, “Polar coding to achieve the Holevo capacity of a pure-loss optical channel”, 2012 IEEE International Symposium on Information Theory Proceedings (2012). DOI; 1202.0533
[9]
F. J. MacWilliams and N. J. A. Sloane. The theory of error correcting codes. Elsevier, 1977.
[10]
V. Giovannetti et al., “Ultimate classical communication rates of quantum optical channels”, Nature Photonics 8, 796 (2014). DOI; 1312.6225
[11]
K. Banaszek et al., “Quantum Limits in Optical Communications”, Journal of Lightwave Technology 38, 2741 (2020). DOI; 2002.05766
[12]
M. M. Wolf, D. Pérez-García, and G. Giedke, “Quantum Capacities of Bosonic Channels”, Physical Review Letters 98, (2007). DOI; quant-ph/0606132
[13]
Ludovico Lami and Mark M. Wilde, “Exact solution for the quantum and private capacities of bosonic dephasing channels”. 2205.05736
[14]
J. Harrington and J. Preskill, “Achievable rates for the Gaussian quantum channel”, Physical Review A 64, (2001). DOI; quant-ph/0105058
[15]
K. Noh, V. V. Albert, and L. Jiang, “Quantum Capacity Bounds of Gaussian Thermal Loss Channels and Achievable Rates With Gottesman-Kitaev-Preskill Codes”, IEEE Transactions on Information Theory 65, 2563 (2019). DOI; 1801.07271
[16]
E. Dennis et al., “Topological quantum memory”, Journal of Mathematical Physics 43, 4452 (2002). DOI; quant-ph/0110143
[17]
A. S. Holevo, “Bounds for the Quantity of Information Transmitted by a Quantum Communication Channel”, Probl. Peredachi Inf., 9:3 (1973), 3–11; Problems Inform. Transmission, 9:3 (1973), 177–183
[18]
B. Schumacher and M. D. Westmoreland, “Sending classical information via noisy quantum channels”, Physical Review A 56, 131 (1997). DOI
[19]
A. S. Holevo, “The capacity of the quantum channel with general signal states”, IEEE Transactions on Information Theory 44, 269 (1998). DOI
[20]
M. Tomamichel and V. Y. F. Tan, “Second-Order Asymptotics for the Classical Capacity of Image-Additive Quantum Channels”, Communications in Mathematical Physics 338, 103 (2015). DOI; 1308.6503
[21]
C. T. Chubb, V. Y. F. Tan, and M. Tomamichel, “Moderate Deviation Analysis for Classical Communication over Quantum Channels”, Communications in Mathematical Physics 355, 1283 (2017). DOI; 1701.03114
[22]
X. Wang, K. Fang, and M. Tomamichel, “On Converse Bounds for Classical Communication Over Quantum Channels”, IEEE Transactions on Information Theory 65, 4609 (2019). DOI; 1709.05258
[23]
M. Mosonyi and T. Ogawa, “Strong Converse Exponent for Classical-Quantum Channel Coding”, Communications in Mathematical Physics 355, 373 (2017). DOI; 1409.3562
[24]
F. Lacerda, J. M. Renes, and V. B. Scholz, “Coherent-state constellations and polar codes for thermal Gaussian channels”, Physical Review A 95, (2017). DOI; 1603.05970
[25]
F. Lacerda, J. M. Renes, and V. B. Scholz, “Coherent state constellations for Bosonic Gaussian channels”, 2016 IEEE International Symposium on Information Theory (ISIT) (2016). DOI
[26]
N. Delfosse, “Tradeoffs for reliable quantum information storage in surface codes and color codes”, 2013 IEEE International Symposium on Information Theory (2013). DOI; 1301.6588
[27]
M. S. Winnel et al., “Achieving the ultimate end-to-end rates of lossy quantum communication networks”, npj Quantum Information 8, (2022). DOI; 2203.13924
[28]
L. Sari, “Effects of Puncturing Patterns on Punctured Convolutional Codes”, TELKOMNIKA (Telecommunication, Computing, Electronics and Control) 10, (2012). DOI
[29]
C. E. Shannon, “A Mathematical Theory of Communication”, Bell System Technical Journal 27, 379 (1948). DOI
[30]
V. Strassen, “Asymptotische Abschätzungen in Shannons Informationstheorie,” Trans. Third Prague Conference on Information Theory, Prague, 689–723, (1962)
[31]
M. Hayashi, “Information Spectrum Approach to Second-Order Coding Rate in Channel Coding”, IEEE Transactions on Information Theory 55, 4947 (2009). DOI; 0801.2242
[32]
Y. Polyanskiy, H. V. Poor, and S. Verdu, “Channel Coding Rate in the Finite Blocklength Regime”, IEEE Transactions on Information Theory 56, 2307 (2010). DOI
[33]
Yucel Altug and Aaron B. Wagner, “Moderate Deviations in Channel Coding”. 1208.1924
[34]
Y. Polyanskiy and S. Verdu, “Channel dispersion and moderate deviations limits for memoryless channels”, 2010 48th Annual Allerton Conference on Communication, Control, and Computing (Allerton) (2010). DOI
[35]
R. Gallager, Information Theory and Reliable Communication (Springer Vienna, 1972). DOI
[36]
I. Csiszár and J. Körner, Information Theory (Cambridge University Press, 2011). DOI
[37]
S. Arimoto, “On the converse to the coding theorem for discrete memoryless channels (Corresp.)”, IEEE Transactions on Information Theory 19, 357 (1973). DOI
[38]
G. Dueck and J. Korner, “Reliability function of a discrete memoryless channel at rates above capacity (Corresp.)”, IEEE Transactions on Information Theory 25, 82 (1979). DOI
[39]
Pavel Panteleev and Gleb Kalachev, “Asymptotically Good Quantum and Locally Testable Classical LDPC Codes”. 2111.03654
[40]
N. P. Breuckmann and J. N. Eberhardt, “Quantum Low-Density Parity-Check Codes”, PRX Quantum 2, (2021). DOI; 2103.06309
[41]
P. Panteleev and G. Kalachev, “Quantum LDPC Codes With Almost Linear Minimum Distance”, IEEE Transactions on Information Theory 68, 213 (2022). DOI; 2012.04068
[42]
S. Bravyi, D. Poulin, and B. Terhal, “Tradeoffs for Reliable Quantum Information Storage in 2D Systems”, Physical Review Letters 104, (2010). DOI; 0909.5200
[43]
E. Fetaya, “Bounding the distance of quantum surface codes”, Journal of Mathematical Physics 53, 062202 (2012). DOI
[44]
M. H. Freedman, D. A. Meyer, and F. Luo, “Z2-systolic freedom and quantum codes”, Mathematics of Quantum Computation 303 (2002). DOI
[45]
R. Urbanke and B. Rimoldi, “Lattice codes can achieve capacity on the AWGN channel”, IEEE Transactions on Information Theory 44, 273 (1998). DOI
[46]
J. Garcia-Frias and Wei Zhong, “Approaching Shannon performance by iterative decoding of linear codes with low-density generator matrix”, IEEE Communications Letters 7, 266 (2003). DOI
[47]
D. J. C. MacKay, “Good error-correcting codes based on very sparse matrices”, IEEE Transactions on Information Theory 45, 399 (1999). DOI
[48]
R. Gallager, “Low-density parity-check codes”, IEEE Transactions on Information Theory 8, 21 (1962). DOI
[49]
Venkatesan Guruswami, “Iterative Decoding of Low-Density Parity Check Codes (A Survey)”. cs/0610022
[50]
Shrinivas Kudekar, Tom Richardson, and Ruediger Urbanke, “Spatially Coupled Ensembles Universally Achieve Capacity under Belief Propagation”. 1201.2999
[51]
Jonathan Mosheiff et al., “LDPC Codes Achieve List Decoding Capacity”. 1909.06430
[52]
M. J. Gullans and D. A. Huse, “Dynamical Purification Phase Transition Induced by Quantum Measurements”, Physical Review X 10, (2020). DOI; 1905.05195
[53]
K. Sharma et al., “Bounding the energy-constrained quantum and private capacities of phase-insensitive bosonic Gaussian channels”, New Journal of Physics 20, 063025 (2018). DOI; 1708.07257
[54]
M. Rosati, A. Mari, and V. Giovannetti, “Narrow bounds for the quantum capacity of thermal attenuators”, Nature Communications 9, (2018). DOI; 1801.04731
[55]
Xue-Bin Liang, “Orthogonal designs with maximal rates”, IEEE Transactions on Information Theory 49, 2468 (2003). DOI
[56]
H. Landau, “How does a porcupine separate its quills?”, IEEE Transactions on Information Theory 17, 157 (1971). DOI
[57]
R. E. Blahut, Modem Theory (Cambridge University Press, 2009). DOI
[58]
E. Arikan, “Channel Polarization: A Method for Constructing Capacity-Achieving Codes for Symmetric Binary-Input Memoryless Channels”, IEEE Transactions on Information Theory 55, 3051 (2009). DOI
[59]
Shrinivas Kudekar et al., “Reed-Muller Codes Achieve Capacity on Erasure Channels”. 1601.04689
[60]
M. M. Wilde, Quantum Information Theory (Cambridge University Press, 2016). DOI; 1106.1445
[61]
M. M. Wilde and J. M. Renes, “Quantum polar codes for arbitrary channels”, 2012 IEEE International Symposium on Information Theory Proceedings (2012). DOI; 1201.2906
[62]
A. Barg and G. D. Forney, “Random codes: minimum distances and error exponents”, IEEE Transactions on Information Theory 48, 2568 (2002). DOI
[63]
A. Barg and A. Mazumdar, “Codes in permutations and error correction for rank modulation”, 2010 IEEE International Symposium on Information Theory (2010). DOI
[64]
S. Kudekar et al., “Reed–Muller Codes Achieve Capacity on Erasure Channels”, IEEE Transactions on Information Theory 63, 4298 (2017). DOI
[65]
Galen Reeves and Henry D. Pfister, “Reed-Muller Codes Achieve Capacity on BMS Channels”. 2110.14631
[66]
Emmanuel Abbe, Amir Shpilka, and Avi Wigderson, “Reed-Muller codes for random erasures and errors”. 1411.4590
[67]
Joshua Brakensiek, Sivakanth Gopi, and Visu Makam, “Generic Reed-Solomon codes achieve list-decoding capacity”. 2206.05256
[68]
C. E. Shannon, “Probability of Error for Optimal Codes in a Gaussian Channel”, Bell System Technical Journal 38, 611 (1959). DOI
[69]
J. H. Conway and N. J. A. Sloane, Sphere Packings, Lattices and Groups (Springer New York, 1999). DOI
[70]
D. Slepian, “Bounds on Communication”, Bell System Technical Journal 42, 681 (1963). DOI
[71]
M. A. Tsfasman, S. G. Vlăduţ, and T. Zink, “Modular curves, Shimura curves, and Goppa codes, better than Varshamov-Gilbert bound”, Mathematische Nachrichten 109, 21 (1982). DOI
[72]
Y. Ihara. "Some remarks on the number of rational points of algebraic curves over finite fields." J. Fac. Sci. Univ. Tokyo Sect. IA Math., 28:721-724 (1982),1981.
[73]
A. Leverrier, S. Apers, and C. Vuillot, “Quantum XYZ Product Codes”, Quantum 6, 766 (2022). DOI; 2011.09746