Here is a list of all codes that specify what rate they have.
Name Rate
Alamouti code The only OSTBC with unity rate.
Algebraic-geometry (AG) code Several sequences of linear AG codes beat the Gilbert-Varshamov bound and/or are asymptotically good [1][2] (see Ref. [3] for details). The rate of any linear AG code satisfies \begin{align} \frac{k}{n} \geq 1 - \frac{d}{n} - \frac{1}{\sqrt{q}-1}~, \end{align} which comes from the Drinfeld-Vladut bound [4]. Nonlinear AG codes can outperform this bound.
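As a numerical illustration of this bound, the following Python sketch evaluates \(1-\delta-1/(\sqrt{q}-1)\) for a few field sizes \(q\) and relative distances \(\delta=d/n\); the helper name is illustrative and not from any library.

    # Sketch: evaluate the AG lower bound R >= 1 - delta - 1/(sqrt(q) - 1),
    # with delta = d/n. Helper name is illustrative, not from any library.
    from math import sqrt

    def ag_rate_lower_bound(q: int, delta: float) -> float:
        """Lower bound on the rate of a linear AG code over GF(q)."""
        return 1.0 - delta - 1.0 / (sqrt(q) - 1.0)

    if __name__ == "__main__":
        for q in (49, 64, 256):          # field sizes where good towers are known
            for delta in (0.1, 0.3, 0.5):
                print(f"q={q:4d}  delta={delta:.1f}  R >= {ag_rate_lower_bound(q, delta):.3f}")
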
Bacon-Shor code A non-LDPC family of Bacon-Shor codes achieves a distance of \(\Omega(n^{1-\epsilon})\) with sparse gauge operators.
Balanced product code A notable family of balanced product codes encode \(k \in \Theta(n^{4/5})\) logical qubits with distance \(d \in \Omega(n^{3/5})\) for any number of physical qubits \(n\). Additionally, it is known that the code constructed from the balanced product of two good classical LDPC codes over groups of order \(\Theta(n)\) has a constant encoding rate [5].
Binary Varshamov-Tenengolts (VT) code Rate-\(1\) code, with \(\log_2(n+1)\) bits of redundancy when \(a=0\). Nearly optimal.
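For concreteness, the brute-force sketch below enumerates the \(a=0\) code of length \(n\) using the standard VT construction \(\sum_i i\,x_i \equiv 0 \pmod{n+1}\) (not spelled out in the entry) and compares the resulting redundancy to \(\log_2(n+1)\); it is only meant for small \(n\).

    # Brute-force sketch: size and redundancy of the binary VT code VT_0(n),
    # defined by sum_i i*x_i = 0 (mod n+1) over length-n binary strings.
    from itertools import product
    from math import log2

    def vt0_size(n: int) -> int:
        return sum(1 for x in product((0, 1), repeat=n)
                   if sum(i * xi for i, xi in enumerate(x, start=1)) % (n + 1) == 0)

    if __name__ == "__main__":
        for n in range(4, 13):
            size = vt0_size(n)
            redundancy = n - log2(size)      # bits of redundancy
            print(f"n={n:2d}  |VT_0|={size:4d}  redundancy={redundancy:.2f}  log2(n+1)={log2(n+1):.2f}")
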
Bose–Chaudhuri–Hocquenghem (BCH) code Primitive BCH codes are asymptotically bad [6; pg. 269].
Calderbank-Shor-Steane (CSS) stabilizer code For a depolarizing channel with probability \(p\), CSS codes allowing for arbitrarily accurate recovery exist with asymptotic rate \(1-2h(p)\), where \(h\) is the binary entropy function [7].
Cartier code Cartier codes have asymptotic properties similar to those of subfield subcodes of residue AG codes; both families admit sequences of codes that achieve the Gilbert-Varshamov bound.
Chuang-Leung-Yamamoto code Code rate is \(\frac{k}{n \log_2(N+1)}\). To correct the loss of up to \(t\) excitations with \(K+1\) codewords, a code exists with scaling \(N \sim t^3 K/2\).
Color code For general 2D manifolds, \(kd^2 \leq c(\log k)^2 n\) for some constant \(c\) [8], meaning that color codes with finite rate can only achieve an asymptotic minimum distance that is logarithmic in \(n\).
Constant-excitation (CE) code Fock-state CE codes can be used in a protocol that achieves the two-way quantum capacity of the pure-loss Gaussian channel [9].
Convolutional code Depends on the polynomials used. Using puncturing [10], the rate can be increased from \(\frac{1}{t}\) to \(\frac{s}{t}\), where \(t\) is the number of output bits and \(s\) depends on the puncturing pattern. Puncturing deletes some of the encoder output bits in a way that keeps maximum-likelihood decoding effective.
Dinur-Hsieh-Lin-Vidick (DHLV) code Asymptotically good QLDPC codes.
Expander code The rate is \(1 - m/n\) where \(n\) is the number of left nodes and \(m\) is the number of right nodes in the bipartite expander graph.
Expander lifted-product code Expander lifted-product codes include the first examples [11] of (asymptotically) good QLDPC codes, i.e., codes with asymptotically constant rate and linear distance. The existence of such codes proves the QLDPC conjecture [12]. Another notable family encodes \(k \in \Theta(n^\alpha \log n)\) logical qubits with distance \(d \in \Omega(n^{1 - \alpha} / \log n)\) for any number of physical qubits \(n\) and any real parameter \(0 \leq \alpha < 1\) [13].
Fiber-bundle code Rate \(k/n = \Omega(n^{-2/5}/\text{polylog}(n))\), distance \(d=\Omega(n^{3/5}/\text{polylog}(n))\). This is the first QLDPC code to achieve a distance scaling better than \(\sqrt{n}~\text{polylog}(n)\).
Fountain code Random linear fountain codes approach the Shannon limit as the file size \(K\) increases. However, they do not have a fixed encoding rate.
Freedman-Meyer-Luo code These codes held a 20-year record for the best lower bound on the asymptotic scaling of the minimum code distance, \(d=\Omega(\sqrt{n \sqrt{\log n}})\), until it was broken by Ramanujan tensor-product codes.
H code The H codes are dense, i.e., the rate \(\frac{k}{k+4}\rightarrow 1\) as \(k \rightarrow \infty\). The distance is 2. However, an \(r\)-level concatenation of H codes gives a distance of \(2^r\).
Haar-random code The rate of the code is equal to the coherent information of the channel (i.e. the quantum channel capacity).
Hamming code Asymptotic rate \(k/n = 1-\frac{\log n}{n} \to 1\) and normalized distance \(d/n \to 0\).
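As a quick check of these asymptotics, the sketch below uses the standard binary Hamming parameters \([2^r-1,\,2^r-1-r]\) (an added assumption beyond what the entry states) and prints the rates, which tend to 1.

    # Sketch: rates of binary Hamming codes with [n, k] = [2^r - 1, 2^r - 1 - r].
    for r in range(3, 11):
        n = 2**r - 1
        k = n - r
        print(f"r={r:2d}  [n,k]=[{n},{k}]  rate={k/n:.4f}")
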
Heavy-hexagon code \(1/n\) for a distance-\(d\) heavy-hexagon code on \(n = (5d^2-2d-1)/2\) qubits.
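Assuming the qubit-count formula quoted above, the short sketch below tabulates \(n=(5d^2-2d-1)/2\) and the corresponding rate \(1/n\) for small distances.

    # Sketch: physical-qubit count and rate of distance-d heavy-hexagon codes,
    # using n = (5d^2 - 2d - 1)/2 from the entry above.
    for d in (3, 5, 7, 9, 11):
        n = (5 * d**2 - 2 * d - 1) // 2
        print(f"d={d:2d}  n={n:4d}  rate=1/{n} ~ {1/n:.4f}")
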
Hermitian code For polynomial evaluations up to degree \(D\): if \(D < q + 1 \), \(k = \frac{(D+1)(D+2)}{2}\), and if \(D \geq q + 1 \), \(k = (q+1)D - \frac{q(q-1)}{2} + 1 \).
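As a sketch of the piecewise formula above, the snippet below computes \(k\) for a few degrees \(D\); the length \(n=q^3\) used for the printed rate is an added assumption (the number of affine rational points of the Hermitian curve), not stated in the entry.

    # Sketch: dimension k of a Hermitian code for evaluation degree D, per the
    # piecewise formula in the entry above. The length n = q^3 is an added
    # assumption, used only to print an approximate rate.
    def hermitian_k(D: int, q: int) -> int:
        if D < q + 1:
            return (D + 1) * (D + 2) // 2
        return (q + 1) * D - q * (q - 1) // 2 + 1

    if __name__ == "__main__":
        q = 4                      # code defined over GF(q^2) = GF(16)
        n = q**3                   # assumed length
        for D in (3, 5, 8, 12):
            k = hermitian_k(D, q)
            print(f"D={D:2d}  k={k:3d}  rate~{k/n:.3f}")
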
Justesen code The first asymptotically good codes. The rate is \(R_m=k/n=K/2N\geq R\) and the relative minimum distance satisfies \(\delta_m=d_m/n\geq 0.11(1-2R)\), where \(K=\left\lceil 2NR\right\rceil\), for asymptotic rate \(0<R<1/2\); see pg. 311 of Ref. [6]. A numerical illustration follows this entry.

The codes can be improved, and the range of \(R\) extended from 0 to 1, by puncturing, i.e., by erasing \(s\) digits from each inner codeword. This yields a code of length \(n=(2m-s)N\) and rate \(R=mk/(2m-s)N\). The lower bound on the distance of the punctured code approaches \(d_m/n=(1-R/r)h^{-1}(1-r)\) as \(m\) goes to infinity, where \(r\) is the maximum of 1/2 and the solution to \(R=r^2/(1+\log(1-h^{-1}(1-r)))\), and \(h\) is the binary entropy function.
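As a numerical illustration of the basic (unpunctured) bound quoted above, the following sketch evaluates the guaranteed relative distance \(0.11(1-2R)\) for several design rates \(R<1/2\).

    # Sketch: Justesen lower bound on relative distance, delta >= 0.11 * (1 - 2R),
    # valid for design rates 0 < R < 1/2 (unpunctured construction).
    for R in (0.05, 0.15, 0.25, 0.35, 0.45):
        delta = 0.11 * (1 - 2 * R)
        print(f"R={R:.2f}  delta >= {delta:.4f}")
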

Kitaev surface code Rate depends on the underlying cellulation and manifold. For general 2D manifolds, \(kd^2\leq c(\log k)^2 n\) for some constant \(c\) [8], meaning that (1) 2D surface codes with bounded geometry have distance scaling at most as \(O(\sqrt{n})\) [14][15], and (2) surface codes with finite rate can only achieve an asymptotic minimum distance that is logarithmic in \(n\). Higher-dimensional hyperbolic manifolds (see code children below) yield distances scaling more favorably. Loewner's theorem provides an upper bound for any bounded-geometry surface code [16].
Lifted-product (LP) code There is no known simple way to compute the logical dimension \(k\) in the general case [13].
Linear binary code A family of linear codes \(C_i = [n_i,k_i,d_i]\) is asymptotically good if the asymptotic rate \(\lim_{i\to\infty} k_i/n_i\) and asymptotic distance \(\lim_{i\to\infty} d_i/n_i\) are both positive.
Locally recoverable code (LRC) The rate of an \((n,k,r)\) LRC with locality \(r\) satisfies \begin{align} \frac{k}{n}\leq\frac{r}{r+1}~. \end{align}
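The sketch below simply evaluates the upper bound \(r/(r+1)\) for a few localities \(r\), i.e., the largest rate any \((n,k,r)\) LRC can have.

    # Sketch: maximum possible rate r/(r+1) of an (n, k, r) locally recoverable code.
    for r in (1, 2, 4, 8, 16):
        print(f"locality r={r:2d}  max rate = {r/(r+1):.3f}")
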
Low-density generator-matrix (LDGM) code Certain LDGM codes come close to achieving Shannon capacity [17].
Low-density parity-check (LDPC) code Achieve capacity on the binary symmetric channel under maximum-likelihood decoding [18][19][20]. Some LDPC codes achieve capacity for smaller block lengths under belief-propagation decoding [21]. Random LDPC codes achieve list-decoding capacity [22].
Monitored random-circuit code Rate can be finite [23], depending on the family of random codes generated by the circuit.
Movassagh-Ouyang Hamiltonian code The rate depends on the classical code, but distance can scale linearly with \(n\).
Multi-mode GKP code Transmission schemes with multimode GKP codes achieve, up to a constant-factor offset, the capacity of displacement-noise and thermal-noise Gaussian loss channels [24][25][26][27].
Nonlinear AG code Certain nonlinear code sequences beat the Tsfasman-Vladut-Zink bound, outperforming linear AG codes.
Orthogonal Spacetime Block Code (OSTBC) The greatest rate that can be achieved is \(\frac{n_0+1}{2n_0}\), where either \(n=2n_0\) or \(n=2n_0-1\) [28].
Parity-check code The code rate is \(\frac{n}{n+1}\to 1\) as \(n\to\infty\).
Pastawski-Yoshida-Harlow-Preskill (HaPPY) code The pentagon HaPPY code has an asymptotic rate \(\frac{1}{\sqrt{5}} \approx 0.447\). The pentagon/hexagon HaPPY code, with alternating layers of pentagons and hexagons in the tiling, has a rate of \(0.299\) if the last layer is a pentagon layer and a rate of \(0.088\) if the last layer is a hexagon layer.
Perfect quantum code \(k/n\to 1\) asymptotically with \(n\).
Polar code Supports reliable transmission at rates \(K/N\) approaching the Shannon capacity of the channel.
Projective-plane surface code The rate is \(1/n\), where \(n\) is the number of edges of the particular cellulation.
Quantum Reed-Muller code \(\frac{k}{n}\), where \(k = 2^r - {r \choose t} + 2 \sum_{i=0}^{t-1} {r \choose i}\). Additionally, CSS codes formed from binary Reed-Muller codes achieve channel capacity on erasure channels [29].
Quantum Tanner code Asymptotically good QLDPC codes.
Quantum expander code \([[n,k=\Theta(n),d=O(\sqrt{n})]]\) code with asymptotically constant rate.
Quantum polar code The rate approaches the symmetric coherent information of the quantum noise channel [30].
Ramanujan-complex product code For 2D Ramanujan complexes, the rate is \(\Omega(\sqrt{ \frac{1}{n \log n} })\), with minimum distance \(d = \Omega(\sqrt{n \log n}) \). For 3D, the rate is \( \Omega(\frac{1}{\sqrt{n}\log n}) \) with minimum distance \(d \geq \sqrt{n} \log n \).
Random code Typical random codes (TRC) or typical random linear codes (TLC) refer to codes in the respective ensemble that satisfy a certain minimum distance. The relative fraction of typical codes in the ensemble approaches one as \(n\) goes to infinity [31] (see also Ref. [32]). Asymptotically, given relative distance \(\delta=d/n\), the maximum rate for a TRC is \(R=\frac{1}{2}R_{GV}(\delta)\), where \(R_{GV}(\delta)=1-h(\delta)\) is the Gilbert-Varshamov (GV) bound and \(h\) is the binary entropy function. The maximum rate for a TLC is \(R=R_{GV}(\delta)\), meaning that TLCs achieve the asymptotic GV bound.
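The following sketch compares the two asymptotic rates quoted above, \(R_{GV}(\delta)=1-h(\delta)\) for typical random linear codes and \(\frac{1}{2}R_{GV}(\delta)\) for typical random codes, using the binary entropy function.

    # Sketch: asymptotic rates of typical random (TRC) and typical random linear
    # (TLC) codes as a function of relative distance delta, per the entry above.
    from math import log2

    def h(p: float) -> float:
        """Binary entropy function."""
        if p in (0.0, 1.0):
            return 0.0
        return -p * log2(p) - (1 - p) * log2(1 - p)

    for delta in (0.05, 0.1, 0.2, 0.3, 0.4):
        r_gv = 1 - h(delta)                 # TLC rate (GV bound)
        print(f"delta={delta:.2f}  TLC rate={r_gv:.3f}  TRC rate={r_gv/2:.3f}")
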
Rank-modulation code Rank modulation codes with code distance \(d=\Theta(n^{1+\epsilon})\) for \(\epsilon\in[0,1]\) achieve a rate of \(1-\epsilon\) [33].
Reed-Muller (RM) code Achieves capacity of the binary erasure channel [34] and of the binary memoryless symmetric (BMS) channel under bitwise maximum-a-posteriori decoding [35].
Reed-Solomon (RS) code Generic Reed-Solomon codes achieve list-decoding capacity [36].
Repetition code Code rate is \(\frac{1}{n}\), code distance is \(n\).
Tanner code For a short code \(C_0\) with rate \(R_0\), the Tanner code has rate \(R \geq 2R_0-1\). If \(C_0\) satisfies the Gilbert-Varshamov bound, then \(R \geq 1-2h(\delta_0)\), where \(\delta_0\) is the relative distance of \(C_0\) and \(h\) is the binary entropy function.
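A small sketch of the bound above: for a short code \(C_0\) on the GV bound with relative distance \(\delta_0\), the Tanner-code rate is at least \(2R_0-1=1-2h(\delta_0)\).

    # Sketch: Tanner-code rate lower bound R >= 2*R0 - 1, with R0 = 1 - h(delta0)
    # when the short code C_0 lies on the Gilbert-Varshamov bound.
    from math import log2

    def h(p: float) -> float:
        return 0.0 if p in (0.0, 1.0) else -p * log2(p) - (1 - p) * log2(1 - p)

    for delta0 in (0.02, 0.05, 0.08, 0.11):
        R0 = 1 - h(delta0)
        print(f"delta0={delta0:.2f}  R0={R0:.3f}  Tanner rate >= {2 * R0 - 1:.3f}")
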
Tensor-product code Rate of the tensor-product code \(C_A \otimes C_B\) is a product of the rates of the codes \(C_A\) and \(C_B\).
Tornado code Come arbitrarily close to the capacity of the binary erasure channel.
Tsfasman-Vladut-Zink (TVZ) code TVZ codes exceed the asymptotic Gilbert-Varshamov (GV) bound [37] (see also Ref. [38]). Roughly speaking, this breakthrough result implies that AG codes can outperform random codes. Such families of codes are optimal.
Two-dimensional hyperbolic surface code Two-dimensional hyperbolic surface codes have an asymptotically constant encoding rate \( k/n \) and a distance scaling logarithmically with \( n \) when the surface is closed. The encoding rate depends on the tiling \( \{r,s\} \) and is given by \( k/n = (1-2/r - 2/s) + 2/n \), which approaches a constant value as the number of physical qubits grows. The weight of the stabilizers is \( r \) for \( Z \)-checks and \( s \) for \( X \)-checks. For open boundary conditions, the distance is reduced to a constant.
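The sketch below evaluates the rate formula \(k/n=(1-2/r-2/s)+2/n\) for a few hyperbolic \(\{r,s\}\) tilings (those with \(1/r+1/s<1/2\)) at representative block lengths.

    # Sketch: encoding rate of a closed {r,s} hyperbolic surface code,
    # k/n = (1 - 2/r - 2/s) + 2/n, per the entry above.
    for (r, s) in ((5, 4), (7, 3), (5, 5)):
        assert 1 / r + 1 / s < 1 / 2, "tiling must be hyperbolic"
        for n in (60, 600, 6000):
            rate = (1 - 2 / r - 2 / s) + 2 / n
            print(f"{{{r},{s}}} tiling, n={n:5d}: k/n ~ {rate:.4f}")
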
XYZ product code Not much has been proven about the relationship between XYZ-product codes and other codes. The logical dimension depends on properties of the input classical codes, specifically similarity invariants from abstract algebra. It is conjectured that specific instances of XYZ-product codes have a constant encoding rate and a minimum distance of \(d \in \Theta(n^{2/3})\) [39].
Zetterberg code The rate is given by \(1-\frac{4s}{n}\), which approaches 1 as \(n\) grows, with a minimum distance of 5.

References

[1]
A. Garcia and H. Stichtenoth, “A tower of Artin-Schreier extensions of function fields attaining the Drinfeld-Vladut bound”, Inventiones Mathematicae 121, 211 (1995). DOI
[2]
A. Garcia and H. Stichtenoth, “On the Asymptotic Behaviour of Some Towers of Function Fields over Finite Fields”, Journal of Number Theory 61, 248 (1996). DOI
[3]
T. Høholdt, J.H. Van Lint, and R. Pellikaan, 1998. Algebraic geometry codes. Handbook of coding theory, 1 (Part 1), pp.871-961.
[4]
S. G. Vlăduţ, V. G. Drinfeld, “Number of points of an algebraic curve”, Funktsional. Anal. i Prilozhen., 17:1 (1983), 68–69; Funct. Anal. Appl., 17:1 (1983), 53–54
[5]
N. P. Breuckmann and J. N. Eberhardt, “Balanced Product Quantum Codes”, IEEE Transactions on Information Theory 67, 6653 (2021). DOI; 2012.09271
[6]
F. J. MacWilliams and N. J. A. Sloane. The theory of error correcting codes. Elsevier, 1977.
[7]
E. Dennis et al., “Topological quantum memory”, Journal of Mathematical Physics 43, 4452 (2002). DOI; quant-ph/0110143
[8]
N. Delfosse, “Tradeoffs for reliable quantum information storage in surface codes and color codes”, 2013 IEEE International Symposium on Information Theory (2013). DOI; 1301.6588
[9]
Matthew S. Winnel et al., “Achieving the ultimate end-to-end rates of lossy quantum communication networks”. 2203.13924
[10]
L. Sari, “Effects of Puncturing Patterns on Punctured Convolutional Codes”, TELKOMNIKA (Telecommunication, Computing, Electronics and Control) 10, (2012). DOI
[11]
Pavel Panteleev and Gleb Kalachev, “Asymptotically Good Quantum and Locally Testable Classical LDPC Codes”. 2111.03654
[12]
N. P. Breuckmann and J. N. Eberhardt, “Quantum Low-Density Parity-Check Codes”, PRX Quantum 2, (2021). DOI; 2103.06309
[13]
P. Panteleev and G. Kalachev, “Quantum LDPC Codes With Almost Linear Minimum Distance”, IEEE Transactions on Information Theory 68, 213 (2022). DOI; 2012.04068
[14]
S. Bravyi, D. Poulin, and B. Terhal, “Tradeoffs for Reliable Quantum Information Storage in 2D Systems”, Physical Review Letters 104, (2010). DOI; 0909.5200
[15]
E. Fetaya, “Bounding the distance of quantum surface codes”, Journal of Mathematical Physics 53, 062202 (2012). DOI
[16]
M. H. Freedman, D. A. Meyer, and F. Luo, “Z2-systolic freedom and quantum codes”, Mathematics of Quantum Computation 303 (2002). DOI
[17]
J. Garcia-Frias and Wei Zhong, “Approaching Shannon performance by iterative decoding of linear codes with low-density generator matrix”, IEEE Communications Letters 7, 266 (2003). DOI
[18]
D. J. C. MacKay, “Good error-correcting codes based on very sparse matrices”, IEEE Transactions on Information Theory 45, 399 (1999). DOI
[19]
R. Gallager, “Low-density parity-check codes”, IEEE Transactions on Information Theory 8, 21 (1962). DOI
[20]
Venkatesan Guruswami, “Iterative Decoding of Low-Density Parity Check Codes (A Survey)”. cs/0610022
[21]
Shrinivas Kudekar, Tom Richardson, and Ruediger Urbanke, “Spatially Coupled Ensembles Universally Achieve Capacity under Belief Propagation”. 1201.2999
[22]
Jonathan Mosheiff et al., “LDPC Codes Achieve List Decoding Capacity”. 1909.06430
[23]
M. J. Gullans and D. A. Huse, “Dynamical Purification Phase Transition Induced by Quantum Measurements”, Physical Review X 10, (2020). DOI; 1905.05195
[24]
J. Harrington and J. Preskill, “Achievable rates for the Gaussian quantum channel”, Physical Review A 64, (2001). DOI; quant-ph/0105058
[25]
K. Sharma et al., “Bounding the energy-constrained quantum and private capacities of phase-insensitive bosonic Gaussian channels”, New Journal of Physics 20, 063025 (2018). DOI; 1708.07257
[26]
M. Rosati, A. Mari, and V. Giovannetti, “Narrow bounds for the quantum capacity of thermal attenuators”, Nature Communications 9, (2018). DOI; 1801.04731
[27]
K. Noh, V. V. Albert, and L. Jiang, “Quantum Capacity Bounds of Gaussian Thermal Loss Channels and Achievable Rates With Gottesman-Kitaev-Preskill Codes”, IEEE Transactions on Information Theory 65, 2563 (2019). DOI; 1801.07271
[28]
Xue-Bin Liang, “Orthogonal designs with maximal rates”, IEEE Transactions on Information Theory 49, 2468 (2003). DOI
[29]
Shrinivas Kudekar et al., “Reed-Muller Codes Achieve Capacity on Erasure Channels”. 1601.04689
[30]
M. M. Wilde and J. M. Renes, “Quantum polar codes for arbitrary channels”, 2012 IEEE International Symposium on Information Theory Proceedings (2012). DOI; 1201.2906
[31]
C. E. Shannon, “A Mathematical Theory of Communication”, Bell System Technical Journal 27, 379 (1948). DOI
[32]
A. Barg and G. D. Forney, “Random codes: minimum distances and error exponents”, IEEE Transactions on Information Theory 48, 2568 (2002). DOI
[33]
A. Barg and A. Mazumdar, “Codes in permutations and error correction for rank modulation”, 2010 IEEE International Symposium on Information Theory (2010). DOI
[34]
S. Kudekar et al., “Reed–Muller Codes Achieve Capacity on Erasure Channels”, IEEE Transactions on Information Theory 63, 4298 (2017). DOI
[35]
Galen Reeves and Henry D. Pfister, “Reed-Muller Codes Achieve Capacity on BMS Channels”. 2110.14631
[36]
Joshua Brakensiek, Sivakanth Gopi, and Visu Makam, “Generic Reed-Solomon codes achieve list-decoding capacity”. 2206.05256
[37]
M. A. Tsfasman, S. G. Vlăduţ, and T. Zink, “Modular curves, Shimura curves, and Goppa codes, better than Varshamov-Gilbert bound”, Mathematische Nachrichten 109, 21 (1982). DOI
[38]
Y. Ihara. "Some remarks on the number of rational points of algebraic curves over finite fields." J. Fac. Sci. Univ. Tokyo Sect. IA Math., 28:721-724 (1982),1981.
[39]
A. Leverrier, S. Apers, and C. Vuillot, “Quantum XYZ Product Codes”, Quantum 6, 766 (2022). DOI; 2011.09746