Welcome to the Binary Kingdom.

Binary code
Encodes \(K\) states (codewords) in \(n\) binary coordinates and has distance \(d\). Usually denoted as \((n,K,d)\). The distance is the minimum Hamming distance between a pair of distinct codewords.
Protection: A binary code \(C\) corrects \(t\) errors if
\begin{align}
\forall x \in C~,~D(x,x+y) < D(x' , x+y)
\end{align}
for all codewords \(x' \neq x\) and all \(y\) such that \(|y|=t\), where \(D\) is the Hamming distance and \(|y| = D(y,0) \). A code corrects \(t\) errors if and only if \(d \geq 2t+1\), i.e., a code corrects errors on \(t \leq \left\lfloor (d-1)/2 \right\rfloor\) coordinates. In addition, a code detects errors on up to \(d-1\) coordinates, and corrects erasure errors on up to \(d-1\) coordinates.
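The distance and correction radius above can be computed by brute force; a minimal sketch over a toy codebook (the codewords below are arbitrary, chosen only for illustration):

```python
from itertools import combinations

def hamming(x, y):
    """Hamming distance: number of coordinates where x and y differ."""
    return sum(a != b for a, b in zip(x, y))

def distance(code):
    """Minimum Hamming distance over all pairs of distinct codewords."""
    return min(hamming(x, y) for x, y in combinations(code, 2))

# A toy (5, 4, d) codebook: 4 codewords in 5 binary coordinates
code = [(0, 0, 0, 0, 0), (1, 1, 1, 0, 0), (0, 0, 1, 1, 1), (1, 1, 0, 1, 1)]
d = distance(code)
t = (d - 1) // 2   # number of correctable errors
print(d, t)        # d = 3, so t = 1 error is correctable
```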
Parents:
Error-correcting code (ECC).
Parent of:
Convolutional code, Fountain code, Levenshtein code, Linear binary code.
Cousin of:
Group-based code, Movassagh-Ouyang Hamiltonian code.

Convolutional code[1]
Classical codes formed using generator polynomials over the finite field with two elements. The encoder slides across contiguous windows of the input bit string (like a convolutional neural network), evaluating the polynomials on each window to obtain a number of parity bits. These parity bits form the encoded information. There are many ways to formulate these codes.
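A sketch of the sliding-window evaluation for an assumed rate-1/2 encoder with the familiar constraint-length-3 generators \(1+x+x^2\) and \(1+x^2\) (the generator choice is an illustration, not fixed by the entry):

```python
def conv_encode(bits, g1=(1, 1, 1), g2=(1, 0, 1)):
    """Rate-1/2 convolutional encoder: slide a 3-bit window over the
    input and evaluate each generator polynomial on that window."""
    state = [0, 0]  # the two most recent input bits
    out = []
    for b in bits:
        window = [b] + state  # [u_i, u_{i-1}, u_{i-2}]
        out.append(sum(g * w for g, w in zip(g1, window)) % 2)
        out.append(sum(g * w for g, w in zip(g2, window)) % 2)
        state = [b, state[0]]
    return out

print(conv_encode([1, 0, 1, 1]))  # [1, 1, 1, 0, 0, 0, 0, 1]
```

Each input bit yields two parity bits, so the output is twice as long as the input.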
Parents:
Binary code.
Cousins:
Quantum convolutional code, Reed-Solomon (RS) code.

Fountain code[2]
Code based on the idea of generating an endless stream of custom encoded packets for the receiver. The code is designed so that the receiver can recover the original transmission of size \(Kl\) bits after receiving at least \(K\) packets each of \(l\) bits.
Protection: Designed to protect against erasures during broadcasting of information by a sender to multiple receivers.
Parents:
Binary code.
Parent of:
Raptor (RAPid TORnado) code.
Cousins:
Random code.
Cousin of:
Tornado code.

Levenshtein code[3]
Binary codes constructed by combining two codes built out of Hadamard matrices.
Protection: Levenshtein codes meet the Plotkin bound \(K\leq 2\left\lfloor\frac{d}{2d-n}\right\rfloor\), where \(K\) is the number of codewords, \(d\) is the distance, and \(n\) is the length, provided that Hadamard matrices of the required orders exist. The general proof depends on the correctness of Hadamard's conjecture [4].
Parents:
Binary code.

Linear binary code
An \((n,2^k,d)\) linear code is denoted as \([n,k]\) or \([n,k,d]\), where \(d\) is the code's distance. Its codewords form a linear subspace, i.e., for any codewords \(x,y\), \(x+y\) is also a codeword. A code that is not linear is called nonlinear.
Protection: The distance \(d\) of a linear code is the minimum Hamming weight of a nonzero codeword. A linear code corrects any error set for which no two elements of the set add up to a codeword.
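By linearity, the distance can be found as the minimum weight over nonzero codewords rather than over all pairs; a brute-force sketch using an arbitrary small generator matrix:

```python
from itertools import product

def min_weight_distance(G):
    """Distance of a linear binary code = minimum Hamming weight over
    all nonzero codewords, enumerated from the generator matrix G."""
    k, n = len(G), len(G[0])
    best = n
    for msg in product([0, 1], repeat=k):
        if not any(msg):
            continue  # skip the zero codeword
        word = [sum(m * g for m, g in zip(msg, col)) % 2
                for col in zip(*G)]
        best = min(best, sum(word))
    return best

# An arbitrary [4,2] code with generator rows 1011 and 0101
G = [[1, 0, 1, 1], [0, 1, 0, 1]]
print(min_weight_distance(G))  # 2
```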
Parents:
Binary code.
Parent of:
Binary Golay code, Binary repetition code, Graph homology code, Hadamard code, Justesen code, Parity-check code, Polar code, Reed-Muller (RM) code, Tanner code, Tornado code, Zetterberg code.
Cousin of:
Calderbank-Shor-Steane (CSS) stabilizer code, Entanglement-assisted (EA) QECC, Entanglement-assisted (EA) stabilizer code, Fock-state bosonic code, Lattice-based code, Low-density parity-check (LDPC) code, Qubit stabilizer code.

Justesen code[5]
Binary code resulting from generalized concatenation of a Reed-Solomon (RS) outer code with multiple inner codes sampled from the Wozencraft ensemble, i.e., \(N\) distinct binary inner codes of dimension \(m\) and length \(2m\). Justesen codes are parameterized by \(m\), with length \(n=2mN\) and dimension \(k=mK\), where \((N=2^m-1,K)\) is the RS outer code over \(GF(2^m)\).
Parents:
Linear binary code, Generalized concatenated code.
Cousins:
Reed-Solomon (RS) code, Wozencraft ensemble code, Random code.

Polar code[6]
In its basic version, a binary linear polar code encodes \(K\) message bits into \(N=2^n\) bits. The linear transformation that defines the code is given by the matrix \(G^{(n)}=B_N G^{\otimes n}\), where \(B_N\) is a certain \(N\times N\) permutation matrix, and \(G^{\otimes n}\) is the \(n\)th Kronecker power of the \(2\times 2\) kernel matrix \(G=\left[\begin{smallmatrix}1 & 0\\ 1 & 1 \end{smallmatrix}\right]\). To encode \(K\) message bits, one forms an \(N\)-vector \(u\) in which \(K\) coordinates carry the message bits. The remaining \(N-K\) coordinates are set to fixed values and are said to be frozen. The codeword \(x \in \{0,1\}^N\) is obtained as \(x=u G^{(n)}\).
Protection: Protects against various types of noise in the communication channel, for instance bit errors or erasures. Distance plays no role in the analysis of its properties and is much lower than the largest possible value for the given \(K,N\).
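A sketch of the encoding map using \(G^{\otimes n}\) directly (the permutation \(B_N\) is omitted, and the frozen set here is picked arbitrarily rather than by channel polarization):

```python
def kron(A, B):
    """Kronecker product of two 0/1 matrices (pure Python)."""
    return [[A[i][j] * B[r][c]
             for j in range(len(A[0])) for c in range(len(B[0]))]
            for i in range(len(A)) for r in range(len(B))]

G = [[1, 0], [1, 1]]        # the 2x2 polar kernel
n = 3                       # N = 2^n = 8
M = [[1]]
for _ in range(n):
    M = kron(M, G)          # builds G^{\otimes n}

N = 2 ** n
frozen = {0, 1, 2, 4}       # arbitrary frozen set, for illustration only
message = [1, 0, 1, 1]      # K = 4 message bits

u = [0] * N                 # frozen coordinates stay fixed at 0
for pos, bit in zip([i for i in range(N) if i not in frozen], message):
    u[pos] = bit

x = [sum(u[i] * M[i][j] for i in range(N)) % 2 for j in range(N)]
print(x)                    # codeword of length N = 8
```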
Parents:
Linear binary code, Generalized concatenated code.
Cousins:
Reed-Muller (RM) code.
Cousin of:
Quantum polar code.

Tanner code[7]
Binary linear code defined on the edges of a regular graph \(G\) such that the subsequence of bits corresponding to the edges in the neighborhood of any vertex belongs to some short binary linear code \(C_0\). Expansion properties of the underlying graph can yield efficient decoding algorithms.
Protection: Tanner codes protect against noise on classical bit strings. If \(C_0\) is a \([d, d-t,d'> d(\gamma_0 +\frac{\lambda}{d})]_2\) code and \(G\) is an \((N, M, 2, d, \rho,\alpha)\)-expander with \(\rho = \gamma_0 (\gamma_0 +\frac{\lambda}{d})\), then the Tanner code \(T(G, C_0)\) has rate \(1-\frac{M}{N}t\) and relative distance at least \(\gamma_0(\gamma_0+\frac{\lambda}{d})\).
Parents:
Linear binary code, Parallel concatenated code.
Parent of:
Expander code.
Cousin of:
Expander lifted-product code, Quantum Tanner code.

Raptor (RAPid TORnado) code[8][9]
Raptor codes are concatenated erasure codes with two layers: an outer pre-code and a Luby-Transform (LT) inner code.
The pre-code is a linear binary erasure code, which is applied first to the input to create some redundant data.
The LT code is then applied to the intermediate symbols from the pre-code to generate final output symbols to be transmitted.
Protection: As a type of fountain code, a Raptor code is designed to correct erasures. The error probability of Raptor codes is measured in terms of its overhead, which is how many additional symbols are received above the dimension of the input \(k\). This relationship can vary widely depending on the input pre-code and degree distribution. For a well-designed degree distribution, the error probability of a Raptor code is directly related to the error probability of the pre-code's decoder. In other words, if there is a linear time decoder for the pre-code that has subexponentially small error probability, then the Raptor code's error probability will decrease exponentially with increasing overhead (past the \(n-k\) overhead symbols necessary for the pre-code).
Parents:
Fountain code.
Parent of:
Luby transform (LT) code.
Cousins:
Tornado code.

Binary Golay code[10]
A \([23, 12, 7]\) perfect binary linear code with connections to the theory of sporadic simple groups. Adding a parity bit to the code results in the \([24, 12, 8]\) extended Golay code. The codespace is a 12-dimensional linear subspace of \(GF(2)^{23}\), or \(GF(2)^{24}\) in the extended case.
Protection: The binary Golay code has distance 7 and can correct up to \(\frac{7-1}{2}=3\) errors.
Parents:
Linear binary code, Perfect code, Quadratic-residue code.

Binary repetition code[11]
\([n,1,n]\) binary linear code encoding one bit of information into an \(n\)-bit string. The length \(n\) needs to be an odd number, since the receiver recovers the information by majority vote. The idea is to increase the code distance by repeating the logical information several times. It is the dual of the \([n,n-1,2]\) single parity-check code.
Protection: Corrects errors on up to \(\frac{n-1}{2}\) coordinates, detects errors on up to \(n-1\) coordinates, and corrects erasure errors on up to \(n-1\) coordinates. The generator matrix is \(G=\left[\begin{smallmatrix}1 & 1&\cdots& 1 & 1 \end{smallmatrix}\right]\).
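A minimal sketch of encoding and majority-vote decoding for the \([5,1,5]\) case:

```python
def rep_encode(bit, n=5):
    """Repeat the single information bit n times (n odd)."""
    return [bit] * n

def rep_decode(received):
    """Majority vote over the received coordinates."""
    return 1 if sum(received) > len(received) // 2 else 0

word = rep_encode(1)           # [1, 1, 1, 1, 1]
word[0] ^= 1; word[3] ^= 1     # flip (5-1)/2 = 2 coordinates
print(rep_decode(word))        # 1: both errors are corrected
```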
Parents:
Linear binary code.
Cousins:
Perfect code, Quantum repetition code, Hamming code.
Cousin of:
Single parity-check code.

Graph homology code[12]
This code's properties are derived from the two-term chain complex associated with a particular graph. Given a connected simplicial graph \(G = (V, E)\) (no self-loops or multiedges) that is not a tree, with incidence matrix \(\Gamma\), we can construct a code by choosing a parity-check matrix consisting of all the linearly independent rows of \(\Gamma\). This is an \([n,k,d]\) code with \(n = |E|\) and \(k = 1 - \mathcal{X}(G) = 1-|V|+|E|\), where \( \mathcal{X}(G)\) is the Euler characteristic of the graph. The code distance equals the length of the shortest cycle (the girth), which is guaranteed to exist since \(G\) is not a tree.
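A concrete sketch for the smallest example, the triangle graph: the resulting code is the \([3,1,3]\) repetition code, consistent with \(k=1-|V|+|E|=1\) and distance equal to the girth:

```python
from itertools import product

# Triangle graph: 3 vertices, edges (0,1), (1,2), (0,2); not a tree
edges = [(0, 1), (1, 2), (0, 2)]
incidence = [[1 if v in e else 0 for e in edges] for v in range(3)]

# The three rows sum to zero mod 2; any two are linearly independent,
# so the parity-check matrix H consists of the first two rows.
H = incidence[:2]

codewords = [x for x in product([0, 1], repeat=len(edges))
             if all(sum(h * xi for h, xi in zip(row, x)) % 2 == 0
                    for row in H)]
print(codewords)   # the [3,1,3] repetition code: k = 1, d = girth = 3
```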
Parents:
Linear binary code.
Cousins:
Perfect code, Calderbank-Shor-Steane (CSS) stabilizer code.

Hadamard code
The Hadamard code is dual to the extended Hamming code.
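For concreteness, one standard construction (not spelled out in this stub) encodes a \(k\)-bit message \(m\) as the inner products \(\langle m,x\rangle\) over all \(x\in\mathbb{F}_2^k\), giving a \([2^k,k,2^{k-1}]\) code:

```python
from itertools import product

def hadamard_encode(m):
    """Encode a k-bit message m as the inner products <m, x> mod 2
    over all x in F_2^k, giving a [2^k, k, 2^(k-1)] codeword."""
    return [sum(a * b for a, b in zip(m, x)) % 2
            for x in product([0, 1], repeat=len(m))]

print(hadamard_encode([1, 0]))   # [0, 0, 1, 1]
```

Every nonzero codeword has weight exactly \(2^{k-1}\), which gives the distance.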
Parents:
Linear binary code.
Cousins:
Dual linear code, Hamming code, Reed-Muller (RM) code.

Reed-Muller (RM) code[13][14]
Stub. Define first order (\(r=1\)) and second order.
Parents:
Linear binary code.
Cousins:
Divisible code.
Cousin of:
Hadamard code, Majorana stabilizer code, Polar code, Quantum Reed-Muller code, Quantum divisible code, \(q\)-ary group code.

Tornado code[2][15][16]
Stub.
Parents:
Linear binary code.
Cousins:
Fountain code, Low-density parity-check (LDPC) code.
Cousin of:
Raptor (RAPid TORnado) code.

Zetterberg code[17]
Family of binary cyclic \([2^{2s}+1,2^{2s}-4s+1]\) codes with distance \(d>5\) generated by the minimal polynomial \(g_s(x)\) of \(\alpha\) over \(GF(2)\), where \(\alpha\) is a primitive \(n\)th root of unity in the field \(GF(2^{4s})\). They are quasi-perfect codes and one of the best known families of double-error-correcting binary linear codes.
Protection: Correct at least all weight-2 errors.
Parents:
Cyclic code, Linear binary code.
Cousins:
Perfect code.

Expander code[18]
Expander codes are binary linear codes whose parity-check matrices are derived from the adjacency matrices of bipartite expander graphs. In particular, the rows of the parity-check matrix correspond to the right nodes of the bipartite graph, and the columns correspond to the left nodes. The codespace consists of all subsets of the left nodes that have an even number of edges going into every right node of the graph. Since the expander graph is only left-regular, these codes do not qualify as LDPC codes.
Protection: Bit flip errors of weight at most \((d-1)/2\) where \(d\) is the distance of the code and is linear in \(n\), the number of physical bits.
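A sketch of the membership check on a small bipartite graph (purely illustrative, with no expansion properties): a bit string on the left nodes is a codeword iff every right node sees an even number of selected edges.

```python
from itertools import product

# Toy bipartite graph: 4 left (bit) nodes, 2 right (check) nodes;
# right_nbrs[j] lists the left nodes adjacent to right node j.
right_nbrs = [[0, 1, 2], [1, 2, 3]]

def is_codeword(x):
    """x is a codeword iff every right node touches an even number of 1s."""
    return all(sum(x[i] for i in nbrs) % 2 == 0 for nbrs in right_nbrs)

code = [x for x in product([0, 1], repeat=4) if is_codeword(x)]
print(code)   # 2^(4-2) = 4 codewords
```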
Parents:
Tanner code.
Cousin of:
Quantum expander code.

Luby transform (LT) code[19]
Erasure codes based on fountain codes. They improve on random linear fountain codes by having a much more efficient encoding and decoding algorithm.
Parents:
Raptor (RAPid TORnado) code.

Single parity-check code
An \([n,n-1,2]\) binary linear error-detecting code encoding an \((n-1)\)-bit message into an \(n\)-bit string. The parity information of a message is stored in an extra parity bit: if the Hamming weight of the message is odd, the parity bit is 1; if it is even, the parity bit is 0. This code is inexpensive since it requires only one extra bit and a single parity check.
Protection: This code cannot correct any errors; it can only detect single-bit errors (and, more generally, any odd number of bit flips).
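A minimal sketch of appending the parity bit and detecting a single flip:

```python
def spc_encode(message):
    """Append one parity bit so the codeword has even Hamming weight."""
    return message + [sum(message) % 2]

def spc_check(word):
    """True iff no error is detected (even total parity)."""
    return sum(word) % 2 == 0

word = spc_encode([1, 0, 1])   # [1, 0, 1, 0]
assert spc_check(word)
word[2] ^= 1                   # a single bit flip
print(spc_check(word))         # False: the flip is detected
```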
Parents:
Parity-check code.
Cousins:
Binary repetition code.

## References

- [1]
- P. Elias, “Coding for noisy channels”, IRE Convention Records 3(4), 37–46 (1955).
- [2]
- J. W. Byers et al., “A digital fountain approach to reliable distribution of bulk data”, ACM SIGCOMM Computer Communication Review 28, 56 (1998). DOI
- [3]
- V.I. Levenshtein, Application of Hadamard matrices to a problem in coding theory, Problems of Cybernetics, vol. 5, GIFML, Moscow, 1961, 125–136.
- [4]
- F. J. MacWilliams and N. J. A. Sloane, The Theory of Error Correcting Codes (Elsevier, 1977).
- [5]
- J. Justesen, “Class of constructive asymptotically good algebraic codes”, IEEE Transactions on Information Theory 18, 652 (1972). DOI
- [6]
- E. Arikan, “Channel Polarization: A Method for Constructing Capacity-Achieving Codes for Symmetric Binary-Input Memoryless Channels”, IEEE Transactions on Information Theory 55, 3051 (2009). DOI
- [7]
- R. Tanner, “A recursive approach to low complexity codes”, IEEE Transactions on Information Theory 27, 533 (1981). DOI
- [8]
- A. Shokrollahi, “Raptor codes”, IEEE Transactions on Information Theory 52, 2551 (2006). DOI
- [9]
- Petar Maymounkov, Online codes, Technical report, New York University, 2002.
- [10]
- M. J. E. Golay, “Notes on Digital Coding”, Proceedings of the IRE 37, 657 (1949).
- [11]
- W. C. Huffman, J.-L. Kim, and P. Solé, Concise Encyclopedia of Coding Theory (Chapman and Hall/CRC, 2021). DOI
- [12]
- H. Bombin and M. A. Martin-Delgado, “Homological error correction: Classical and quantum codes”, Journal of Mathematical Physics 48, 052105 (2007). DOI; quant-ph/0605094
- [13]
- D. E. Muller, “Application of Boolean algebra to switching circuit design and to error detection”, Transactions of the I.R.E. Professional Group on Electronic Computers EC-3, 6 (1954). DOI
- [14]
- I. Reed, “A class of multiple-error-correcting codes and the decoding scheme”, Transactions of the IRE Professional Group on Information Theory 4, 38 (1954). DOI
- [15]
- M. G. Luby et al., “Practical loss-resilient codes”, Proceedings of the twenty-ninth annual ACM symposium on Theory of computing - STOC '97 (1997). DOI
- [16]
- M. G. Luby et al., “Efficient erasure correcting codes”, IEEE Transactions on Information Theory 47, 569 (2001). DOI
- [17]
- L.-H. Zetterberg, “Cyclic codes from irreducible polynomials for correction of multiple errors”, IEEE Transactions on Information Theory 8, 13 (1962). DOI
- [18]
- M. Sipser and D. A. Spielman, “Expander codes”, IEEE Transactions on Information Theory 42, 1710 (1996). DOI
- [19]
- M. Luby, “LT codes”, The 43rd Annual IEEE Symposium on Foundations of Computer Science, 2002. Proceedings.. DOI