Binary code 

Root code for the Binary Kingdom


Encodes \(K\) states (codewords) in \(n\) binary coordinates and has distance \(d\). Usually denoted as \((n,K,d)\). The distance is the minimum Hamming distance between a pair of distinct codewords.


A binary code \(C\) corrects \(t\) errors in the Hamming distance if \begin{align} \forall x \in C~,~D(x,x+y) < D(x' , x+y) \tag*{(1)}\end{align} for all codewords \(x' \neq x\) and all \(y\) such that \(|y| \leq t\), where \(D\) is the Hamming distance and \(|y| = D(y,0)\). A code corrects \(t\) errors if and only if \(d \geq 2t+1\); i.e., a code corrects errors on up to \(t = \left\lfloor (d-1)/2 \right\rfloor\) coordinates. The number of correctable errors is called the decoding radius, and it is upper bounded by half of the distance. In addition, a code detects errors on up to \(d-1\) coordinates and corrects erasure errors on up to \(d-1\) coordinates.
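As a minimal sketch of these definitions, the distance and decoding radius of the \((3,2,3)\) repetition code (a toy example chosen here for illustration; it is not discussed in the text above) can be computed directly:

```python
def hamming_distance(x, y):
    """Number of coordinates in which binary strings x and y differ."""
    return sum(a != b for a, b in zip(x, y))

# Toy example: the (3, 2, 3) repetition code {000, 111}.
code = ["000", "111"]

# Distance d: minimum Hamming distance between distinct codewords.
d = min(hamming_distance(x, y) for x in code for y in code if x != y)

# Decoding radius t = floor((d - 1) / 2); d >= 2t + 1 guarantees correction.
t = (d - 1) // 2

print(d, t)  # d = 3, so t = 1 error is correctable
```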


The rate of a binary code is usually defined as \(R=\frac{1}{n}\log_{2}K\) bits per symbol. The maximum rate of a binary code with linear distance is upper bounded by the McEliece, Rodemich, Rumsey, and Welch (MRRW) bound [1] (see Refs. [2–5] for other proofs).
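Applying the rate formula to the standard \((7,16,3)\) Hamming code (used here only as a well-known illustration) gives \(R = 4/7\):

```python
import math

def rate(n, K):
    """Rate R = (1/n) * log2(K) of an (n, K, d) binary code, in bits per symbol."""
    return math.log2(K) / n

# The (7, 16, 3) Hamming code stores log2(16) = 4 bits in 7 coordinates.
print(rate(7, 16))  # 4/7 ≈ 0.571
```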


For few-bit codes (small \(n\)), decoding can be based on a lookup table. For infinite code families, the size of such a table scales exponentially with \(n\), so approximate decoding algorithms scaling polynomially with \(n\) have to be used. The decoder that determines the most likely error for a given noise channel is called the maximum-likelihood decoder. Given a received string \(x\) and an error bound \(e\), a list decoder returns a list of all codewords within Hamming distance \(e\) of \(x\). The number of codewords in such a neighborhood of \(x\) has to be polynomial in \(n\) in order for this decoder to run in time polynomial in \(n\).
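A minimal sketch of a lookup-table decoder and a list decoder, again using the toy \((3,2,3)\) repetition code (the function names and example are illustrative, not from the text):

```python
from itertools import product

def hamming_distance(x, y):
    """Number of coordinates in which binary strings x and y differ."""
    return sum(a != b for a, b in zip(x, y))

code = ["000", "111"]  # toy (3, 2, 3) repetition code

# Lookup-table decoder: precompute the nearest codeword for every possible
# received string. The table has 2^n entries, so this only works for small n.
table = {
    "".join(r): min(code, key=lambda c: hamming_distance(c, "".join(r)))
    for r in product("01", repeat=3)
}

def list_decode(received, e):
    """Return all codewords within Hamming distance e of the received string."""
    return [c for c in code if hamming_distance(c, received) <= e]

print(table["010"])           # "000": the single bit flip is corrected
print(list_decode("010", 1))  # ["000"]
print(list_decode("010", 2))  # ["000", "111"]: beyond radius t, the list grows
```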





R. McEliece et al., “New upper bounds on the rate of a code via the Delsarte-MacWilliams inequalities”, IEEE Transactions on Information Theory 23, 157 (1977) DOI
M. Navon and A. Samorodnitsky, “Linear Programming Bounds for Codes via a Covering Argument”, Discrete & Computational Geometry 41, 199 (2008) DOI
J. Friedman and J.-P. Tillich, “Generalized Alon--Boppana Theorems and Error-Correcting Codes”, SIAM Journal on Discrete Mathematics 19, 700 (2005) DOI
A. Samorodnitsky, “One more proof of the first linear programming bound for binary codes and two conjectures”, (2021) arXiv:2104.14587
N. Linial and E. Loyfer, “An Elementary Proof of the First LP Bound on the Rate of Binary Codes”, (2023) arXiv:2303.16619
F. J. MacWilliams and N. J. A. Sloane, The Theory of Error-Correcting Codes, Elsevier, 1977.
P. Delsarte, “Bounds for unrestricted codes, by linear programming”, (1972).
Zoo Code ID: bits_into_bits

Cite as:
“Binary code”, The Error Correction Zoo (V. V. Albert & P. Faist, eds.), 2022.
@incollection{eczoo_bits_into_bits, title={Binary code}, booktitle={The Error Correction Zoo}, year={2022}, editor={Albert, Victor V. and Faist, Philippe}, url={} }