Binary code 

Root code for the Binary Kingdom

Description

Encodes \(K\) states (codewords) in \(n\) binary coordinates and has distance \(d\). Usually denoted as \((n,K,d)\). The distance is the minimum Hamming distance between a pair of distinct codewords.
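As a concrete illustration of these parameters, the following Python sketch computes the distance of a code by brute force; the \((3,2,3)\) repetition code used here is just an illustrative choice.

```python
from itertools import combinations

def hamming_distance(x, y):
    """Number of coordinates in which bitstrings x and y differ."""
    return sum(a != b for a, b in zip(x, y))

def code_distance(codewords):
    """Minimum Hamming distance over all pairs of distinct codewords."""
    return min(hamming_distance(x, y) for x, y in combinations(codewords, 2))

# Example: the (3, 2, 3) binary repetition code.
repetition = ["000", "111"]
n, K, d = 3, len(repetition), code_distance(repetition)
print(f"({n},{K},{d}) code")  # -> (3,2,3) code
```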

The coordinate permutation group \(S_n\) of order \(n!\) is formed by the \(n\)-dimensional permutation matrices, i.e., matrices with a single 1 in each row and column and zeros elsewhere [1; Ch. 8][2; Ch. 3]. The group of isometries of Hamming space is the hyperoctahedral group \(\mathbb{Z}_2\wr S_n=\mathbb{Z}_2^n\rtimes S_n\), i.e., the permutation group together with the group formed by the action of binary space on itself (under addition). Two binary codes are equivalent if there exists a hyperoctahedral group element that maps one code into the other [4][3; Def. 1.8.8].
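To make the hyperoctahedral action concrete, the sketch below (an illustrative construction, not drawn from the references) applies a coordinate permutation followed by a translation by a fixed bitstring to every codeword; two codes are equivalent precisely when some such pair of operations maps one codeword set onto the other.

```python
def apply_isometry(codeword, perm, flips):
    """Permute coordinates so position i takes the value at perm[i],
    then add the bitstring flips modulo 2."""
    permuted = [codeword[perm[i]] for i in range(len(codeword))]
    return tuple((b + f) % 2 for b, f in zip(permuted, flips))

def image_of_code(codewords, perm, flips):
    return {apply_isometry(c, perm, flips) for c in codewords}

# A (3,2,3) repetition code and its image under
# (swap coordinates 0 and 1, then flip coordinate 2).
C = {(0, 0, 0), (1, 1, 1)}
C_image = image_of_code(C, perm=[1, 0, 2], flips=(0, 0, 1))
print(C_image)  # {(0, 0, 1), (1, 1, 0)} -- an equivalent (3,2,3) code
```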

Protection

A binary code \(C\) corrects \(t\) errors in the Hamming distance if \begin{align} \forall x \in C~,~D(x,x+y) < D(x' , x+y) \tag*{(1)}\end{align} for all codewords \(x' \neq x\) and all \(y\) such that \(|y| \leq t\), where \(D\) is the Hamming distance and \(|y| = D(y,0)\) is the Hamming weight of \(y\). A code corrects \(t\) errors if and only if \(d \geq 2t+1\), i.e., a code corrects errors on up to \(t = \left\lfloor (d-1)/2 \right\rfloor\) coordinates. The number of correctable errors is called the decoding radius, and it is upper bounded by half of the distance. In addition, a code detects errors on up to \(d-1\) coordinates and corrects erasure errors on up to \(d-1\) coordinates.
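The condition above can be checked exhaustively for small codes. The following sketch brute-forces condition (1) for an illustrative \((3,2,3)\) repetition code, confirming that it corrects \(t=\left\lfloor (d-1)/2 \right\rfloor=1\) error but not 2.

```python
from itertools import product

def hamming(x, y):
    return sum(a != b for a, b in zip(x, y))

def corrects_t_errors(codewords, t):
    """Brute-force check of condition (1): every error of weight at most t
    leaves the transmitted codeword strictly closer than any other codeword."""
    n = len(codewords[0])
    for x in codewords:
        for y in product((0, 1), repeat=n):
            if sum(y) > t:
                continue
            received = tuple((a + b) % 2 for a, b in zip(x, y))
            if any(hamming(xp, received) <= hamming(x, received)
                   for xp in codewords if xp != x):
                return False
    return True

# The (3,2,3) repetition code has d = 3, so t = (d - 1) // 2 = 1.
C = [(0, 0, 0), (1, 1, 1)]
print(corrects_t_errors(C, 1))  # -> True
print(corrects_t_errors(C, 2))  # -> False
```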

Rate

The rate of a binary code is usually defined as \(R=\frac{1}{n}\log_{2}K\) bits per symbol. The maximum rate of a binary code with linear distance is upper bounded by the McEliece, Rodemich, Rumsey, and Welch (MRRW) bound [5] (see Refs. [6–9] for other proofs).
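As a quick numerical illustration of the rate formula (parameter sets chosen for illustration only):

```python
from math import log2

def rate(n, K):
    """Rate of an (n, K) binary code in bits per symbol."""
    return log2(K) / n

print(rate(3, 2))   # (3,2,3) repetition code: ~0.333
print(rate(7, 16))  # a (7,16,3) code such as the Hamming code: ~0.571
```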

Decoding

For few-bit codes (i.e., for small \(n\)), decoding can be based on a lookup table. For infinite code families, the size of such a table scales exponentially with \(n\), so approximate decoding algorithms that scale polynomially with \(n\) have to be used. The decoder that determines the most likely error for a given noise channel is called the maximum-likelihood decoder.

Given a received string \(x\) and an error bound \(e\), a list decoder returns a list of all codewords that are at most \(e\) away from \(x\) in Hamming distance. The number of codewords in a neighborhood of \(x\) has to be polynomial in \(n\) in order for this decoder to run in time polynomial in \(n\).
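The sketch below illustrates both brute-force strategies for a small code: minimum-distance (nearest-codeword) decoding, which realizes maximum-likelihood decoding for a binary symmetric channel with crossover probability below \(1/2\), and list decoding within a radius \(e\). Both scale exponentially with \(n\) and serve only to make the definitions concrete.

```python
def hamming(x, y):
    return sum(a != b for a, b in zip(x, y))

def nearest_codeword(received, codewords):
    """Lookup-table / minimum-distance decoder: return the closest codeword.
    Coincides with maximum-likelihood decoding for a memoryless binary
    symmetric channel with crossover probability below 1/2."""
    return min(codewords, key=lambda c: hamming(received, c))

def list_decode(received, codewords, e):
    """Return all codewords within Hamming distance e of the received string."""
    return [c for c in codewords if hamming(received, c) <= e]

C = [(0, 0, 0, 0), (1, 1, 1, 1)]   # (4,2,4) repetition code
r = (1, 0, 0, 0)                   # received word with a single bit flip
print(nearest_codeword(r, C))      # -> (0, 0, 0, 0)
print(list_decode(r, C, 3))        # -> [(0, 0, 0, 0), (1, 1, 1, 1)]
```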

Parent

Children

Cousins

  • Orthogonal array (OA) — An \((n,K)\) binary code with dual distance \(d^{\perp}\) is an OA\(_{K/2^{d^{\perp}-1}}(d^{\perp}-1,n,2)\) [10][1; Ch. 5].
  • Qubit c-q code — Any binary code can be embedded into a qubit Hilbert space, and thus passed through a qubit channel, by associating length-\(n\) bitstrings with basis vectors of a Hilbert space whose basis is labeled by elements of \(\mathbb{Z}_2^n\). For example, a bit of information can be embedded into a two-dimensional vector space by associating the two bit values with two basis vectors for the space.
  • Construction-\(A\) code — Each binary code yields a sphere packing under Construction A.
  • Binary antipodal code — Binary antipodal codes are spherical codes obtained from binary codes via the antipodal mapping.
  • Fock-state bosonic code — Fock-state code distance is a natural extension of Hamming distance between binary strings.
  • Movassagh-Ouyang Hamiltonian code — Movassagh-Ouyang codes are constructed from classical binary codes.
  • Self-complementary quantum code — A binary code is called self-complementary if, for each codeword \(c\), its negation \(\overline{c}\) is also a codeword.
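A brute-force check of the self-complementarity property mentioned in the last item above (the codes used are illustrative):

```python
def is_self_complementary(codewords):
    """True if the bitwise complement of every codeword is also a codeword."""
    S = set(codewords)
    return all(tuple(1 - b for b in c) in S for c in S)

print(is_self_complementary({(0, 0, 0), (1, 1, 1)}))  # -> True
print(is_self_complementary({(0, 0, 0), (1, 1, 0)}))  # -> False
```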

References

[1]
F. J. MacWilliams and N. J. A. Sloane, The Theory of Error-Correcting Codes (Elsevier, 1977).
[2]
J. H. Conway and N. J. A. Sloane, Sphere Packings, Lattices and Groups (Springer New York, 1999) DOI
[3]
W. C. Huffman, J.-L. Kim, and P. Solé, "Basics of coding theory." Concise Encyclopedia of Coding Theory (Chapman and Hall/CRC, 2021) DOI
[4]
P. R. J. Östergård, "Construction and Classification of Codes." Concise Encyclopedia of Coding Theory (Chapman and Hall/CRC, 2021) DOI
[5]
R. McEliece et al., “New upper bounds on the rate of a code via the Delsarte-MacWilliams inequalities”, IEEE Transactions on Information Theory 23, 157 (1977) DOI
[6]
M. Navon and A. Samorodnitsky, “Linear Programming Bounds for Codes via a Covering Argument”, Discrete & Computational Geometry 41, 199 (2008) DOI
[7]
J. Friedman and J.-P. Tillich, “Generalized Alon–Boppana Theorems and Error-Correcting Codes”, SIAM Journal on Discrete Mathematics 19, 700 (2005) DOI
[8]
A. Samorodnitsky, “One more proof of the first linear programming bound for binary codes and two conjectures”, (2021) arXiv:2104.14587
[9]
N. Linial and E. Loyfer, “An Elementary Proof of the First LP Bound on the Rate of Binary Codes”, (2023) arXiv:2303.16619
[10]
P. Delsarte, “Bounds for unrestricted codes, by linear programming”, Philips Research Reports 27, 272 (1972).

Zoo Code ID: bits_into_bits

Cite as:
“Binary code”, The Error Correction Zoo (V. V. Albert & P. Faist, eds.), 2022. https://errorcorrectionzoo.org/c/bits_into_bits
BibTeX:
@incollection{eczoo_bits_into_bits, title={Binary code}, booktitle={The Error Correction Zoo}, year={2022}, editor={Albert, Victor V. and Faist, Philippe}, url={https://errorcorrectionzoo.org/c/bits_into_bits} }
Permanent link:
https://errorcorrectionzoo.org/c/bits_into_bits
