Block code 


A code that encodes a piece, or block, of a data stream into a block of \(n\) symbols. Each symbol is taken from some fixed, possibly infinite, alphabet \(\Sigma\) [1; Ch. 3], which can be, e.g., the set of bits, a Galois field, a ring, or the real numbers.

The overall alphabet of the code is \(\Sigma^n\), and \(n\) is called the length of the code. In some cases, only a subset of \(\Sigma^n\) is available to use for constructing the code. For example, in the case of spherical codes, one is constrained to \(n\)-dimensional real vectors on the unit sphere.

An alternative, more stringent, definition (not used here) is in terms of a map encoding logical information from \(\Sigma^k\) into \(\Sigma^n\), yielding an \((n,k,d)_{\Sigma}\) block code, where \(d\) is the code distance.
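As a minimal illustration of such an encoding map (my own sketch, using the three-fold binary repetition code as a stand-in example), the map below takes \(\Sigma^k\) into \(\Sigma^n\) with \(\Sigma = \{0,1\}\), \(k=1\), and \(n=3\), yielding a \((3,1,3)_{\mathbb{Z}_2}\) block code.

```python
# Illustrative sketch (not from the entry above): the 3-fold binary
# repetition code as an encoding map Sigma^k -> Sigma^n.

def encode(message):
    """Encode a k = 1 symbol message into an n = 3 symbol codeword."""
    (bit,) = message        # one symbol from Sigma = {0, 1}
    return (bit, bit, bit)  # n = 3 symbols

# The block code itself is the image of the encoding map: a subset of Sigma^n.
codebook = {encode((b,)) for b in (0, 1)}
print(sorted(codebook))  # [(0, 0, 0), (1, 1, 1)]
```

Under the less stringent definition used here, one would identify the code simply with the set `codebook`, forgetting the map that produced it.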


Block codes protect against errors acting on a few of the \(n\) symbols. A block code with distance \(d\) detects errors acting on up to \(d-1\) symbols, corrects erasure errors on up to \(d-1\) symbols, and corrects general errors on up to \(\lfloor (d-1)/2 \rfloor\) symbols.
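These radii can be checked directly from the definition of distance. The following sketch (an assumption for illustration, reusing the repetition code) computes the minimum Hamming distance of a code and the resulting detection and correction bounds.

```python
from itertools import combinations

def hamming(x, y):
    """Hamming distance: number of symbol positions where x and y differ."""
    return sum(a != b for a, b in zip(x, y))

# Example code (chosen for illustration): the 3-fold repetition code.
code = [(0, 0, 0), (1, 1, 1)]

# The code distance is the minimum distance over pairs of distinct codewords.
d = min(hamming(x, y) for x, y in combinations(code, 2))

print(d)             # 3
print(d - 1)         # detects (and corrects as erasures) errors on up to 2 symbols
print((d - 1) // 2)  # corrects general errors on up to 1 symbol
```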


The Shannon channel capacity (the maximum of the mutual information over input and output distributions) is the highest rate of information transmission through a classical (i.e., non-quantum) channel with arbitrarily small error rate [2]. Corrections to the capacity and tradeoffs between decoding error, code rate, and code length are determined using small [3–5], moderate [6–8], and large [9–12] deviation analysis. Sometimes the difference from the asymptotic rate at finite block length can be characterized by the channel dispersion [5,13].


Decoding an error-correcting code is equivalent to finding the ground state of some statistical mechanical model [14].





References

[1] J. H. van Lint, Introduction to Coding Theory (Springer Berlin Heidelberg, 1999) DOI
[2] C. E. Shannon, “A Mathematical Theory of Communication”, Bell System Technical Journal 27, 379 (1948) DOI
[3] V. Strassen, “Asymptotische Abschätzungen in Shannons Informationstheorie”, Trans. Third Prague Conference on Information Theory, Prague, 689–723 (1962)
[4] M. Hayashi, “Information Spectrum Approach to Second-Order Coding Rate in Channel Coding”, IEEE Transactions on Information Theory 55, 4947 (2009) arXiv:0801.2242 DOI
[5] Y. Polyanskiy, H. V. Poor, and S. Verdu, “Channel Coding Rate in the Finite Blocklength Regime”, IEEE Transactions on Information Theory 56, 2307 (2010) DOI
[6] Y. Altug and A. B. Wagner, “Moderate Deviations in Channel Coding”, (2012) arXiv:1208.1924
[7] Y. Polyanskiy and S. Verdu, “Channel dispersion and moderate deviations limits for memoryless channels”, 2010 48th Annual Allerton Conference on Communication, Control, and Computing (Allerton) (2010) DOI
[8] C. T. Chubb, V. Y. F. Tan, and M. Tomamichel, “Moderate Deviation Analysis for Classical Communication over Quantum Channels”, Communications in Mathematical Physics 355, 1283 (2017) arXiv:1701.03114 DOI
[9] R. Gallager, Information Theory and Reliable Communication (Springer Vienna, 1972) DOI
[10] I. Csiszár and J. Körner, Information Theory (Cambridge University Press, 2011) DOI
[11] S. Arimoto, “On the converse to the coding theorem for discrete memoryless channels (Corresp.)”, IEEE Transactions on Information Theory 19, 357 (1973) DOI
[12] G. Dueck and J. Körner, “Reliability function of a discrete memoryless channel at rates above capacity (Corresp.)”, IEEE Transactions on Information Theory 25, 82 (1979) DOI
[13] S. H. Hassani, K. Alishahi, and R. L. Urbanke, “Finite-Length Scaling for Polar Codes”, IEEE Transactions on Information Theory 60, 5875 (2014) DOI
[14] N. Sourlas, “Spin-glass models as error-correcting codes”, Nature 339, 693 (1989) DOI
Zoo Code ID: block

Cite as:
“Block code”, The Error Correction Zoo (V. V. Albert & P. Faist, eds.), 2023.