Concatenated error correction code

In coding theory, concatenated codes form a class of error-correcting codes that are derived by combining an inner code and an outer code. They were conceived in 1966 by Dave Forney as a solution to the problem of finding a code that has both exponentially decreasing error probability with increasing block length and polynomial-time decoding complexity. Concatenated codes became widely used in space communications in the 1970s.

Background

The field of channel coding is concerned with sending a stream of data at the highest possible rate over a given communication channel, and then decoding the original data reliably at the receiver, using encoding and decoding algorithms that are feasible to implement in a given technology.

Shannon's channel coding theorem shows that over many common channels there exist channel coding schemes that are able to transmit data reliably at all rates R less than a certain threshold C, called the capacity of the given channel. In fact, the probability of decoding error can be made to decrease exponentially as the block length N of the coding scheme goes to infinity. However, the complexity of a naive optimum decoding scheme that simply computes the likelihood of every possible transmitted codeword increases exponentially with N, so such an optimum decoder rapidly becomes infeasible.
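A common way to state this exponential decay (an illustrative form only; the precise exponent depends on the channel and is not derived here) is the random-coding bound: for every rate R below the capacity C there is an error exponent E(R) > 0 such that good codes of length N satisfy

P_err ≤ 2^(−N·E(R)), with E(R) > 0 for all R < C.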

In his doctoral thesis, Dave Forney showed that concatenated codes could be used to achieve exponentially decreasing error probabilities at all data rates less than capacity, with decoding complexity that increases only polynomially with the code block length.

Description

Let Cin be an [n, k, d] code, that is, a block code of length n, dimension k, minimum distance d, and rate r = k/n, over an alphabet A:

Cin: A^k → A^n

Let Cout be an [N, K, D] code over an alphabet B with |B| = |A|^k symbols:

Cout: B^K → B^N

The inner code Cin takes one of |A|^k = |B| possible inputs, encodes it into an n-tuple over A, transmits it, and decodes it into one of |B| possible outputs. We regard this as a (super) channel which can transmit one symbol from the alphabet B. We use this channel N times to transmit each of the N symbols in a codeword of Cout. The concatenation of Cout (as outer code) with Cin (as inner code), denoted Cout∘Cin, is thus a code of length Nn over the alphabet A:

Cout∘Cin: A^(kK) → A^(nN)

Decoding mirrors this two-level structure: an inner decoder recovers an estimate of each transmitted symbol of B, and an outer decoder then corrects the remaining symbol errors. Forney's generalized minimum distance decoding additionally passes reliability information from the inner code to improve the performance of the outer code, an early example of soft-decision decoding.
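As a concrete illustration of this construction, the following Python sketch (not from the article; all names are illustrative) concatenates a toy outer code with a toy inner code: the inner code is the binary [7, 4] Hamming code, so the outer alphabet B consists of the 2^4 = 16 possible 4-bit symbols, and the outer code is a simple length-3 repetition code over B.

import numpy as np

# Generator matrix of a systematic [7, 4] binary Hamming code (the inner code Cin).
G_HAMMING = np.array([
    [1, 0, 0, 0, 1, 1, 0],
    [0, 1, 0, 0, 1, 0, 1],
    [0, 0, 1, 0, 0, 1, 1],
    [0, 0, 0, 1, 1, 1, 1],
])

def inner_encode(symbol_bits):
    # Cin: A^4 -> A^7, applied to one symbol of B (a 4-bit tuple over A = {0, 1}).
    return (np.array(symbol_bits) @ G_HAMMING) % 2

def outer_encode(message_symbols, N=3):
    # Cout: B^K -> B^N, here a trivial per-symbol repetition code over B.
    return [s for s in message_symbols for _ in range(N)]

def concatenated_encode(message_symbols):
    # Cout∘Cin: encode with the outer code over B, then map every B-symbol
    # through the inner code to obtain a codeword of length N*n over A.
    return np.concatenate([inner_encode(s) for s in outer_encode(message_symbols)])

message = [[1, 0, 1, 1]]                  # K = 1 symbol of B
codeword = concatenated_encode(message)   # N*n = 3*7 = 21 bits over A
print(codeword)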

Applications

A simple concatenation scheme was implemented already for the 1971 Mars orbiter mission, and concatenated codes subsequently became the workhorse for efficient error correction coding, remaining so at least until the invention of turbo codes and LDPC codes. Typically, the inner code is a soft-decision convolutional code with a short constraint length, while the outer code is a longer hard-decision block code, frequently a Reed–Solomon code with eight-bit symbols. The larger symbol size makes the outer code more robust to error bursts, which can occur both due to channel impairments and because the erroneous output of the inner convolutional decoder is itself bursty. This combination of an inner convolutional code with an outer Reed–Solomon code became a popular construction both within and outside of the space sector, and it is still notably used today for satellite communications, such as the DVB-S digital television broadcast standard.
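To make the symbol-size argument concrete, the following small Python sketch (illustrative only, not part of any standard) counts how many eight-bit outer-code symbols are touched by a contiguous burst of bit errors; a Reed–Solomon outer code pays per corrupted symbol, not per corrupted bit.

def symbols_hit_by_burst(burst_start_bit: int, burst_len: int, symbol_bits: int = 8) -> int:
    """Number of outer-code symbol positions touched by a contiguous bit-error burst."""
    first_symbol = burst_start_bit // symbol_bits
    last_symbol = (burst_start_bit + burst_len - 1) // symbol_bits
    return last_symbol - first_symbol + 1

# An 8-bit burst starting mid-symbol corrupts 8 bits but only 2 of the
# outer code's eight-bit symbols.
print(symbols_hit_by_burst(burst_start_bit=13, burst_len=8))  # -> 2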

In a looser sense, any (serial) combination of two or more codes may be referred to as a concatenated code. For example, within the DVB-S2 standard, a highly efficient LDPC code is combined with an algebraic outer code in order to remove any residual errors left over from the inner LDPC code due to its inherent error floor.

A simple concatenation scheme is also used on the compact disc (CD), where an interleaving layer between two Reed–Solomon codes of different sizes spreads errors across various blocks.
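The sketch below shows the idea of such an interleaving layer in Python (a plain rectangular block interleaver for illustration; the CD's actual CIRC scheme uses a more elaborate cross-interleaved arrangement): a burst of consecutive channel errors is spread so that each outer codeword sees at most one of the corrupted symbols.

def interleave(symbols, rows, cols):
    assert len(symbols) == rows * cols
    matrix = [symbols[r * cols:(r + 1) * cols] for r in range(rows)]  # write row by row
    return [matrix[r][c] for c in range(cols) for r in range(rows)]   # read column by column

def deinterleave(symbols, rows, cols):
    assert len(symbols) == rows * cols
    matrix = [symbols[c * rows:(c + 1) * rows] for c in range(cols)]  # write column by column
    return [matrix[c][r] for r in range(rows) for c in range(cols)]   # read row by row

data = list(range(12))                     # 12 symbols, viewed as a 3x4 matrix (3 codewords of 4 symbols)
sent = interleave(data, rows=3, cols=4)
# Corrupt a burst of 3 consecutive symbols on the "channel":
received = ['X' if 4 <= i < 7 else s for i, s in enumerate(sent)]
print(deinterleave(received, rows=3, cols=4))
# After deinterleaving, the 3 corrupted symbols land in 3 different codewords,
# one error each, instead of 3 errors in a single codeword.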

Turbo codes: A parallel concatenation approach

The description above is given for what is now called a serially concatenated code. Turbo codes, first described in 1993, implemented a parallel concatenation of two convolutional codes, with an interleaver between the two codes and an iterative decoder that passes information back and forth between the codes. This design has better performance than any previously conceived concatenated code.
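The sketch below shows only the parallel-concatenation structure in Python (illustrative; real turbo codes use recursive systematic convolutional constituent encoders and an iterative soft-decision decoder, neither of which is modelled here): the information bits are sent once, together with one parity stream computed from the bits in their original order and a second parity stream computed from an interleaved copy.

import random

def conv_parity(bits):
    """Toy memory-1 convolutional encoder: parity p_t = u_t XOR u_{t-1}."""
    prev, parity = 0, []
    for b in bits:
        parity.append(b ^ prev)
        prev = b
    return parity

def parallel_concatenated_encode(info_bits, interleaver):
    """Rate-1/3 parallel concatenation: systematic bits plus two parity streams."""
    permuted = [info_bits[i] for i in interleaver]       # interleave before the second encoder
    return info_bits, conv_parity(info_bits), conv_parity(permuted)

info = [1, 0, 1, 1, 0, 0, 1, 0]
interleaver = list(range(len(info)))
random.seed(0)
random.shuffle(interleaver)                               # a fixed pseudorandom permutation

systematic, parity1, parity2 = parallel_concatenated_encode(info, interleaver)
print(systematic, parity1, parity2)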

However, a key aspect of turbo codes is their iterated decoding approach. Iterated decoding is now also applied to serial concatenations in order to achieve higher coding gains, such as within serially concatenated convolutional codes (SCCCs). An early form of iterated decoding was implemented with two to five iterations in the “Galileo code” of the Galileo space probe.
