
Convolutional Codes

Introduction

Convolutional codes are frequently used to correct errors in noisy channels. They have rather good correcting capability and perform well even on very bad channels (with error probabilities of about 10^−3). Convolutional codes are extensively used in satellite communications. Although convolutional encoding is a simple procedure, decoding a convolutional code is a much more complex task.

Several classes of algorithms exist for this purpose:

1. Threshold decoding is the simplest of them, but it can be applied successfully only to specific classes of convolutional codes. It is also far from optimal.

2. Sequential decoding is a class of algorithms that perform much better than threshold algorithms. Their main advantage is that decoding complexity is virtually independent of the length of the particular code. Although sequential algorithms are also suboptimal, they are used successfully with very long codes, where no other algorithm is feasible. The main drawback of sequential decoding is its unpredictable decoding latency.

3. Viterbi decoding is an optimal (in a maximum-likelihood sense) algorithm for decoding a convolutional code. Its main drawback is that decoding complexity grows exponentially with the constraint length, so it can be used only for relatively short codes.

There are also soft-output algorithms, like SOVA (Soft Output Viterbi Algorithm) or MAP algorithm, which provide not only a decision, but also an estimate of its reliability. They are used primarily in the decoders of turbo codes.

Convolutional Encoders

  1. Like any error-correcting code, a convolutional code works by adding some structured redundant information to the user’s data and then correcting errors using this information.
  2. A convolutional encoder is a linear system.
  3. A binary convolutional encoder can be represented as a shift register. The outputs of the encoder are modulo-2 sums of the values in certain cells of the register. The input to the encoder is either the unencoded sequence (for non-recursive codes) or the unencoded sequence added to the values of some register cells (for recursive codes).
  4. Convolutional codes can be systematic or non-systematic. Systematic codes are those where the unencoded sequence is a part of the output sequence. Systematic codes are almost always recursive; conversely, non-recursive codes are almost always non-systematic.
  5. The combination of register cells that forms one of the output streams (or that is added to the input stream for recursive codes) is defined by a polynomial. Let m be the maximum degree of the polynomials constituting the code; then K = m + 1 is the constraint length of the code.
  6. The code rate is the inverse of the number of output polynomials (a small encoder sketch follows this list).
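
As an illustration of items 3 to 6, here is a minimal encoder sketch in Python. The constraint length K = 3 and the generator polynomials 111 and 101 (7 and 5 in octal) are assumptions chosen for illustration; they are not taken from the article's figure.

    # A minimal non-recursive, non-systematic encoder sketch: assumed K = 3,
    # rate 1/2, generator polynomials 111 and 101.
    K = 3                      # constraint length (m = 2 memory cells, K = m + 1)
    G = [0b111, 0b101]         # each polynomial selects the register cells to XOR

    def encode(bits):
        state = 0                                  # K - 1 memory bits, newest bit rightmost
        out = []
        for b in bits:
            reg = (state << 1) | b                 # register contents including the new bit
            out.extend(bin(reg & g).count("1") & 1 for g in G)   # modulo-2 sum per polynomial
            state = reg & ((1 << (K - 1)) - 1)     # keep the last K - 1 bits as the new state
        return out

    # encode([1, 0, 1, 1]) -> [1, 1, 1, 0, 0, 0, 0, 1]: one pair of output bits per input bit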

Trellis Diagram

  • A convolutional encoder is often seen as a finite state machine.
  • Each state corresponds to some value of the encoder’s register.
  • Given the input bit value, from a certain state the encoder can move to two other states.
  • These state transitions constitute a diagram which is called a trellis diagram.
  • A trellis diagram for the code is depicted in the figure.
  • A solid line corresponds to input 0, a dotted line – to input 1 (note that encoder states are designated in such a way that the rightmost bit is the newest one).
  • Each path on the trellis diagram corresponds to a valid sequence from the encoder’s output.
  • Conversely, any valid sequence from the encoder’s output can be represented as a path on the trellis diagram.
  • One of the possible paths is shown in red as an example.
  • Note that each state transition on the diagram corresponds to a pair of output bits.
  • There are only two allowed transitions for every state, so there are two allowed pairs of output bits, and the two other pairs are forbidden.
  • If an error occurs, it is very likely that the receiver will get a set of forbidden pairs, which don’t constitute a path on the trellis diagram.
  • So, the task of the decoder is to find the path on the trellis diagram that is the closest match to the received sequence (a transition-table sketch of the trellis follows this list).
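
To make the trellis concrete, the sketch below enumerates the state transitions of the assumed K = 3, rate-1/2 code from the encoder sketch above: for every state and input bit it records the next state and the emitted pair of output bits.

    # Trellis of the assumed K = 3, rate 1/2 code: each state holds the last
    # K - 1 input bits (newest bit rightmost, matching the convention above).
    K, G = 3, [0b111, 0b101]

    def step(state, bit):
        reg = (state << 1) | bit
        output = tuple(bin(reg & g).count("1") & 1 for g in G)
        return reg & ((1 << (K - 1)) - 1), output

    trellis = {s: {b: step(s, b) for b in (0, 1)} for s in range(1 << (K - 1))}

    # trellis[0b00][1] == (0b01, (1, 1)): from state 00, input 1 moves the encoder
    # to state 01 and emits the pair "11"; every state has exactly two outgoing
    # and two incoming transitions.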

Let’s define the free distance d_f as the minimal Hamming distance between two different allowed binary sequences (the Hamming distance is the number of bits in which two sequences differ).

The free distance is an important property of a convolutional code. It determines the number of closely spaced errors the decoder is able to correct.

Viterbi Algorithm

We will use the code generated by the encoder in the figure as an example.

Basic Definitions

Ideally, the Viterbi algorithm reconstructs the maximum-likelihood path given the input sequence.

Let’s define some terms:

Soft Decision Decoder

  • A soft decision decoder is a decoder receiving bits from the channel with some kind of reliability estimate.
  • Three bits are usually sufficient for this task.
  • Further increasing the soft decision width increases performance only slightly while considerably increasing computational complexity.
  • For example, if we use a 3-bit soft decision, then “000” is the strongest zero, “011” is the weakest zero, “100” is the weakest one and “111” is the strongest one (a quantizer sketch follows this list).
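
One possible way to produce such 3-bit soft values is sketched below; the nominal signal levels (a zero received as −1.0, a one as +1.0) and the uniform mapping are assumptions made for illustration.

    # Assumed 3-bit soft-decision quantizer: a sample near -1.0 is a strong zero,
    # a sample near +1.0 is a strong one; the result 0..7 corresponds to
    # "000" (strongest zero) ... "111" (strongest one).
    def quantize3(sample):
        level = int(round((sample + 1.0) / 2.0 * 7))   # scale [-1, +1] onto 0..7
        return max(0, min(7, level))                   # clip out-of-range samples

    # quantize3(-0.9) -> 0, quantize3(-0.1) -> 3 (weak zero), quantize3(0.1) -> 4 (weak one)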

Hard Decision Decoder

  • A hard decision decoder is a decoder which receives only bits from the channel (without any reliability estimate).

Branch Metric

  • A branch metric is the distance between the received pair of bits and one of the four “ideal” pairs (“00”, “01”, “10”, “11”).

Path Metric

A path metric is a sum of metrics of all branches in the path.

Distance

The meaning of distance depends on the type of the decoder:

  • for a hard decision decoder it is the Hamming distance, i.e. the number of differing bits;
  • for a soft decision decoder it is the Euclidean distance.

In these terms, the maximum-likelihood path is a path with the minimal path metric. Thus the problem of decoding is equivalent to the problem of finding such a path.

Let’s suppose that for every possible encoder state we know the path with the minimum metric ending in that state. For any given encoder state there are two (and only two) states from which the encoder can move into it, and for both of these transitions we know the branch metrics. So, on the next step there are only two candidate paths ending in any given state. The one with the smaller metric is the survivor path.

The other path is dropped as less likely. Thus we know a path with the minimum metric ending in every state on the next step, and the above procedure can be repeated.

Implementation

The Viterbi algorithm consists of the following three major parts:

  1. Branch metric calculation – calculation of a distance between the input pair of bits and the four possible “ideal” pairs (“00”, “01”, “10”, “11”).
  2. Path metric calculation – for every encoder state, calculate a metric for the survivor path ending in this state (a survivor path is a path with the minimum metric).
  3. Trace back – this step is necessary for hardware implementations that don’t store full information about the survivor paths, but store only one bit decision every time when one survivor path is selected from the two.

Branch Metric Calculation

Methods of branch metric calculation are different for hard decision and soft decision decoders.

For a hard decision decoder, a branch metric is a Hamming distance between the received pair of bits and the “ideal” pair. Therefore, a branch metric can take values of 0, 1 and 2. Thus for every input pair we have 4 branch metrics (one for each pair of “ideal” values).

For a soft decision decoder, a branch metric is measured using the Euclidean distance. Let x be the first received bit in the pair, y – the second, x0 and y0 – the “ideal” values. Then branch metric is

Mb = (x − x0)^2 + (y − y0)^2

Furthermore, when we calculate the 4 branch metrics for a soft decision decoder, we don’t actually need to know the absolute metric values – only the differences between them matter. So, nothing will change if we subtract the same value from all four branch metrics:

Mb = (x^2 − 2·x·x0 + x0^2) + (y^2 − 2·y·y0 + y0^2)

Mb* = Mb − x^2 − y^2 = (x0^2 − 2·x·x0) + (y0^2 − 2·y·y0)

Note that the second formula, Mb*, can be calculated without hardware multiplication: x0^2 and y0^2 can be pre-calculated, and multiplication of x by x0 and of y by y0 is easy to implement in hardware given that x0 and y0 are constants.

It should also be noted that Mb* is a signed variable and should be calculated in 2’s complement format.
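
The sketch below computes the four branch metrics for one received pair, using the Hamming distance for the hard decision case and the simplified metric Mb* derived above for the soft decision case. The ideal soft levels 0 and 7 (a 3-bit soft decision) are an assumption.

    # Four branch metrics for one received pair against the ideal pairs
    # "00", "01", "10", "11".
    IDEAL_PAIRS = [(0, 0), (0, 1), (1, 0), (1, 1)]

    def hard_branch_metrics(x, y):
        # Hamming distance between the received pair and each ideal pair: 0, 1 or 2
        return [(x ^ x0) + (y ^ y0) for x0, y0 in IDEAL_PAIRS]

    def soft_branch_metrics(x, y, levels=(0, 7)):
        # Mb* = (x0^2 - 2*x*x0) + (y0^2 - 2*y*y0), with x0 and y0 the assumed ideal
        # soft levels; the received values are never squared, and the result is a
        # signed quantity.
        def part(v, ideal_bit):
            v0 = levels[ideal_bit]
            return v0 * v0 - 2 * v * v0
        return [part(x, x0) + part(y, y0) for x0, y0 in IDEAL_PAIRS]

    # hard_branch_metrics(1, 0) -> [1, 2, 0, 1]
    # soft_branch_metrics(5, 2) -> [0, 21, -21, 0]; the smallest value belongs to
    # the most likely ideal pair ("10" here)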

Path Metric Calculation

Path metrics are calculated using a procedure called ACS (Add-Compare-Select). This procedure is repeated for every encoder state.

  1. Add – for a given state, we know the two states on the previous step which can move to this state, and the output bit pairs that correspond to these transitions. To calculate the new path metrics, we add the previous path metrics to the corresponding branch metrics.
  2. Compare, select – we now have two paths ending in the given state. The one with the greater metric is dropped.

As there are 2^(K−1) encoder states, we have 2^(K−1) survivor paths at any given time.
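
A sketch of one ACS step over all states is given below. It assumes the trellis dictionary from the earlier sketch and a mapping from each ideal output pair to its branch metric for the current received pair, and it records one decision bit per state for the later traceback.

    # One Add-Compare-Select step for all 2^(K-1) states. `trellis` is the dict
    # from the earlier sketch; `branch_metrics` maps an ideal output pair such as
    # (1, 0) to its branch metric for the current received pair.
    def acs_step(path_metrics, branch_metrics, trellis, K=3):
        n_states = 1 << (K - 1)
        new_metrics = [float("inf")] * n_states
        decisions = [0] * n_states          # dropped oldest bit of the surviving predecessor
        for prev_state, branches in trellis.items():
            for bit, (next_state, out_pair) in branches.items():
                metric = path_metrics[prev_state] + branch_metrics[out_pair]   # add
                if metric < new_metrics[next_state]:                           # compare
                    new_metrics[next_state] = metric                           # select
                    decisions[next_state] = prev_state >> (K - 2)
        return new_metrics, decisions

One such step is performed for every received pair of bits; the decision lists are accumulated and consumed later by the traceback.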

It is important that the difference between two survivor path metrics cannot exceed δ·log2(N) = δ·(K−1), where δ is the difference between the maximum and minimum possible branch metrics and N = 2^(K−1) is the number of encoder states.

The problem with path metrics is that they tend to grow constantly and will eventually overflow.

But, since the absolute values of path metric don’t actually matter, and the difference between them is limited, a data type with a certain number of bits will be sufficient.

There are two ways of dealing with this problem:

  1. Since the absolute values of path metric don’t actually matter, we can at any time subtract an identical value from the metric of every path. It is usually done when all path metrics exceed a chosen threshold (in this case the threshold value is subtracted from every path metric). This method is simple, but not very efficient when implemented in hardware.
  2. The second approach allows overflow, but uses a sufficient number of bits to be able to detect whether the overflow took place or not. The compare procedure must be modified in this case.

The whole range of the data type is divided into 4 equal parts. If one path metric is in the 3rd quarter and the other is in the 0th, then an overflow has taken place and the path whose metric lies in the 3rd quarter should be selected (its metric is actually the smaller one). In other cases an ordinary compare procedure is applied.

This works because the difference between path metrics can’t exceed a threshold value, and the range of the path metric variable is selected so that it is at least two times greater than this threshold.
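
A common way to realize this comparison is to let the metrics wrap around in a fixed-width register and to compare them through the wrapped difference interpreted as a signed two's complement number; this amounts to the same quarter-based rule as long as the real metric spread stays below half of the representable range. The 8-bit width below is an assumption.

    # Comparing path metrics that are allowed to overflow (assumed 8-bit metrics):
    # the wrapped difference, read as a signed two's complement number, gives the
    # correct ordering as long as the real difference is below half the range.
    BITS = 8
    MASK = (1 << BITS) - 1

    def metric_less_than(a, b):
        diff = (a - b) & MASK                  # difference modulo 2^BITS
        if diff >= 1 << (BITS - 1):            # top bit set: negative in two's complement
            diff -= 1 << BITS
        return diff < 0                        # True when `a` is the smaller metric

    # metric_less_than(250, 3) -> True: 250 lies in the 3rd quarter, 3 in the 0th,
    # so the metric that wrapped around (3) is treated as the larger one.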

Traceback

It has been proven that all survivor paths merge after decoding a sufficiently large block of data, i.e. they differ only in their endings and share a common beginning.

If we decode a continuous stream of data, we want our decoder to have finite latency. It is obvious that when some part of the path at the beginning of the graph belongs to every survivor path, the decoded bits corresponding to this part can be sent to the output. Given the above statement, we can perform the decoding as follows:

  1. Find the survivor paths for N+D input pairs of bits.
  2. Trace back from the end of any survivor path to the beginning.
  3. Send N bits to the output.
  4. Find the survivor paths for another N pairs of input bits.
  5. Go to step 2.

In this procedure D is an important parameter called the decoding depth. The decoding depth should be sufficiently large for quality decoding, no less than 5K. Increasing D decreases the probability of a decoding error, but also increases latency.

As for N, it specifies how many bits we send to the output after each traceback. For example, if N=1, the latency is minimal, but the decoder needs to perform a full traceback at every step, which is computationally inefficient. In hardware implementations N usually equals D.
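
A traceback sketch consistent with the ACS sketch above is shown below. For brevity it traces one stored block back from the state with the best final metric and outputs all bits at once, rather than following the windowed N and D schedule described above.

    # Traceback over one stored block. `decision_history` holds one decisions list
    # per ACS step, as produced by the acs_step sketch; with the newest-bit-rightmost
    # state convention, the decoded bit at each step is the rightmost state bit.
    def traceback(decision_history, final_metrics, K=3):
        state = min(range(len(final_metrics)), key=lambda s: final_metrics[s])
        bits = []
        for decisions in reversed(decision_history):
            bits.append(state & 1)                                  # decoded input bit
            state = (decisions[state] << (K - 2)) | (state >> 1)    # predecessor state
        bits.reverse()
        return bits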

Viterbi Algorithm

The Viterbi algorithm is a well-known maximum-likelihood algorithm for decoding convolutional codes.

The terms “Viterbi path” and “Viterbi algorithm” are also applied to related dynamic programming algorithms that discover the single most likely explanation for an observation. For example, in statistical parsing a dynamic programming algorithm can be used to discover the single most likely context-free derivation (parse) of a string, which is sometimes called the “Viterbi parse”.

The Viterbi algorithm was conceived by Andrew Viterbi in 1967 as a decoding algorithm for convolutional codes over noisy digital communication links. For more details on the history of the development of the algorithm see David Forney’s article.[1] The algorithm has found universal application in decoding the convolutional codes used in both CDMA and GSM digital cellular, dial-up modems, satellite, deep-space communications, and 802.11 wireless LANs. It is now also commonly used in speech recognition, keyword spotting, computational linguistics, and bioinformatics. For example, in speech-to-text (speech recognition), the acoustic signal is treated as the observed sequence of events, and a string of text is considered to be the “hidden cause” of the acoustic signal. The Viterbi algorithm finds the most likely string of text given the acoustic signal.

Introduction

 The Viterbi algorithm is a dynamic programming algorithm for finding the most likely sequence of hidden states – called the Viterbi path – that results in a sequence of observed events, especially in the context of Markov information sources, and more generally, hidden Markov models. The forward algorithm is a closely related algorithm for computing the probability of a sequence of observed events. These algorithms belong to the realm of information theory.

Assumptions

The algorithm makes a number of assumptions.

  • First, both the observed events and hidden events must be in a sequence. This sequence often corresponds to time.
  • Second, these two sequences need to be aligned, and an instance of an observed event needs to correspond to exactly one instance of a hidden event.
  • Third, computing the most likely hidden sequence up to a certain point t must depend only on the observed event at point t, and the most likely sequence at point t − 1.

These assumptions are all satisfied in a first-order hidden Markov model.

Overview

Assumption 1

  • The Viterbi algorithm operates on a state machine assumption.
  • That is, at any time the system being modeled is in one of a finite number of states.
  • While multiple sequences of states (paths) can lead to a given state, at least one of them is a most likely path to that state, called the “survivor path”.
  • This is a fundamental assumption of the algorithm because the algorithm will examine all possible paths leading to a state and only keep the one most likely.
  • This way the algorithm does not have to keep track of all possible paths, only one per state.

Assumption 2

  • A second key assumption is that a transition from a previous state to a new state is marked by an incremental metric, usually a number.
  • This transition is computed from the event.

Assumption 3

  • The third key assumption is that the events are cumulative over a path in some sense, usually additive.
  • So the crux of the algorithm is to keep a number for each state.
  • When an event occurs, the algorithm examines moving forward to a new set of states by combining the metric of a possible previous state with the incremental metric of the transition due to the event and chooses the best.
  • The incremental metric associated with an event depends on the transition possibility from the old state to the new state.

Path history must be stored. In some cases, the search history is complete because the state machine at the encoder starts in a known state and there is sufficient memory to keep all the paths. In other cases, a programmatic solution must be found for limited resources: one example is convolutional encoding, where the decoder must truncate the history at a depth large enough to keep performance to an acceptable level. Although the Viterbi algorithm is very efficient and there are modifications that reduce the computational load, the memory requirements tend to remain constant.
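
In the hidden Markov model setting the same survivor-path idea can be sketched as follows. The state names, probabilities and observations are assumed to be supplied by the model at hand, and log probabilities are used only to avoid numerical underflow on long observation sequences.

    # Minimal Viterbi sketch for a hidden Markov model. `start_p`, `trans_p` and
    # `emit_p` are dictionaries of (non-zero) probabilities; the function returns
    # the log probability and state sequence of the most likely (Viterbi) path.
    import math

    def viterbi(observations, states, start_p, trans_p, emit_p):
        # best[s] = (log probability, path) of the most likely path ending in state s
        best = {s: (math.log(start_p[s] * emit_p[s][observations[0]]), [s]) for s in states}
        for obs in observations[1:]:
            new_best = {}
            for s in states:
                # keep only the best predecessor of state s (its "survivor path")
                logp, prev = max(
                    (best[p][0] + math.log(trans_p[p][s] * emit_p[s][obs]), p)
                    for p in states)
                new_best[s] = (logp, best[prev][1] + [s])
            best = new_best
        return max(best.values(), key=lambda t: t[0])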

Extensions

With the algorithm called iterative Viterbi decoding one can find the subsequence of an observation that matches best (on average) to a given HMM. Iterative Viterbi decoding works by iteratively invoking a modified Viterbi algorithm, re-estimating the score for filler until convergence.

An alternative algorithm, the Lazy Viterbi algorithm, has been proposed recently. This works by not expanding any nodes until it really needs to, and usually manages to get away with doing a lot less work (in software) than the ordinary Viterbi algorithm for the same result – however, it is not so easy to parallelize in hardware.

The Viterbi algorithm has been extended to operate with a deterministic finite automaton in order to quickly generate the trellis with state transitions pointing back at a variable amount of history.