## Digital Communications

#### Information Theory and Error Control Coding Overview

4/11/2020

Running Time: 4:23

This playlist provides a brief introduction to information theory and error control coding. Both areas are quite vast, so we only touch on a few topics. The first main topic is channel capacity: the maximum data rate that can be achieved on a communication channel with an arbitrarily low probability of error. The second topic is block coding: a technique for mapping k-bit input words to n-bit output words in a way that provides robustness against bit errors during data transmission.

#### Channel Capacity and the Channel Error Exponent

4/11/2020

Running Time: 8:39

The capacity of a communication channel, C, is the maximum data rate at which we can communicate with an arbitrarily low probability of error. The channel error exponent E(Rm) is a function of the communication rate Rm, and lets us bound the probability of bit error. As long as the channel error exponent E(Rm) is positive, we can increase the transmission time T to achieve an arbitrarily low error rate. Importantly, the channel error exponent is positive as long as Rm is less than C.
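The bound described above is commonly written in the form P(error) ≤ 2^(−T·E(Rm)), though the exact base and notation vary by text; this sketch assumes that form and an illustrative (made-up) exponent value to show how the bound shrinks with T:

```python
def error_bound(T, E):
    """Upper bound on the error probability, P(error) <= 2^(-T * E(Rm)).

    T: transmission time (or block length, depending on convention)
    E: channel error exponent E(Rm), positive whenever Rm < C
    """
    return 2.0 ** (-T * E)

# With a positive exponent, the bound decays exponentially as T grows:
for T in (10, 100, 1000):
    print(T, error_bound(T, E=0.1))
```

The key point from the video is visible here: any positive E(Rm), however small, drives the bound to zero as T increases.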

#### Shannon's Capacity Formula

4/11/2020

Running Time: 10:41

The channel capacity equation for the special case of an additive white Gaussian noise (AWGN) channel has a simple form. This equation is often called the "Shannon Capacity Equation" or the "Shannon-Hartley Theorem". This video provides the equation, and also examines it when operating at a data rate Rm that achieves capacity (i.e. Rm = C). Setting Rm = C traces out a curve that divides the Eb/N0 vs Rm/B plane into two regions. To the left of the curve we operate within the channel capacity and can achieve arbitrarily small error rates. To the right of the curve we exceed channel capacity and cannot provide guaranteed performance.
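The Shannon-Hartley formula is C = B·log2(1 + S/N), and setting Rm = C gives the capacity-limit relation Eb/N0 = (2^(Rm/B) − 1)/(Rm/B). A minimal sketch of both (the example numbers are illustrative, not from the video):

```python
import math

def shannon_capacity(B, snr):
    """Shannon-Hartley capacity C = B * log2(1 + S/N), in bits/s.

    B: channel bandwidth in Hz; snr: linear (not dB) signal-to-noise ratio.
    """
    return B * math.log2(1 + snr)

def ebno_at_capacity(eta):
    """Required Eb/N0 (linear) when operating at capacity, Rm = C,
    as a function of spectral efficiency eta = Rm/B."""
    return (2 ** eta - 1) / eta

# Example: 1 MHz bandwidth with a linear SNR of 15 gives C = 4 Mbit/s.
print(shannon_capacity(1e6, 15))

# As eta -> 0, Eb/N0 approaches ln(2) ~ 0.693 (about -1.59 dB),
# the ultimate Shannon limit for reliable communication.
print(ebno_at_capacity(1e-6))
```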

#### Block Coding Introduction

4/11/2020

Running Time: 10:29

Block codes are an error control coding technique. They add extra bits of information to the original data bits to provide error detection and correction capabilities. An (n,k) block code takes a k-bit input word w, and maps it to an n-bit codeword x. The code rate of a block code is R = k/n. We discuss how the mapping from message words to codewords should be one-to-one, and ideally, the codewords should be as "different" as possible. This seems like an intuitive/reasonable strategy for constructing a block code. In the next video, we formally derive the optimal decoding rule and see how codeword distance is indeed an important design factor.
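As a toy illustration of these definitions (not a code from the video), the (3,1) repetition code maps each 1-bit message word one-to-one to a maximally "different" 3-bit codeword:

```python
# Toy (n, k) = (3, 1) repetition code: each 1-bit message word w maps
# one-to-one to a 3-bit codeword x, so the code rate is R = k/n = 1/3.
codebook = {
    (0,): (0, 0, 0),
    (1,): (1, 1, 1),
}

def encode(w):
    """Map a k-bit message word to its n-bit codeword."""
    return codebook[tuple(w)]

print(encode([1]))  # (1, 1, 1)
```

The two codewords differ in all three positions, which is exactly the "as different as possible" property the video describes.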

#### The Hamming Distance

4/12/2020

Running Time: 4:05

The Hamming distance is a measure of distance between two codewords. We denote the distance between y and x as dH(y,x). The distance is easily computed by summing the bit-wise modulo-2 sum (XOR) of the two vectors. In other words, the Hamming distance between two vectors is just the number of bit positions where they differ.
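The computation described above can be sketched in a few lines (the example vectors are illustrative):

```python
def hamming_distance(x, y):
    """dH(y, x): the number of bit positions where x and y differ,
    computed as the sum of the bit-wise modulo-2 sum (XOR)."""
    assert len(x) == len(y)
    return sum(xi ^ yi for xi, yi in zip(x, y))

print(hamming_distance([1, 0, 1, 1], [1, 1, 0, 1]))  # 2
```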

#### Block Decoding Minimum Distance Decoding Rule Derivation

4/12/2020

Running Time: 12:45

We derive the optimum decision rule for block coding on the binary symmetric channel (BSC). We start by writing an expression for the posterior probability P(xm | y). We want to choose the xm that maximizes this probability for the value of y observed at the receiver. We re-write this expression using Bayes' rule, and show that maximizing this posterior probability is equivalent to choosing the codeword xm that has minimum distance to y. So, our optimal decision is simply the codeword xm that has the smallest Hamming distance dH(xm,y). This is why the decoding operation is referred to as a minimum distance decoding rule.
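A minimal sketch of the resulting decoder, using a toy (3,1) repetition codebook as a stand-in for a real code (the derivation assumes equally likely codewords and a BSC crossover probability below 1/2):

```python
def hamming_distance(x, y):
    """dH(x, y): number of bit positions where x and y differ."""
    return sum(xi ^ yi for xi, yi in zip(x, y))

def minimum_distance_decode(y, codewords):
    """Return the codeword xm with the smallest Hamming distance to the
    received word y -- the optimal decision rule on the BSC when the
    codewords are equally likely and the crossover probability is < 1/2."""
    return min(codewords, key=lambda xm: hamming_distance(xm, y))

codewords = [(0, 0, 0), (1, 1, 1)]  # toy (3, 1) repetition code
print(minimum_distance_decode((1, 0, 1), codewords))  # (1, 1, 1)
```

On the BSC, P(y | xm) = p^d (1 − p)^(n−d) with d = dH(xm, y), so for p < 1/2 the likelihood is largest for the smallest d, which is why minimizing Hamming distance is optimal.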