I'm a full-time electrical engineer with a background in communication theory and signal processing.  I'm a part-time lecturer at the University of Alabama in Huntsville where I teach courses in linear systems, digital communications, and random processes.  I'm also an online adjunct faculty member at Southern New Hampshire University where I teach courses in deductive reasoning and applied linear algebra.

This is my personal website and it primarily contains resources for the different courses I teach. All video content can also be found on my YouTube channel (click and subscribe below), but this website provides more flexibility in organizing the information.

If you're a student studying signals & systems, random processes, communication theory, linear algebra, Matlab, or topics in advanced mathematics, you should find this website very helpful.  The site now contains 610+ videos totaling more than 83 hours of content.

Teaching is "just" my part-time job.  Please visit my LinkedIn profile to see information about current/past employment, contributions I've made to various projects, a list of publications, and other typical CV/resume information.

Inner Product Introduction

4/15/20

Running Time: 6:05

The inner product, defined on some vector space V, is a function that maps two vectors to a scalar. The inner product between vectors x and y is denoted <x,y>. We discuss the four properties that an inner product must satisfy. For the special case where the vector space is R^n, the inner product is the dot product. Writing x and y as row vectors, the dot product is xy^T (i.e. the product of x with the transpose of y), which equals the sum of the elementwise products. We work an example of computing the dot product between two vectors.
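As a minimal sketch of the computation described above (the vectors here are hypothetical examples, not the ones worked in the video), the dot product on R^n is just the sum of elementwise products:

```python
def dot(x, y):
    """Dot product <x, y> on R^n: the sum of elementwise products."""
    assert len(x) == len(y)
    return sum(xi * yi for xi, yi in zip(x, y))

x = [1.0, 2.0, 3.0]
y = [4.0, -1.0, 2.0]
print(dot(x, y))  # 1*4 + 2*(-1) + 3*2 = 8.0
```

Note that the result is symmetric in x and y, one of the inner product properties the video discusses.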

Information Theory and Error Control Coding Overview

4/11/20

Running Time: 4:23

This playlist provides a brief introduction to information theory and error control coding. These areas are quite vast, so we only touch on a few topics. The first main topic is channel capacity: the maximum data rate that can be achieved on a communication channel with arbitrarily low probability of error. The second topic is block coding: a technique for mapping k-bit input words to n-bit output words in a way that provides robustness against bit errors during data transmission.
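To make the two topics concrete, here is a small sketch (the parameter values are assumptions for illustration, not figures from the videos): the Shannon-Hartley formula C = B*log2(1 + SNR) for capacity, and a toy (n=3, k=1) repetition block code whose majority-vote decoder corrects any single bit error per block:

```python
import math

def shannon_capacity(bandwidth_hz, snr_linear):
    """Shannon-Hartley channel capacity C = B*log2(1 + SNR), in bits/s."""
    return bandwidth_hz * math.log2(1 + snr_linear)

def rep3_encode(bits):
    """(3,1) repetition block code: each input bit maps to 3 output bits."""
    return [b for b in bits for _ in range(3)]

def rep3_decode(coded):
    """Majority-vote decoding: corrects one flipped bit per 3-bit block."""
    blocks = [coded[i:i + 3] for i in range(0, len(coded), 3)]
    return [1 if sum(blk) >= 2 else 0 for blk in blocks]

# e.g. a 3 kHz channel at 30 dB SNR (SNR = 1000 in linear units)
print(shannon_capacity(3000, 1000))  # about 29900 bits/s

msg = [1, 0, 1]
tx = rep3_encode(msg)   # [1,1,1, 0,0,0, 1,1,1]
tx[1] ^= 1              # flip one bit during "transmission"
print(rep3_decode(tx))  # [1, 0, 1] -- the error is corrected
```

The repetition code trades rate (k/n = 1/3) for error robustness, which is the basic tension block coding manages.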

Carrier Synchronization Introduction

4/7/20

Running Time: 3:45

Several different types of synchronization are often required in a digital communication system. Carrier synchronization is required if processing at the receiver requires a coherent reference, symbol synchronization is required to know when individual symbols start/stop, and frame synchronization is often required to know when frames of data start/stop. This video introduces some of these synchronization concepts, and the following videos investigate specific schemes in more detail.

Introduction to Digital Filter Design

11/15/19

Running Time: 9:26

This playlist of videos provides a short introduction to designing digital filters.  In this first video, we review some basic filtering concepts.  The Frequency Response of a filter describes how the amplitude and phase of an input signal change as a function of signal frequency.  For an input consisting of a single-frequency cosine, we derive the output signal, showing that the input amplitude is scaled by a factor of |H(omega)| and the phase is shifted by an amount arg(H(omega)).  We call |H(omega)| the Amplitude Response of the system and arg(H(omega)) the Phase Response of the system.  In subsequent videos, we'll learn how to design digital filters that have desired amplitude response and phase response characteristics.
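The amplitude/phase property above can be checked numerically. The sketch below uses a hypothetical two-tap averaging filter y[k] = (x[k] + x[k-1])/2 (an example of my choosing, not necessarily the filter in the video), whose frequency response is H(omega) = (1 + e^{-j*omega})/2, and verifies that a cosine input emerges scaled by |H(omega)| and shifted by arg(H(omega)):

```python
import cmath
import math

def H(omega):
    """Frequency response of the two-tap averager y[k] = (x[k]+x[k-1])/2."""
    return (1 + cmath.exp(-1j * omega)) / 2

omega0 = math.pi / 4
gain = abs(H(omega0))           # Amplitude Response |H(omega)|
phase = cmath.phase(H(omega0))  # Phase Response arg(H(omega))

# Filter one sample of cos(omega0*k) and compare it against the
# predicted steady-state output |H| * cos(omega0*k + arg(H)).
k = 10
x_k, x_km1 = math.cos(omega0 * k), math.cos(omega0 * (k - 1))
y_k = (x_k + x_km1) / 2
y_pred = gain * math.cos(omega0 * k + phase)
print(abs(y_k - y_pred) < 1e-12)  # True
```

For this filter, |H(omega)| = cos(omega/2) and arg(H(omega)) = -omega/2, so the averager is a (mild) lowpass filter with linear phase.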

The Inverse Z-Transform by Long Division Example #1

4/12/19

Running Time: 7:12

The partial fraction expansion (PFE) approach to finding the inverse Z-transform is great for finding a time-domain equation that is valid for all time. Sometimes, however, only a few values of the time-domain signal are needed. In that case, the inverse Z-transform can be found via long division, which computes time-domain samples of the signal one value at a time. In this video, we show how to use the long division approach to compute several values of a right-sided time-domain signal. For right-sided signals, the division operation should result in "z" raised to negative powers (e.g. z^-1, z^-2, etc.), since negative powers of z correspond to time-domain samples at positive values of time.
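The long division process described above can be mechanized: each division step produces one more coefficient of a negative power of z, i.e. one more time-domain sample. Below is a small sketch (the transform used is a hypothetical example, not the one worked in the video), with numerator and denominator given as coefficient lists in ascending powers of z^-1:

```python
def long_division(num, den, n_samples):
    """Compute x[0..n_samples-1] from X(z) = num/den by long division.

    num and den hold coefficients in ascending powers of z^-1,
    e.g. den = [1.0, -0.5] means 1 - 0.5*z^-1. Each step peels off
    one more time-domain sample of the right-sided signal.
    """
    x = []
    for k in range(n_samples):
        nk = num[k] if k < len(num) else 0.0
        acc = sum(den[i] * x[k - i] for i in range(1, min(k, len(den) - 1) + 1))
        x.append((nk - acc) / den[0])
    return x

# Example: X(z) = 1 / (1 - 0.5*z^-1), whose inverse is x[k] = (0.5)^k u[k]
print(long_division([1.0], [1.0, -0.5], 5))  # [1.0, 0.5, 0.25, 0.125, 0.0625]
```

Note the result matches the known closed form (0.5)^k, but the division itself never needed a partial fraction expansion.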

Derivation of the Discrete-Time Fourier Series Coefficients

10/12/18

Running Time: 12:19

In this video, we derive an equation for the Discrete-Time Fourier Series (DTFS) coefficients of the periodic discrete-time signal x[k].  Given this N0-periodic signal, the equation we derive lets us compute the N0 DTFS coefficients as a function of x[k].  In subsequent videos, we will use this equation to compute the DTFS coefficients for specific periodic discrete-time signals.
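The analysis equation derived in the video has the standard form D_r = (1/N0) * sum over k = 0..N0-1 of x[k]*exp(-j*r*(2*pi/N0)*k). As a sketch of how it is used (the N0 = 4 signal below is a hypothetical example), the coefficients can be computed and then checked by re-synthesizing x[k] from them:

```python
import cmath
import math

def dtfs_coefficients(x):
    """D_r = (1/N0) * sum_{k=0}^{N0-1} x[k] * exp(-j*r*(2*pi/N0)*k)."""
    N0 = len(x)
    w0 = 2 * math.pi / N0
    return [sum(x[k] * cmath.exp(-1j * r * w0 * k) for k in range(N0)) / N0
            for r in range(N0)]

def dtfs_reconstruct(D):
    """Synthesis: x[k] = sum_{r=0}^{N0-1} D_r * exp(j*r*(2*pi/N0)*k)."""
    N0 = len(D)
    w0 = 2 * math.pi / N0
    return [sum(D[r] * cmath.exp(1j * r * w0 * k) for r in range(N0))
            for k in range(N0)]

# One period of a hypothetical N0 = 4 periodic signal
x = [1.0, 2.0, 0.0, -1.0]
D = dtfs_coefficients(x)
xr = dtfs_reconstruct(D)
print(max(abs(xr[k] - x[k]) for k in range(4)) < 1e-12)  # True
```

As a sanity check, D_0 is the average value of the signal over one period (here 0.5).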

Difference Equation Zero-Input Response Theory

9/22/18

Running Time: 18:35

This video introduces the concept of the zero-input response of a difference equation. The zero-input response, denoted as y0[k], is the solution of a difference equation assuming zero input.  As such, this solution is only due to the initial conditions of the system.

The basic form of the zero-input response is derived, and we find that the zero-input response must be a linear combination of the characteristic modes of the system, where the characteristic modes are exponential functions.  We discuss the cases where the roots of the system's characteristic equation are distinct, repeated, or complex.

While this video just develops the general theory, terminology, and form of the zero-input response, the videos that follow work specific examples.
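As a small numerical sketch of the distinct-roots case (the system and initial conditions below are hypothetical examples, not ones from the videos): for y[k] - 0.9*y[k-1] + 0.2*y[k-2] = 0, the characteristic equation g^2 - 0.9*g + 0.2 = 0 has distinct roots 0.5 and 0.4, so y0[k] = c1*(0.5)^k + c2*(0.4)^k, with c1, c2 fixed by the initial conditions:

```python
# Characteristic roots of g^2 - 0.9*g + 0.2 = 0 (distinct-roots case)
g1, g2 = 0.5, 0.4
y0, y1 = 1.0, 0.7  # assumed initial conditions y[0], y[1]

# Solve c1 + c2 = y[0] and c1*g1 + c2*g2 = y[1] for the mode weights
c1 = (y1 - g2 * y0) / (g1 - g2)
c2 = y0 - c1

# Zero-input response as a linear combination of the characteristic modes
zir = [c1 * g1**k + c2 * g2**k for k in range(8)]

# Cross-check: iterate the recursion y[k] = 0.9*y[k-1] - 0.2*y[k-2] directly
y = [y0, y1]
for k in range(2, 8):
    y.append(0.9 * y[k - 1] - 0.2 * y[k - 2])

print(max(abs(a - b) for a, b in zip(zir, y)) < 1e-9)  # True
```

The closed-form mode combination and the raw recursion agree, which is exactly the claim the general theory makes for distinct roots.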

Iterative Solution of A Difference Equation

9/20/18

Running Time: 7:02

This video explains how to solve a difference equation using an iterative approach. Given initial conditions for the difference equation, subsequent values of the solution can be computed recursively, one at a time.  While this approach doesn't yield an analytic equation for the solution, it is often useful for solving for a handful of values of the overall solution.  The iterative approach can also be easily implemented in Matlab or another programming language to compute a large number of values of the solution in a FOR loop.
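The FOR-loop idea above looks like the following sketch (shown here in Python rather than Matlab; the first-order system, unit-step input, and initial condition are assumed examples, not the ones worked in the video):

```python
# Iteratively solve y[k] = 0.5*y[k-1] + x[k], with x[k] = 1 for k >= 0
# (a unit step input) and initial condition y[-1] = 0.
y_prev = 0.0   # y[-1]
values = []
for k in range(6):
    y_k = 0.5 * y_prev + 1.0  # one recursion step per loop iteration
    values.append(y_k)
    y_prev = y_k

print(values)  # [1.0, 1.5, 1.75, 1.875, 1.9375, 1.96875]
```

The samples visibly converge toward the steady-state value 2, even though the loop never produced an analytic expression for y[k].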

Fourier Series Expansion on an Interval

8/15/18

Running Time: 10:17

We typically use the Fourier Series (FS) to represent periodic signals.  When we do, the Fourier Series representation is equal to the signal for all time.

For non-periodic signals, we can still use a Fourier Series to represent the signal on some time interval.  The time interval we choose sets the period of the FS representation.  On the expansion interval the FS representation equals the original signal, but outside that interval the two generally will not be equal.

This video works a specific example of finding the FS representation of the continuous-time signal x(t) = exp(-alpha*t) on the time interval 0 to 10.  After computing the Fourier Series Coefficients, we plot the FS representation for different numbers of terms in the summation to see how the representation converges to the desired signal.
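A numerical sketch of this construction is below. The value alpha = 0.2 is an assumption for illustration (the video's alpha may differ); the coefficient formula is the closed form of (1/T) * integral from 0 to T of exp(-alpha*t)*exp(-j*n*w0*t) dt, which simplifies because exp(-j*n*w0*T) = 1:

```python
import cmath
import math

# x(t) = exp(-alpha*t) expanded on the interval [0, T] with T = 10
alpha, T = 0.2, 10.0          # alpha is an assumed example value
w0 = 2 * math.pi / T

def D(n):
    """Closed-form FS coefficient: (1 - e^{-alpha*T}) / (T*(alpha + j*n*w0))."""
    return (1 - math.exp(-alpha * T)) / (T * (alpha + 1j * n * w0))

def fs_partial(t, N):
    """Truncated FS synthesis sum over harmonics n = -N..N."""
    return sum(D(n) * cmath.exp(1j * n * w0 * t) for n in range(-N, N + 1)).real

t = 3.0  # a point inside the expansion interval (0, 10)
err = abs(fs_partial(t, 200) - math.exp(-alpha * t))
print(err < 1e-2)  # True: the truncated sum approaches x(t) on the interval
```

Increasing N shrinks the error at interior points, mirroring the convergence behavior plotted in the video; near the interval endpoints, convergence is slower because the periodic extension has a jump there.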

Discrete-Time Signals Introduction

7/1/18

Running Time: 2:07

This is the first video in a 14-part series that continues to introduce basic concepts of discrete-time signals and systems.  This introductory video outlines the basic topics that will be covered in the series, including:

1) Discrete-Time Signal Operations
2) Common Discrete-Time Signal Types
3) Discrete-Time System Examples and Representation