
>> Welcome to module one of Digital Signal Processing.

In this module we are going to see what signals actually are.

We are going to go through a bit of history and see the earliest examples of discrete-time signals.

Actually it goes back to Egyptian times.

Then, through this history, we will see how digital signals,

for example telegraph signals, became important in communications.

And today, how signals are pervasive in many applications,

in everyday objects.

For this, we're going to see what a signal is,

what a continuous time analog signal is,

what a discrete-time, continuous-amplitude signal is,

and how these signals relate to each other and are used in communication devices.

We are not going to have any math in this first module.

It is more illustrative and the mathematics will come later in this class.

This is an introduction to what digital signal processing is all about.

Before getting going, let's give some background material.

There is a textbook called Signal Processing for Communications by Paolo Prandoni and myself.

You can have a paper version or you can get the free PDF or HTML version

on the website indicated here on the slide.

There will be quizzes, there will be homework sets, and there will be occasional complementary lectures.

What is actually a signal?

We talk about digital signal processing, so we need to define what a signal is.

Typically, it's a description of the evolution of a physical phenomenon.

Quite simply, if I speak here, there are sound pressure waves going through the air;

that's a typical signal.
When you listen to the speech, there is a

loudspeaker creating a sound pressure
wave that reaches your ear.

And that's another signal.
However, in between is the world of

digital signal processing, because after
the microphone it gets transformed into a

set of numbers.
It is processed in the computer.

It is being transferred through the
internet.

Finally, it is decoded to create the sound
pressure wave that reaches your ears.

Other examples are the temperature
evolution over time, the magnetic

deviation of an LP recording, the grey
levels on paper in a black-and-white

photograph, or some flickering colors on a
TV screen.

Here we have a thermometer recording
temperature over time.

So you see the evolution: there are
discrete ticks, and you see how it changes

over time.
So what are the characteristics of digital

signals?
There are two key ingredients.

First there is discrete time.
As we have seen on the previous slide, on

the horizontal axis there are discrete,
evenly spaced ticks, and that corresponds

to discretization in time.
There is also discrete amplitude, because

the numbers that are measured will be
represented in a computer and cannot have

infinite precision.
So what about more sophisticated things:

functions, derivatives, and integrals?
The question of discrete versus

continuous, or analog versus discrete,
goes back probably to the earliest times of

science, for example the School of
Athens.

There was a lot of debate between
philosophers and mathematicians about the

idea of continuum, or the difference
between countable things and uncountable

things.
So in this picture, you see in green

famous philosophers like Plato; in red,
famous mathematicians like Pythagoras,

somebody we are going to meet again
in this class. And there is a famous

paradox, which is called Zeno's paradox.
If you shoot an arrow, will it ever

arrive at its destination?
We know that physics allows us to verify

that it does, but mathematics had a problem
with this, and we can see it graphically.

So you want to go from A to B: you cover
half of the distance, that is C, then three

quarters, that's D, then seven eighths, that's E,
etc. Will you ever get there? Of course

we know you get there, because the sum
from n equals 1 to infinity of 1 over 2 to the n is

equal to 1, a beautiful formula that we'll
see reappear several times in this class.
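The convergence behind Zeno's paradox can be checked numerically; here is a minimal Python sketch (the 50-term cutoff is our own choice):

```python
# Partial sums of 1/2 + 1/4 + 1/8 + ... approach 1, resolving Zeno's paradox.
partial_sum = 0.0
for n in range(1, 51):
    partial_sum += 1.0 / 2**n
print(partial_sum)  # extremely close to 1 after 50 terms
```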

Unfortunately during the middle ages in
Europe, things were a bit lost.

As you can see, people had other worries.
In the 17th century things picked up

again.
Here we have a physicist and astronomer

Galileo, and the philosopher René
Descartes, and both contributed to the

advancement of mathematics at that time.
Descartes' idea was simple but powerful.

Start with a point, put it into a
coordinate system, then add more

sophisticated things like lines, and you
can use algebra.

This led to the idea of calculus, which
made it possible to mathematically describe

physical phenomena.
For example, Galileo was able to describe

the trajectory of a bullet using
infinitesimal variations in both the

horizontal and vertical directions.
Calculus itself was formalized by Newton

and Leibniz, and is one of the great
advances of mathematics in the 17th and

18th century.
It is time to do some very simple

continuous time signal processing.
We have a function, in blue here, between a
and b, and we would like to compute its

average.
As is well known, this will be the

integral of the function divided by the
length of the interval, and it is shown

here in red dots.

What would be the equivalent in
discrete-time signal processing?

We have a set of samples between, say, 0
and capital N minus 1.

The average is simply 1 over N times the
sum of the terms x[n] between 0 and

N minus 1.
Again, it is shown as the red dotted line.

In this case, because the signal is very
smooth, the continuous-time average and

the discrete-time average are essentially
the same.
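The two averages just described can be put side by side in a few lines of Python; the signal f(t) = 2 + sin(t), the interval [0, 2], and the N = 20 samples are all assumptions for illustration:

```python
import math

# A smooth example signal (our own choice, for illustration): f(t) = 2 + sin(t).
def f(t):
    return 2 + math.sin(t)

a, b = 0.0, 2.0

# Continuous-time average: the integral of f over [a, b] divided by (b - a),
# approximated here with a fine midpoint Riemann sum.
M = 100_000
cont_avg = sum(f(a + (k + 0.5) * (b - a) / M) for k in range(M)) / M

# Discrete-time average: N samples x[n], then (1/N) times the sum of x[n].
N = 20
samples = [f(a + n * (b - a) / (N - 1)) for n in range(N)]
disc_avg = sum(samples) / N

print(cont_avg, disc_avg)  # nearly equal, since the signal is smooth
```

For a smooth signal like this one, the two numbers agree to a couple of decimal places, just as in the slide.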

This was nice and easy, but what if the
signal is too fast, and we don't know

exactly how to compute either the
continuous-time operations or an

equivalent operation on samples?
Enter Joseph Fourier, one of the greatest

mathematicians of the nineteenth century.
and the inventor of Fourier series and
Fourier analysis, which are essentially the

foundational tools of signal processing.

We simply show a picture to give the idea
of Fourier analysis.

It is a local Fourier spectrum as you
would see for example on an equalizer

table in a disco.
And it shows the distribution of power

across frequencies, something we are going
to understand in detail in this class.

But to do this discrete-time processing of
continuous-time signals, we need some

further results.
And these were derived by Harry Nyquist

and Claude Shannon, two researchers at
Bell Labs.

They derived the so-called sampling
theorem, first appearing in the 1920s and

formalized in 1948.
If the function x of t is sufficiently

slow, then there is a simple interpolation
formula for x of t: it's the sum of the

samples x[n], interpolated with a
function called the sinc function.

It looks a little bit complicated now, but
it's something we're going to study in

great detail, because it's one of the
fundamental formulas linking discrete-time

and continuous-time signal
processing.
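As a preview, the interpolation formula is usually written as follows, where T_s denotes the sampling period (notation we introduce here; the class will define everything properly later):

```latex
x(t) = \sum_{n=-\infty}^{\infty} x[n]\,
       \operatorname{sinc}\!\left(\frac{t - n T_s}{T_s}\right),
\qquad
\operatorname{sinc}(u) = \frac{\sin(\pi u)}{\pi u}.
```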

Let us look at this sampling in action.
So we have the blue curve; we take

samples, the red dots; and from the samples
we use the sinc interpolation.

We put down one interpolating sinc function,
a second one, a third one, a fourth one, etc.

When we sum them all together, we get back
the original blue curve.
It is magic.
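The summing of sinc functions just described can be sketched in Python; the test sinusoid, the sampling period, and the truncation of the (in principle infinite) sum to a finite window are all our own choices:

```python
import math

def sinc(u):
    # Normalized sinc: sin(pi u) / (pi u), with sinc(0) = 1.
    return 1.0 if u == 0 else math.sin(math.pi * u) / (math.pi * u)

Ts = 1.0  # sampling period (illustrative)

def x(t):
    # A slow test signal: one full cycle every 16 samples, well below the limit.
    return math.sin(2 * math.pi * t / 16)

# Take samples x[n] on a generous (truncated) window.
samples = {n: x(n * Ts) for n in range(-500, 501)}

def reconstruct(t):
    # Sum of sinc functions centered on each sample -- the sampling theorem.
    return sum(xn * sinc((t - n * Ts) / Ts) for n, xn in samples.items())

t = 3.37                     # an off-grid time instant
print(x(t), reconstruct(t))  # nearly identical for this slow signal
```

Evaluating the sum at a point between the samples reproduces the original curve, which is exactly the "magic" shown in the animation.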
This interaction of continuous time and

discrete time processing is summarized in
these two pictures.

On the left you have a picture of the
analog world.

On the right you have a picture of the
discrete or digital world, as you would

see in a digital camera, for example. And
this is because the world is analog:

it has continuous time and continuous space,
while the computer is digital:

it has discrete time and discrete amplitude.
When you look at an image taken with a

digital camera, you may wonder what the
resolution is.

And here we have a picture of a bird.
This bird happens to have very high visual

acuity, probably much better than mine.
Still, if you zoom into the digital

picture, after a while, around the eye
here, you see little squares appearing,

showing indeed that the picture is digital,
because it has discrete values over the

domain of the image; and it actually also
has discrete amplitude, which we cannot

quite see here at this level of resolution.
As we said, the key ingredients for

digital signals are discrete time and
discrete amplitude.

So, let us look at x of t here.
It's a sinusoid; let us investigate discrete

time first.
We see this with x[n]. Then discrete

amplitude:
we see this with these levels of the

amplitude, which are also discretized.
And so this signal looks very different

from the original continuous-time signal x
of t.

It has discrete values on the time axis
and discrete values on the vertical

amplitude axis.
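These two discretization steps can be sketched in a few lines of Python; the sinusoid, the 20 samples, and the quantization step of 0.25 are assumptions for illustration:

```python
import math

# Turn a continuous-time sinusoid into a digital signal in two steps:
# sample it (discrete time), then round each sample to a small set of
# levels (discrete amplitude).
step = 0.25  # quantization step (illustrative)

x = [math.sin(2 * math.pi * n / 20) for n in range(20)]  # x[n]: discrete time
xq = [round(v / step) * step for v in x]                 # discrete amplitude

print(xq)  # every value is now a multiple of 0.25
```

After both steps, the signal is a finite list of values drawn from a finite set of levels: exactly what a computer can store.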
So why do we need discrete amplitude?

Well, because storage is digital, because
processing is digital, and because

transmission is digital.
And you are going to see all of these in

sequence.
So data storage, which is of course very

important, used to be purely analog.
You had paper.

You had wax cylinders.
You had vinyl.

You had compact cassettes, VHS, etcetera.
For images, you had Kodachrome slides,
Super 8 film, etc.

Very complicated, a whole biodiversity of
analog storage formats.
In digital, much simpler.

There are only zeros and ones, so all
digital storage, to some extent, looks the

same.
The storage media might look very

different; here we have a collection of
storage devices from the last 25 years.

However, fundamentally there are only 0's
and 1's on these storage devices.

So in that sense, they are all compatible
with each other.

Processing also moved from analog to
digital.

On the left side, you have a few examples
of analog processing devices, an analog

watch, an analog amplifier.
On the right side you have a piece of

code.
Now this piece of code could run on many

different digital computers.
It would be compatible with all these

digital platforms.
The analog processing devices are

essentially incompatible with each other.
Data transmission has also gone from

analog to digital.
So let's look at a very simple model

here: on the left side you have the
transmitter, then a channel, and on the

right side you have a receiver.
What happens to analog signals when they

are sent over a channel?
So x of t goes through the channel; it's

first multiplied by 1 over g, because there
is path loss, and then there is noise added,

indicated here with sigma of t.
The output here is x hat of t.

Let's start with some analog signal x of
t.

Multiply it by 1 over g, and add some
noise.

How do we recover a good reproduction of x
of t?

Well, we can compensate for the path loss,
so we multiply by g, to get x hat 1 of t.

But the problem is that x hat 1 of t is x
of t, that's the good news, plus g times

sigma of t, so the noise has been amplified.

Let's see this in action.
We start with x of t, it is scaled by 1 over g,

some noise is added, and we multiply by g.
And indeed now we have a very noisy

signal.
This was the idea behind the transatlantic

cables, which were laid in the 19th century
and were essentially analog devices until

telegraph signals were properly encoded as
digital signals.

As can be seen in this picture, this was
quite an adventure to lay a cable across

the Atlantic and then to try to transmit
analog signals across these very long

distances.
For a long channel, because the path loss
is so big, you need to put in repeaters.

So the process we have just seen would be
repeated capital N times.

Each time, the path loss would be
compensated, but the noise would be

amplified, so after N segments it has
grown by a factor of N.

Let us see this in action: we start with x
of t, path loss by g, added noise,

amplification by g, with the amplification
of the noise along with the

signal.
For the second segment we have the path

loss again, so x hat 1 is divided by g,
and noise is added; then we amplify to get x

hat 2 of t, which now has twice the amount
of noise, 2 g times sigma of t.

So, if we do this N times, you can see
that the analog signal, after repeated

amplification, is mostly noise.

And that becomes problematic to transmit
information.
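The repeater chain just described can be simulated in a few lines; this is a sketch of the lecture's model with illustrative numbers (g, the noise level, and N = 100 segments are our own choices):

```python
import random

# Sketch of analog repeaters: each segment divides the signal by g (path
# loss), adds fresh channel noise, then multiplies by g at the repeater --
# so every hop contributes g * sigma worth of amplified noise.
random.seed(0)
g = 10.0
noise_std = 0.01
N = 100            # number of repeater segments

signal = 1.0       # a constant test signal x(t) = 1 (an assumption)
received = signal
for _ in range(N):
    received = received / g                   # path loss
    received += random.gauss(0.0, noise_std)  # channel noise
    received *= g                             # repeater amplification

print(received)  # the signal part is still 1, buried in accumulated noise
```

With independent noise on each segment, the accumulated noise has standard deviation about sqrt(N) times g times the per-hop noise; in the lecture's worst case, where the contributions add coherently, it grows like N times g times sigma. Either way it grows with N, which is the problem.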

In digital communication, the physics do
not change.

We have the same path loss, we have added
noise.

However, two things change.
One is that we don't send arbitrary

signals but, for example, only signals
that take values plus one and minus one,

and we do some specific processing to
recover these signals.

Specifically, at the output of the
channel, we multiply by g, and then we

take the sign operation.
So x hat 1 is the sign of x of t plus g times

sigma of t.

Let us again look at this in action.
We start with a signal x of t that is

either plus 5 or minus 5.

It goes through the channel, so it loses
amplitude by a factor of g, and there is

some noise added.
We multiply by g, so we recover x of t
plus g times the noise sigma of t.

Then we apply the threshold operation.
And true enough, we recover a plus 5 / minus

5 signal, which is identical to the one
that was sent over the channel.
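The threshold recovery can be sketched the same way; the values of g and the noise level are again illustrative choices:

```python
import random

# Sketch of the digital scheme: send x(t) in {+5, -5}, divide by g in the
# channel, add noise, multiply by g at the receiver, then threshold with
# the sign operation to snap each value back to exactly +5 or -5.
random.seed(1)
g = 10.0
noise_std = 0.1

sent = [random.choice([+5.0, -5.0]) for _ in range(1000)]
recovered = []
for s in sent:
    y = s / g + random.gauss(0.0, noise_std)   # path loss plus channel noise
    y *= g                                     # compensate the path loss
    recovered.append(5.0 if y >= 0 else -5.0)  # sign (threshold) operation

print(recovered == sent)  # True as long as the noise stays below the threshold
```

Because the threshold discards the noise entirely whenever it stays below the decision level, each repeater can regenerate a perfect copy of the signal, which is why digital transmission scales to long distances where analog does not.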
Thanks to digital processing the

transmission of information has made
tremendous progress.

In the mid nineteenth century, a
transatlantic cable would transmit 8 words

per minute.
That's about 5 bits per second.

A hundred years later, a coaxial cable
carried 48 voice channels,

at already 3 megabits per second.
In 2005, fiber optic technology allowed 10

terabits per second.
A terabit is 10 to the 12 bits.

And today, in 2012, we have fiber cables
with 60 terabits per second.

On the voice channel, the one that is used
for telephony, in the 1950s you could send

1200 bits per second.
In the 1990s, that was already 56

kilobits per second.
Today, with ADSL technology, we are

talking about 24 megabits per second.
Please note that the last module in the

class will actually explain how ADSL
works, using all the tricks in the toolbox

that we are learning in this class.
It is time to conclude this introductory

module.
And we conclude with a picture.

If you zoom into this picture, you see it's
the motto of the class: signal is

strength.