
## 1 - Introduction to signal processing

• 0:00 - 0:03
>> Welcome to module one of Digital Signal Processing.
• 0:03 - 0:07
In this module we are going to see what signals actually are.
• 0:07 - 0:11
We are going to go through a bit of history and see the earliest examples of discrete-time signals.
• 0:11 - 0:14
Actually it goes back to Egyptian times.
• 0:14 - 0:17
Then through this history see how digital signals,
• 0:17 - 0:21
for example with telegraph signals, became important in communications.
• 0:21 - 0:26
And today, how signals are pervasive in many applications,
• 0:26 - 0:28
in everyday objects.
• 0:28 - 0:30
For this we're going to see what a signal is,
• 0:30 - 0:33
what a continuous time analog signal is,
• 0:33 - 0:37
what a discrete-time, continuous-amplitude signal is
• 0:37 - 0:42
and how these signals relate to each other and are used in communication devices.
• 0:42 - 0:45
We are not going to have any math in this first module.
• 0:45 - 0:49
It is more illustrative and the mathematics will come later in this class.
• 0:51 - 0:55
This is an introduction to what digital signal processing is all about.
• 0:55 - 0:58
Before getting going, let's give some background material.
• 0:58 - 1:03
There is a textbook called Signal Processing for Communications by Paolo Prandoni and myself.
• 1:03 - 1:08
You can have a paper version or you can get the free PDF or HTML version
• 1:08 - 1:11
on the website indicated on the slide.
• 1:11 - 1:17
There will be quizzes, there will be homework sets, and there will be occasional complementary lectures.
• 1:17 - 1:20
What is actually a signal?
• 1:20 - 1:24
We talk about digital signal processing, so we need to define what a signal is.
• 1:24 - 1:28
Typically, it's a description of the evolution of a physical phenomenon.
• 1:28 - 1:33
Quite simply, if I speak here, there are sound pressure waves going through the air;
• 1:33 - 1:36
that's a typical signal.
When you listen to the speech, there is a
• 1:36 - 1:40
loudspeaker creating sound pressure
waves that reach your ear.
• 1:40 - 1:44
And that's another signal.
However, in between is the world of
• 1:44 - 1:49
digital signal processing because after
the microphone it gets transformed into a
• 1:49 - 1:52
set of numbers.
It is processed in the computer.
• 1:52 - 1:55
It is being transferred through the
internet.
• 1:55 - 1:59
Finally it is decoded to create the sound
pressure wave to reach your ears.
• 1:59 - 2:04
Other examples are the temperature
evolution over time, the magnetic
• 2:04 - 2:09
deviation on an LP recording, the
grey levels on paper for a black and
• 2:09 - 2:13
white photograph, some flickering colors on
a TV screen.
• 2:13 - 2:18
Here we have a thermometer recording
temperature over time.
• 2:18 - 2:23
So you see the evolution, and there are
discrete ticks and you see how it changes
• 2:23 - 2:26
over time.
So what are the characteristics of digital
• 2:26 - 2:29
signals?
There are two key ingredients.
• 2:29 - 2:33
First there is discrete time.
As we have seen in the previous slide on
• 2:33 - 2:38
the horizontal axis there are discrete
evenly spaced ticks, and that corresponds
• 2:38 - 2:43
to discretisation in time.
There is also discrete amplitude because
• 2:43 - 2:47
the numbers that are measured will be
represented in a computer and cannot have
• 2:47 - 2:52
some infinite precision.
So what about more sophisticated things,
• 2:52 - 2:56
functions, derivatives, and integrals?
The question of discrete versus
• 2:56 - 3:02
continuous, or analog versus discrete,
goes probably back to the earliest times of
• 3:02 - 3:05
science, for example, the school of
Athens.
• 3:05 - 3:10
There was a lot of debate between
philosophers and mathematicians about the
• 3:10 - 3:15
idea of continuum, or the difference
between countable things and uncountable
• 3:15 - 3:18
things.
So in this picture, you see in green
• 3:18 - 3:23
famous philosophers like Plato, in red,
famous mathematicians like Pythagoras,
• 3:23 - 3:28
somebody that we are going to meet again
in this class, and there is a famous
• 3:28 - 3:33
paradox which is called Zeno's paradox.
So if you shoot an arrow, will it ever
• 3:33 - 3:37
arrive at its destination?
We know that physics allows us to verify
• 3:37 - 3:42
this, but mathematics has a problem with
it, and we can see this graphically.
• 3:42 - 3:47
So you want to go from A to B: you cover
half of the distance, that is C, then three
• 3:47 - 3:52
quarters, that's D, then seven eighths, that's E,
etc. Will you ever get there? Of course
• 3:52 - 3:57
we know you get there, because the sum
from 1 to infinity of 1 over 2 to the n is
• 3:57 - 4:02
equal to 1, a beautiful formula that we'll
see reappear several times in this class.
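The geometric series behind Zeno's paradox is easy to check numerically. Here is a minimal sketch; the function name and term counts are just illustrative:

```python
# Zeno's paradox as a geometric series: the sum over n >= 1 of 1/2^n equals 1.
# The partial sums get arbitrarily close to 1 but never exceed it.
def zeno_partial_sum(terms: int) -> float:
    """Sum of 1/2^n for n = 1 .. terms."""
    return sum(1 / 2 ** n for n in range(1, terms + 1))

for k in (1, 2, 10, 50):
    print(k, zeno_partial_sum(k))   # 0.5, 0.75, ..., approaching 1
```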
• 4:02 - 4:07
Unfortunately, during the Middle Ages in
Europe, things were a bit lost.
• 4:07 - 4:13
As you can see, people had other worries.
In the 17th century things picked up
• 4:13 - 4:16
again.
Here we have a physicist and astronomer
• 4:16 - 4:21
Galileo, and the philosopher Rene
Descartes, and both contributed to the
• 4:21 - 4:26
advancement of mathematics at that time.
Descartes' idea was simple but powerful.
• 4:26 - 4:30
Start with a point, put it into a
coordinate system. Then put more
• 4:30 - 4:34
sophisticated things like lines, and you
can use algebra.
• 4:34 - 4:39
This led to the idea of calculus, which
allowed one to mathematically describe
• 4:39 - 4:44
physical phenomena.
For example Galileo was able to describe
• 4:44 - 4:50
the trajectory of a bullet, using
infinitesimal variations in both the horizontal and
• 4:50 - 4:54
vertical direction.
Calculus itself was formalized by Newton
• 4:54 - 5:00
and Leibniz, and is one of the great
advances of mathematics in the 17th and
• 5:00 - 5:03
18th century.
It is time to do some very simple
• 5:03 - 5:08
continuous time signal processing.
We have a function in blue here, between a
• 5:08 - 5:11
and b, and we would like to compute its
average.
• 5:11 - 5:15
As is well known, this will be the
integral of the function, divided by the
• 5:15 - 5:19
length of the interval, and it is shown
here in red dots.
• 5:19 - 5:23
What would be the equivalent in
discrete-time signal processing?
• 5:23 - 5:27
We have a set of samples between say, 0
and capital N minus 1.
• 5:27 - 5:32
The average is simply 1 over N, the sum
of the terms x[n] between 0 and
• 5:32 - 5:36
N minus 1.
Again, it is shown in the red dotted line.
• 5:36 - 5:42
In this case, because the signal is very
smooth, the continuous time average and
• 5:42 - 5:45
the discrete time average are essentially
the same.
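The two averages just described can be compared side by side. The sketch below uses a smooth test signal; the signal, sample count, and interval are illustrative choices:

```python
import numpy as np

# Continuous-time average: (1/(b-a)) * integral of x(t) dt over [a, b],
# approximated here by averaging a very fine grid of values.
# Discrete-time average: (1/N) * sum of x[n] for n = 0 .. N-1.
a, b = 0.0, 1.0
x = lambda t: np.sin(2 * np.pi * t) + 2.0    # smooth signal, true average 2.0

t_fine = np.linspace(a, b, 100_001)
continuous_avg = x(t_fine).mean()            # stands in for the integral

N = 32
samples = x(a + (b - a) * np.arange(N) / N)  # x[n] at N evenly spaced ticks
discrete_avg = samples.mean()

# For a smooth signal the two averages are essentially the same.
print(continuous_avg, discrete_avg)
```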
• 5:45 - 5:50
This was nice and easy but what if the
signal is too fast, and we don't know
• 5:50 - 5:54
exactly how to compute either the
continuous time operations or an
• 5:54 - 6:00
equivalent operation on samples?
Enter Joseph Fourier, one of the greatest
• 6:00 - 6:05
mathematicians of the nineteenth century.
And the inventor of Fourier series,
• 6:05 - 6:10
and Fourier analysis, which are essentially the
foundational tools of signal processing.
• 6:10 - 6:13
We show simply a picture to give the idea
of Fourier analysis.
• 6:13 - 6:18
It is a local Fourier spectrum as you
would see for example on an equalizer
• 6:18 - 6:22
table in a disco.
And it shows the distribution of power
• 6:22 - 6:27
across frequencies, something we are going
to understand in detail in this class.
• 6:27 - 6:32
But to do this discrete-time processing of
continuous time signals we need some
• 6:32 - 6:35
further results.
And these were derived by Harry Nyquist
• 6:35 - 6:39
and Claude Shannon, two researchers at
Bell Labs.
• 6:39 - 6:45
They derived the so-called sampling
theorem, first appearing in the 1920s and
• 6:45 - 6:49
formalized in 1948.
If the function x of t is sufficiently
• 6:49 - 6:55
slow, then there is a simple interpolation
formula for x of t: it's the sum of the
• 6:55 - 7:01
samples x[n], interpolated with the
function that is called the sinc function.
• 7:01 - 7:06
It looks a little bit complicated now, but
it's something we're going to study in
• 7:06 - 7:11
great detail, because it's one of the
fundamental formulas linking this discrete
• 7:11 - 7:14
time and continuous time signal
processing.
• 7:14 - 7:19
Let us look at this sampling in action.
So we have the blue curve, we take
• 7:19 - 7:24
samples, the red dots. From the samples,
we use the sinc interpolation.
• 7:24 - 7:29
We put one such curve, a second one, a third
one, fourth one, etc.
• 7:29 - 7:33
When we sum them all together, we get back
the original blue curve.
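The reconstruction just shown can be sketched in a few lines. This assumes a sufficiently slow (bandlimited) signal, and it truncates the infinite sum, so all parameters are illustrative:

```python
import numpy as np

# Sampling theorem sketch: x(t) = sum_n x[n] * sinc((t - nT) / T).
T = 0.1                          # sampling period; 1/(2T) = 5 Hz
f0 = 2.0                         # sinusoid frequency, safely below 5 Hz
x = lambda t: np.cos(2 * np.pi * f0 * t)

n = np.arange(-500, 501)         # the infinite sum is truncated in practice
samples = x(n * T)               # the red dots: x[n] = x(nT)

def reconstruct(t: float) -> float:
    # np.sinc(u) = sin(pi*u)/(pi*u), exactly the sinc in the formula
    return float(np.sum(samples * np.sinc((t - n * T) / T)))

for t in (0.03, 0.17, 0.25):
    print(t, x(t), reconstruct(t))   # reconstruction matches the curve
```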
• 7:33 - 7:37
It is magic.
This interaction of continuous time and
• 7:37 - 7:41
discrete time processing is summarized in
these two pictures.
• 7:41 - 7:44
On the left you have a picture of the
analog world.
• 7:44 - 7:49
On the right you have a picture of the
discrete or digital world, as you would
• 7:49 - 7:54
see in a digital camera for example, and
this is because the world is analog.
• 7:54 - 7:59
It has continuous time continuous space,
and the computer is digital.
• 7:59 - 8:03
It is discrete time, discrete amplitude.
When you look at an image taken with a
• 8:03 - 8:07
digital camera, you may wonder what the
resolution is.
• 8:07 - 8:12
And here we have a picture of a bird.
This bird happens to have very high visual
• 8:12 - 8:16
acuity, probably much better than mine.
Still, if you zoom into the digital
• 8:16 - 8:22
picture, after a while, around the eye
here, you see little squares appearing,
• 8:22 - 8:27
showing indeed that the picture is digital,
because it has discrete values over the domain of
• 8:27 - 8:32
the image, and it also actually has
discrete amplitude which we cannot quite
• 8:32 - 8:37
see here at this level of resolution.
As we said the key ingredients are
• 8:37 - 8:42
discrete time and discrete amplitude for
digital signals.
• 8:42 - 8:47
So, let us look at x of t here.
It's a sinusoid, and investigate discrete
• 8:47 - 8:50
time first.
We see this with x[n], and discrete
• 8:50 - 8:53
amplitude.
We see this with these levels of the
• 8:53 - 8:59
amplitude, which are also discretized.
And so this signal looks very different
• 8:59 - 9:03
from the original continuous time signal x
of t.
• 9:03 - 9:08
It has discrete values on the time axis
and discrete values on the vertical
• 9:08 - 9:12
amplitude axis.
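Both discretizations of the sinusoid can be written out explicitly. In this sketch the sample rate and level count are illustrative choices:

```python
import numpy as np

# Discrete time: keep the sinusoid only at evenly spaced ticks.
fs = 16                            # samples per period (illustrative)
n = np.arange(fs)
x = np.sin(2 * np.pi * n / fs)     # x[n]: discrete time, continuous amplitude

# Discrete amplitude: round each sample to one of a few allowed levels.
levels = 8                         # quantization levels over the range [-1, 1]
step = 2.0 / levels
x_q = step * np.round(x / step)    # x[n] quantized: a truly digital signal

# The rounding error per sample is at most half a quantization step.
print(np.max(np.abs(x - x_q)))
```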
So why do we need digital amplitude?
• 9:12 - 9:17
Well, because storage is digital, because
processing is digital, and because
• 9:17 - 9:21
transmission is digital.
And you are going to see all of these in
• 9:21 - 9:24
sequence.
So data storage, which is of course very
• 9:24 - 9:28
important, used to be purely analog.
You had paper.
• 9:28 - 9:31
You had wax cylinders.
You had vinyl.
• 9:31 - 9:38
You had compact cassettes, VHS, etcetera.
In imagery you had Kodachrome, slides,
• 9:38 - 9:43
Super 8, film etc.
Very complicated, a whole biodiversity of
• 9:43 - 9:46
analog storage formats.
In digital, much simpler.
• 9:46 - 9:51
There are only zeros and ones, so all
digital storage, to some extent, looks the
• 9:51 - 9:54
same.
The storage medium might look very
• 9:54 - 9:59
different, so here we have a collection of
storage from the last 25 years.
• 9:59 - 10:04
However, fundamentally there are only 0's
and 1's on these storage devices.
• 10:04 - 10:08
So in that sense, they are all compatible
with each other.
• 10:08 - 10:11
Processing also moved from analog to
digital.
• 10:11 - 10:16
On the left side, you have a few examples
of analog processing devices, an analog
• 10:16 - 10:20
watch, an analog amplifier.
On the right side you have a piece of
• 10:20 - 10:23
code.
Now this piece of code could run on many
• 10:23 - 10:27
different digital computers.
It would be compatible with all these
• 10:27 - 10:30
digital platforms.
The analog processing devices are
• 10:30 - 10:35
essentially incompatible with each other.
Data transmission has also gone from
• 10:35 - 10:38
analog to digital.
So let's look at the very simple model
• 10:38 - 10:43
here: on the left side you have the
transmitter, you have a channel, and on the
• 10:43 - 10:47
right side you have a receiver.
What happens to analog signals when they
• 10:47 - 10:51
are sent over a channel?
So x of t goes through the channel; it's
• 10:51 - 10:55
first multiplied by 1 over G because there
is path loss and then there is noise added
• 10:55 - 11:00
indicated here with sigma of t.
The output here is x hat of t.
• 11:00 - 11:04
Let's start with some analog signal x of
t.
• 11:04 - 11:09
Multiply it by 1 over g, and add some
noise.
• 11:09 - 11:14
How do we recover a good reproduction of x
of t?
• 11:14 - 11:22
Well, we can compensate for the path loss,
so we multiply by G, to get x hat 1 of t.
• 11:22 - 11:27
But the problem is that x hat 1 of t is x
of t,
• 11:27 - 11:35
that's the good news, plus G times sigma of
t, so the noise has been amplified.
• 11:35 - 11:41
Let's see this in action.
We start with x of t, we scale by G, we
• 11:41 - 11:48
add some noise, we multiply by G.
And indeed now, we have a very noisy
• 11:48 - 11:52
signal.
This was the idea behind trans-Atlantic
• 11:52 - 11:58
cables which were laid in the 19th century
and were essentially analog devices until
• 11:58 - 12:03
telegraph signals were properly encoded as
digital signals.
• 12:03 - 12:08
As can be seen in this picture, this was
quite an adventure to lay a cable across
• 12:08 - 12:14
the Atlantic and then to try to transmit
analog signals across these very long
• 12:14 - 12:18
distances.
For a long channel, because the path loss
• 12:18 - 12:23
is so big, you need to put repeaters.
So the process we have just seen, would be
• 12:23 - 12:28
repeated capital N times.
Each time the path loss would be
• 12:28 - 12:32
compensated, but the noise would be
amplified by a factor of N.
• 12:32 - 12:38
Let us see this in action, so start with x
of t, path loss by G, added noise,
• 12:38 - 12:45
amplification by G, with the amplification
of the noise as well as of the
• 12:45 - 12:49
signal.
For the second segment we have the path
• 12:49 - 12:55
loss again, so x hat 1 is divided by G.
And added noise, then we amplify to get x
• 12:55 - 13:00
hat 2 of t, which now has twice the amount
of noise, 2 G times sigma of t.
• 13:00 - 13:05
So, if we do this N times, you can see
that the analog signal, after repeated
• 13:05 - 13:07
amplification,
is mostly noise.
• 13:07 - 13:11
And that becomes problematic to transmit
information.
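The repeated amplify-and-forward chain described above can be simulated directly. The gain, noise level, and segment count below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

G = 10.0            # path loss / amplifier gain per segment
noise_std = 0.1     # channel noise per segment
N = 20              # number of repeater segments
t = np.linspace(0.0, 1.0, 1000)
x = np.sin(2 * np.pi * 5 * t)

y = x.copy()
for _ in range(N):
    y = y / G                                    # path loss
    y = y + rng.normal(0.0, noise_std, t.size)   # additive channel noise
    y = y * G                                    # repeater re-amplification

# Each segment contributes amplified noise of std G * noise_std, so after
# N independent segments the error std grows like sqrt(N) * G * noise_std.
print(np.std(y - x))
```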
• 13:11 - 13:14
In digital communication, the physics do
not change.
• 13:14 - 13:17
We have the same path loss, we have added
noise.
• 13:17 - 13:20
However, two things change.
One is that we don't send arbitrary
• 13:20 - 13:24
signals but, for example, only signals
that
• 13:24 - 13:30
take values plus 1 and minus 1, and we do
some specific processing to recover these
• 13:30 - 13:33
signals.
Specifically at the output of the
• 13:33 - 13:38
channel, we multiply by G, and then we
take the sign operation.
• 13:38 - 13:42
So x hat 1 is the sign of x of t plus G times
sigma of t.
• 13:42 - 13:47
Let us again look at this in action.
We start with the signal x of t that is
• 13:47 - 13:49
either plus 5 or minus 5.
• 13:49 - 13:55
It goes through the channel, so it loses
amplitude by a factor of G, and there is
• 13:55 - 13:59
some noise added.
We multiply by G, so we recover x of t
• 13:59 - 14:05
plus G times the noise sigma of t.
Then we apply the threshold operation.
• 14:05 - 14:11
And true enough, we recover a plus 5 minus
5 signal, which is identical to the one
• 14:11 - 14:15
that was sent on the channel.
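The digital recovery step just described can also be simulated over the same noisy channel. Amplitude, gain, and noise level are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

# The transmitted signal takes only the values +5 and -5.
G = 10.0
noise_std = 0.05
x = 5.0 * rng.choice([-1.0, 1.0], size=1000)

# Same physics as before: path loss, then additive noise.
received = x / G + rng.normal(0.0, noise_std, x.size)

# Receiver: compensate the path loss, then threshold with the sign operation.
x_hat = 5.0 * np.sign(G * received)

# As long as the amplified noise stays well below the signal amplitude,
# thresholding removes it completely and the signal is recovered exactly.
print(np.array_equal(x_hat, x))
```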
Thanks to digital processing the
• 14:15 - 14:19
transmission of information has made
tremendous progress.
• 14:19 - 14:24
In the mid nineteenth century a
transatlantic cable would transmit 8 words
• 14:24 - 14:26
per minute.
That's about 5 bits per second.
• 14:26 - 14:30
A hundred years later, a coaxial cable carried
48 voice channels.
• 14:30 - 14:36
That was already 3 megabits per second.
In 2005, fiber optic technology allowed 10
• 14:36 - 14:41
terabits per second.
A terabit is 10 to the 12 bits.
• 14:41 - 14:47
And today, in 2012, we have fiber cables
with 60 terabits per second.
• 14:47 - 14:53
On the voice channel, the one that is used
for telephony, in the 1950s you could send
• 14:53 - 14:57
1200 bits per second.
In the 1990's, that was already 56
• 14:57 - 15:01
kilobits per second.
Today, with ADSL technology, we are
• 15:01 - 15:06
talking about 24 megabits per second.
Please note that the last module in the
• 15:06 - 15:12
class will actually explain how ADSL
works, using all the tricks in the box that
• 15:12 - 15:17
we are learning in this class.
It is time to conclude this introductory
• 15:17 - 15:20
module.
And we conclude with a picture.
• 15:20 - 15:25
If you zoom into this picture, you see it's
the motto of the class: signal is
• 15:25 - 15:26
strength.