9.3 - Controlling the power
Hi and welcome to Module 9.3 of Digital Signal Processing. We are still talking about digital communication systems. In the previous module we addressed the bandwidth constraint; in this module we will tackle the power constraint. First we will introduce the concepts of noise and probability of error in a communication system; then we will look at signaling alphabets and their related power; and finally we will introduce QAM signaling.
We have seen that a transmitter sends a sequence of symbols a[n] created by the mapper. Now we take the receiver into account. We don't yet know how, but it's safe to assume that the receiver will in the end obtain an estimate â[n] of the original transmitted symbol sequence. It's an estimate because, even if there is no distortion introduced by the channel, even if nothing bad happens, there will always be a certain amount of noise that corrupts the original sequence. When the noise is very large, our estimate of the transmitted symbol will be off and we will incur a decoding error. This probability of error depends on the power of the noise with respect to the power of the signal; it also depends on the decoding strategies that we put in place, that is, on how smart we are in circumventing the effects of the noise. One way we can maximize the probability of correctly guessing the transmitted symbol is by using suitable alphabets, and we will see in more detail what that means.
Remember the scheme for the transmitter: we have a bitstream coming in, then the scrambler, and then the mapper, which produces a sequence of symbols a[n]. These symbols have to be sent over the channel, and to do so we upsample, we interpolate, and then we transmit. Now, how do we go from bitstreams to samples in more detail? In other words, how does the mapper work?
The mapper splits the incoming bitstream into chunks and assigns a symbol a[n], drawn from a finite alphabet, to each chunk. We will decide later what the alphabet is composed of. To undo the mapping operation and recover the bitstream, the receiver performs a slicing operation: it receives a value â[n], where the hat indicates that noise has leaked into the value of the signal, and it decides which symbol from the alphabet (which is known to the receiver as well) is closest to the received value. From there, it is extremely easy to piece back the original bitstream.
As an example, let's look at simple two-level signaling. This generates signals of the kind we have seen in the examples so far, alternating between two levels. The mapper works by splitting the incoming bitstream into single bits. The output symbol sequence uses an alphabet composed of two symbols, G and −G, associating G to a bit of value 1 and −G to a bit of value 0. At the receiver, the slicer looks at the sign of the incoming symbol sequence, which has been corrupted by noise, and decides that the nth bit is 1 if the sign of the nth symbol is positive, and 0 otherwise.
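To make the mapping and slicing concrete, here is a minimal sketch in Python; the gain G, the noise level, and the random bitstream are illustrative assumptions, not values from the lecture.

```python
# A minimal sketch of the two-level mapper and slicer (assumed parameters).
import numpy as np

G = 1.0
rng = np.random.default_rng(0)

bits = rng.integers(0, 2, size=20)           # incoming bitstream
a = G * (2 * bits - 1)                       # mapper: 1 -> +G, 0 -> -G
a_hat = a + rng.normal(0, 0.4, size=a.size)  # channel adds Gaussian noise
bits_hat = (a_hat > 0).astype(int)           # slicer: the sign decides the bit

print("bit errors:", np.sum(bits != bits_hat))
```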
Let's look at an example, and let's assume G equal to 1, so the two-level signal alternates between +1 and −1. Suppose we have an input bit sequence that gives rise to this signal here. After transmission and decoding at the receiver, the resulting symbol sequence looks like this, where each symbol has been corrupted by a varying amount of noise. If we now slice this sequence by thresholding, as shown before, we recover a bit sequence like this, where we have indicated in red the errors incurred by the slicer because of the noise. If we want to analyze in more detail what the probability of error is, we have to make some hypotheses about the signals involved in this toy experiment.
Assume that each received symbol can be modeled as the original symbol plus a noise sample. Assume also that the bits in the bitstream are equiprobable, so zero and one appear with probability 50% each. Assume that the noise and the signal are independent. And assume that the noise is additive white Gaussian noise with zero mean and known variance σ₀². With these hypotheses, the probability of error can be written out as follows. First of all, we split the probability of error into two conditional probabilities, conditioned on whether the nth bit is equal to 1 or the nth bit is equal to 0.
In the first case, when the nth bit is equal to 1, remember that the produced symbol is equal to G, so the probability of error is equal to the probability for the noise sample to be less than −G, because only in that case will the sum of the symbol plus the noise be negative. Similarly, when the nth bit is equal to 0, we have a negative symbol, and the only way for it to change sign is if the noise sample is greater than G. Since each bit occurs with probability one half and, because of the symmetry of the Gaussian distribution, the two conditional probabilities are equal, the total probability of error is simply the probability for the noise sample to be larger than G. We can compute this as the integral from G to infinity of the probability density function of a Gaussian distribution with the known variance: P_err = P(noise > G) = ∫_G^∞ f(t) dt, where f is the zero-mean Gaussian density with variance σ₀².
This function has a standard name: it is the complementary error function, erfc (see the errata note at the end of this page about erfc versus the Q-function). Since the integral cannot be computed in closed form, this function is available in most numerical packages under this name.
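As a hedged numerical aside, this is how one could evaluate that tail probability with scipy's erfc; the values of G and σ₀ are assumed for illustration.

```python
# The probability that a zero-mean Gaussian sample of standard deviation
# sigma0 exceeds G, written via scipy's erfc (this is Q(G/sigma0); see the
# errata note on erfc vs. the Q-function).
import numpy as np
from scipy.special import erfc

G, sigma0 = 1.0, 0.5                           # illustrative assumptions
p_err = 0.5 * erfc(G / (np.sqrt(2) * sigma0))  # = Q(G / sigma0)
print(p_err)                                   # about 2.3e-2 for G/sigma0 = 2
```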
The important thing to notice here is that the probability of error is some function of the ratio between the amplitude of the signal and the standard deviation of the noise. We can carry this analysis further by considering the transmitted power. We have a bilevel signal and each level occurs with probability one half, so the variance of the signal, which corresponds to the power, is equal to G² times the probability of the nth bit being equal to 1, plus G² times the probability of the nth bit being equal to 0, which is equal to G². And so, if we rewrite the probability of error, it is equal to the error function of the ratio between the standard deviation of the transmitted signal and the standard deviation of the noise, which is equivalent to saying that it is the error function of the square root of the signal-to-noise ratio.
If we plot this as a function of the signal-to-noise ratio in dB (and I remind you that dB here means we compute 10 times the log in base 10 of the power of the signal divided by the power of the noise), since we are on a log-log scale we can see that the probability of error decays exponentially with the signal-to-noise ratio. This exponential decay is quite the norm in communication systems and, while the absolute rate of decay might change in terms of the constants involved in the curve, the trend stays the same even for more complex signaling schemes.
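A sketch of how such a curve could be reproduced; the SNR range is an arbitrary choice, and the expression P_err = (1/2)·erfc(√(SNR/2)) is the Q-function form of the result above.

```python
# Error-rate curve for two-level signaling: P_err as a function of the SNR
# in dB, plotted on a logarithmic vertical axis.
import numpy as np
import matplotlib.pyplot as plt
from scipy.special import erfc

snr_db = np.linspace(0, 15, 200)
snr = 10 ** (snr_db / 10)             # linear signal-to-noise ratio
p_err = 0.5 * erfc(np.sqrt(snr / 2))  # = Q(sqrt(SNR)) for bilevel signaling

plt.semilogy(snr_db, p_err)
plt.xlabel("SNR [dB]"); plt.ylabel("probability of error"); plt.grid(True)
plt.show()
```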
So the lesson we learn from this simple example is that, in order to reduce the probability of error, we should increase G, the amplitude of the signal. But of course, increasing G also increases the power of the transmitted signal, and we know that we cannot go above the channel's power constraint. That is how the power constraint limits the reliability of transmission.
The bilevel signaling scheme is very instructive, but it's also very limited, in the sense that we're sending just one bit per output symbol. To increase the throughput, the number of bits per second that we send over the channel, we can use multilevel signaling. There are very many ways to do so and we will just look at a few, but the fundamental idea is that we now take larger chunks of bits and therefore use alphabets with a higher cardinality. More values in the alphabet means more bits per symbol and therefore a higher data rate. But, not to give the ending away, we will see that the power of the signal also depends on the size of the alphabet. So, in order not to exceed a certain probability of error given the channel's power constraint, we will not be able to grow the alphabet indefinitely. We can, however, be smart in the way we build this alphabet, and so we will look at some examples.
The first example is PAM, Pulse Amplitude Modulation. We split the incoming bitstream into chunks of M bits, so that each chunk corresponds to an integer between 0 and 2^M − 1. We call this sequence of integers k[n], and it is mapped onto a sequence of symbols a[n] like so: there is a gain factor G, as always, and then we use the 2^M odd integers around 0, from −(2^M − 1) to 2^M − 1. For instance, if M is equal to 2, we have 0, 1, 2, and 3 as potential values for k[n], and a[n] (assuming G equal to 1) will be either −3, −1, 1, or 3. We will see why we use the odd integers in just a second. At the receiver, the slicer works by simply associating to the received symbol the closest odd integer, always taking the gain into account.
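A minimal sketch of the PAM mapper and slicer under these conventions; the chunk-to-integer ordering and the value of G are illustrative assumptions.

```python
# PAM mapper: integers k in {0, ..., 2^M - 1} are mapped to the odd integers
# -(2^M - 1), ..., -1, 1, ..., 2^M - 1, scaled by the gain G.
import numpy as np

def pam_map(bits, M, G=1.0):
    k = bits.reshape(-1, M) @ (2 ** np.arange(M - 1, -1, -1))  # chunks -> ints
    return G * (2 * k - (2 ** M - 1))                          # ints -> odd levels

def pam_slice(a_hat, M, G=1.0):
    # slicer: recover k[n] by picking the closest level, gain included
    levels = G * (2 * np.arange(2 ** M) - (2 ** M - 1))
    return np.abs(a_hat[:, None] - levels[None, :]).argmin(axis=1)

bits = np.array([0, 0, 0, 1, 1, 0, 1, 1])
print(pam_map(bits, M=2))                                  # [-3. -1.  1.  3.]
print(pam_slice(np.array([-2.7, 0.2, 1.4, 3.3]), M=2))     # [0 2 2 3]
```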
Graphically, PAM for M equal to 2 and G equal to 1 looks like this. Here are the odd integers. The distance between two transmitted points, or transmitted symbols, is 2G; here G is equal to 1, but in general the distance is twice the gain. Using odd integers creates a zero-mean sequence: if we assume that each symbol is equiprobable, which is likely given that we've used a scrambler in the transmitter, the resulting mean is zero.
The analysis of the probability of error for PAM is very similar to what we carried out for bilevel signaling; as a matter of fact, binary signaling is simply PAM with M equal to 1. The end result is very similar too: an exponentially decaying function of the ratio between the power of the signal and the power of the noise. The reason why we don't analyze this further is that we have an improvement in store, aimed at increasing the throughput, the number of bits per symbol that we can send, without necessarily increasing the probability of error.
So here's a wild idea: let's use complex numbers and build a complex-valued transmission system. This requires a certain suspension of disbelief for the time being but, believe me, it will work in the end. The name for this complex-valued mapping scheme is QAM, an acronym for Quadrature Amplitude Modulation, and it works like so. The mapper takes the incoming bitstream and splits it into chunks of M bits, with M even. Then it uses half of the bits to define a PAM sequence, which we call a_r[n], and the remaining M/2 bits to define another, independent PAM sequence, a_i[n].
The final symbol sequence is a sequence of complex numbers, where the real part is the first PAM sequence and the imaginary part is the second PAM sequence; in front, of course, we have a gain factor G. So the transmission alphabet A is given by points in the complex plane with odd-valued coordinates around the origin. At the receiver, the slicer works by finding the symbol in the alphabet that is closest in Euclidean distance to the received symbol.
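A sketch of how such a constellation and its nearest-neighbor slicer could be built; the noise level and seed-free randomness are arbitrary illustrations.

```python
# Square QAM constellation (M even, odd-integer coordinates, gain G) and a
# nearest-neighbor slicer in Euclidean distance.
import numpy as np

def qam_alphabet(M, G=1.0):
    side = G * (2 * np.arange(2 ** (M // 2)) - (2 ** (M // 2) - 1))  # odd integers
    re, im = np.meshgrid(side, side)
    return (re + 1j * im).ravel()        # 2^M complex constellation points

def qam_slice(a_hat, alphabet):
    # pick the constellation point closest in Euclidean distance
    return alphabet[np.abs(a_hat[:, None] - alphabet[None, :]).argmin(axis=1)]

A = qam_alphabet(M=4)                    # 16 points: {-3,-1,1,3} x {-3,-1,1,3}
noisy = A + 0.3 * (np.random.randn(A.size) + 1j * np.random.randn(A.size))
print(np.mean(qam_slice(noisy, A) == A)) # fraction of symbols decoded correctly
```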
Let's look at this graphically. This is the set of points for QAM transmission with M equal to 2, which corresponds to two bilevel PAM signals, one on the real axis and one on the imaginary axis; that results in four points. If we increase the number of bits per symbol and set M equal to 4, that corresponds to two PAM signals with 2 bits each, which makes for a constellation (this is what these arrangements of points in the complex plane are called) of four by four points at the odd-valued coordinates in the complex plane. If we increase M to 8, we have a 256-point constellation, with 16 points per side.
Let's look at what happens when a symbol is received, and how we derive an expression for the probability of error. If this is the nominal constellation, the transmitter will choose one of these values for transmission, say this one. This value will be corrupted by noise in the transmission and receiving process, and will appear somewhere in the complex plane, not necessarily exactly on the point it originated from. The slicer operates by defining decision regions around each point in the constellation. So suppose that, for this point here, the transmitted point, the decision region is a square of side 2G centered around the nominal point. When we receive symbols, they will not fall exactly on the original point, but as long as they fall within the decision region they will be decoded correctly. So, for instance, here we decode correctly; here we decode correctly; same here. But this point, for instance, falls outside of the decision region and will therefore be associated to a different constellation point, thereby causing an error.
To quantify the probability of error, we assume as usual that each received symbol is the sum of the transmitted symbol plus a noise sample θ[n]. We further assume that this noise is complex-valued Gaussian noise with equal variance in the real and imaginary components. We are working on a completely digital system that operates with complex-valued quantities, so we are making a new model for the noise; we will see later how to translate the physical, real noise into a complex variable.
With these assumptions, an error occurs when the real part of the noise is larger than G in magnitude, or the imaginary part of the noise is larger than G in magnitude. We assume that the real and imaginary components of the noise are independent, and that is what allows us to factor the probability like so. If you remember the shape of the decision region, this condition is equivalent to saying that the noise pushes the real part of the point outside of the decision region, in either direction, and the same for the imaginary part. If we develop this, the probability of error is equal to 1 minus the probability that the real part of the noise is less than G in magnitude and the imaginary part of the noise is less than G in magnitude; this is the complementary condition to what we just wrote above. And so this is equal to 1 minus the integral, over the decision region D, of the complex-valued probability density function of the noise.
In order to compute this integral, we are going to approximate the shape of the decision region with the inscribed circle: instead of using the square, we use a circle of radius G centered around the transmission point. When the constellation is very dense, this approximation is quite accurate. With this approximation, we can compute the integral exactly for a Gaussian distribution and, if we assume that the variance of the noise is σ₀²/2 in each component, real and imaginary, it turns out that the probability of error is equal to e^(−G²/σ₀²).
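A quick Monte Carlo sanity check of this closed form, with assumed values for G and σ₀:

```python
# For complex Gaussian noise with total variance sigma0^2 (sigma0^2/2 per
# component), P(|noise| > G) = exp(-G^2 / sigma0^2).
import numpy as np

G, sigma0 = 1.0, 0.5                     # illustrative assumptions
rng = np.random.default_rng(1)
noise = rng.normal(0, sigma0 / np.sqrt(2), 10**6) \
      + 1j * rng.normal(0, sigma0 / np.sqrt(2), 10**6)

print(np.mean(np.abs(noise) > G))        # simulated
print(np.exp(-G**2 / sigma0**2))         # closed form, ~0.0183
```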
Now, to obtain the probability of error as a function of the signal-to-noise ratio, we have to compute the power of the transmitted signal. If all symbols are equiprobable and independent, the variance of the signal is G² times 1/2^M, which is the probability of each symbol, times the sum over all symbols in the alphabet of the squared magnitudes of the symbols. It's a little bit tedious, but we can work this out exactly as a function of M, and it turns out that the power of the transmitted signal is (2/3)·G²·(2^M − 1).
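A numerical check of this power formula over a few constellation sizes (illustrative only):

```python
# Average power of the square QAM alphabet, for equiprobable symbols and G = 1,
# compared against (2/3) * (2^M - 1).
import numpy as np

def qam_power(M, G=1.0):
    side = G * (2 * np.arange(2 ** (M // 2)) - (2 ** (M // 2) - 1))
    re, im = np.meshgrid(side, side)
    return np.mean(re**2 + im**2)        # E[|a|^2] over the alphabet

for M in (2, 4, 8):
    print(M, qam_power(M), (2 / 3) * (2**M - 1))   # the two columns match
```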
Now, if you plug this into the formula for the probability of error that we have seen before, the result is an exponential function: P_err = e^(−3·SNR/(2·(2^M − 1))), which for large constellations is approximately e^(−3·2^(−(M+1))·SNR). We can plot this probability of error on a log-log scale, like we did before, and parametrize the curves by the number of points in the constellation. Here you have the curve for a four-point constellation, here is the curve for 16 points, and here is the curve for 64 points.
You can see that, for a given signal-to-noise ratio, the probability of error increases with the number of points. Why is that? Well, if the signal-to-noise ratio remains the same, and we assume that the noise is always at the same level, then the power of the signal remains constant as well. In that case, if the number of points increases, G has to become smaller in order to accommodate a larger number of points for the same power. But if G becomes smaller, then the decision regions become smaller, the separation between points becomes smaller, and the decision process becomes more vulnerable to noise.
So, in the end, here is the final recipe to design a QAM transmitter. First, you pick a probability of error that you can live with; in general, 10^−6 is an acceptable probability of error at the symbol level. Then you find out the signal-to-noise ratio that is imposed by the channel's power constraint. Once you have that, you can find the size of your constellation by solving for M which, based on the previous equations, is the log in base 2 of 1 minus 3/2 times the signal-to-noise ratio divided by the natural logarithm of the probability of error: M = log₂(1 − (3/2)·SNR/ln(P_err)). Of course, you will have to round this down to a suitable integer value, and possibly to an even value, in order to have a square constellation.
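A sketch of this sizing step, with assumed target values for the error rate and the SNR:

```python
# Constellation sizing: M = log2(1 - (3/2) * SNR / ln(P_err)), then rounded
# down to an even integer so that the constellation is square.
import numpy as np

p_err_target = 1e-6   # acceptable symbol error rate (assumed)
snr_db = 30.0         # SNR imposed by the power constraint (assumed)
snr = 10 ** (snr_db / 10)

M = np.log2(1 - 3 * snr / (2 * np.log(p_err_target)))
M_even = int(M // 2) * 2   # round down to an even number of bits per symbol
print(M, M_even)           # ~6.8 -> 6 bits, i.e. a 64-point constellation
```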
The final data rate of your system will be M, the number of bits per symbol, times W which, if you remember, is the baud rate of the system and corresponds to the bandwidth allowed for by the channel.
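For instance, with assumed numbers:

```python
# Data rate = bits per symbol times baud rate (both values illustrative).
M = 6          # bits per symbol (64-point QAM, from the sizing step above)
W = 4000       # baud rate in symbols per second
print(M * W)   # 24000 bits per second
```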
So: we know how to fit the bandwidth constraint via upsampling; with QAM, we know how many bits per symbol we can use given the power constraint; and so we know the theoretical throughput of the transmitter for a given reliability figure. However, the question remains: how are we going to send complex-valued symbols over a physical channel? It's time, therefore, to lift the suspension of disbelief and look at techniques to do complex signaling over a real-valued channel.
Official note on 9.3: "please check the errata for Module 9 wrt slides 52 and 53", which said:
"slides 52 and 53: unfortunately in the video, the slides and the lecture I have used erfc(x) instead of Q(x). The function erfc is defined as erfc(x) = (2/√π) ∫ₓ^∞ e^(−t²) dt, i.e. erfc(x) is the probability that a random variable, drawn from a Gaussian distribution of variance 1/2, is greater than x in magnitude. For a Gaussian distribution of arbitrary variance σ², the probability becomes (by reworking the integral) (1/2)·erfc(x/(√2·σ)). For convenience, especially in communication textbooks, a function Q(x) = (1/2)·erfc(x/√2) is often defined, so that the error probability becomes simply Q(x/σ). The results are fundamentally the same, and so is the shape of the error curve, but the numerical values are slightly different because of the normalization factors."
From the official description of the Module 9 videos:
Welcome to Week 8 of Digital Signal Processing.
This week's module is about digital communication systems and this is where it all comes together; from complex-valued signals, to spectral analysis, to stochastic processing, sampling and interpolation: everything plays a role in the design and implementation of a digital modem. Digital communications is an extremely vast and fascinating topic and it is arguably the pinnacle achievement of DSP in the sense that it's the domain where the most extraordinary quantitative progress has been made thanks to the digital paradigm. The fact that MOOCs such as this one are available to such an incredibly vast audience is just one of the tangible results of digital communication systems. It is only fitting, therefore, to devote the last module of our class to this subject.
We will start with the basics of data modulation and demodulation and we will progress to describing how your ADSL box works by way of its direct predecessor, the voiceband modem that spearheaded the Internet revolution by allowing for the first time the delivery of substantial data rates in the home.