
3.3 - Hilbert Space and approximation [21:40] 2nd edition

  • 0:01 - 0:04
    Now that we have constructed Hilbert
    spaces and orthonormal
  • 0:04 - 0:08
    bases we can see some of their
    distinguishing features.
  • 0:08 - 0:10
    One of them is norm conservation which is
  • 0:10 - 0:13
    called Parseval's theorem and is an
    extension of Pythagoras'
  • 0:13 - 0:14
    [INAUDIBLE]
  • 0:14 - 0:18
    famous theorem to infinite
    dimensions.
  • 0:18 - 0:21
    Then we will see the orthogonal projection
    theorem.
  • 0:21 - 0:23
    This is a powerful method to take a vector
    from
  • 0:23 - 0:28
    possibly an infinite dimensional space,
    and project it onto a subspace.
  • 0:28 - 0:30
    Once we have this, we will consider
  • 0:30 - 0:34
    some examples of approximations and
    orthonormal bases.
  • 0:34 - 0:39
    Module 3.3 Hilbert Space and
    approximation.
  • 0:41 - 0:43
    The overview of this submodule is as
    follows:
  • 0:43 - 0:46
    First, I'm going to talk about norm
    conservation, when
  • 0:46 - 0:50
    we have an expansion into an orthonormal basis,
    and a
  • 0:50 - 0:55
    so-called equivalence formula, which is
    known as Parseval's formula.
  • 0:55 - 1:00
    Then we talk about approximation by
    projection, and we give an example.
  • 1:02 - 1:05
    Parseval's theorem has a very simple form.
  • 1:05 - 1:10
    It says that if you expand x in an
    orthonormal basis, so we have a vector
  • 1:10 - 1:16
    in a Hilbert space, we expand it in an
    orthonormal basis with this vector w.
  • 1:16 - 1:20
    Here, we are in the finite-dimensional
    case of dimension capital K.
  • 1:20 - 1:25
    Then, for an orthonormal basis, the square
    norm of x
  • 1:25 - 1:30
    is equal to the sum of squared
    coefficients, okay?
  • 1:30 - 1:35
    This is of course a generalization of the
    Pythagorean theorem, which says that if I
  • 1:35 - 1:40
    have a vector here, this is x, and its
  • 1:40 - 1:45
    components in an orthonormal basis,
    that would be alpha 0
  • 1:45 - 1:50
    with respect to e0 and alpha 1 with respect to
  • 1:50 - 1:55
    e1, then x squared is equal to alpha
  • 1:55 - 1:59
    0 squared plus alpha 1 squared.
    Okay?
  • 1:59 - 2:06
    Very old result, very useful result.
    Let's actually verify it algebraically.
  • 2:07 - 2:11
    We start with the canonical basis, e0 and
    e1, just as before.
  • 2:12 - 2:20
    We write x as a linear combination of e0
    and e1, and we go to a new basis.
  • 2:20 - 2:20
    The new
  • 2:20 - 2:24
    basis is v0 and v1.
    It's an orthonormal basis.
  • 2:24 - 2:24
    Okay.
  • 2:24 - 2:28
    You can verify that these guys are at the
    right angle.
  • 2:28 - 2:31
    And this basis is given by: v0 is cosine
  • 2:31 - 2:36
    theta, sine theta; v1 is minus sine theta,
    cosine theta.
  • 2:36 - 2:38
    You can see this here on the projections.
  • 2:38 - 2:44
    And so x in the new basis is equal to beta
    0 v0
  • 2:44 - 2:45
    plus beta 1
  • 2:45 - 2:45
    v1.
  • 2:45 - 2:49
    What are the expressions of these? Well, we
    know them because the
  • 2:49 - 2:55
    basis is orthonormal, so the expansion
    coefficient beta 0 is equal to
  • 2:55 - 2:55
    the inner
  • 2:55 - 3:01
    product between v0 and x, and beta 1 to the inner
    product between v1 and x.
  • 3:01 - 3:03
    Or in compact form.
  • 3:03 - 3:09
    We can just write these inner products as
    row-column scalar products.
  • 3:09 - 3:16
    OK, so v0 is cosine theta,
    sine theta; v1 is
  • 3:16 - 3:21
    minus sine theta, cosine theta, as we have seen.
    And so we write this as R
  • 3:21 - 3:25
    times alpha where R is the rotation
    matrix.
  • 3:25 - 3:26
    Okay?
  • 3:26 - 3:29
    A unitary matrix that corresponds to these
    basis vectors.
  • 3:30 - 3:36
    Alright, and it's not hard to verify that
    the rotation matrix times its transpose
  • 3:36 - 3:41
    is equal to the identity, meaning its
    transpose is equal to its inverse.
  • 3:41 - 3:45
    So it's a unitary matrix, as we probably
    well know by now.
  • 3:47 - 3:51
    Okay.
    So let's look at norm conservation.
  • 3:51 - 3:56
    So the square norm in the canonical basis
    is just as we announced,
  • 3:56 - 4:00
    so x squared is equal to alpha 0 squared
    plus alpha 1 squared.
  • 4:00 - 4:04
    The square norm in the rotated basis is
    the same, but with respect to beta.
  • 4:04 - 4:05
    Okay.
  • 4:05 - 4:08
    So we are going to show that these two
    things are the same.
  • 4:10 - 4:11
    Or, verify Parseval's formula.
  • 4:13 - 4:16
    Well, one way to write beta 0 squared plus
    beta 1 squared
  • 4:16 - 4:20
    is to say it's a vector beta transpose
    times beta, right, okay.
  • 4:20 - 4:24
    I'm just, so I'm making myself absolutely
    clear: it's
  • 4:24 - 4:29
    the scalar product of the vector beta,
    okay, with itself.
  • 4:29 - 4:30
    Okay.
  • 4:30 - 4:31
    That's the thing here.
  • 4:31 - 4:34
    Now, of course, beta is equal to R times
    alpha.
  • 4:34 - 4:38
    Beta transpose is (R times alpha) transpose.
  • 4:38 - 4:42
    We go the extra step of reordering
    the products here, so
  • 4:42 - 4:48
    the alpha transpose comes in front, R
    transpose here times R times alpha.
  • 4:48 - 4:53
    This of course simplifies to the identity,
    as we know, because R is a unitary matrix.
  • 4:53 - 4:55
    So this is equal to alpha transpose times
    alpha.
  • 4:56 - 5:01
    And of course we verify what we set out to
    do, okay.
  • 5:01 - 5:03
    Now we did this in two dimensions,
  • 5:03 - 5:07
    it's obvious that this will hold in n
    dimensions for an arbitrary n.
  • 5:07 - 5:13
    It turns out it also holds
    for infinite-dimensional orthonormal bases.
  • 5:13 - 5:20
    Okay, so that's Parseval's formula, very
    important formula in signal processing.
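
As a quick numerical sanity check of this norm-conservation argument, here is a minimal NumPy sketch (the angle and the vector below are arbitrary choices for the demo, not values from the lecture): it builds the 2-D rotation matrix R whose rows are v0 and v1, computes beta = R alpha, and confirms that R is unitary and that the norm is preserved.

```python
import numpy as np

theta = 0.6                       # arbitrary rotation angle (illustrative choice)
alpha = np.array([2.0, -1.0])     # coordinates of x in the canonical basis

# Rotation matrix whose rows are the new basis vectors v0 and v1
R = np.array([[ np.cos(theta), np.sin(theta)],
              [-np.sin(theta), np.cos(theta)]])

beta = R @ alpha                  # beta_k = <v_k, x>: coordinates in the rotated basis

print(np.allclose(R.T @ R, np.eye(2)))            # True: R is unitary
print(np.allclose(alpha @ alpha, beta @ beta))    # True: norm conservation (Parseval)
```
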
  • 5:21 - 5:27
    Okay, what it really means is that if you
    have a vector X,
  • 5:27 - 5:33
    and you look at this vector in, you know,
    this basis,
  • 5:35 - 5:41
    and you look at it in some other basis, as we
    just did, which is a rotation, because all
  • 5:41 - 5:42
    orthonormal
  • 5:42 - 5:45
    bases are essentially rotations of each
    other and maybe
  • 5:45 - 5:50
    some symmetry, then the length of the
    vector doesn't change.
  • 5:50 - 5:54
    Okay, so that's of course norm
    conservation, which also means distance
  • 5:54 - 5:57
    conservation through these transforms, or
  • 5:57 - 6:00
    through the expansion into orthonormal
    bases.
  • 6:03 - 6:04
    Alright.
  • 6:04 - 6:06
    The next step we want to do is
    approximation.
  • 6:06 - 6:10
    We had briefly mentioned this at the
    beginning of this module.
  • 6:10 - 6:14
    So, we have a vector in R3 here.
  • 6:14 - 6:21
    And, the vector X should be represented in
    a subspace spanned by e0 and e1.
  • 6:23 - 6:28
    Well, that's a subspace S of V.
    Spanned by e0 e1,
  • 6:28 - 6:33
    now shown in red, and the approximation is
    x hat.
  • 6:34 - 6:38
    It belongs to S, and it's the orthogonal
    projection
  • 6:38 - 6:42
    of x onto the plane spanned by e0 and e1.
  • 6:42 - 6:42
    Okay?
  • 6:42 - 6:45
    And that orthogonal projection we'll
    denote by x hat.
  • 6:48 - 6:51
    How can we do this?
    It is very simple.
  • 6:51 - 6:53
    We take an orthonormal basis for the
    subspace.
  • 6:53 - 6:57
    Okay, so remember we have the big space V.
  • 6:57 - 6:59
    We have the subspace S.
  • 6:59 - 7:02
    So we take an orthonormal basis for the
    subspace and
  • 7:02 - 7:05
    the orthogonal projection is simply going
    to be given by
  • 7:05 - 7:13
    x hat, expanded in the orthonormal basis that
    spans the subspace. Okay?
  • 7:13 - 7:16
    So the set of vectors s k is an
    orthonormal basis for
  • 7:16 - 7:20
    S, and here is the expansion formula in
    this orthonormal basis.
  • 7:21 - 7:24
    This orthogonal projection is the best
    approximation
  • 7:24 - 7:30
    over S to the vector x,
    and it's best in the l
  • 7:30 - 7:36
    2 sense, or, it will minimize the
    quadratic norm of the error.
  • 7:38 - 7:38
    Okay.
  • 7:38 - 7:43
    So in a word, orthogonal projection has
    minimum-norm error.
  • 7:43 - 7:43
    So.
  • 7:43 - 7:49
    Among all vectors y that belong to s, the
    one that minimizes
  • 7:49 - 7:53
    the square of the difference here, so the
    square norm of the
  • 7:53 - 7:58
    difference x minus y is this vector x hat
    that we have
  • 7:58 - 8:01
    written out in terms of an orthonormal basis
    for the subspace s.
  • 8:03 - 8:06
    Very important property is that the error
    x
  • 8:06 - 8:10
    minus x hat is orthogonal to the
    approximation, okay?
  • 8:10 - 8:15
    So we'll see it in the next slide, but
    this is an extremely important formula.
  • 8:15 - 8:21
    It's called the orthogonality principle.
    It is used all over signal processing.
  • 8:21 - 8:25
    When we minimize quadratic error.
    All right.
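
To make the projection formula and the orthogonality principle concrete, here is a minimal NumPy sketch (the vector and the subspace are arbitrary illustrative choices, not taken from the lecture's slides): it projects a vector of R3 onto the subspace spanned by e0 and e1 via x hat = sum over k of <s_k, x> s_k, and checks that the error x minus x hat is orthogonal to the subspace and to the approximation.

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0])        # the vector we want to approximate (arbitrary)
basis = [np.array([1.0, 0.0, 0.0]),  # orthonormal basis of the subspace S = span(e0, e1)
         np.array([0.0, 1.0, 0.0])]

# Orthogonal projection: x_hat = sum_k <s_k, x> s_k
x_hat = sum(np.dot(s, x) * s for s in basis)

error = x - x_hat
print(x_hat)                              # [1. 2. 0.]
print([np.dot(error, s) for s in basis])  # [0.0, 0.0]: error is orthogonal to S
print(np.dot(error, x_hat))               # 0.0: the orthogonality principle
```
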
  • 8:25 - 8:28
    So let's see this very graphically; we
    have
  • 8:28 - 8:34
    S, a subspace, this guy is the subspace, and V,
    V is the ambient space,
  • 8:34 - 8:40
    in this particular case R2, and S is R1, it's a
    one-dimensional subspace.
  • 8:40 - 8:41
    Okay.
  • 8:41 - 8:45
    So we want to find the closest point to x.
    So x is what we want to approximate.
  • 8:45 - 8:50
    We'd like to find the closest point
    somewhere here in S.
  • 8:50 - 8:50
    Okay?
  • 8:50 - 8:53
    How do we do this?
    It has to be closest in the 2
  • 8:53 - 9:01
    norm, so we create a circle around the tip
    here of x, and we grow the circle.
  • 9:01 - 9:02
    Okay.
  • 9:02 - 9:03
    We are still not at s.
  • 9:03 - 9:05
    And at some point, we reach s.
  • 9:05 - 9:08
    This is the closest one,
    right?
  • 9:08 - 9:12
    Because the circles measure quadratic
    distance.
  • 9:12 - 9:15
    And the first time we hit s, it is exactly
    here.
  • 9:15 - 9:19
    Okay.
    So we have x-hat in blue.
  • 9:19 - 9:25
    And we notice that x minus x-hat, the red
    vector is orthogonal to the blue vector.
  • 9:25 - 9:26
    That's the
  • 9:26 - 9:26
    orthogonality
  • 9:26 - 9:28
    principle we have seen on the previous
    slide.
  • 9:31 - 9:35
    Let's look at a very concrete example.
    It's polynomial approximation.
  • 9:36 - 9:39
    So we go back to our favorite interval, -1
    to 1.
  • 9:39 - 9:44
    Okay, so we will get these guys, and for
    this
  • 9:44 - 9:50
    interval, we define
    a subspace P N on minus one to one,
  • 9:50 - 9:57
    which are the polynomials up to degree N minus
    one on the interval minus one to one,
  • 9:57 - 10:01
    okay?
    So a basis for this is simply to take
  • 10:03 - 10:09
    as the successive monomials t to the k, for k going
    from zero to capital n minus one.
  • 10:09 - 10:11
    Okay, so a naive basis here.
  • 10:11 - 10:17
    Is really 1, t, t squared, t cubed,
    etcetera.
  • 10:17 - 10:18
    Okay.
  • 10:18 - 10:20
    Now, these guys are not orthonormal to
    each other.
  • 10:20 - 10:20
    Okay?
  • 10:20 - 10:24
    So this naive basis is not orthogonal.
  • 10:24 - 10:28
    Okay, which is self-evident, because, for
    example, on the
  • 10:28 - 10:33
    interval -1 to 1,
    let's take this interval.
  • 10:33 - 10:35
    We have the first element.
  • 10:35 - 10:36
    That's this guy.
    Okay.
  • 10:36 - 10:42
    The second guy, he's orthogonal, because
    the first one was symmetric, the second
  • 10:42 - 10:49
    one is antisymmetric.
    But the third guy is a quadratic function.
  • 10:49 - 10:52
    And it's not properly scaled.
  • 10:52 - 10:53
    I apologize.
    But the quadratic function,
  • 10:53 - 10:57
    of course, is also symmetric.
    So for example, the inner product between
  • 10:57 - 11:04
    1 and t square on the interval minus 1 to
    1 is not equal to 0.
  • 11:04 - 11:06
    Okay?
  • 11:06 - 11:08
    Please check this more explicitly if
  • 11:08 - 11:11
    you like, but it's fairly geometrically
    evident.
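
To spell out this check (a short calculation, assuming the standard L2 inner product on [-1, 1] used throughout this example):

$$
\langle 1, t^2 \rangle = \int_{-1}^{1} 1 \cdot t^2 \, dt = \frac{2}{3} \neq 0,
\qquad\text{whereas}\qquad
\langle 1, t \rangle = \int_{-1}^{1} t \, dt = 0 .
$$
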
  • 11:12 - 11:14
    Okay, now we are going to try to
    approximate
  • 11:14 - 11:17
    something that does not live on the
    polynomial space.
  • 11:17 - 11:18
    So that would be for
  • 11:18 - 11:20
    example trigonometric functions.
  • 11:20 - 11:23
    So take x, what we are going to
    approximate as
  • 11:23 - 11:28
    sin t over the interval minus one to
    one.
  • 11:28 - 11:31
    And we would like to approximate it on p
    3.
  • 11:31 - 11:34
    So, polynomials of degree up to two
  • 11:37 - 11:43
    Okay, so the way to do it is, we build an
    orthonormal basis from the naive basis.
  • 11:43 - 11:45
    We project x over the orthonormal basis.
  • 11:45 - 11:48
    We compute the approximation error.
    Okay?
  • 11:48 - 11:53
    So same thing as usual.
    So S.
  • 11:53 - 11:57
    Here is this P3 space.
  • 11:57 - 12:02
    Our sin is somewhere out there, and we
    want to compute this.
  • 12:02 - 12:08
    To do this, we first construct an orthonormal
    basis for the subspace.
  • 12:08 - 12:09
    Okay.
  • 12:09 - 12:10
    We can compare this to the
  • 12:10 - 12:14
    naive approximation, which would be Taylor
    approximation.
  • 12:14 - 12:19
    It's well known, very useful, but it's not
    optimal, as we will see in just a minute.
  • 12:19 - 12:27
    Okay, so from the naive basis, remember,
    we have the naive basis, it's 1,
  • 12:27 - 12:33
    t, t squared, t cubed, etc.
  • 12:33 - 12:38
    From that naive basis, we
    compute an orthonormal basis.
  • 12:38 - 12:44
    There is a procedure to do this which is
    called the Gram-Schmidt algorithm, okay.
  • 12:44 - 12:46
    This is explained in the appendix of this
    lecture; we
  • 12:46 - 12:48
    are not going to do it in the main
    lecture.
  • 12:48 - 12:52
    And it's a recursive procedure where you
    take the first one, you
  • 12:52 - 12:54
    normalize it.
    That's fine.
  • 12:54 - 12:58
    You take the second one, and you make sure
    it's orthogonal to the first one.
  • 12:58 - 13:00
    You normalize it, and you keep going.
  • 13:00 - 13:03
    And the result of this is that you get
    orthonormal
  • 13:03 - 13:06
    vectors, u 0, which is just a scaled
    version of 1.
  • 13:06 - 13:10
    The second one, t, is already orthogonal, so you
    don't
  • 13:10 - 13:13
    have to change much except for the
    scaling.
  • 13:13 - 13:16
    The third one is a transformation of
    t-square.
  • 13:16 - 13:17
    T-square,
  • 13:17 - 13:21
    I mean, like this, more explicitly,
    t squared,
  • 13:21 - 13:24
    into, you know, a second-order polynomial u2,
    which
  • 13:24 - 13:29
    is constructed such that u2 is orthogonal
    to u0 and u1.
  • 13:29 - 13:32
    And you can keep going like this.
    It's a standard construction.
  • 13:32 - 13:33
    It's called Legendre polynomials.
  • 13:33 - 13:36
    And just from the name, you know this is
  • 13:36 - 13:40
    a 19th century construction, so it's
    extremely well known.
  • 13:40 - 13:43
    And the appendix gives the details.
    Okay, so now we have an
  • 13:43 - 13:45
    orthonormal basis for the subspace.
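
As a rough numerical sketch of this Gram-Schmidt step (my own reconstruction, approximating the integrals on [-1, 1] with a fine grid; this is not the lecture's code), the snippet below orthonormalizes the monomials 1, t, t squared, t cubed and checks that the result satisfies <u_i, u_j> = delta(i - j); the first function comes out as the constant 1 over square root of 2, about 0.707, as mentioned next.

```python
import numpy as np

t = np.linspace(-1.0, 1.0, 20001)      # fine grid on the interval [-1, 1]

def inner(f, g):
    # Approximate the L2 inner product <f, g> on [-1, 1] with the trapezoid rule
    return np.trapz(f * g, t)

# Naive (non-orthogonal) basis: the monomials 1, t, t^2, t^3
naive = [t**0, t**1, t**2, t**3]

# Gram-Schmidt: subtract projections on the previous vectors, then normalize
ortho = []
for p in naive:
    for u in ortho:
        p = p - inner(u, p) * u
    ortho.append(p / np.sqrt(inner(p, p)))

# The Gram matrix should be (numerically) the identity: <u_i, u_j> = delta_{i-j}
gram = np.array([[inner(ui, uj) for uj in ortho] for ui in ortho])
print(np.round(gram, 6))
print(ortho[0][0])   # the constant u0 is approximately 1/sqrt(2) = 0.7071
```
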
  • 13:46 - 13:50
    And, let's just look at these Legendre
    polynomials, they're sort of cute,
  • 13:50 - 13:55
    so the first one remember, it's, 1 over
    square root of 2.
  • 13:56 - 13:59
    So here we go.
    That's 0.7 something.
  • 13:59 - 14:01
    It's the black line.
  • 14:01 - 14:04
    The second one is proportional to t, but
    it has been scaled.
  • 14:06 - 14:08
    The third
  • 14:08 - 14:13
    one, which is quadratic, has been moved
    about.
  • 14:13 - 14:15
    So now it is actually orthogonal.
  • 14:15 - 14:18
    It's automatic, the yellow curve is
    automatically
  • 14:18 - 14:21
    orthogonal to the red curve because one
  • 14:21 - 14:23
    is symmetric and the other one is
    antisymmetric.
  • 14:23 - 14:29
    But the shift that was added if we go back
    here, let me just show you if I
  • 14:29 - 14:34
    shift here, this shift makes sure that the
    inner product between the yellow curve and
  • 14:34 - 14:35
    the black curve is zero.
    Okay?
  • 14:37 - 14:41
    And the fourth one is a third-degree
    polynomial, the green guy.
  • 14:43 - 14:47
    It is antisymmetric, so it will be
    automatically orthogonal to the black one
  • 14:47 - 14:53
    and the yellow one, but it has to be adjusted
    so that it is orthogonal to the red one.
  • 14:53 - 14:55
    And we can keep going.
  • 14:55 - 14:59
    Okay, so that's a fourth-order guy, a
    fourth-order polynomial, and so on.
  • 14:59 - 15:01
    So Legendre polynomials go off to
    infinity, but
  • 15:01 - 15:03
    we'll just look at a few of them.
  • 15:03 - 15:03
    Okay?
  • 15:05 - 15:06
    Here's a phase one.
  • 15:06 - 15:08
    And it's a very cute picture.
  • 15:08 - 15:10
    And this set of polynomials on this
    interval
  • 15:10 - 15:15
    -1 to 1, okay, they're important on this
    interval.
  • 15:15 - 15:18
    It's defined in such, it's constructed
    actually
  • 15:18 - 15:21
    in such a way that the inner product
  • 15:24 - 15:30
    of two of these functions is equal to
    delta i minus j.
  • 15:30 - 15:34
    So it's equal to zero when i is different
    from
  • 15:34 - 15:39
    j, and it's equal to one when i is equal
    to j, okay?
  • 15:39 - 15:43
    It's not self-evident when you look at
    the, at the functions
  • 15:43 - 15:46
    except for the symmetries that I pointed
    out, a minute ago.
  • 15:48 - 15:49
    Alright.
  • 15:49 - 15:50
    So now we can compute
  • 15:50 - 15:53
    our expansion coefficients.
    Remember, we want to write the formula
  • 15:53 - 15:59
    where x hat is going to be some sum of
    alpha k
  • 15:59 - 16:06
    times u k, as we call these guys here, and k goes
    from 0 to capital K minus 1.
  • 16:06 - 16:11
    So that's the orthogonal projection onto
    the subspace spanned by the u k.
  • 16:11 - 16:14
    Alright, so we have to take the integrals
    between -1
  • 16:14 - 16:15
    and 1
  • 16:15 - 16:18
    of the function, these polynomials we
    have just defined, the
  • 16:18 - 16:18
    Legendre
  • 16:18 - 16:22
    polynomials, and the function we want to
    approximate, sine t.
  • 16:22 - 16:23
    Okay.
  • 16:23 - 16:29
    So sin is of course an odd function as we
    know
  • 16:29 - 16:33
    and so alpha zero is going to be
    automatically equal to zero.
  • 16:33 - 16:35
    Alpha one is going to be different from
    zero,
  • 16:35 - 16:41
    because both u one and sine are odd
    functions.
  • 16:41 - 16:43
    So that's what you get if you
  • 16:43 - 16:47
    do the integral, 0.73 something.
  • 16:47 - 16:50
    And the third coefficient, alpha two, is
    also
  • 16:50 - 16:53
    equal to zero, because this character here
    is
  • 16:54 - 16:58
    symmetric, and this one is antisymmetric,
    so it's automatically equal to 0.
  • 16:58 - 17:04
    Okay, so what do we get?
    So, when we do the orthogonal projection
  • 17:04 - 17:09
    on these three basis vectors,
  • 17:09 - 17:14
    so, u0, u1, u2, we get an approximation,
    which is
  • 17:14 - 17:19
    simply alpha 1 times u1, and it's given by this
    formula.
  • 17:19 - 17:23
    If we do Taylor series, then the first
    order approximation of
  • 17:23 - 17:29
    the Taylor series is simply to take sin t equal
    to t, okay.
  • 17:29 - 17:32
    And so we're going to compare these two
    approximations, they
  • 17:32 - 17:34
    look very similar but one has been scaled
    a little
  • 17:34 - 17:36
    bit, okay.
    Alright.
  • 17:37 - 17:43
    So now we see that the approximation of
    sine, which is the blue curve, the smooth
  • 17:43 - 17:49
    blue curve here, t is the red curve, and
    green is simply a scaled version.
  • 17:49 - 17:51
    Doesn't look like a big deal.
  • 17:51 - 17:54
    It's, you know, 10% smaller, but you can
    immediately see that it's
  • 17:54 - 17:58
    actually hugging the blue curve more
    closely over the interval -1 to 1,
  • 17:58 - 17:59
    right?
  • 17:59 - 18:01
    So we are approximating this over this
    interval.
  • 18:01 - 18:04
    If we change the interval, then our
    approximation would look different.
  • 18:04 - 18:05
    Okay?
  • 18:05 - 18:07
    But over this interval, as we can see
  • 18:07 - 18:11
    here, we plotted the absolute value of the
    difference.
  • 18:11 - 18:14
    The red one is sin t minus t.
  • 18:14 - 18:18
    Okay, it goes off quite a bit at the ends
    of the interval here.
  • 18:18 - 18:22
    It's very nice in the region around zero,
    okay.
  • 18:22 - 18:23
    And we see that the
  • 18:23 - 18:29
    green approximation which is sin t minus
    our projection onto
  • 18:29 - 18:35
    the subspace of the Legendre
    polynomials of orders zero,
  • 18:35 - 18:40
    one and two. That error, overall, is
    smaller.
  • 18:40 - 18:43
    It never goes off to these values, okay.
  • 18:43 - 18:47
    Sometimes it's bigger, but overall, it
    actually turns out to be smaller, okay?
  • 18:49 - 18:55
    So, to compare this numerically, we can
    compute the norm of
  • 18:55 - 19:01
    sine t minus alpha 1 u1, and it's 0.0337. For the
    Taylor
  • 19:01 - 19:06
    series, it's almost three times bigger.
    It's 0.08.
  • 19:06 - 19:09
    Necessarily, we have to be as good as or
  • 19:09 - 19:13
    better than the Taylor series because it's a
    theorem.
  • 19:13 - 19:14
    It's the orthogonal
  • 19:14 - 19:18
    projection theorem: we find the minimum
    norm approximation.
  • 19:18 - 19:19
    Okay?
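
Putting the whole example together, here is a small NumPy sketch of the computation described above (a reconstruction under the assumption that P3 means polynomials of degree up to two, as used in the projection; not the lecture's code): it builds the orthonormal Legendre-type basis, finds that only alpha 1 survives with a value of about 0.74, and compares the error norm of the projection with that of the first-order Taylor approximation sin t about equal to t. The values come out close to the 0.0337 and roughly 0.08 quoted in the lecture.

```python
import numpy as np

t = np.linspace(-1.0, 1.0, 20001)
x = np.sin(t)                          # the function we want to approximate

def inner(f, g):
    # L2 inner product on [-1, 1], approximated with the trapezoid rule
    return np.trapz(f * g, t)

# Orthonormal basis u0, u1, u2 of P3 (degree <= 2) via Gram-Schmidt on 1, t, t^2
basis = []
for p in [t**0, t**1, t**2]:
    for u in basis:
        p = p - inner(u, p) * u
    basis.append(p / np.sqrt(inner(p, p)))

# Expansion coefficients alpha_k = <u_k, sin>; by symmetry only alpha_1 survives
alpha = [inner(u, x) for u in basis]
print(np.round(alpha, 4))              # roughly [0., 0.7377, 0.]

x_hat = sum(a * u for a, u in zip(alpha, basis))   # orthogonal projection onto P3
taylor = t                                         # first-order Taylor: sin(t) ~ t

print(np.sqrt(inner(x - x_hat, x - x_hat)))        # about 0.034 (lecture: 0.0337)
print(np.sqrt(inner(x - taylor, x - taylor)))      # about 0.086 (lecture: about 0.08)
```
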
  • 19:19 - 19:23
    And with this, we have shown, on a very
    practical
  • 19:23 - 19:28
    example, how to do orthogonal approximation
    using an orthonormal basis.
  • 19:30 - 19:33
    Now, this was all a lot of work.
  • 19:33 - 19:38
    We defined Hilbert spaces, we had a lot of
    definitions, and so on.
  • 19:38 - 19:42
    So why do we do all this?
    It's a great question.
  • 19:42 - 19:49
    So, the first point is that both finite-length and
    periodic signals live in C N.
  • 19:49 - 19:52
    So we can use all of linear algebra and
  • 19:52 - 19:56
    all the geometry of linear algebra to do
    this.
  • 19:56 - 19:56
    And
  • 19:56 - 20:01
    infinite-length signals, which we like for
    general signal processing, live in
  • 20:01 - 20:05
    a more general Hilbert space, which is
    small l2 of Z.
  • 20:05 - 20:08
    Okay, so we have a common geometric
    framework for
  • 20:08 - 20:13
    both finite-length, periodic signals and
    infinite-length sequences, okay.
  • 20:13 - 20:19
    So we have one way to think about a
    whole bunch of different problems, okay.
  • 20:19 - 20:21
    Then we'll see that the expansion into
    orthogonal
  • 20:21 - 20:24
    bases is very central to signal
    processing.
  • 20:24 - 20:28
    So, we can use different bases
  • 20:28 - 20:30
    as different observation tools for
    signals.
  • 20:30 - 20:33
    We're going to see something called the
    Short
  • 20:33 - 20:38
    Time Fourier Transform, to be defined in
    Module 4.
  • 20:38 - 20:43
    And the Short Time Fourier Transform, as
    the second half of the name says, is
  • 20:43 - 20:46
    something like a Fourier transform, but
    it's a very particular way to look
  • 20:46 - 20:48
    at signals.
    It's very popular.
  • 20:48 - 20:51
    For doing speech analysis and the like.
    Okay?
  • 20:51 - 20:54
    And when we do subspace projections, we
    will
  • 20:54 - 20:58
    see that we can do filtering, which will
  • 20:58 - 21:00
    be explained, of course, in detail, in
    this
  • 21:00 - 21:03
    class, and we can do, for example, image
    compression.
  • 21:03 - 21:04
    Okay.
  • 21:04 - 21:10
    So the notions we have seen, one was to
    build bases, that's important.
  • 21:10 - 21:12
    These are like our
  • 21:12 - 21:14
    tools to look at signals.
  • 21:14 - 21:18
    Okay, and another one was subspace
    projection, which is
  • 21:18 - 21:22
    something that will come up a lot when we do
    approximation.
  • 21:22 - 21:24
    And in particular when we do compression.
  • 21:25 - 21:28
    Okay, let me just finish this properly
    here.
  • 21:28 - 21:31
    We have the origin, we have a sub space s,
    we have
  • 21:31 - 21:36
    x and we have the orthogonal projection
    and we have the orthogonality principle.
Title:
3.3 - Hilbert Space and approximation [21:40] 2nd edition
Description:

From EPFL's Digital Signal Processing course on Coursera, in its 2nd 2014 edition

Video Language:
English