
The Jacobian matrix

  • 0:01 - 0:02
    - [Narrator] In the last video we were
  • 0:02 - 0:04
    looking at this particular function.
  • 0:04 - 0:05
    It's a very nonlinear function.
  • 0:05 - 0:07
    And we were picturing
    it as a transformation
  • 0:07 - 0:11
    that takes every point x,
    y in space to the point
  • 0:11 - 0:13
    x plus sine of y, y plus sine of x.
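(Not part of the video: a minimal Python sketch of this transformation, just to make the mapping concrete; the function name f is illustrative, and the sample point is the one the video zooms in on shortly.)

```python
import numpy as np

def f(x, y):
    # The transformation described in the video: (x, y) -> (x + sin(y), y + sin(x)).
    return np.array([x + np.sin(y), y + np.sin(x)])

# The point the video zooms in on:
print(f(-2.0, 1.0))  # approximately [-1.159, 0.091]
```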
  • 0:13 - 0:17
    And moreover, we zoomed
    in on a specific point.
  • 0:17 - 0:18
    And let me actually write down what
  • 0:18 - 0:21
    point we zoomed in on, it was (-2,1).
  • 0:21 - 0:25
    That's something we're gonna
    want to record here (-2,1).
  • 0:26 - 0:28
    And I added a couple of extra
    grid lines around it
  • 0:28 - 0:31
    just so we can see in detail
    what the transformation
  • 0:31 - 0:34
    does to points that are in the
    neighborhood of that point.
  • 0:34 - 0:36
    And over here, this
    square shows the zoomed
  • 0:36 - 0:38
    in version of that neighborhood.
  • 0:38 - 0:40
    And what we saw is that even though the
  • 0:40 - 0:42
    function as a whole, as a transformation,
  • 0:42 - 0:45
    looks rather complicated,
    around that one point,
  • 0:45 - 0:47
    it looks like a linear function.
  • 0:47 - 0:50
    It's locally linear so
    what I'll show you here
  • 0:50 - 0:53
    is what matrix is gonna
    tell you the linear
  • 0:53 - 0:55
    function that this looks like.
  • 0:55 - 0:58
    And this is gonna be kind
    of a two by two matrix.
  • 0:58 - 1:00
    I'll make a lot of room
    for ourselves here.
  • 1:00 - 1:03
    It'll be a two by two matrix and the way
  • 1:03 - 1:06
    to think about it is to first go back
  • 1:06 - 1:08
    to our original setup
    before the transformation.
  • 1:08 - 1:10
    And think of just a
    tiny step to the right.
  • 1:10 - 1:14
    What I'm gonna think of
    as a little, partial x.
  • 1:14 - 1:16
    A tiny step in the x direction.
  • 1:16 - 1:18
    And what that turns into
    after the transformation
  • 1:18 - 1:22
    is gonna be some tiny
    step in the output space.
  • 1:22 - 1:23
    And here let me actually kind of draw on
  • 1:23 - 1:26
    what that tiny step turned into.
  • 1:26 - 1:27
    It's no longer purely in the x direction.
  • 1:27 - 1:28
    It has some rightward component.
  • 1:28 - 1:30
    But now also some downward component.
  • 1:30 - 1:33
    And to be able to represent
    this in a nice way,
  • 1:33 - 1:35
    what I'm gonna do is
    instead of writing the
  • 1:35 - 1:37
    entire function as something with
  • 1:37 - 1:39
    a vector valued output, I'm gonna go ahead
  • 1:39 - 1:43
    and represent this as two
    separate scalar-valued functions.
  • 1:44 - 1:48
    I'm gonna write the scalar-valued
    function f1 of x, y.
  • 1:50 - 1:53
    So I'm just giving a
    name to x plus sine of y.
  • 1:53 - 1:56
    And f2 of x, y, again all I'm doing is
  • 1:56 - 2:00
    giving a name to the functions
    we already have written down.
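For reference, written out in symbols, the two components being named here are:

```latex
f_1(x, y) = x + \sin(y), \qquad f_2(x, y) = y + \sin(x)
```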
  • 2:00 - 2:02
    When I look at this vector, the
  • 2:02 - 2:04
    consequence of taking a tiny dx step
  • 2:04 - 2:07
    in the input space, that corresponds to
  • 2:07 - 2:09
    some 2D movement in the output space.
  • 2:09 - 2:11
    And the x component of that movement.
  • 2:11 - 2:13
    Right if I was gonna draw this out
  • 2:13 - 2:16
    and say hey, what's the x
    component of that movement.
  • 2:16 - 2:18
    That's something we think of as a little
  • 2:18 - 2:22
    partial change in f1, the
    x component of our output.
  • 2:23 - 2:25
    And if we divide this, if we take you know
  • 2:25 - 2:27
    partial f1 divided by the size of that
  • 2:27 - 2:29
    initial tiny change, it basically scales
  • 2:29 - 2:31
    it up to be a normal sized vector.
  • 2:31 - 2:33
    Not a tiny nudge but something that's more
  • 2:33 - 2:34
    constant that doesn't shrink as we
  • 2:34 - 2:37
    zoom in further and further.
  • 2:37 - 2:39
    And then similarly the
    change in the y direction,
  • 2:39 - 2:41
    right the vertical component of that step
  • 2:41 - 2:43
    that was still caused by the dx.
  • 2:43 - 2:45
    Right, it's still caused by that initial
  • 2:45 - 2:47
    step to the right, that is gonna be
  • 2:47 - 2:50
    the tiny, partial change in f2.
  • 2:51 - 2:53
    The y component of the output, because
  • 2:53 - 2:55
    here we're all just
    looking in the output space
  • 2:55 - 2:59
    that was caused by a partial
    change in the x direction.
  • 3:01 - 3:02
    And again I kind of
    like to think about this:
  • 3:02 - 3:04
    we're dividing by a tiny amount.
  • 3:04 - 3:07
    This partial f2 is really
    a tiny, tiny nudge.
  • 3:07 - 3:09
    But by dividing by the size of the initial
  • 3:09 - 3:11
    tiny nudge that caused it, we're getting
  • 3:11 - 3:12
    something that's basically a number.
  • 3:12 - 3:13
    Something that doesn't shrink when
  • 3:13 - 3:16
    we consider more and
    more zoomed in versions.
  • 3:16 - 3:18
    So that, that's what happens when
  • 3:18 - 3:20
    we take a tiny step in the x direction.
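In symbols, that tiny x-step, scaled by its own size, fills in the first column of the matrix with the two ratios just described:

```latex
\text{first column} \;=\;
\begin{pmatrix}
\dfrac{\partial f_1}{\partial x} \\[6pt]
\dfrac{\partial f_2}{\partial x}
\end{pmatrix}
```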
  • 3:20 - 3:23
    But another thing you could
    do, another thing you can
  • 3:23 - 3:26
    consider is a tiny step
    in the y direction.
  • 3:26 - 3:28
    Right cause we wanna know, hey, if
  • 3:28 - 3:31
    you take a single step
    some tiny unit upward,
  • 3:31 - 3:35
    what does that turn into
    after the transformation.
  • 3:38 - 3:42
    And what that looks like is this vector
  • 3:42 - 3:43
    that still has some upward component.
  • 3:43 - 3:45
    But it also has a rightward component.
  • 3:45 - 3:46
    And now I'm gonna write its components
  • 3:46 - 3:49
    as the second column of the matrix.
  • 3:49 - 3:50
    Because as we know when
    you're representing
  • 3:50 - 3:52
    a linear transformation with a matrix,
  • 3:52 - 3:54
    the first column tells you where the first
  • 3:54 - 3:56
    basis vector goes and the second column
  • 3:56 - 3:58
    shows where the second basis vector goes.
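As a quick reminder of that linear algebra fact: for any two by two matrix, multiplying by the basis vectors picks out its columns.

```latex
\begin{pmatrix} a & b \\ c & d \end{pmatrix}
\begin{pmatrix} 1 \\ 0 \end{pmatrix}
=
\begin{pmatrix} a \\ c \end{pmatrix},
\qquad
\begin{pmatrix} a & b \\ c & d \end{pmatrix}
\begin{pmatrix} 0 \\ 1 \end{pmatrix}
=
\begin{pmatrix} b \\ d \end{pmatrix}
```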
  • 3:58 - 4:00
    If that feels unfamiliar, either
  • 4:00 - 4:01
    check out the refresher video or
  • 4:01 - 4:04
    maybe go and look at some of
    the linear algebra content.
  • 4:04 - 4:06
    But to figure out the
    coordinates of this guy,
  • 4:06 - 4:08
    we do basically the same thing.
  • 4:08 - 4:11
    Let's say first of all, the
    change in the x direction
  • 4:11 - 4:15
    here, the x component
    of this nudge vector.
  • 4:15 - 4:19
    That's gonna be given as a
    partial change to f1, right,
  • 4:19 - 4:21
    to the x component of the output.
  • 4:21 - 4:22
    Here we're looking in the output space.
  • 4:22 - 4:25
    We're dealing with f1 and f2
  • 4:25 - 4:27
    and we're asking what that change was
  • 4:27 - 4:31
    that was caused by a tiny
    change in the y direction.
  • 4:32 - 4:35
    So the change in f1 caused
    by some tiny step in the y
  • 4:35 - 4:39
    direction divided by the
    size of that tiny step.
  • 4:39 - 4:42
    And then the y component
    of our output here.
  • 4:42 - 4:44
    The y component of the
    step in the output space
  • 4:44 - 4:46
    that was caused by the initial tiny
  • 4:46 - 4:48
    step upward in the input space.
  • 4:48 - 4:50
    Well that is the change in f2,
  • 4:52 - 4:56
    second component of our
    output as caused by dy.
  • 4:56 - 4:59
    As caused by that little partial y.
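Putting the two columns together, the matrix being assembled on screen is:

```latex
J =
\begin{pmatrix}
\dfrac{\partial f_1}{\partial x} & \dfrac{\partial f_1}{\partial y} \\[8pt]
\dfrac{\partial f_2}{\partial x} & \dfrac{\partial f_2}{\partial y}
\end{pmatrix}
```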
  • 4:59 - 5:00
    And of course all of this is very specific
  • 5:00 - 5:03
    to the point that we started at, right?
  • 5:03 - 5:05
    We started at the point (-2,1).
  • 5:05 - 5:08
    So each of these partial derivatives
  • 5:08 - 5:09
    is something that really we're saying,
  • 5:09 - 5:13
    don't just take the function in the abstract,
    evaluate it at the point (-2,1),
  • 5:13 - 5:17
    and when you evaluate each one of these
  • 5:17 - 5:19
    at the point (-2,1)
    you'll get some number.
  • 5:19 - 5:20
    And that will give you a very
  • 5:20 - 5:22
    concrete two by two matrix that's gonna
  • 5:22 - 5:24
    represent the linear
    transformation that this
  • 5:24 - 5:27
    guy looks like once you've zoomed in.
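(Not from the video, which saves the symbolic computation for next time: the same concrete numbers can be estimated directly from the tiny-nudge picture using finite differences. The helper name jacobian_fd and the step size h are just for this sketch.)

```python
import numpy as np

def f(x, y):
    # The transformation from the video: (x, y) -> (x + sin(y), y + sin(x)).
    return np.array([x + np.sin(y), y + np.sin(x)])

def jacobian_fd(x, y, h=1e-6):
    # Nudge each input by h and divide by h, mirroring the
    # "tiny change divided by the size of the nudge" picture.
    col_x = (f(x + h, y) - f(x, y)) / h   # first column: effect of a tiny x-step
    col_y = (f(x, y + h) - f(x, y)) / h   # second column: effect of a tiny y-step
    return np.column_stack([col_x, col_y])

print(jacobian_fd(-2.0, 1.0))
# approximately [[ 1.     0.540]
#                [-0.416  1.   ]]
```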
  • 5:27 - 5:28
    So this matrix here that's full of all
  • 5:28 - 5:32
    of the partial derivatives
    has a very special name.
  • 5:32 - 5:36
    It's called as you may
    have guessed, the Jacobian.
  • 5:36 - 5:39
    Or more fully you'd call
    it the Jacobian Matrix.
  • 5:39 - 5:40
    And one way to think about it is that it
  • 5:40 - 5:43
    carries all of the partial
    differential information right.
  • 5:43 - 5:45
    It's taking into account
    both of these components
  • 5:45 - 5:48
    of the output and both possible inputs.
  • 5:48 - 5:49
    And giving you kind of a grid of
  • 5:49 - 5:51
    what all the partial derivatives are.
  • 5:51 - 5:52
    But as I hope you see, it's much
  • 5:52 - 5:55
    more than just a way of recording
  • 5:55 - 5:56
    what all the partial derivatives are.
  • 5:56 - 5:58
    There's a reason for organizing it
  • 5:58 - 6:00
    like this in particular and it really
  • 6:00 - 6:02
    does come down to this
    idea of local linearity.
  • 6:02 - 6:05
    If you understand that the Jacobian Matrix
  • 6:05 - 6:06
    is fundamentally supposed to represent
  • 6:06 - 6:09
    what a transformation
    looks like when you zoom
  • 6:09 - 6:11
    in near a specific point,
    almost everything else
  • 6:11 - 6:14
    about it will start to fall into place.
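(Again not from the video, just a numerical sanity check of that local-linearity claim: near (-2, 1), the true output of f is almost exactly the zoomed-in linear prediction f(p) + Jv. The Jacobian entries used here, 1, cos y, cos x, 1, are the partial derivatives the next video computes.)

```python
import numpy as np

def f(p):
    x, y = p
    return np.array([x + np.sin(y), y + np.sin(x)])

p = np.array([-2.0, 1.0])

# Jacobian at p, using the partial derivatives (1, cos y, cos x, 1)
# that the next video derives symbolically.
J = np.array([[1.0,          np.cos(p[1])],
              [np.cos(p[0]), 1.0         ]])

v = np.array([0.01, -0.02])   # a small step away from p
print(f(p + v))               # true output of the transformation
print(f(p) + J @ v)           # prediction of the local linear map: nearly identical
```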
  • 6:14 - 6:15
    And in the next video, I'll go ahead
  • 6:15 - 6:16
    and actually compute this just to
  • 6:16 - 6:18
    show you what the process looks like.
  • 6:18 - 6:19
    And how the result we get kind of
  • 6:19 - 6:21
    matches with the picture we're
  • 6:21 - 6:22
    looking at. See you then.
Video Language:
English
Team:
Khan Academy
Duration:
06:22