  • 0:05 - 0:10
    SPEAKER: Hello and welcome back
    to the second part of lecture 2
  • 0:10 - 0:13
    which is about the transport layer.
  • 0:14 - 0:21
    The transport layer segments application
    data into transportable chunks
  • 0:21 - 0:27
    for transmission and also reassembles
    segments as required
  • 0:27 - 0:35
    on the receiver side. The transport
    layer uses port numbers,
  • 0:35 - 0:40
    we also refer to those as ports
    for short,
  • 0:40 - 0:46
    to track individual
    conversations and identify applications.
  • 0:46 - 0:52
    It is important to not confuse
    these port numbers or ports
  • 0:52 - 0:57
    with physical ports on network devices
    such as switches or routers.
  • 0:58 - 1:02
    Unfortunately we use
    the same term for both
  • 1:02 - 1:06
    but based on the context
    it's usually clear
  • 1:07 - 1:09
    what is meant or what
    it is referring to.
  • 1:11 - 1:15
    The transport layer provides
    reliability if required.
  • 1:15 - 1:18
    Well it depends on the kind of
    transport protocol that is used
  • 1:19 - 1:23
    and we do have different transport
    layer protocols
  • 1:23 - 1:26
    so we can cater to different
    requirements of applications.
  • 1:27 - 1:33
    And since the transport layer
    is responsible for transporting data
  • 1:33 - 1:37
    from the source to the destination
  • 1:37 - 1:43
    we also often refer to it
    as an end to end concept.
  • 1:48 - 1:50
    Alright.
  • 1:50 - 1:56
    A very important concept on
    the transport layer is port numbers.
  • 1:57 - 2:04
    So port numbers don't exist physically.
    It's not a physical port.
  • 2:04 - 2:08
    They are a logical concept
    used by operating systems
  • 2:08 - 2:11
    for the identification of
    different applications.
  • 2:13 - 2:17
    Ports are just identifiers, but some
    are actually recognised
  • 2:17 - 2:20
    as specific applications, for example.
  • 2:20 - 2:24
    We talked about this in the first
    part of the lecture: TCP port 80
  • 2:25 - 2:31
    is recognised as HTTP,
    port 53 can be UDP or TCP
  • 2:31 - 2:34
    and is recognised as DNS, and so on.
  • 2:34 - 2:37
    Some applications can also have
    multiple port numbers
  • 2:37 - 2:42
    for example HTTP can also use
    other port numbers such as 8080.
  • 2:42 - 2:48
    And down here in the example,
    as you might have figured out,
  • 2:48 - 2:50
    here you see two more examples.
  • 2:50 - 2:56
    So for electronic mail we have
    port 110 for example
  • 2:56 - 3:01
    if we use the POP3 protocol,
    and an Internet chat
  • 3:01 - 3:06
    application might use port 531.
  • 3:07 - 3:16
    OK, and the port numbers they are actually
    encoded in the transport layer headers.
  • 3:26 - 3:31
    The port numbers are 16-bit
    integer values.
  • 3:32 - 3:37
    So the range is 0 to 65,535
  • 3:38 - 3:41
    and this range is actually separated
    into three different regions.
  • 3:43 - 3:46
    So three classes of ports.
  • 3:46 - 3:50
    We have what's called
    the well known ports.
  • 3:50 - 3:54
    Those are the ports from 0 to 1023.
  • 3:54 - 3:57
    They're used for common services
    and applications.
  • 3:57 - 4:06
    So HTTP, port 80, is one example;
    port 53 for DNS is another example;
  • 4:07 - 4:15
    or for SMTP we have port 25,
    the well-known port for that protocol.
  • 4:15 - 4:18
    Then above that range
    of well-known ports
  • 4:18 - 4:27
    we have the range of registered ports
    which range from 1024 to 49,151
  • 4:27 - 4:31
    So those are ports for less commonly
    used services and applications.
  • 4:32 - 4:38
    A couple of examples here:
    OpenVPN uses port 1194,
  • 4:38 - 4:46
    and SIP, which is used in the context of
    Voice over IP, uses port 5060.
  • 4:47 - 4:50
    And above that range
    we have what's called
  • 4:51 - 4:58
    dynamic or private ports, so all the ports
    from 49,152 until the end of the range.
  • 4:58 - 5:04
    They're dynamic ports and they're
    used for client-initiated sessions.
  • 5:04 - 5:11
    So these ports are dynamically
    assigned to client applications.
  • 5:12 - 5:16
    And if you want to know
    more, there's a full list available;
  • 5:16 - 5:22
    you can go to the Wikipedia page.
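
To make the three port classes concrete, here is a minimal Python sketch (an illustration added to this transcript, not from the slides) that maps a port number to the range it falls into:

```python
def classify_port(port: int) -> str:
    """Classify a TCP/UDP port into the three ranges described above."""
    if not 0 <= port <= 65535:          # ports are 16-bit unsigned integers
        raise ValueError("port must be in 0..65535")
    if port <= 1023:
        return "well-known"             # e.g. 80 (HTTP), 53 (DNS), 25 (SMTP)
    if port <= 49151:
        return "registered"             # e.g. 1194 (OpenVPN), 5060 (SIP)
    return "dynamic/private"            # picked by clients for their side

for p in (80, 53, 1194, 5060, 49152):
    print(p, classify_port(p))
```
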
  • 5:22 - 5:25
    Alright, so here's another example
    on the slide,
  • 5:25 - 5:32
    So in this example we have clients that
    use private ports to initiate sessions.
  • 5:33 - 5:37
    And we have some
    applications running on
  • 5:37 - 5:41
    well-known ports and...
  • 5:41 - 5:46
    Yeah. Keep in mind the source port does
    not need to match the destination port.
  • 5:46 - 5:52
    There are some protocols where that is
    the case, or may be the case,
  • 5:52 - 5:56
    mostly applications
    that are peer-to-peer applications,
  • 5:57 - 6:02
    but in general the source port
    does not need to match
  • 6:02 - 6:06
    destination port and is often different.
    In this example down here
  • 6:06 - 6:12
    we have one server that runs an HTTP
    server on port 80
  • 6:12 - 6:17
    and an SMTP server on port 25,
    and we have two clients,
  • 6:17 - 6:21
    client one on one side,
    client two on the other side,
  • 6:21 - 6:27
    and client one makes an HTTP
    request to the server
  • 6:27 - 6:33
    and it picks a random
    port out of this dynamic range
  • 6:33 - 6:38
    which is here in this example port
    49,152.
  • 6:39 - 6:43
    And then of course the HTTP request
    must be sent to the port
  • 6:43 - 6:47
    on which the server application
    is listening
  • 6:47 - 6:54
    and that's port 80 of course
    and the server responds.
  • 6:54 - 6:57
    So we assume that the HTTP request
    goes to the server
  • 6:57 - 7:00
    and the server responds.
  • 7:00 - 7:04
    The server response obviously
    then comes from source port 80
  • 7:04 - 7:07
    and it goes to the clients port.
  • 7:07 - 7:11
    So the destination port
    will be 49,152
  • 7:11 - 7:15
    And basically the same thing
    happens over here with client two,
  • 7:15 - 7:18
    which wants to send an email.
  • 7:18 - 7:22
    The client has selected a
    dynamic port here out of the range,
  • 7:22 - 7:28
    port 51,152, and for the request
    the destination port is
  • 7:28 - 7:31
    the well-known port of
    SMTP, so port 25,
  • 7:31 - 7:34
    and then when the response
    comes back from the server
  • 7:34 - 7:40
    the response comes from port
    25 and it goes to
  • 7:40 - 7:44
    that dynamic port that the client
    put in the source port field
  • 7:44 - 7:52
    of the request, which is port 51,152.
    So hopefully this makes
  • 7:52 - 7:57
    the concept of dynamic ports
    and well-known ports clearer.
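
A minimal sketch of that client-side behaviour (assuming outbound network access; example.com is used purely as an illustration): the OS picks the dynamic source port for us when we connect to the server's well-known port.

```python
import socket

# Connect to a web server on its well-known port 80; the OS assigns
# our source port from the dynamic/private range automatically.
with socket.create_connection(("example.com", 80)) as s:
    src_ip, src_port = s.getsockname()   # our side: OS-assigned dynamic port
    dst_ip, dst_port = s.getpeername()   # server side: well-known port 80
    print(f"source port {src_port} -> destination port {dst_port}")
```
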
  • 7:58 - 8:04
    So let's move on to the transport
    layer protocols that we will discuss.
  • 8:04 - 8:06
    In this part of
    the lecture we will talk about
  • 8:06 - 8:11
    the two most common transport
    layer protocols
  • 8:11 - 8:16
    the Transmission Control Protocol, TCP,
    and the User Datagram Protocol, UDP.
  • 8:16 - 8:20
    TCP is used when
    the delivery of data must be reliable
  • 8:20 - 8:24
    for example file downloads,
    for video streaming,
  • 8:24 - 8:28
    for loading web pages.
    Whereas UDP is used when
  • 8:28 - 8:33
    the delivery of data must be timely
    and doesn't need to be reliable.
  • 8:33 - 8:39
    So things like voice over IP,
    video communications
  • 8:39 - 8:42
    especially real time sort of video
    communications
  • 8:42 - 8:46
    they make use of UDP
    as well as online games
  • 8:46 - 8:50
    where delay is to be avoided.
  • 8:50 - 8:56
    So in fact first person shooter games
    they are usually based on UDP.
  • 8:56 - 8:59
    There are other transport protocols.
  • 8:59 - 9:06
    So it is actually not just TCP and UDP
    but these other protocols
  • 9:06 - 9:08
    are not that widely used.
  • 9:08 - 9:13
    So two examples are Stream
    Control Transmission Protocol, SCTP.
  • 9:13 - 9:20
    Actually, SCTP is fairly
    widely used by some of the
  • 9:21 - 9:25
    telecom companies, because
    they use it as the transport protocol
  • 9:25 - 9:28
    for signalling. It's just used
    in the signalling network,
  • 9:28 - 9:33
    but in the wider Internet it
    is not very widely used.
  • 9:34 - 9:36
    We also have the Datagram
    Congestion Control Protocol
  • 9:36 - 9:42
    DCCP, another transport protocol
    that's not very widely used.
  • 9:43 - 9:49
    So in the remainder of this part
    we'll talk about the TCP protocol first
  • 9:49 - 9:54
    because it is way more complicated
    and there's a lot more to say,
  • 9:55 - 10:00
    and then we'll talk about UDP which is
    actually a fairly simple protocol
  • 10:00 - 10:06
    and then we'll end up with a little bit
    of a comparison between the two
  • 10:07 - 10:13
    and for which type of applications
    we should use TCP
  • 10:13 - 10:18
    and for which we should use UDP,
    We'll discuss that at the end.
  • 10:19 - 10:23
    Alright let's talk about TCP.
  • 10:26 - 10:29
    TCP is a connection oriented protocol.
  • 10:29 - 10:33
    It means that the communications
    between two devices
  • 10:33 - 10:38
    must be explicitly initiated
    and terminated.
  • 10:39 - 10:42
    Not all transport layer protocols
    are connection oriented
  • 10:42 - 10:45
    so UDP is not connection oriented.
  • 10:46 - 10:52
    So the first thing
    to establish a TCP connection
  • 10:54 - 10:59
    is a handshake process that's known
    as a three way handshake
  • 10:59 - 11:05
    and consists of three steps
    or three segments that are sent around
  • 11:05 - 11:08
    and those are illustrated
    in this figure here.
  • 11:09 - 11:12
    They're clearly identified by three
    numbers: one, two, three.
  • 11:12 - 11:17
    So in the first step the initiator
    of the connection, which is
  • 11:17 - 11:21
    often the client, will send
    a SYN to the other side,
  • 11:22 - 11:26
    the server. If the server
    receives the SYN,
  • 11:26 - 11:31
    the server will respond with a SYN-ACK,
    ACK for acknowledgement,
  • 11:32 - 11:40
    and then the handshake is completed
    with another ACK sent by A here
  • 11:40 - 11:43
    over to B.
  • 11:43 - 11:48
    After those three packets
    have been exchanged
  • 11:48 - 11:52
    the TCP connection will sort of
    move into an established state
  • 11:52 - 12:00
    and then data can be exchanged
    between the two sides okay.
  • 12:00 - 12:03
    So this is the way TCP
    connections are set up.
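
In practice an application never sends the SYN, SYN-ACK and ACK itself; the operating system performs the handshake when the application calls connect(). A minimal local sketch (loopback only; port 8080 chosen arbitrarily):

```python
import socket

# Server side: listen() lets the kernel answer incoming SYNs with SYN-ACKs.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 8080))
server.listen(1)

# Client side: connect() triggers the kernel's SYN / SYN-ACK / ACK exchange
# and only returns once the connection is in the established state.
client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(("127.0.0.1", 8080))

conn, addr = server.accept()   # the handshake has already completed
print("established with", addr)
for s in (client, conn, server):
    s.close()                  # close() starts the FIN-based teardown
```
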
  • 12:05 - 12:10
    On the next slide we'll
    talk about how connections
  • 12:10 - 12:15
    are terminated so after a
    conversation is complete
  • 12:16 - 12:22
    the connection is terminated
    using either three or four steps.
  • 12:22 - 12:26
    These steps are illustrated in
    this little picture down here.
  • 12:26 - 12:31
    And so what happens is when a
    connection is terminated
  • 12:31 - 12:35
    one side that wants to terminate
    will send what's called a FIN.
  • 12:35 - 12:39
    The other side will receive
    the FIN and acknowledge it with an ACK.
  • 12:40 - 12:47
    And some time later it will
    also send a FIN, and finally
  • 12:48 - 12:52
    our station A here will send
    an ACK for that FIN
  • 12:52 - 12:55
    it has received from B,
    and at this point in time
  • 12:55 - 13:01
    the connection is
    completely terminated.
  • 13:02 - 13:08
    So this is either a three-way
    or a four-way process, because
  • 13:09 - 13:14
    in many cases the ACK here
    sent by B and the FIN
  • 13:14 - 13:18
    can be combined
    into a single segment.
  • 13:18 - 13:20
    So we actually only need three
    segments: A will send a FIN,
  • 13:20 - 13:27
    B will send a combined ACK-FIN,
    and then A will send back an ACK.
  • 13:27 - 13:34
    But in some cases we actually may have
    the full sort of four step process.
  • 13:37 - 13:43
    And on the next sort of
    slide I will discuss why
  • 13:43 - 13:47
    we actually need a sort of slightly
    more complicated process,
  • 13:47 - 13:54
    the four-step process, rather than
    just having three messages.
  • 13:54 - 14:03
    So here are two questions for you regarding
    the setup and teardown of TCP connections.
  • 14:03 - 14:06
    Why do we actually need
    a three way handshake?
  • 14:06 - 14:09
    Aren't two handshakes sufficient?
  • 14:09 - 14:13
    So two handshakes as in
    A sends a SYN to B
  • 14:13 - 14:17
    and B responds back.
  • 14:17 - 14:25
    So why isn't that two way
    handshake sufficient?
  • 14:26 - 14:34
    The reason is we need at least three
    packets so that both sides
  • 14:34 - 14:39
    can be sure that the connection
    is established.
  • 14:40 - 14:43
    Think about if we only had
    a two way handshake.
  • 14:43 - 14:47
    So we only had this first packet
    here and a second packet here.
  • 14:47 - 14:51
    Well, there's no guarantee that
    that ACK here from B to A
  • 14:52 - 14:57
    actually arrives at A; the ACK could
    be lost inside the network.
  • 14:57 - 15:04
    And so then we had the problem that
    A would treat the connection as...
  • 15:04 - 15:07
    sorry, B would treat
    the connection as established.
  • 15:07 - 15:13
    Assuming that the ACK would arrive at A,
    but A actually never receives the ACK,
  • 15:14 - 15:17
    and A would treat the connection
    as not established
  • 15:17 - 15:19
    because it hasn't received the ACK.
  • 15:20 - 15:23
    So only with that third packet here
    going from A to B
  • 15:24 - 15:26
    both A and B can be sure
  • 15:27 - 15:30
    that the connection is in
    an established state.
  • 15:30 - 15:33
    Second question down here is why
    do we have four messages.
  • 15:33 - 15:35
    Why do we have four messages
    down here
  • 15:36 - 15:39
    rather than having only three messages.
  • 15:39 - 15:42
    And again, the four messages
    are not always needed.
  • 15:42 - 15:46
    We may have three messages
    at times.
  • 15:47 - 15:53
    But why have four messages
    in the extreme case, why is that?
  • 15:53 - 16:01
    Well simple answer is that TCP
    actually supports half closed connections.
  • 16:01 - 16:06
    So it supports the case
    where one side is closing
  • 16:06 - 16:10
    its side of the connection
    but the other side still sending data
  • 16:11 - 16:16
    and this only works with a
    four way sort of message tear down.
  • 16:16 - 16:21
    So in this example, imagine for example
    that A wants to close
  • 16:21 - 16:24
    because A does not have any data
    to send any more
  • 16:24 - 16:27
    so A sends a FIN which is acknowledged
    by B with an ACK,
  • 16:27 - 16:30
    but B actually still has data that
    needs to be sent to A
  • 16:30 - 16:34
    so rather than sending this FIN
    immediately here
  • 16:34 - 16:39
    B will continue sending data
    and then only when B has sent all the data
  • 16:39 - 16:42
    it will close its end
    of the connection sending the FIN,
  • 16:43 - 16:48
    and then that is acknowledged by the ACK
    from A, and then at that stage here,
  • 16:48 - 16:50
    the connection is fully closed.
  • 16:50 - 16:54
    So the four-message teardown
    allows us to do
  • 16:54 - 16:57
    a half close on the connection basically
  • 16:57 - 17:03
    or close one side
    and keep the other side open.
  • 17:05 - 17:11
    There is a little activity
    on Cisco NetAcad here
  • 17:11 - 17:15
    about the TCP connection establishment
    and termination process.
  • 17:16 - 17:19
    I will quickly show you but I want you
    to do the whole activity
  • 17:19 - 17:23
    I'll leave that for you as homework.
  • 17:23 - 17:25
    Bear with me for a second.
  • 17:27 - 17:30
    So here is the activity.
  • 17:30 - 17:34
    The first activity is basically
    the three way handshake
  • 17:34 - 17:40
    and you're meant to sort
    of drag those boxes over here
  • 17:40 - 17:42
    into those fields here.
  • 17:43 - 17:47
    Until that sort of process
    is correctly labeled.
  • 17:48 - 17:49
    And so it's pretty trivial.
  • 17:49 - 17:52
    So I mean A sends a SYN to B, right.
  • 17:52 - 17:57
    So then what it means is that we
    have a SYN received here
  • 17:57 - 18:01
    and we can check for correctness.
  • 18:02 - 18:04
    So that's correct.
  • 18:04 - 18:08
    And then you can basically drag
    and drop those other things over here
  • 18:08 - 18:10
    up to here.
  • 18:10 - 18:15
    You'd better learn how that handshake works,
    you'd better memorise it,
  • 18:15 - 18:19
    and I think the other one,
    so the second part of the activity
  • 18:19 - 18:21
    is about session termination.
  • 18:21 - 18:26
    So again there's two boxes here
    called FIN and ACK,
  • 18:27 - 18:31
    and you'd have to sort of
    just drag those into those fields
  • 18:31 - 18:34
    to describe the termination process.
  • 18:37 - 18:42
    OK, let's go back to the lecture slides.
  • 18:42 - 18:48
    So now I want to talk about
    the various sort of properties
  • 18:48 - 18:54
    that TCP gives us or gives
    the application and TCP
  • 18:54 - 18:58
    has quite a bit of functionality
    as you will see sort of.
  • 18:58 - 19:04
    The first thing is that TCP provides
    in-order delivery of segments
  • 19:04 - 19:10
    to the application and it does
    that based on sequence numbers.
  • 19:10 - 19:14
    So what happens is that the sender here
  • 19:16 - 19:18
    divides the data up into segments
  • 19:18 - 19:22
    and the segments are numbered
    with sequence numbers
  • 19:22 - 19:28
    so for example from one to six.
    And as I sort of said in the first lecture,
  • 19:28 - 19:30
    in an IP network,
  • 19:30 - 19:34
    segments and packets can take
    different paths through the network.
  • 19:34 - 19:38
    So we'll have two possible
    routes here
  • 19:38 - 19:41
    from the source to the destination
    some segments here,
  • 19:41 - 19:46
    some segments or packets take this route,
    but others might take this route.
  • 19:46 - 19:48
    So if they take different
    routes then
  • 19:48 - 19:52
    they may actually arrive out of
    order at the destination.
  • 19:52 - 19:59
    So in this case we receive segments 1, 2, 6,
    5, 4 and then 3,
  • 19:59 - 20:03
    so the order is obviously jumbled up.
  • 20:03 - 20:07
    If we, or the receiver here,
    if the stack at the receiver would pass up
  • 20:07 - 20:12
    the segments in this jumbled up order
    then obviously you can imagine that
  • 20:12 - 20:17
    the application would get
    a lot of garbage basically
  • 20:17 - 20:19
    and couldn't interpret that data.
  • 20:19 - 20:22
    So whatever you're doing
    like if you send an email
  • 20:22 - 20:24
    this would be completely garbled up.
  • 20:24 - 20:29
    So what TCP does is it
    reorders the segments
  • 20:29 - 20:33
    back into the original order
    based on the sequence numbers,
  • 20:33 - 20:36
    and then it passes
    the segments to the application.
  • 20:36 - 20:40
    So all the segments that are
    passed to the applications
  • 20:40 - 20:46
    they are passed in the order
    they were sent by the sender.
  • 20:46 - 20:51
    So there's no reordering
    on top of TCP
  • 20:51 - 20:55
    or in other words applications
    that use TCP
  • 20:55 - 21:01
    can be assured that segments or packets
    are not delivered...
  • 21:01 - 21:08
    are never delivered out of the original
    order to the receiving application.
  • 21:08 - 21:13
    The other thing TCP provides us with
    is reliable transport
  • 21:13 - 21:17
    so the sequence numbers are used
    in conjunction with
  • 21:17 - 21:22
    acknowledgements, or ACKs,
    or acknowledgement numbers,
  • 21:22 - 21:29
    to provide reliable data
    transport so all the data transmitted
  • 21:30 - 21:37
    using TCP must be acknowledged
    and an acknowledgement is cumulative.
  • 21:37 - 21:41
    In TCP that means an acknowledgement
    also acknowledges
  • 21:41 - 21:46
    all the preceding segments that
    were received
  • 21:46 - 21:51
    since the last acknowledgement
    that was sent.
  • 21:51 - 21:57
    And receivers always acknowledge with
    the next expected byte.
  • 21:57 - 21:58
    Keep that in mind.
  • 21:59 - 22:02
    So if we look at the example
    over here we have a sender
  • 22:02 - 22:06
    and we have a receiver
    and this example also
  • 22:06 - 22:09
    introduces us to the concept
    of window size
  • 22:09 - 22:13
    we'll come back to that on some
    of the following slides.
  • 22:13 - 22:18
    So the window is basically the amount of
    data that TCP can have in flight
  • 22:18 - 22:21
    and unacknowledged, and in this
    case it's 3000 bytes.
  • 22:21 - 22:24
    And so this is why the sender
    can send two
  • 22:25 - 22:28
    1500 byte segments here
    over to the receiver.
  • 22:28 - 22:30
    and the receiver receives those two,
  • 22:31 - 22:34
    and then the receiver will
    send an acknowledgement
  • 22:34 - 22:37
    for both of these, and you can
    see that the acknowledgement number here
  • 22:37 - 22:41
    is 3001, which is the next expected byte.
  • 22:41 - 22:44
    So the receiver has received all the bytes
    from 1 to 3000.
  • 22:44 - 22:48
    The next expected
    byte is 3001.
  • 22:48 - 22:51
    Well, when the sender gets the
    acknowledgement, the sender
  • 22:51 - 22:54
    sort of sends more segments
    sends another two segments
  • 22:54 - 22:59
    down here with the bytes
    3001 to 6000
  • 22:59 - 23:03
    and again the receiver will acknowledge.
  • 23:03 - 23:07
    both of these segments
    here with one acknowledgement
  • 23:07 - 23:15
    and it will have the number
    6001 because that is the next byte
  • 23:15 - 23:23
    the receiver expects
    the sender to send, okay.
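
The acknowledgement numbers in this example follow from simple arithmetic; a sketch using the slide's numbering, where the stream starts at byte 1 and each segment carries 1500 bytes:

```python
SEGMENT = 1500      # payload bytes per segment in the example

next_expected = 1   # the slide's byte numbering starts at 1
for burst in range(2):              # two bursts of two segments each
    for _ in range(2):              # each in-order segment advances the ACK
        next_expected += SEGMENT
    print("cumulative ACK =", next_expected)
# prints 3001 and then 6001, matching the acknowledgements on the slide
```
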
  • 23:24 - 23:30
    Let's sort of go a bit more
    into the details here.
  • 23:30 - 23:36
    So when segments are not
    acknowledged within the time limit
  • 23:36 - 23:39
    so in the best case they
    are acknowledged
  • 23:39 - 23:42
    like in a previous slide this is
    when everything goes perfectly fine
  • 23:42 - 23:46
    but if things are not going that well
    and segments are not acknowledged
  • 23:46 - 23:51
    within some time limit they need
    to be re-transmitted by the sender.
  • 23:51 - 23:55
    Segments can be lost due to
    network congestion
  • 23:55 - 23:57
    or interruptions to the medium.
  • 23:58 - 24:01
    If somebody, I don't know,
    pulls out a cable or something like that,
  • 24:01 - 24:05
    or there's a fault in the hardware,
    of course.
  • 24:05 - 24:09
    Data received after a loss
    is not acknowledged.
  • 24:09 - 24:14
    This is illustrated in
    the figure over here.
  • 24:14 - 24:19
    So here we have the case that
    everything was fine at the start
  • 24:19 - 24:24
    but then when the sender sends
    two more segments here
  • 24:24 - 24:28
    the first of those segments is actually
    lost because it's dropped
  • 24:28 - 24:32
    somewhere in the network and it
    never arrives at the receiver.
  • 24:32 - 24:38
    So then what the receiver will do
    is, despite having received
  • 24:38 - 24:45
    that later segment here covering
    the bytes between 4501 and 6000,
  • 24:46 - 24:52
    since TCP does not acknowledge bytes
    after a loss,
  • 24:52 - 24:59
    it will send back another acknowledgement
    with the number 3001
  • 24:59 - 25:04
    because that is the point up to
    which we've received,
  • 25:04 - 25:10
    you know, a continuous stream of segments
    and then we've lost a segment
  • 25:10 - 25:13
    and we received another segment
    after that
  • 25:13 - 25:18
    but we do not acknowledge any
    segments received after loss.
  • 25:18 - 25:24
    We'll sort of acknowledge whatever we
    have received before the loss.
  • 25:24 - 25:31
    Actually there is a mechanism to
    acknowledge segments
  • 25:31 - 25:33
    received after a loss.
  • 25:33 - 25:38
    It's called selective acknowledgements
    but it's out of the scope of the unit.
  • 25:38 - 25:43
    So this is a bit more complicated than
  • 25:43 - 25:46
    the simple sort of
    acknowledgement we discuss here
  • 25:47 - 25:50
    and it's implemented to
    be much more efficient
  • 25:52 - 25:56
    Because consider what
    happens in this case here,
  • 25:56 - 26:02
    once the receiver acknowledges,
    or sends, acknowledgement number 3001.
  • 26:02 - 26:08
    Of course what happens is the sender
    will resend this segment here
  • 26:09 - 26:17
    starting with byte 3001 as well as
    the next segment starting with byte 4501.
  • 26:17 - 26:20
    So despite the fact that
    this second segment here
  • 26:20 - 26:24
    was already received, with cumulative
    acknowledgements
  • 26:24 - 26:26
    we'll have to resend
  • 26:26 - 26:30
    or the sender has to resend
    this again as well.
  • 26:30 - 26:33
    And so with selective ACKs
    we can do this way more efficiently
  • 26:33 - 26:35
    but it's also
    way more complicated.
  • 26:35 - 26:41
    So it's out of scope of
    the discussion here.
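
A toy model of that receiver behaviour (a sketch for illustration, not the lecture's code): with cumulative acknowledgements only, the ACK stays at the first missing byte no matter what arrives after the gap.

```python
def cumulative_ack(segments, next_expected=1):
    """Receiver with cumulative ACKs only: segments are (start_byte, length)."""
    buffered = dict(segments)            # start byte -> segment length
    while next_expected in buffered:     # advance over in-order data only
        next_expected += buffered[next_expected]
    return next_expected                 # ACK = next expected byte

# Bytes 1..3000 arrived, 3001..4500 was lost, 4501..6000 arrived anyway.
print(cumulative_ack([(1, 1500), (1501, 1500), (4501, 1500)]))   # 3001
# So the sender must retransmit from byte 3001, including the 4501..6000
# segment that was actually received; SACK would avoid that resend.
```
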
  • 26:41 - 26:44
    Next I want to talk about
    another feature of TCP
  • 26:44 - 26:48
    which is called congestion control.
    TCP uses congestion control
  • 26:48 - 26:51
    to manage the rate
    of transmission.
  • 26:51 - 26:56
    So you can think about it as
    an accelerator and brake
  • 26:56 - 27:02
    on the rate of the transmission
    and the TCP congestion window
  • 27:02 - 27:04
    specifies the maximum
    number of unacknowledged segments
  • 27:04 - 27:07
    that can be in-flight
    from sender to receiver.
  • 27:08 - 27:12
    Why do we actually
    have congestion control?
  • 27:12 - 27:15
    Well let's do a little sort
    of thought experiment here.
  • 27:16 - 27:21
    What if TCP senders could only send
    one packet at a time without an ACK?
  • 27:21 - 27:24
    If the round trip delay
    between the sender and receiver
  • 27:24 - 27:29
    was something like 200
    milliseconds, then you know it would mean
  • 27:29 - 27:32
    that TCP could only
    send five packets per second.
  • 27:32 - 27:39
    That would be very, very slow.
    Certainly we wouldn't congest
  • 27:39 - 27:44
    the network but the TCP performance
    would be horribly slow.
  • 27:44 - 27:49
    So what if, on the other hand, TCP
    senders could send
  • 27:49 - 27:54
    as fast as a LAN connection permits, for
    example at one gigabit per second?
  • 27:54 - 27:58
    Well the gateway to
    the Internet is usually a bottleneck,
  • 27:58 - 28:00
    and the gateway to the Internet,
  • 28:00 - 28:06
    If you think about home networks
    for example it's unlikely to be able
  • 28:06 - 28:10
    to send with one gigabits
    per second into the Internet.
  • 28:10 - 28:15
    So then we get what is called
    congestion on the gateway.
  • 28:15 - 28:21
    So packets are building up in
    the queues and eventually we have
  • 28:22 - 28:30
    full queues and any further
    packets arriving will be dropped.
  • 28:30 - 28:36
    And you also need to consider
    that we share the resources
  • 28:37 - 28:40
    we share the network with
    many many other users.
  • 28:40 - 28:46
    So there's just a little picture here
    to illustrate the links
  • 28:46 - 28:50
    that carry traffic between
    different continents,
  • 28:50 - 28:53
    and you can see there's quite
    a number of links between
  • 28:53 - 28:56
    the United States and Asia,
    and the United States and Europe.
  • 28:57 - 29:01
    But there are not that many links connecting
    Australia to the rest of the world.
  • 29:02 - 29:07
    So these are usually fiber
    links, and you can imagine that
  • 29:07 - 29:11
    those links, you know, carry
    the traffic of
  • 29:11 - 29:15
    billions, or tens
    or hundreds of millions,
  • 29:15 - 29:17
    of concurrent TCP connections.
  • 29:17 - 29:22
    So all of these TCP connections
    share the links
  • 29:23 - 29:29
    and so the question is how does
    a TCP sender
  • 29:29 - 29:35
    find that perfect rate,
    so its fair share of a 100%
  • 29:35 - 29:41
    utilized bottleneck link speed.
  • 29:44 - 29:49
    So finding, you know, that
    perfect rate where we sort of
  • 29:49 - 29:55
    fairly share that link with lots
    and lots of other TCP connections.
  • 29:55 - 29:59
    At the same time we'll utilize all
    the bandwidth or the capacity
  • 29:59 - 30:07
    that the link has, while at the same time
    we'll try to avoid congestion in routers.
  • 30:07 - 30:13
    Well that is the job of the congestion
    control algorithm inside TCP.
  • 30:13 - 30:17
    And there are many congestion
    control algorithms.
  • 30:17 - 30:23
    And on this slide I also want to briefly
    illustrate the NewReno algorithm.
  • 30:23 - 30:26
    It's one of the traditional
    algorithms that was
  • 30:26 - 30:32
    the default algorithm for a long
    time in most operating systems.
  • 30:32 - 30:36
    So that algorithm has two phases:
    it has a slow start phase
  • 30:36 - 30:42
    and there's a congestion avoidance phase.
    And slow start is actually not that slow,
  • 30:42 - 30:45
    despite the fact that
    it's named slow start.
  • 30:45 - 30:49
    So what slow start is,
    or what TCP does in
  • 30:49 - 30:52
    slow start, is it starts with an initial
  • 30:52 - 30:58
    congestion window of two segments, or
    these days we'll actually use 10 segments,
  • 30:59 - 31:01
    as the initial congestion window.
  • 31:01 - 31:04
    And then the sender will increase
    the congestion window by one segment
  • 31:04 - 31:07
    for every packet acknowledged
    by the receiver.
  • 31:07 - 31:12
    So this will lead to a relatively
    quick increase in throughput
  • 31:12 - 31:16
    up to the maximum possible fair share,
  • 31:16 - 31:20
    until the sender
    detects packet loss.
  • 31:20 - 31:27
    It halves the window and then
    it goes into congestion avoidance
  • 31:27 - 31:33
    In the congestion avoidance
    phase, without loss,
  • 31:33 - 31:38
    the window is increased by one
    segment for each round trip time.
  • 31:38 - 31:44
    Round trip time means the time
    it takes for a packet to go from A to B
  • 31:44 - 31:47
    and a response to come
    back from B to A.
  • 31:47 - 31:50
    So that's, that's a round trip,
    that's a round trip time.
  • 31:50 - 31:54
    So for each round trip the sender will
    increase the window by one segment.
  • 31:54 - 32:02
    When the sender transmits too fast
    and congests the link again,
  • 32:02 - 32:06
    then it will mean that, you know,
    congestion at the router occurs,
  • 32:07 - 32:10
    queues fill up,
    and there will be packet drops.
  • 32:10 - 32:15
    When the sender detects those
    packet drops, it halves its window.
  • 32:16 - 32:21
    So this quick shrinking of the window
    will quickly reduce
  • 32:21 - 32:25
    the throughput of the connection
    but it will also quickly reduce
  • 32:25 - 32:27
    the congestion on the bottleneck.
  • 32:27 - 32:28
    That's the idea.
  • 32:29 - 32:33
    And then after that we'll
    have that sort of slow increase
  • 32:33 - 32:36
    of one segment
    each round trip time again.
  • 32:37 - 32:40
    Where the sort of sender starts basically
    sending more and more
  • 32:40 - 32:43
    until we sort of hit the limit again.
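
A toy simulation of those window dynamics (heavily simplified: a fixed, hypothetical loss point, and none of real NewReno's fast retransmit or slow start threshold) that reproduces the sawtooth shown on the next slide:

```python
# Slow start: roughly doubles the window per RTT; congestion avoidance:
# +1 segment per RTT; on loss: halve the window (multiplicative decrease).
capacity = 64                 # assumed bottleneck limit in segments
cwnd, slow_start = 2, True

for rtt in range(40):
    if cwnd >= capacity:      # queue overflowed: the sender detects loss
        cwnd //= 2            # halve the congestion window
        slow_start = False    # after the first loss: congestion avoidance
    elif slow_start:
        cwnd *= 2             # +1 segment per ACK ~ doubling per RTT
    else:
        cwnd += 1             # additive increase, once per RTT
    print(f"RTT {rtt:2d}: cwnd = {cwnd:2d} segments")
```
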
  • 32:43 - 32:50
    And to sort of better illustrate that,
    it's easiest to see this in a graph.
  • 32:50 - 32:53
    This is an actual graph
    of the congestion window
  • 32:53 - 32:57
    of a single TCP connection
    going through a bottleneck
  • 32:57 - 32:59
    and you can
    see at the start here,
  • 32:59 - 33:04
    slow start where we have a rapid increase
    of the congestion window
  • 33:04 - 33:08
    and then we have the first packet loss
    and the congestion window sort of drops,
  • 33:08 - 33:13
    it's halved,
    and then we go into congestion avoidance
  • 33:13 - 33:15
    and you can see a sawtooth pattern
  • 33:15 - 33:18
    and congestion
    avoidance will basically...
  • 33:18 - 33:23
    Will slowly increase
    the congestion window over time,
  • 33:23 - 33:27
    but one segment at a time,
    and then at some stage
  • 33:27 - 33:29
    we'll hit congestion again
    if there's packet loss,
  • 33:29 - 33:31
    we'll quickly,
    or the sender will quickly
  • 33:32 - 33:35
    reduce the window to half of its size
  • 33:36 - 33:39
    and then we'll start
    the upward probing again
  • 33:39 - 33:43
    until we hit loss again
    half a window and so on.
  • 33:43 - 33:46
    So with a single flow through
    a bottleneck we get to see
  • 33:46 - 33:49
    a perfect sawtooth pattern; of course,
    with multiple flows
  • 33:49 - 33:59
    this will look much messier.
    So to come back to the point of congestion,
  • 33:59 - 34:02
    of congestion again, to make it
    very clear what it means:
  • 34:03 - 34:05
    so congestion occurs when
    the number of packets arriving
  • 34:05 - 34:10
    at a router is higher than the number of
    packets that can be sent on the next link.
  • 34:11 - 34:14
    So if we have lots of different
    devices here
  • 34:14 - 34:18
    that all send packets to the router,
    which are then sent to the Internet.
  • 34:18 - 34:21
    And this is the one link to the Internet,
    let's say,
  • 34:21 - 34:29
    and the link speed here of this link
    is less than the combined link speeds
  • 34:30 - 34:33
    of all these different devices, then...
  • 34:33 - 34:37
    Well we have to buffer
    packets on the router
  • 34:37 - 34:42
    if the packet rates are too high
    or higher than this link, right?
  • 34:43 - 34:48
    And if those rates are persistently
    higher than this link can handle,
  • 34:48 - 34:52
    then of course eventually
    all of our buffers will get filled.
  • 34:52 - 34:56
    And then the router has no choice
    but to drop packets.
  • 34:58 - 35:04
    And then of course with TCP congestion
    control we have the fact that
  • 35:04 - 35:09
    the TCP senders
    here will take that loss
  • 35:09 - 35:11
    as an indication of congestion.
  • 35:11 - 35:15
    The window will be halved
    and all of those devices here
  • 35:15 - 35:17
    will send a lot less packets.
  • 35:17 - 35:22
    Which then means the queues,
    they can be drained,
  • 35:22 - 35:26
    and we won't have any loss
    sort of in there (INAUDIBLE)
  • 35:26 - 35:31
    but again the window size
    will increase again.
  • 35:31 - 35:37
    All those devices will then send at
    faster and faster rates
  • 35:37 - 35:41
    based on the upward probing
    of the congestion control algorithm
  • 35:41 - 35:45
    until the queues become full again
    and we'll have the next packet loss,
  • 35:45 - 35:49
    and then we'll send less again
    and so on.
  • 35:51 - 35:55
    Now you might say, okay,
    well, if the problem is packet loss
  • 35:55 - 36:00
    when buffers are full, then why not make
    the router buffers really, really big.
  • 36:00 - 36:04
    So we can basically avoid any
    packet losses,
  • 36:04 - 36:08
    so we can have a case where the
    combined sender rates
  • 36:08 - 36:09
    are higher than the link capacity
  • 36:09 - 36:12
    for a very,
    very long time.
  • 36:13 - 36:17
    If we have big buffers
    we can sort of
  • 36:17 - 36:19
    avoid anything
    like packet drops.
  • 36:19 - 36:24
    And this is what quite a number of people
    actually used to think for years
  • 36:24 - 36:29
    and it led to fairly large buffers,
    which caused another problem though,
  • 36:29 - 36:31
    which is referred to as bufferbloat.
  • 36:32 - 36:38
    If buffer sizes are very large
    then that means also the latency
  • 36:38 - 36:42
    will be quite high because
    packets are stuck in those buffers
  • 36:42 - 36:44
    for quite some time.
  • 36:44 - 36:48
    And so remember, TCP will always
    eventually fill the buffers
  • 36:48 - 36:53
    to full capacity and those large
    buffers will take a long time to clear.
  • 36:54 - 37:00
    Any applications that require
    reliability but also really benefit
  • 37:00 - 37:04
    from low latency, let's
    say stock trading for example,
  • 37:04 - 37:07
    they have an issue with the high latency
  • 37:07 - 37:10
    that's caused by this bufferbloat,
  • 37:10 - 37:18
    so buffers have to be reasonably small to
    maintain a reasonably low network latency.
  • 37:19 - 37:23
    So we can't just make buffers really
    really big that will cause problems
  • 37:23 - 37:33
    for applications that you know rely
    on or benefit from lower latency.
  • 37:34 - 37:37
    So we talked about NewReno,
    which has been
  • 37:37 - 37:42
    the standard congestion control
    mechanism for some time.
  • 37:42 - 37:44
    It's still used by Windows.
  • 37:44 - 37:47
    It was still used by Mac OS
    until fairly recently.
  • 37:48 - 37:51
    But there are actually dozens,
    perhaps hundreds,
  • 37:51 - 37:58
    including sort of research,
    research kind of algorithms,
  • 37:58 - 38:01
    that exist.
    Linux and Mac OS
  • 38:01 - 38:03
    now use a different algorithm
    called CUBIC.
  • 38:03 - 38:06
    And there's also an algorithm called BBR,
    it's been designed by Google
  • 38:06 - 38:10
    in recent years,
    it has created a bit of a hype
  • 38:11 - 38:13
    Not all aim for maximum performance.
  • 38:13 - 38:18
    So some might have slightly different
    aims and there's also algorithms
  • 38:18 - 38:23
    that use estimates of network delay
    as an indicator of congestion as well
  • 38:23 - 38:25
    not just loss as an indicator of congestion,
  • 38:25 - 38:29
    but also estimates of network delay,
    like for example BBR.
  • 38:29 - 38:34
    And despite the fact that
    TCP is a fairly old protocol
  • 38:35 - 38:38
    TCP congestion control is
    still a highly active
  • 38:38 - 38:41
    research area in data
    communications.
  • 38:42 - 38:48
    There's also something called
    active queue management.
  • 38:48 - 38:57
    So the idea is to improve things by
    actually routers actively telling
  • 38:57 - 39:05
    TCP senders that there is congestion.
    Sort of, if routers could tell senders
  • 39:05 - 39:10
    at some configured buffer threshold,
    that is, when there's congestion, then
  • 39:10 - 39:17
    senders could back off earlier and we
    could avoid the packet loss.
  • 39:17 - 39:21
    So there are some algorithms
    for active queue management,
  • 39:21 - 39:24
    and there is something called TCP
    explicit congestion notification.
  • 39:25 - 39:29
    And the idea is that the routers
    mark packets when the queue length,
  • 39:29 - 39:32
    or an estimate of it, lies
    above a configurable threshold.
  • 39:32 - 39:35
    And then the TCP receiver
    will echo those marks
  • 39:35 - 39:39
    back to the sender
    and the sender can reduce
  • 39:39 - 39:42
    the congestion window before
    we actually get to that stage
  • 39:42 - 39:44
    where we have packet loss.
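
As a rough sketch of that idea (the threshold and field names here are invented for illustration; real ECN uses bits in the IP and TCP headers): the router marks packets instead of dropping them, and the sender treats an echoed mark like a loss signal.

```python
MARK_THRESHOLD = 50   # hypothetical queue length at which marking starts

def router_forward(packet: dict, queue_len: int) -> dict:
    """Mark instead of drop once the queue grows past the threshold."""
    if queue_len > MARK_THRESHOLD:
        packet["congestion_mark"] = True
    return packet

def sender_on_ack(ack: dict, cwnd: int) -> int:
    """The receiver echoes the mark; the sender backs off before any loss."""
    return max(1, cwnd // 2) if ack.get("congestion_mark") else cwnd + 1

ack = router_forward({"congestion_mark": False}, queue_len=80)
print(sender_on_ack(ack, cwnd=10))   # 5: window halved without a packet drop
```
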
  • 39:44 - 39:48
    So using this mechanism will
    actually improve performance.
  • 39:49 - 39:56
    But it requires that routers
    support this mechanism.
  • 39:56 - 40:02
    And of course senders and receivers
    must also support the mechanism
  • 40:04 - 40:09
    Active queue management can actually
    improve performance quite a bit
  • 40:09 - 40:13
    but many people don't actually notice.
  • 40:13 - 40:16
    So to illustrate that point,
    I created this little slide here.
  • 40:16 - 40:23
    So we have the normal
    sort of FIFO queues,
  • 40:23 - 40:26
    and the two graphs
    up here are for FIFO queues,
  • 40:26 - 40:28
    first in first out and then down here,
  • 40:28 - 40:33
    those two graphs are for
    an active queue management mechanism.
  • 40:33 - 40:35
    called FQ-CoDel.
  • 40:36 - 40:38
    This is from an experiment
    where we have...
  • 40:39 - 40:42
    where we look at the uplink, let's say
  • 40:42 - 40:46
    the uplink from your home network
    to the Internet
  • 40:46 - 40:48
    and there's three traffic flows.
  • 40:48 - 40:52
    There is a gaming flow based
    on UDP going upstream.
  • 40:52 - 40:57
    That's the dark blue line here so those
    two graphs are throughput graphs,
  • 40:57 - 41:01
    showing just the throughput
    of the three different traffic flows,
  • 41:01 - 41:05
    so that flow here is the game traffic,
    it's a very constant sort of throughput,
  • 41:05 - 41:12
    and the other two traffic flows
    are TCP connections.
  • 41:12 - 41:16
    So the light brown here, that's the
    throughput of the TCP connections.
  • 41:16 - 41:19
    And those two graphs in
    the right hand side here
  • 41:19 - 41:25
    show the RTT that those traffic
    flows experience.
  • 41:25 - 41:30
    And the sort of fixed
    delay for this experiment
  • 41:30 - 41:36
    was set to 100 milliseconds
    of RTT and anything above
  • 41:36 - 41:40
    100 milliseconds
    constitutes delay added by
  • 41:40 - 41:45
    queueing the packets inside
    the router and so you can see
  • 41:45 - 41:49
    with our traditional sort of
    first in first out strategy
  • 41:49 - 41:52
    we'll get fairly high delays
    and like all
  • 41:52 - 41:55
    the three different traffic
    flows experienced
  • 41:55 - 41:58
    the same types of delays,
    and it's fairly high,
  • 41:58 - 42:01
    so we almost reached
    300 milliseconds here
  • 42:01 - 42:05
    much much higher than
    the sort of the base delay.
  • 42:06 - 42:11
    So when we do the same type
    of experiments but with FQ-CoDel
  • 42:11 - 42:16
    to manage the queue, then, as you see
    in the throughput graph here,
  • 42:16 - 42:21
    we get a bit more fairness
    in the sharing here, I suppose,
  • 42:21 - 42:26
    of the TCP flows closer to
    the fair share
  • 42:26 - 42:29
    whereas here, that's a fair bit
    of going up and down.
  • 42:30 - 42:34
    And the other thing you can see, well,
    one thing is quite hard to see,
  • 42:34 - 42:38
    but for the actual game traffic
    here the delay is really minimal.
  • 42:38 - 42:41
    So the dark blue dots they
    only extend up to here.
  • 42:41 - 42:47
    OK so we'll barely reach
    125 milliseconds for the game traffic
  • 42:47 - 42:50
    which is of course very important
    if you play first person shooter games
  • 42:50 - 42:58
    and for the TCP flows we get a little bit
    more delay here but basically after that
  • 42:58 - 43:01
    so after the slow start phase, well,
    we'll never really exceed
  • 43:01 - 43:08
    200 milliseconds of delay so you
    can see the positive effect here
  • 43:08 - 43:13
    we'll reduce delay and we'll get
    a fairer sharing here,
  • 43:13 - 43:18
    and there are actually home routers where
    you can turn this on,
  • 43:18 - 43:22
    so you can actually change from FIFO
    to FQ-CoDel.
  • 43:22 - 43:28
    So this is usually
    behind some
  • 43:28 - 43:30
    of the quality of
    service settings that
  • 43:30 - 43:32
    you can do with your routers here.
  • 43:32 - 43:35
    You can investigate and see
    if your router supports it
  • 43:35 - 43:37
    and you might actually get
    better quality of service
  • 43:37 - 43:41
    by turning those mechanisms on.
  • 43:42 - 43:45
    Another mechanism that TCP
    has is TCP flow control;
  • 43:46 - 43:49
    it's very similar to
    congestion control but it's
  • 43:49 - 43:52
    to prevent the sender from
    overflowing the receiver
  • 43:52 - 43:55
    instead of overflowing
    the network bottleneck.
  • 43:56 - 44:01
    So think of the sender and receiver
    having vastly different performance,
  • 44:01 - 44:06
    for example a Netflix cache with
    a low cost smart TV.
  • 44:06 - 44:09
    The way it works is the receiver
    advertises the receive window,
  • 44:09 - 44:13
    the number of bytes it will accept
    before the next acknowledgement
  • 44:13 - 44:15
    or window update.
  • 44:15 - 44:19
    And then this is based on
    the windowing mechanism
  • 44:19 - 44:21
    much like congestion control
  • 44:21 - 44:24
    and overall the sender will be
    restricted to the minimum
  • 44:25 - 44:28
    of the congestion window and
    the receive window of course
  • 44:28 - 44:31
    So that way we're
    basically trying to avoid overflowing
  • 44:32 - 44:36
    both the network bottleneck and the receiver.
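
That last point boils down to one line: the effective send window is the minimum of the congestion window (protecting the network) and the advertised receive window (protecting the receiver). A sketch with made-up numbers:

```python
def effective_window(cwnd: int, rwnd: int) -> int:
    """A sender may have at most min(cwnd, rwnd) unacknowledged bytes in flight."""
    return min(cwnd, rwnd)

# Hypothetical case: fast network path, slow receiver (say, a cheap smart TV).
print(effective_window(cwnd=1_000_000, rwnd=65_535))   # the receiver limits us
```
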
  • 44:36 - 44:40
    Alright, and those are the basics
    of the TCP protocol.
  • 44:40 - 44:46
    On the next couple of slides
    we will discuss the other
  • 44:46 - 44:50
    major transport protocol,
    the User Datagram Protocol, UDP, and...
  • 44:50 - 44:54
    Well as you will see here
    UDP is much simpler.
  • 44:54 - 44:58
    So UDP is used when the data
    must arrive in a timely manner
  • 44:58 - 45:02
    unlike TCP,
    UDP is a connectionless protocol.
  • 45:02 - 45:06
    So there's no connection set up,
    connection tear down,
  • 45:06 - 45:10
    There's no notion of a
    connection with UDP.
  • 45:10 - 45:16
    It's a best effort protocol and has
    no equivalent to TCP acknowledgement.
  • 45:16 - 45:20
    So it's not necessarily
    less reliable but there's
  • 45:20 - 45:26
    no reliability built in,
    there's no reliability guaranteed.
  • 45:26 - 45:30
    If there is packet loss
    then UDP datagrams
  • 45:30 - 45:35
    they're just lost and there's no
    retransmission mechanism.
  • 45:35 - 45:41
    Also if datagrams are reordered in
    the network at the receiver
  • 45:41 - 45:45
    there's no attempt to reorder those
    datagrams back into the original order.
  • 45:45 - 45:48
    It also has no congestion or flow control
  • 45:49 - 45:53
    but on the positive side, with UDP
    we have very low per-packet overhead,
  • 45:53 - 45:58
    so you have a much smaller
    and simpler packet header.
  • 45:58 - 46:02
    The one thing that UDP has in common
    with TCP is port numbers.
  • 46:02 - 46:07
    So UDP has exactly the same sort of
    port numbers
  • 46:07 - 46:11
    that TCP has, so same thing,
    and as you will see
  • 46:11 - 46:15
    it's got the same port number
    fields in the header; I'll show you that
  • 46:15 - 46:18
    in a slide or two.
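
A minimal UDP exchange over the loopback interface (a sketch; port 9999 is arbitrary), showing that there is no connection setup at all, just independent datagrams:

```python
import socket

# Receiver: bind to a port and wait; no listen()/accept() as with TCP.
rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
rx.bind(("127.0.0.1", 9999))

# Sender: no handshake; each sendto() is a standalone datagram.
tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
tx.sendto(b"hello", ("127.0.0.1", 9999))

data, addr = rx.recvfrom(2048)   # would block forever if the datagram were lost
print(data, "from", addr)
tx.close()
rx.close()
```
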
  • 46:18 - 46:21
    So yeah, to stress the point
    about UDP right.
  • 46:21 - 46:25
    Reliability or lack thereof.
  • 46:25 - 46:29
    So UDP will not reassemble datagrams
    into the original order,
  • 46:29 - 46:34
    and it will not resend lost datagrams,
    because it's connectionless and unreliable.
  • 46:34 - 46:38
    So this is sort of the same
    that we looked at before
  • 46:38 - 46:41
    with TCP where datagrams
    can take different
  • 46:42 - 46:46
    paths through the network
    and with UDP if the order
  • 46:46 - 46:48
    is jumbled up because
    of those different paths.
  • 46:48 - 46:52
    Well then the datagrams will be
    delivered in this jumbled up order
  • 46:52 - 46:59
    to the application and then the
    application has to sort that issue out.
  • 47:00 - 47:02
    So why would an application actually use
  • 47:02 - 47:05
    this kind of unreliable
    transport protocol.
  • 47:05 - 47:07
    Well there's a couple of cases where
  • 47:07 - 47:11
    we prefer UDP over TCP.
    And the first case is because
  • 47:12 - 47:17
    resending data is useless and we
    want to avoid any additional delay.
  • 47:17 - 47:20
    So if you think about
    teleconferencing
  • 47:20 - 47:27
    something like Skype or Discord
    or whatever additional delay
  • 47:27 - 47:30
    for retransmissions is
    more annoying than
  • 47:30 - 47:34
    the dropouts in the voice.
    Similar for online games:
  • 47:34 - 47:37
    There's no point in resending
    packets after some actions
  • 47:38 - 47:43
    because you don't
    really want a laggy game.
  • 47:43 - 47:48
    So you'd rather take into account
    that there may be packet loss
  • 47:48 - 47:52
    and for example
    use some redundancy.
  • 47:52 - 47:57
    That's what many games use,
    so send data across multiple packets
  • 47:57 - 47:59
    so it doesn't matter if one is lost.
  • 47:59 - 48:02
    But we don't add any extra delay because
  • 48:03 - 48:07
    games might be
    quite delay sensitive,
  • 48:07 - 48:11
    if you think about first person
    shooter games or similar games.
  • 48:12 - 48:17
    So that's one reason we want
    to keep the delay sort of
  • 48:17 - 48:23
    really short and we don't need
    to resend data.
  • 48:24 - 48:31
    Second case is we want to avoid
    the complexities of TCP
  • 48:31 - 48:34
    and the overheads of TCP.
  • 48:34 - 48:38
    So a full TCP implementation
    is very complex and it may
  • 48:38 - 48:43
    be too complex for a small embedded
    CPU or the available RAM.
  • 48:44 - 48:50
    And the application can implement
    a simple acknowledgement scheme
  • 48:50 - 48:53
    on top of UDP to get
    the reliability that's required.
  • 48:54 - 48:58
    An example of such a protocol
    is the Trivial File Transfer Protocol,
  • 48:58 - 48:58
    TFTP.
  • 48:59 - 49:07
    It's basically a simple sort of reliable
    protocol that sits on top of UDP.
  • 49:07 - 49:12
    but it's simpler than
    a full TCP implementation
  • 49:15 - 49:19
    And the third case where
    you might want to use UDP
  • 49:19 - 49:23
    is because you want to avoid
    the setup and the tear down.
  • 49:23 - 49:27
    of connections
    that we have in TCP
  • 49:27 - 49:30
    because setting up
    and tearing down TCP connections
  • 49:30 - 49:32
    requires a minimum of six packets.
  • 49:32 - 49:38
    OK that's a fair bit of sort of bandwidth,
    it might be unnecessary
  • 49:38 - 49:45
    and also setting up connections requires
    the server to keep state for connections,
  • 49:45 - 49:52
    so that uses CPU and also RAM,
    and you may want to avoid that.
  • 49:52 - 49:57
    If we have frequent, short
    message exchanges,
  • 49:57 - 50:01
    it's actually more efficient
    and cheaper in terms of bandwidth
  • 50:01 - 50:06
    and resources to use UDP, and a prime
    example of this is DNS lookups.
  • 50:08 - 50:10
    So with DNS we have servers that
    have to deal
  • 50:10 - 50:13
    with thousands of requests per second.
  • 50:13 - 50:16
    And a DNS request plus
    reply is usually only two packets,
  • 50:16 - 50:21
    so one packet for the request
    and then there's one packet for the reply
  • 50:21 - 50:26
    and that's a quarter of the packets
    that we would need with TCP
  • 50:26 - 50:30
    So remember we'll not only have
    those two packets but an additional
  • 50:30 - 50:35
    six packets for setting up a
    connection and then tearing it down.
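
To make the two-packet exchange concrete, here is a hand-rolled DNS lookup over UDP (a sketch: it assumes UDP port 53 to a public resolver is reachable, and it does not parse the reply):

```python
import socket
import struct

def dns_query(name: str, server: str = "8.8.8.8") -> bytes:
    """Send one UDP datagram carrying a DNS A query; return the raw reply."""
    # Header: ID, flags (recursion desired), 1 question, 0 other records.
    header = struct.pack("!HHHHHH", 0x1234, 0x0100, 1, 0, 0, 0)
    # Question: length-prefixed labels, then QTYPE=A (1), QCLASS=IN (1).
    qname = b"".join(bytes([len(l)]) + l.encode() for l in name.split("."))
    question = qname + b"\x00" + struct.pack("!HH", 1, 1)
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.settimeout(3)
        s.sendto(header + question, (server, 53))   # packet 1: the request
        reply, _ = s.recvfrom(512)                  # packet 2: the reply
    return reply

print(len(dns_query("example.com")), "bytes in the reply")
```
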
  • 50:35 - 50:39
    Plus, if we have those short message
    exchanges, then flow and congestion control,
  • 50:39 - 50:42
    so flow control and congestion control
  • 50:42 - 50:44
    they're really useless
    for these short flows
  • 50:44 - 50:46
    I mean that they don't work.
  • 50:46 - 50:50
    They only work for longer-term flows.
  • 50:50 - 50:55
    And, well, last but not least, if we
    use UDP we can actually implement
  • 50:55 - 51:00
    a reliable transport protocol on top
    of UDP without having to change
  • 51:01 - 51:07
    the operating system kernel, because,
    remember, the protocol stack up to
  • 51:07 - 51:11
    the transport layer is actually implemented
    in the operating system kernel.
  • 51:11 - 51:16
    And it's harder to make changes
    there and it's impossible
  • 51:17 - 51:21
    if the operating system is closed source,
    so for example like Mac OS and Windows.
  • 51:21 - 51:25
    So there is a protocol called QUIC.
    It's a new transport protocol,
  • 51:25 - 51:27
    it's developed
    at Google to optimise
  • 51:27 - 51:32
    HTTP performance, and
    you most likely use it
  • 51:32 - 51:35
    every day if you actually
    use the Chrome browser.
  • 51:35 - 51:39
    And so the problem for Google
    was they wanted to do
  • 51:40 - 51:46
    an improved protocol to improve
    performance, but pushing QUIC
  • 51:46 - 51:50
    into the operating system
    kernels of all sorts of
  • 51:51 - 51:57
    clients that would be very
    difficult for Google to do.
  • 51:58 - 52:02
    But they own the Chrome browser
    so they can very easily
  • 52:02 - 52:06
    implement a transport protocol on
    top of UDP inside the browser.
  • 52:06 - 52:12
    So that's why they chose that avenue.
    Of course, in terms of resources,
  • 52:12 - 52:14
    the implementation inside
    the browser might
  • 52:14 - 52:19
    take up a few more cycles
    in terms of CPU.
  • 52:20 - 52:23
    But the good thing for Google
    is they fully control that
  • 52:23 - 52:26
    environment and can push
    updates at any point in time,
  • 52:26 - 52:29
    and they just have to update
    Chrome rather than having to
  • 52:29 - 52:35
    update lots and lots of different
    operating systems.
  • 52:35 - 52:39
    Almost at the end so just have a quick
    look at the UDP and TCP protocol
  • 52:39 - 52:43
    headers here, so you can
    see that the TCP protocol header
  • 52:43 - 52:45
    is much bigger
    obviously because we have much
  • 52:45 - 52:50
    more functionality in TCP
    than in the UDP header down here.
  • 52:50 - 52:54
    You can see that both headers
    have the source port
  • 52:54 - 52:59
    and the destination port as
    the first two header fields and then
  • 52:59 - 53:04
    the UDP hasn't got much else besides
    the length and the checksum
  • 53:05 - 53:08
    but TCP of course
    we have the sequence numbers.
  • 53:08 - 53:13
    We have the acknowledgement numbers
    we have the window size
  • 53:13 - 53:18
    and a bunch of flags to deal
    with all the three-way handshakes,
  • 53:18 - 53:21
    teardowns and so on.
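
The size difference is easy to see by packing the fixed headers with Python's struct module (a sketch over hand-made example values; real headers would come from a capture tool like Wireshark):

```python
import struct

# UDP header: 8 bytes -- source port, destination port, length, checksum.
udp = struct.pack("!HHHH", 51152, 53, 8 + 5, 0)
print(len(udp), struct.unpack("!HHHH", udp))

# TCP fixed header: 20 bytes -- ports, sequence and acknowledgement numbers,
# data offset plus flags, window size, checksum, urgent pointer.
tcp = struct.pack("!HHIIHHHH",
                  49152, 80,            # source and destination ports
                  1, 3001,              # sequence and acknowledgement numbers
                  (5 << 12) | 0x10,     # data offset = 5 words, ACK flag set
                  64240, 0, 0)          # window size, checksum, urgent pointer
print(len(tcp), struct.unpack("!HHIIHHHH", tcp))
```
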
  • 53:21 - 53:26
    So let's discuss TCP versus UDP.
  • 53:26 - 53:28
    Well neither protocol is better.
  • 53:28 - 53:31
    It's just what's appropriate
    for the application.
  • 53:32 - 53:37
    So if you're an application developer
    you must decide what to use.
  • 53:41 - 53:46
    If your application, you know,
    requires a fast protocol, low overheads,
  • 53:46 - 53:49
    you don't need acknowledgements,
    you don't need to resend lost data,
  • 53:49 - 53:52
    and you want to deliver data
    as fast as it arrives
  • 53:52 - 53:54
    for example for things like IPTV
  • 53:54 - 53:58
    or streaming, then
    UDP is your choice.
  • 53:58 - 54:05
    If you need reliability,
    acknowledgements, resending of lost data,
  • 54:05 - 54:08
    and data needs to be
    delivered to the application
  • 54:08 - 54:11
    in the order it was sent
    which is the case for example
  • 54:11 - 54:14
    for applications
    like email or web.
  • 54:14 - 54:22
    Then you should use TCP, of course.
    A little homework for you:
  • 54:22 - 54:29
    the following Cisco NetAcad activity;
    let me just quickly switch to that.
  • 54:29 - 54:41
    So it's basically an activity
    to select
  • 54:41 - 54:46
    the right transport protocol
    for a number of different applications.
  • 54:46 - 54:49
    So over here we have all
    those applications: HTTP,
  • 54:49 - 54:54
    Telnet, FTP, and we discussed
    a couple of those in the first lecture,
  • 54:54 - 54:59
    and you basically
    have to drag those boxes over here
  • 54:59 - 55:04
    to indicate whether something
    is either TCP or UDP or both.
  • 55:05 - 55:07
    So one example: HTTP.
  • 55:07 - 55:10
    We discussed that it uses TCP, right.
  • 55:10 - 55:11
    And you can check your answers.
    That's correct.
  • 55:13 - 55:20
    Well I'll leave the rest
    for you to do at home.
  • 55:20 - 55:26
    And I will conclude the lecture
    with the lecture objectives,
  • 55:26 - 55:31
    and you should be able to describe
    a number of things
  • 55:31 - 55:34
    I won't go through all these
    lecture objectives in detail
  • 55:34 - 55:39
    so just read all of those
    and make sure you understand
  • 55:39 - 55:44
    all those concepts and you
    can describe those concepts then...
  • 55:45 - 55:48
    Well, in today's lecture we looked at
    the application, presentation,
  • 55:48 - 55:54
    and session layers and the two major
    communication architectures.
  • 55:54 - 55:57
    And we also looked at the transport layer
  • 55:57 - 56:01
    and the two main transport layer
    protocols, TCP and UDP.
  • 56:02 - 56:07
    The readings for this week:
    Introduction to Networks, chapters 9 and 10.
  • 56:07 - 56:11
    And don't forget the participation quiz.
  • 56:11 - 56:18
    Quiz one is due this Sunday, and in the labs
    in the second week
  • 56:18 - 56:22
    we'll be examining some network traffic
    using a tool called Wireshark,
  • 56:22 - 56:27
    so we look at actual DNS
    packets and at the three-way handshake
  • 56:27 - 56:32
    and how those things
    actually look on the wire.
  • 56:33 - 56:39
    Oh well, we'll look at those
    things in the lab.
  • 56:39 - 56:45
    And then the next week we will continue
    descending down the OSI model.
  • 56:45 - 56:49
    And so we'll talk about the network
    layer next week.
  • 56:49 - 56:54
    Specifically you'll look at IP addressing
    and something called subnetting
  • 56:55 - 57:00
    And we will start discussing the role
    of routers in data communications.
  • 57:02 - 57:07
    Well, this is an online lecture
    so I can't really sort of
  • 57:07 - 57:10
    tell you to bring pen and paper, but
    I assume you probably
  • 57:10 - 57:13
    have a pen and paper wherever
    you're watching this lecture,
  • 57:13 - 57:17
    so have some pen
    and paper ready for the exercises.
  • 57:18 - 57:20
    That's it for me for this week.
  • 57:20 - 57:22
    I'll see you at next week's lecture.