-
SPEAKER: Hello and welcome back
to the second part of lecture 2
-
which is about the transport layer.
-
The transport layer segments application
data into transportable chunks
-
for transmission and also reassembles
segments as required
-
on the receiver side
the transport layer uses port numbers.
-
We also refer to those as ports
for short
-
to track individual
conversations and identify applications.
-
It is important to not confuse
these port numbers or ports
-
with physical ports on network devices
such as switches or routers.
-
Unfortunately we use
the same term for both
-
but based on the context
it's usually clear
-
what is meant or what
it is referring to.
-
The transport layer provides
reliability if required.
-
Well it depends on the kind of
transfer protocol that is used
-
and we do have different transport
layer protocols
-
so we can cater to different
requirements of applications.
-
And since the transport layer
is responsible for transporting data
-
from the source to the destination
-
we also often refer to it
as an end to end concept.
-
Alright.
-
A very important concept on
the transport layer is port numbers.
-
So port numbers don't exist physically.
It's not a physical port.
-
They are a logical concept
used by operating systems
-
for the identification of
different applications.
-
All ports are technically identical,
but some are recognized
-
as belonging to specific applications.
-
We talked about this in the first
part of the lecture: TCP port 80
-
is recognised as HTTP,
port 53, which can be UDP or TCP,
-
is recognised as DNS, and so on.
-
Some applications can also have
multiple port numbers
-
for example HTTP can also use
other port numbers such as 8080.
-
And down here in the example,
here you see two more examples.
-
So for electronic mail we have
port 110 for example
-
if we use the POP3 protocol,
and an Internet chat
-
application might use port 531.
-
OK, and the port numbers they are actually
encoded in the transport layer headers.
-
The port numbers are sixteen
bit integer values.
-
So the range is 0 to 65,535
-
and this range is actually separated
into three different regions.
-
So three classes of ports.
-
We have what's called
the well known ports.
-
Those are the ports from 0 to 1023.
-
They're used for common services
and applications.
-
So HTTP port 80 is one example
port 53 for DNS is another example
-
or SMTP, we have port 25
as the well-known port for that protocol.
-
Then above that range
of well-known ports
-
we have the range of registered ports
which range from 1024 to 49,151
-
So those are ports for less commonly
used services or applications.
-
Couple of examples here,
so OpenVPN uses Port 1194
-
SIP, which is used in the context of
voice over IP, uses port 5060.
-
And above that range
we have what's called
-
dynamic or private ports, so all the ports
from 49,152 until the end of the range.
-
They're dynamic ports and they're
used for client initiated sessions.
-
So these ports are dynamically
assigned to client applications.
-
And if you want to know
more there's a full list available
-
you can go to that Wikipedia page.
-
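To make those three ranges concrete, here's a small Python sketch; the function name `port_class` is just made up for illustration, and the boundaries are the IANA ones from the slide.

```python
def port_class(port: int) -> str:
    """Classify a 16-bit TCP/UDP port number into its IANA range."""
    if not 0 <= port <= 65535:
        raise ValueError("port numbers are 16-bit: 0..65535")
    if port <= 1023:
        return "well-known"      # e.g. 80 (HTTP), 53 (DNS), 25 (SMTP)
    if port <= 49151:
        return "registered"      # e.g. 1194 (OpenVPN), 5060 (SIP)
    return "dynamic/private"     # ephemeral client ports

print(port_class(80))     # well-known
print(port_class(1194))   # registered
print(port_class(49152))  # dynamic/private
```
-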
Alright, so here's another example
on the slide,
-
So in this example we have clients that
use private ports to initiate sessions.
-
And we have some
applications running on
-
well-known ports and...
-
Yeah. Keep in mind the source port does
not need to match the destination port.
-
There are some protocols where that is
the case, or may be the case,
-
many probably
peer-to-peer applications,
-
but in general the source port
does not need to match the
-
destination port and is often different.
In this example down here
-
we have one server that runs an HTTP
server on port 80
-
and an SMTP server on port 25,
and we have two clients,
and we have two clients
-
client one on one side,
client two on the other side,
-
and client one makes an HTTP
request to the server
-
and it picks a random
port out of this dynamic range
-
which is here in this example port
49,152.
-
And then of course the HTTP request
must be sent to the port
-
on which the server application
is listening
-
and that's port 80 of course
and the server responds.
-
So we assume that the HTTP request
goes to the server
-
and the server responds.
-
The server response obviously
then comes from source port 80
-
and it goes to the clients port.
-
So the destination port
will be 49,152
-
And basically the same thing
happens over here with clients two
-
who wants to send an email.
-
The client has selected a
dynamic port here out of the range,
-
port 51152, and for the request
the destination port is
-
the well-known port of
SMTP, so port 25,
-
and then when the response
comes back from the server
-
the response comes from port
25 and it goes to
-
that dynamic port that the client
put in the source port field
-
of the request, which is port 51152.
So hopefully this makes
-
the concept of dynamic ports
and well-known ports clearer.
-
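As a small illustration of dynamic ports in Python, the sketch below lets the operating system assign the client's source port, just like in the slide example. Note that the actual ephemeral range is up to the operating system; Linux, for instance, defaults to 32768-60999 rather than the full IANA dynamic range.

```python
import socket

# Minimal sketch on localhost: the listener's port stands in for a server's
# well-known port, and the client's source port is assigned by the OS.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))       # 0 = let the OS pick a free port
server.listen(1)
server_port = server.getsockname()[1]

client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(("127.0.0.1", server_port))
src_port = client.getsockname()[1]  # the OS-assigned ephemeral source port
print("source port:", src_port, "-> destination port:", server_port)
client.close()
server.close()
```
-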
So let's move on to the transport
layer protocols that we will discuss.
-
In this part of
the lecture we will talk about
-
the two most common transport
layer protocols
-
the Transmission Control Protocol, TCP,
and the User Datagram Protocol, UDP.
-
TCP is used when
the delivery of data must be reliable
-
for example file downloads,
for media streaming,
-
for loading web pages,
whereas UDP is used when
-
the delivery of data must be timely
and doesn't need to be reliable.
-
So things like voice over IP,
video communications
-
especially real time sort of video
communications
-
they make use of UDP
as well as online games
-
where delay is to be avoided.
-
So in fact first person shooter games
they are usually based on UDP.
-
There are other transport protocols.
-
So it is actually not just TCP and UDP
but these other protocols
-
are not that widely used.
-
So two examples are Stream
Control Transmission Protocol, SCTP.
-
Actually SCTP is fairly
widely used by some of the
-
telecom companies, because
they use it as the transport protocol
-
for signalling. It's just used
in the signalling network,
-
but in the wider Internet it
is not very widely used.
-
We also have the Datagram
Congestion Control Protocol
-
DCCP, another transport protocol
that's not very widely used.
-
So in the remainder of this part
we'll talk about the TCP protocol first
-
because it is way more complicated
and there's a lot more to say,
-
and then we'll talk about UDP which is
actually a fairly simple protocol
-
and then we'll end up with a little bit
of a comparison between the two
-
and for which type of applications
we should use TCP
-
and for which we should use UDP,
We'll discuss that at the end.
-
Alright let's talk about TCP.
-
TCP is a connection oriented protocol.
-
It means that the communications
between two devices
-
must be explicitly initiated
and terminated.
-
Not all transport layer protocols
are connection oriented
-
so UDP is not connection oriented.
-
So the first step
to establish a TCP connection
-
is a handshake process that's known
as a three way handshake
-
and consists of three steps
or three segments that are sent around
-
and those are illustrated
in this figure here.
-
They're clearly identified by the
three numbers one, two, three.
-
So as a first step the initiator
of the connection, which is
-
often the client, will send
a SYN to the other side,
-
often the server. If the server
receives the SYN,
-
the server will respond with a SYN-ACK,
ACK for acknowledgement,
-
and then the handshake is completed
with another ACK sent by A here
-
over to B.
-
After those three packets
have been exchanged
-
the TCP connection will sort of
move into an established state
-
and then data can be exchanged
between the two sides okay.
-
So this is the way TCP
connections are setup.
-
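As a rough sketch, not real networking, the three steps and the resulting states can be modelled like this; the function and state names just follow the figure.

```python
# Toy model of the three-way handshake (no real networking); the segment
# names SYN, SYN-ACK and ACK follow the lecture figure.
def handshake():
    events = []
    a_state, b_state = "CLOSED", "LISTEN"

    a_state = "SYN_SENT";    events.append("A->B: SYN")      # step 1
    b_state = "SYN_RCVD";    events.append("B->A: SYN-ACK")  # step 2
    a_state = "ESTABLISHED"; events.append("A->B: ACK")      # step 3
    b_state = "ESTABLISHED"  # B is established once the final ACK arrives

    return events, a_state, b_state

events, a_state, b_state = handshake()
print(events)  # ['A->B: SYN', 'B->A: SYN-ACK', 'A->B: ACK']
```
-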
On the next slide we'll
talk about how connections
-
are terminated so after a
conversation is complete
-
the connection is terminated
using either three or four steps.
-
These steps are illustrated in
this little picture down here.
-
And so what happens is when a
connection is terminated,
-
the side that wants to terminate
will send what's called a FIN.
-
The other side will receive
the FIN and acknowledge it.
-
And some time later B will
also send a FIN, and finally
-
our station A here will send
an ACK for that FIN
-
it received from B,
and at this point in time here
-
the connection is
completely terminated.
-
So this is either a three-way
or a four-way process, because
-
in many cases the ACK sent by B
and the FIN
-
can be combined
into a single segment.
-
So we then only need three
segments: A will send a FIN,
-
B will send a combined ACK-FIN,
and then A will send back an ACK.
-
But in some cases we actually may have
the full sort of four step process.
-
And on the next sort of
slide I will discuss why
-
we actually need this slightly
more complicated,
-
four-step process rather than
just having three messages.
-
So here's two questions for you regarding
the setup and tear down of TCP connections
-
Why do we actually need
a three way handshake?
-
Aren't two handshakes sufficient?
-
So two handshakes as in
A sends a SYN to B
-
and B responds back.
-
So why isn't that two way
handshake sufficient?
-
The reason is that we need at least
three packets so that both sides
-
can be sure that the connection
is to be established.
-
Think about if we only had
a two way handshake.
-
So we only had this first packet
here and a second packet here.
-
Well, there's no guarantee that
the ACK here from B to A
-
actually arrives at A; the ACK could
be lost inside the network.
-
And so then we'd have the problem that
B would treat the connection
-
as established,
-
assuming that the ACK arrived at A,
but A actually never receives the ACK,
-
and A would treat the connection
as not established
-
because it hasn't received the ACK.
-
So only with that third packet here
going from A to B
-
both A and B can be sure
-
that the connection is in
an established state.
-
Second question down here is why
do we have four messages.
-
Why do we have four messages
here down
-
rather than having only three messages.
-
And again the four messages
is not always happening.
-
We may have three messages
at times.
-
But why have four messages
in the extreme case, why is that?
-
Well simple answer is that TCP
actually supports half closed connections.
-
So it supports the case
where one side is closing
-
its side of the connection
but the other side still sending data
-
and this only works with a
four way sort of message tear down.
-
So in this example, imagine for example
that A wants to close
-
because A does not have any data
to send any more
-
so A sends a FIN, which is acknowledged
by B with an ACK,
-
but B actually still has data that
needs to be sent to A
-
so rather than sending this FIN
immediately here
-
B will continue sending data
and then only when B has sent all the data
-
it will close its end
of the connection sending the FIN,
-
and then that is acknowledged by the ACK
from A, and then at that stage here,
-
the connection is fully closed.
-
So that the four message tear
down allows us to do
-
a half close on the connection basically
-
or close one side
and keep the other side open.
-
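You can actually see a half-close with ordinary sockets. This is a minimal localhost sketch, assuming Python's standard socket API: `shutdown(SHUT_WR)` makes A send its FIN while it can still receive B's remaining data.

```python
import socket
import threading

# Sketch of a TCP half-close on localhost: A stops sending (its FIN goes out)
# but can still receive the data B has left to send.
def b_side(listener):
    conn, _ = listener.accept()
    assert conn.recv(1024) == b""       # A's FIN shows up as end-of-stream
    conn.sendall(b"remaining data")     # B's direction is still open
    conn.close()                        # now B sends its FIN too

listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.bind(("127.0.0.1", 0))
listener.listen(1)
threading.Thread(target=b_side, args=(listener,), daemon=True).start()

a = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
a.connect(("127.0.0.1", listener.getsockname()[1]))
a.shutdown(socket.SHUT_WR)              # half-close: A will send no more data

chunks = []
while chunk := a.recv(1024):            # ...but can still read B's data
    chunks.append(chunk)
received = b"".join(chunks)
print(received)  # b'remaining data'
a.close()
listener.close()
```
-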
There is a little activity
on Cisco NetAcad here
-
about the TCP connection establishment
and termination process.
-
I will quickly show you but I want you
to do the whole activity
-
I'll leave that for you as homework.
-
Bear with me for a second.
-
So here is the activity.
-
The first activity is basically
the three way handshake
-
and you're meant to sort
of drag those boxes over here
-
into those fields here.
-
Until that sort of process
is correctly labeled.
-
And so it's pretty trivial.
-
So I mean A sends a SYN to B, right.
-
So then what it means is that we
have a SYN received here
-
and we can check for correctness.
-
So that's correct.
-
And then you can basically drag
and drop those other things over here
-
up to here.
-
You'd better learn how that handshake
works, you'd better memorize it,
-
and I think the other one,
so the second part of the activity
-
is about the termination session.
-
So again there's two boxes here
called FIN and ACK
-
and you'd have to sort of
just drag those into those fields
-
to describe the termination process.
-
OK, let's go back to the lecture slides.
-
So now I want to talk about
the various sort of properties
-
that TCP gives us or gives
the application and TCP
-
has quite a bit of functionality
as you will see sort of.
-
The first thing is that TCP provides
in-order delivery of segments
-
to the application and it does
that based on sequence numbers.
-
So what happens is the sender here
-
divides the data up into segments
-
and the segments are numbered
with sequence numbers
-
for example from one to six,
and as I sort of said in the first lecture,
-
in an IP network,
-
segments and packets can take
different paths through the network.
-
So we'll have two possible
routes here
-
from the source to the destination;
-
some segments or packets take this route,
but others might take this route.
-
So if they take different
routes then
-
they may actually arrive out of
order at the destination.
-
So in this case we receive segment 1, 2, 6
5, 4 and then 3
-
so the order is obviously jumbled up.
-
If the stack at the receiver
were to pass up
-
the segments in this jumbled-up order,
then obviously you can imagine that
-
the application would get
a lot of garbage basically
-
and couldn't interpret that data.
-
So whatever you're doing
like if you send an email
-
this would be completely garbled up.
-
So what TCP does is it
reorders the segments
-
back into the original order
based on the sequence numbers,
-
and only then it passes
the segments to the application.
-
So all the segments that are
passed to the applications
-
they are passed in the order
they were sent by the sender.
-
So no reordering is needed
on top of TCP,
-
or in other words, applications
that use TCP
-
can be assured that segments or packets
-
are delivered in the original order to
the receiving application.
-
The other thing TCP provides us with
is reliable transport
-
so the sequence numbers are used
in conjunction with
-
acknowledgements, or ACKs,
and acknowledgement numbers
-
to provide reliable data
transport so all the data transmitted
-
using TCP must be acknowledged
and an acknowledgement is cumulative.
-
In TCP it means that an acknowledgement
also acknowledges
-
all the preceding segments that
were received
-
after the last acknowledgement that
was sent.
-
And receivers always acknowledge with
the next expected byte.
-
Keep that in mind.
-
So if we look at the example
over here we have a sender
-
and we have a receiver
and this example also
-
introduces us to the concept
of window size
-
we'll come back to that on some
of the following slides.
-
So the window is basically the amount of
data that TCP can have in flight
-
and unacknowledged, and in this
case it's 3000 bytes.
-
And so this is why the sender
can send two
-
1500-byte segments here
over to the receiver,
-
and the receiver receives those two,
-
and then the receiver will
send an acknowledgement
-
for both of these, and you can see
that the acknowledgement number here
-
is 3001, which is the next expected byte.
-
So the receiver has received all the bytes
from one to 3000.
-
The next expected
byte is 3001.
-
Well, when the sender gets the
acknowledgement, the sender
-
sends more segments,
another two segments
-
down here with the bytes
3001 to 6000
-
and again the receiver will acknowledge
-
both of these segments
here with one acknowledgement,
-
and it will have the number
6001 because that is the next byte
-
the receiver expects
the sender to send, okay.
-
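As a sketch of this acknowledge-with-the-next-expected-byte rule (the class name here is made up for illustration, and byte numbering starts at 1 as in the slide):

```python
class CumulativeAckReceiver:
    """Toy receiver: every ACK carries the next byte it expects.
    Byte numbering starts at 1, as in the lecture example."""
    def __init__(self):
        self.next_expected = 1

    def receive(self, first_byte, length):
        # Only in-order data advances the ACK number.
        if first_byte == self.next_expected:
            self.next_expected += length
        return self.next_expected    # the cumulative ACK number

rx = CumulativeAckReceiver()
rx.receive(1, 1500)                  # bytes 1..1500 arrive
print(rx.receive(1501, 1500))        # bytes 1501..3000 arrive -> 3001
rx.receive(3001, 1500)
print(rx.receive(4501, 1500))        # bytes 4501..6000 arrive -> 6001
```
-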
Let's sort of go a bit more
into the details here.
-
So when segments are not
acknowledged within the time limit
-
so in the best case they
are acknowledged
-
like in a previous slide this is
when everything goes perfectly fine
-
but if things are not going that well
and segments are not acknowledged
-
within some time limit they need
to be re-transmitted by the sender.
-
Segments can be lost due to
network congestion
-
or interruptions to the medium.
-
If somebody I don't know,
takes out a cable or something like that
-
or faults in the hardware,
of course.
-
Data received after a loss
is not acknowledged.
-
This is illustrated in
the figure over here.
-
So here we have the case that
everything was fine at the start
-
but then when the sender sends
two more segments here
-
the first of those segments is actually
lost because it's dropped
-
somewhere in the network and it
never arrives at the receiver.
-
So what the receiver will do,
despite having received
-
that later segment here covering
the bytes between 4501 and 6000,
-
since TCP does not acknowledge bytes
after a loss,
-
is send back another acknowledgement
with the number 3001,
-
because that is the point up to
which we've received,
-
you know, a continuous stream of segments
and then we've lost a segment
-
and we received another segment
after that
-
but we do not acknowledge any
segments received after loss.
-
We'll sort of acknowledge whatever we
have received before the loss.
-
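To sketch the loss case: this toy receiver buffers out-of-order data but, without selective acknowledgements, never moves the cumulative ACK past a gap. The names and segment sizes are just the slide's example.

```python
class Receiver:
    """Toy receiver for the loss example: out-of-order data is buffered,
    but the cumulative ACK never moves past a gap (no selective ACKs)."""
    def __init__(self):
        self.next_expected = 1
        self.buffered = {}               # first_byte -> length, held past a gap

    def receive(self, first_byte, length):
        self.buffered[first_byte] = length
        # Advance over any data that is now contiguous.
        while self.next_expected in self.buffered:
            self.next_expected += self.buffered.pop(self.next_expected)
        return self.next_expected        # the cumulative ACK

rx = Receiver()
rx.receive(1, 1500)
rx.receive(1501, 1500)          # bytes 1..3000 arrive -> ACK 3001
# Segment 3001..4500 is lost; the later segment still arrives:
print(rx.receive(4501, 1500))   # ACK stays 3001: no ACKs past the loss
# The sender times out and retransmits starting from byte 3001:
print(rx.receive(3001, 1500))   # gap filled -> ACK jumps to 6001
```
-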
Actually there is a mechanism to
acknowledge segments
-
received after a loss.
-
It's called selective acknowledgements
but it's out of the scope of the unit.
-
So this is a bit more complicated than
-
the simple sort of cumulative
acknowledgements we discuss here,
-
but it's implemented to
be much more efficient.
-
Consider what
happens in this case here.
-
Once the receiver acknowledges
or it sends acknowledgement number 3001.
-
Of course what happens is the sender
will resend this segment here
-
starting with byte 3001 as well as
the next segment starting with byte 4501.
-
So despite the fact that
this second segment here
-
was already received, with cumulative
acknowledgements
-
we'll have to resend,
-
or the sender has to resend,
this one again as well.
-
And so with selective ACKs
we can do this way more efficiently,
-
but it's also
way more complicated.
-
So it's out of scope of
the discussion here.
-
Next I want to talk about
another feature of TCP
-
which is called congestion control.
TCP uses congestion control
-
to manage the rate
of transmission.
-
So you can think about it as
an accelerator and brake
-
on the rate of the transmission
and the TCP congestion window
-
specifies the maximum
number of unacknowledged segments
-
that can be in-flight
from sender to receiver.
-
Why do we actually
have congestion control?
-
Well let's do a little sort
of thought experiment here.
-
What if TCP senders could only send
one packet at a time, waiting for an ACK?
-
If the round trip delay
between the sender and receiver
-
was something like two hundred
milliseconds then you know it would mean
-
that TCP could only
send five packets per second.
-
That would be very, very slow.
Certainly we wouldn't congest
-
the network but the TCP performance
would be horribly slow.
-
So what if, on the other hand, TCP
senders could send
-
as fast as a LAN connection permits, for
example at one gigabit per second?
-
Well the gateway to
the Internet is usually a bottleneck,
-
and the gateway to the Internet,
-
if you think about home networks
for example, is unlikely to be able
-
to send at one gigabit
per second into the Internet.
-
So then we get what is called
congestion on the gateway.
-
So packets are building up in
the queues and eventually we have
-
full queues and any further
packets arriving will be dropped.
-
And you also need to consider
that we share the resources
-
we share the network with
many many other users.
-
So it's just a little picture here
to illustrate the links
-
that carry traffic between
different continents,
-
and you can see there's quite
a number of links between
-
United States and Asia,
and United States with Europe.
-
But it's not that many links connecting
Australia to the rest of the world.
-
So these are usually fiber
links, and you can imagine that
-
those links, you know, carry
the traffic of
-
billions, or tens
or hundreds of millions,
-
of concurrent TCP connections.
-
So all of these TCP connections
share the links
-
and so the question is how does
a TCP sender
-
find the perfect rate,
so its fair share of a 100%
-
utilized bottleneck link's speed.
-
So finding that, you know, that
perfect rate where we sort of
-
fairly share that link with lots
and lots of other TCP connections.
-
At the same time we'll utilize all
the bandwidth or the capacity
-
that it has while at the same time we'll
try to avoid congestion in routers.
-
Well that is the job of the congestion
control algorithm inside TCP.
-
And there are many congestion
control algorithms.
-
And on this slide I also want to briefly
illustrate the NewReno algorithm.
-
It's one of the traditional
algorithms that was
-
the default algorithm for a long
time in most operating systems.
-
So that algorithm has two phases
and has a slow start phase
-
and there's a congestion avoidance phase
and a slow start is actually not that slow
-
despite the fact that
it's named slow start.
-
So what slow start is,
or what TCP does in
-
slow start, is it starts with an initial
-
congestion window of two segments, or
these days actually often 10 segments,
-
as the initial congestion window.
-
And then the sender will increase
the congestion window by one segment
-
for every packet acknowledged
by the receiver.
-
So this will lead to a relatively
quick increase in throughput
-
up to the maximum possible fair share,
-
at which point the sender
detects packet loss.
-
It halves the window and then
it goes into congestion avoidance.
-
In the congestion avoidance
phase, without loss,
-
the window is increased by one
segment for each round trip time.
-
Round trip time means the time
it takes for a packet to go from A to B
-
and a response to come
back from B to A.
-
So that's a round trip,
that's the round trip time.
-
So for each round trip sender will
increase the window by one segment
-
When the sender transmits too fast
and congests the link again,
-
then it will mean that, you know,
congestion at the router occurs,
-
queues fill up,
and there will be packet drops.
-
When the sender detects those
packet drops it halves its window.
-
So this quick shrinking of the window
will quickly reduce
-
the throughput of the connection
but it will also quickly reduce
-
the congestion on the bottleneck.
-
That's the idea.
-
And then after that we'll
have that sort of slow increase
-
in one segment
each round trip time again.
-
Where the sort of sender starts basically
sending more and more
-
until we sort of hit the limit again.
-
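The slow start and congestion avoidance behaviour can be sketched as a toy simulation. This is a simplification of NewReno (window in whole segments, one step per round trip, loss injected at assumed rounds), not the real algorithm; plotting the returned list gives the sawtooth from the slide.

```python
def cwnd_trace(rounds, loss_rounds, ssthresh=8, init=2):
    """Toy NewReno-style congestion window (in segments), one entry per RTT:
    slow start doubles the window each RTT (one segment per ACKed segment),
    congestion avoidance adds one segment per RTT, and loss halves it."""
    cwnd, trace = init, []
    for r in range(rounds):
        trace.append(cwnd)
        if r in loss_rounds:
            ssthresh = max(cwnd // 2, 2)   # halve on loss...
            cwnd = ssthresh                # ...and continue in avoidance
        elif cwnd < ssthresh:
            cwnd *= 2                      # slow start: rapid increase
        else:
            cwnd += 1                      # congestion avoidance: +1 per RTT
    return trace

print(cwnd_trace(12, loss_rounds={6}))
# [2, 4, 8, 9, 10, 11, 12, 6, 7, 8, 9, 10]  <- the sawtooth
```
-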
And to a sort of to better illustrate that
it's easiest to see this in a graph.
-
This is an actual graph
of the congestion window
-
of a single TCP connection
going through a bottleneck
-
and you can
see at the start here,
-
slow start where we have a rapid increase
of the congestion window
-
and then we have the first packet loss
and the congestion window sort of drops,
-
it's halved,
and then we go into congestion avoidance,
-
and you can see a sawtooth pattern,
-
and congestion
avoidance will basically...
-
will slowly increase
the congestion window over time,
-
one segment per round trip time,
and then at some stage
-
we'll hit congestion again
and there's packet loss,
-
will quickly
or the sender will quickly
-
reduce the window to half of its size
-
and then we'll start
the upward probing again
-
until we hit loss again,
halve the window, and so on.
-
So with a single flow through
a bottleneck we get to see
-
a perfect saw tooth pattern
of course with multiple flows.
-
this will look much messier.
So to come back to the point
-
of congestion again, to make it
very clear what it means:
-
congestion occurs when
the number of packets arriving
-
at a router is higher than the number
of packets that can be sent on the next link.
-
So if we have lots of different
devices here,
-
all sending packets to the router,
then on to the Internet,
-
and this is the one link to the Internet,
let's say,
-
and the link speed here of this link
is less than the combined link speeds
-
of all these different devices, then...
-
Well we have to buffer
packets on the router
-
if the packet rates are
higher than this link's rate, right?
-
And if those rates are persistently
higher than this link's rate,
-
then of course eventually
all of our buffers will get filled.
-
And then the router has no choice
but to drop packets.
-
And then of course with TCP congestion
control we have the fact that
-
the TCP senders
here will take that loss
-
as an indication of congestion.
-
The window will be halved
and all of those devices here
-
will send a lot less packets.
-
which then means the queues
can be drained,
-
and we won't have any loss
sort of in there (INAUDIBLE)
-
but again the window size
will increase again.
-
All those devices will send at
faster and faster rates
-
based on the upward probing
of the congestion control algorithm
-
until the queues become full again
and we'll have the next packet loss,
-
and then we'll send less again
and so on.
-
Now you might say, okay,
well, if the problem is packet loss
-
when buffers are full, then why not make
the router buffers really, really big?
-
So we can basically avoid any
packet losses,
-
so we can have a case where the
combined sender rates
-
can be higher than the link rate
-
and we can communicate for
a very, very long time.
-
If we have big buffers
we can sort of
-
avoid anything
like packet drops.
-
And this is what quite a number of people
actually used to think for years
-
and it led to fairly large buffers,
which caused another problem though,
-
which is referred to as bufferbloat.
-
If buffer sizes are very large
then that means also the latency
-
will be quite high because
packets are stuck in those buffers
-
for quite some time.
-
And remember, TCP will always
eventually fill the buffers
-
to full capacity and those large
buffers will take a long time to clear.
-
Any applications that require
reliability but also really benefit
-
from low latency, let's
say stock trading for example,
-
they have an issue with the high latency
-
that's caused by this bufferbloat,
-
so buffers have to be reasonably small to
maintain a reasonably low network latency.
-
So we can't just make buffers really
really big that will cause problems
-
for applications that you know rely
on or benefit from lower latency.
-
So we talked about NewReno,
which has been
-
the standard congestion control
mechanism for some time.
-
It's still used by Windows.
-
It was still used by Mac OS
until fairly recently.
-
But there are actually dozens,
perhaps hundreds,
-
including sort of
research kind of algorithms,
-
that exist.
Linux and macOS
-
now use a different algorithm
called CUBIC.
-
And there's also an algorithm called BBR,
it's been designed by Google
-
in recent years,
and it has created a bit of a hype.
-
Not all algorithms aim for maximum performance.
-
So some might have slightly different
aims and there's also algorithms
-
that use estimates of network delay
as an indicator of congestion as well
-
not just loss as an indicator of congestion
-
but also estimates of network delay,
like for example BBR.
-
And despite the fact that
TCP is a fairly old protocol
-
TCP congestion control is
still a highly active
-
research area in data
communications.
-
There's also something called
active queue management.
-
So the idea is to improve things by
routers actively telling
-
TCP senders that there is congestion.
Sort of, if routers could tell senders
-
when their buffers are filling up,
that is, when there's congestion, then
-
senders could back off earlier and we
could avoid the packet loss.
-
So there are some algorithms
for active queue management,
-
and there is something called TCP
Explicit Congestion Notification, ECN.
-
And the idea is that the router
will mark packets when the queue length,
-
or an estimate of it, is
above a configurable threshold.
-
And then the TCP receiver
will echo those marks
-
back to the sender
and the sender can reduce
-
the congestion window before
we actually get to that stage
-
where we have packet loss.
-
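As a toy sketch of the difference between tail drop and ECN-style marking; all names and thresholds here are made up for illustration, and real AQM algorithms are of course more sophisticated.

```python
def forward(queue, capacity, packet, mark_threshold):
    """Toy router queue: a plain FIFO drops only when full (tail drop),
    so senders learn about congestion via loss. ECN-style marking flags
    packets earlier, once the queue passes a configurable threshold."""
    if len(queue) >= capacity:
        return "dropped"                 # queue full: loss signals congestion
    packet = dict(packet, ecn_marked=(len(queue) >= mark_threshold))
    queue.append(packet)                 # marked packets still get forwarded
    return "marked" if packet["ecn_marked"] else "queued"

q = []
results = [forward(q, 6, {"seq": i}, 3) for i in range(8)]
print(results)
# ['queued', 'queued', 'queued', 'marked', 'marked', 'marked',
#  'dropped', 'dropped']  <- marks arrive before any loss happens
```
-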
So using this mechanism will
actually improve performance.
-
But it requires that routers
support this mechanism.
-
And of course senders and receivers
must also support the mechanism
-
Active queue management can actually
improve performance quite a bit
-
but many people don't actually notice.
-
So to illustrate that point,
I created this little slide here.
-
So we have the normal
sort of FIFO queue,
-
first in, first out, and the top two
graphs here are for the FIFO queue,
-
and then down here,
-
those two graphs are for
an active queue management mechanism
-
called FQ-CoDel.
-
This is from an experiment
where we have...
-
where we look at the uplink, let's say
-
the uplink from your home network
to the Internet
-
and there's three traffic flows.
-
There is a gaming flow based
on UDP going upstream.
-
That's the dark blue line here. So those
two graphs are throughput graphs;
-
they just show the throughput
of the three different traffic flows.
-
So that flow here is the game traffic,
it's a very constant sort of throughput,
-
and the other two traffic flows
are TCP connections.
-
So the light brown here, that's the
throughput of the TCP connections.
-
And those two graphs in
the right hand side here
-
show the RTT that those traffic
flows experience.
-
And the sort of fixed
delay for this experiment
-
was set to 100 milliseconds
of RTT and anything above
-
100 milliseconds
constitutes delay added by
-
queueing the packets inside
the router and so you can see
-
with our traditional sort of
first in first out strategy
-
we get fairly high delays, and all
-
the three different traffic
flows experience
-
the same sort of delays, and
it's fairly high,
-
so we almost reached
300 milliseconds here
-
much much higher than
the sort of the base delay.
-
When we do the same type
of experiment but with FQ-CoDel
-
to manage the queue then, as you see
in the throughput graph here,
-
we get a bit more fairness
in the sharing here, I suppose,
-
with the TCP flows closer to
the fair share,
-
whereas here there's a fair bit
of going up and down.
-
And the other thing you can see,
though it is quite hard to see,
-
is that for the actual game traffic
here the delay is really minimal.
-
So the dark blue dots they
only extend up to here.
-
OK so we'll barely reach
125 milliseconds for the game traffic
-
which is of course very important
if you play first person shooter games
-
and for the TCP flows we get a little bit
more delay here, but basically,
-
after the slow start phase,
we'll never really exceed
-
200 milliseconds of delay, so you
can see the positive effect here:
-
it will reduce delay and we get
a fairer sharing here.
-
And there are actually home routers where
you can turn this on,
-
so you can actually change from FIFO
to FQ-CoDel.
-
So this is usually
behind some
-
of the quality of
service settings that
-
you can find on your routers here.
-
You can investigate and see
if your router supports it
-
and you might actually get
better quality of service
-
by turning those mechanisms on.
-
Another mechanism that TCP
has is TCP flow control.
-
It's very similar to
congestion control, but it's
-
to prevent the sender from
overflowing the receiver
-
instead of the network bottleneck.
-
So think of a sender and receiver
having vastly different performance,
-
for example a Netflix cache serving
a low-cost smart TV.
-
The way it works is the receiver
advertises the receive window,
-
the number of bytes it will accept
before the next acknowledgement
-
or window update.
-
And then this is based on
the windowing mechanism
-
much like congestion control
-
and overall the sender will be
restricted to the minimum
-
of the congestion window and
the receive window of course
-
So that way we basically avoid
overflowing both
-
the network bottleneck and the receiver.
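-
Just as an illustrative sketch, and not something from the slides: the function name and the numbers here are made up, but the rule itself, that the sender is limited by the minimum of the congestion window and the receive window, is exactly what we just discussed.

```python
# Illustrative sketch only: the sender may transmit at most
# min(congestion window, receive window) bytes beyond what is
# already in flight. Names and numbers are hypothetical.

def usable_window(cwnd: int, rwnd: int, bytes_in_flight: int) -> int:
    """Bytes the sender may still send before the next ACK or window update."""
    return max(0, min(cwnd, rwnd) - bytes_in_flight)

# A slow receiver (small rwnd) limits the sender even when the
# network (cwnd) could carry more:
print(usable_window(cwnd=64_000, rwnd=16_000, bytes_in_flight=8_000))  # 8000
```

So the low-cost smart TV, with its small receive window, caps the Netflix cache no matter how fast the network path is.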
-
Alright, and those are the basics
of the TCP protocol.
-
The next couple of slides
will discuss the other
-
major transport protocol, UDP,
the User Datagram Protocol, and...
-
Well as you will see here
UDP is much simpler.
-
So UDP is used when the data
must arrive in a timely manner.
-
Unlike TCP,
UDP is a connectionless protocol.
-
So there's no connection set-up
or tear-down;
-
there's no notion of a
connection with UDP.
-
It's a best effort protocol and has
no equivalent to TCP acknowledgement.
-
So it's not necessarily
less reliable but there's
-
no reliability built in,
there's no reliability guaranteed.
-
If there is packet loss,
then UDP datagrams
-
are just lost, and there's no
retransmission mechanism.
-
Also if datagrams are reordered in
the network at the receiver
-
there's no attempt to reorder those
datagrams back into the original order.
-
It also has no congestion or flow control,
-
but on the positive side, with UDP
we have very low per-packet overhead,
-
so you have a much smaller
and simpler packet header.
-
The one thing that UDP has in common
with TCP is port numbers.
-
So UDP has exactly the same sort of
port numbers
-
that TCP has, so same thing,
and then you will see
-
it's got the same port number
fields in its header, as I'll show you
-
in a slide or two.
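-
To make that connectionless idea concrete, here is a minimal sketch, not from the slides, using Python's standard socket API; the address and payload are just examples. Notice there is no handshake and no accept step, each datagram is simply addressed and sent.

```python
import socket

# Receiver: bind a UDP socket to a local port; no listen()/accept(),
# because there is no connection to accept.
receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", 0))   # port 0 = let the OS pick a free port
receiver.settimeout(2.0)
port = receiver.getsockname()[1]

# Sender: no connect() needed; each datagram carries its own destination.
sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"hello", ("127.0.0.1", port))

data, addr = receiver.recvfrom(1024)
print(data)  # b'hello'
```

Compare that with TCP sockets, where you'd need connect(), listen(), and accept() before any data moves.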
-
So yeah, to stress the point
about UDP:
-
reliability, or the lack thereof.
-
So UDP will not reassemble datagrams
into the original order
-
and it will not resend lost datagrams,
because the protocol is unreliable.
-
So this is sort of the same situation
that we looked at before
-
with TCP, where datagrams
can take different
-
paths through the network,
and with UDP, if the order
-
is jumbled up because
of those different paths,
-
well, then the datagrams will be
delivered in this jumbled-up order
-
to the application, and the
application has to sort that issue out.
-
So why would an application actually use
-
this kind of unreliable
transport protocol?
-
Well there's a couple of cases where
-
we prefer UDP over TCP.
And the first case is because
-
resending data is useless and we
want to avoid any additional delay.
-
So if you think about
teleconferencing,
-
something like Skype or Discord,
additional delay
-
for retransmissions is
more annoying than
-
dropouts in the voice.
It's similar for online games.
-
There's no point in resending
packets after some actions
-
because you don't
really want a laggy game.
-
So you'd rather take into account
that there may be packet loss
-
and, for example,
use some redundancy.
-
That's what many games do:
they send data across multiple packets
-
so it doesn't matter if one is lost.
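-
Here is a hedged sketch of that redundancy trick; the packet format is invented for illustration, it's not from any real game. Each outgoing datagram repeats the last few inputs, so losing any single datagram loses no information.

```python
from collections import deque

# Keep the 3 most recent inputs; every outgoing datagram carries all
# of them, so one lost datagram still leaves every input recoverable
# from the next one that arrives.
recent_inputs = deque(maxlen=3)

def build_packet(seq: int, new_input: str) -> bytes:
    """Append the newest input and serialise the recent history."""
    recent_inputs.append(f"{seq}:{new_input}")
    return "|".join(recent_inputs).encode()

build_packet(1, "jump")
build_packet(2, "left")
print(build_packet(3, "fire"))  # b'1:jump|2:left|3:fire'
```

The cost is slightly bigger packets; the benefit is no retransmission delay, which is exactly the trade-off delay-sensitive games want.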
-
But we don't add any extra delay, because
-
games can be
quite delay sensitive,
-
if you think about first-person
shooter games or similar games.
-
So that's one reason: we want
to keep the delay
-
really short and we don't need
to resend data.
-
The second case is that we want to avoid
the complexities of TCP
-
and the overheads of TCP.
-
So a full TCP implementation
is very complex, and it may
-
be too complex for devices with
limited CPU or RAM.
-
And the application can implement
a simple acknowledgement scheme
-
on top of UDP to get the
reliability that's required.
-
An example of such a protocol is
the Trivial File Transfer Protocol,
-
TFTP.
-
It's basically a simple reliable
protocol that sits on top of UDP,
-
but it's simpler than
a full TCP implementation.
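-
Here is a rough sketch of that kind of simple acknowledgement scheme in Python. It's the stop-and-wait style that TFTP uses, but the framing, constants, and function name here are invented for illustration, not TFTP's actual packet format: send one block, wait for its ACK, retransmit on timeout.

```python
import socket

TIMEOUT = 1.0       # seconds to wait for an ACK before retransmitting
MAX_RETRIES = 5     # give up after this many attempts

def send_reliably(sock: socket.socket, dest, block_num: int, payload: bytes) -> bool:
    """Send one block over UDP and wait for a matching 2-byte ACK."""
    packet = block_num.to_bytes(2, "big") + payload
    sock.settimeout(TIMEOUT)
    for _ in range(MAX_RETRIES):
        sock.sendto(packet, dest)
        try:
            ack, _ = sock.recvfrom(4)
            if int.from_bytes(ack[:2], "big") == block_num:
                return True          # the receiver confirmed this block
        except socket.timeout:
            continue                 # block or ACK lost: retransmit
    return False                     # receiver never acknowledged
```

The application gets to pick the block size, the timeout, and what "give up" means, and that flexibility is exactly why you'd build reliability above UDP instead of taking all of TCP.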
-
And the third case where
you might want to use UDP
-
is because you want to avoid
the set-up and tear-down
-
of connections
that we have in TCP,
-
because setting up
and tearing down TCP connections
-
requires a minimum of six packets.
-
OK, that's a fair bit of bandwidth
that might be unnecessary,
-
and also setting up connections requires
the server to keep state for connections,
-
so that uses CPU and also RAM,
and you want to avoid that.
-
If we have frequent, short
message exchanges,
-
it's actually more efficient
and cheaper in terms of bandwidth
-
and resources to use UDP, and a prime
example of this is DNS lookups.
-
So with DNS we have servers that
have to deal
-
with thousands of requests per second.
-
And a DNS request plus
reply is usually only two packets,
-
so one packet for the request
and one packet for the reply,
-
and that's a quarter of the packets
that we would need with TCP.
-
So remember, with TCP we'd not only have
those two packets but an additional
-
six packets for setting up the
connection and then tearing it down.
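-
That arithmetic written out, as a back-of-envelope count that ignores retransmissions and the ACKs of the data itself:

```python
udp_packets = 2          # one DNS request + one reply
tcp_overhead = 6         # minimum setup + teardown packets for a TCP connection
tcp_packets = udp_packets + tcp_overhead

print(udp_packets / tcp_packets)  # 0.25 -- UDP needs a quarter of the packets
```

Multiply that factor of four by thousands of requests per second and you can see why DNS servers care.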
-
Plus, if we have those short message
exchanges, then
-
flow control and congestion control
-
are really useless
for these short flows;
-
I mean, they don't work.
-
They only work for longer-term flows.
-
And, last but not least, if we
use UDP we can actually implement
-
a reliable transport protocol on top
of UDP without having to change
-
the operating system kernel, because,
remember, the protocol stack up to
-
the transport layer is actually implemented
in the operating system kernel.
-
And it's harder to make changes
there, and it's impossible
-
if the operating system is closed source,
for example macOS and Windows.
-
So there is a protocol called QUIC.
It's a new transport protocol
-
developed
at Google to optimise
-
HTTP performance, and
you most likely use it
-
every day if you actually
use the Chrome browser.
-
And so the problem for Google
was they wanted
-
a new protocol to improve
performance, but pushing QUIC
-
into the operating system
kernels of all sorts of
-
clients would be very
difficult for Google to do.
-
But they own the Chrome browser
so they can very easily
-
implement a transport protocol on
top of UDP inside the browser.
-
So that's why they chose that avenue.
Of course, in terms of resources,
-
the implementation inside
the browser might
-
take up a few more
CPU cycles.
-
But the good thing for Google
is they fully control that
-
environment and can push
updates at any point in time,
-
and they just have to update
Chrome rather than having to
-
update lots and lots of different
operating systems.
-
Almost at the end, so let's just have a quick
look at the UDP and TCP protocol
-
headers here. You can
see that the TCP protocol header
-
is much bigger,
obviously, because we have much
-
more functionality in TCP,
and the UDP header is down here.
-
You can see that both headers
have the source port
-
and the destination port as
the first two header fields and then
-
the UDP hasn't got much else besides
the length and the checksum
-
but in TCP, of course,
we have the sequence numbers.
-
We have the acknowledgement numbers
we have the window size
-
and a bunch of flags to deal
with the three-way handshake,
-
tear-downs, and so on.
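-
As a small sketch of those header fields, not from the slides, here is the 8-byte UDP header built and pulled apart with Python's struct module. The field values are examples only; "!HHHH" means four big-endian 16-bit fields, which is exactly the layout of source port, destination port, length, and checksum.

```python
import struct

# UDP header: source port, destination port, length, checksum,
# each a 16-bit field in network (big-endian) byte order.
header = struct.pack("!HHHH", 53, 33434, 8, 0)   # example values only
assert len(header) == 8                          # the whole UDP header is 8 bytes

src_port, dst_port, length, checksum = struct.unpack("!HHHH", header)
print(src_port, dst_port, length, checksum)  # 53 33434 8 0
```

A TCP header, by contrast, starts at 20 bytes before options, which is the size difference you can see on the slide.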
-
So let's discuss TCP versus UDP.
-
Well neither protocol is better.
-
It's just what's appropriate
for the application.
-
So if you're an application developer
you must decide what to use.
-
If your application, you know,
requires a fast protocol, low overheads,
-
you don't need acknowledgements,
you don't need to resend lost data,
-
and you want to deliver data
as fast as it arrives,
-
for example for things like IPTV
-
or streaming,
then UDP is your choice.
-
If you need reliability,
acknowledgements, resending of lost data,
-
and data needs to be
delivered to the application
-
in the order it was sent,
which is the case for example
-
for applications
like email or the web,
-
then you should use TCP of course.
Now, a little homework for you:
-
the following Cisco network activity;
let me just quickly switch to that.
-
So it's basically an activity
where you select
-
the right transport protocol
for a number of different applications.
-
So over here we have all
those applications, HTTP,
-
Telnet, FTP, and we discussed
a couple of those in the first lecture,
-
and you basically
have to drag those boxes over here
-
to indicate whether something
is either TCP or UDP or both.
-
So one example: HTTP.
-
We discussed that it uses TCP, right?
-
And you can check your answers.
That's correct.
-
Well I'll leave the rest
for you to do at home.
-
And I will conclude the lecture, as
usual, with the lecture objectives,
-
and you should be able to describe
a number of things.
-
I won't go through all these
lecture objectives in detail
-
so just read all of those
and make sure you understand
-
all those concepts and you
can describe those concepts then...
-
Well, in today's lecture we looked at
the application, presentation,
-
and session layers, and the two major
architectures for communications.
-
And we also looked at the transport layer
-
and the two main transport layer
protocols, TCP and UDP.
-
The readings for this week:
introduction, chapters 9 and 10.
-
And don't forget: participation quiz
one is due this Sunday, and in the labs
-
in the second week
-
we'll examine some network traffic
using a tool called Wireshark,
-
so we'll look at actual DNS
packets and the TCP three-way handshake
-
and how those things
actually look on the wire.
-
Well, we'll look at how those
things show up in Wireshark.
-
And then next week we will continue
descending down the OSI model,
-
and so we'll talk about the network
layer next week.
-
Specifically, we'll look at IP addressing
and something called subnetting,
-
And we will start discussing the role
of routers in data communications.
-
Well, this is an online lecture,
so I can't really tell you to
-
bring pen and paper, but
I assume you probably
-
have pen and paper wherever
you're watching this lecture,
-
so have some pen
and paper ready for the exercises.
-
That's it for me for this week.
-
I'll see you in next week's lecture.