Transport Layer and Congestion Control: Computer Networks Class Notes

Updated: Aug 18

Mobiprep has created last-minute notes for all topics of Computer networks to help you with the revision of concepts for your university examinations. So let’s get started with the lecture notes on Computer networks.

  1. Computer Networks - Basics

  2. Network Devices

  3. Network Models

  4. Physical Layer

  5. Network Layer

  6. Transport Layer and Congestion control

  7. Application Layer

  8. Web Security

  9. Email and IP Security


Our team has curated a list of the most important questions asked in universities such as DU, DTU, VIT, SRM, IP, Pune University, Manipal University, and many more. The questions are created from the previous year's question papers of colleges and universities.

  1. Explain the functions of the transport layer.

  2. What is a process?

  3. What is a socket?

  4. What is a port?

  5. Difference between connection-oriented and connectionless protocols.

  6. What are the protocols used in the transport layer?

  7. What is UDP? What are the fields in the UDP header? Explain the operation of UDP.

  8. What is TCP? Explain the TCP segment structure with a diagram.

  9. What are the features of TCP?

  10. Explain the working of TCP.

  11. Define congestion control.

  12. Why does congestion occur?

  13. What are the methods of congestion control?

  14. Define quality of service.

  15. Explain the techniques used to improve QoS.

  16. Explain the Resource Reservation Protocol (RSVP) with its working.

  17. Which type of addressing is used in the transport layer?

  18. Explain the leaky bucket algorithm.

  19. Explain the token bucket algorithm.

  20. What do you understand by jitter control?

Transport Layer and Congestion Control


Question 1) Explain the functions of transport layer.

The transport layer is responsible for process-to-process delivery of messages, and it ensures that the data arrives at the receiver intact and in order. The following are its other functions:

  1. It ensures both error control and flow control.

  2. It is responsible for service-point addressing. The service point address is also known as the port address. The port address is used to identify the target process in the destination machine.

  3. It is responsible for segmentation and reassembly. The transport layer at the sender divides the data into blocks called segments. Each segment has a sequence number. At the receiver, the sequence number is used for the reassembly of the segments.

  4. It decides the type of connection between the source and destination (connection-oriented communication or connectionless communication).


 

Question 2) What is a process?

Answer) Process is an application program that is run on the host machine. It might be an instance of a computer program that is being executed. In other words, a program under execution is called a process. Process is an active entity, but program is a passive entity.


 

Question 3) What is a socket?

Answer) A socket refers to an end-point in a communication link. End-point of a communication link is usually a process running in the source or destination machine.

A socket is a combination of an IP address and a port number. Each socket is bound to a port number. The transport layer uniquely identifies the target process in the destination machine using the port number.

Example: 192.168.1.1:1234

In the above example, 192.168.1.1 is the IP address and 1234 is the port number.
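This IP-address-plus-port pairing can be illustrated with a short sketch using Python's standard socket module (the address below is for localhost; binding to port 0 simply asks the operating system for any free port):

```python
import socket

# Create a TCP socket and bind it to an IP address and port.
# Binding to port 0 lets the operating system pick any free port.
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.bind(("127.0.0.1", 0))

ip, port = sock.getsockname()
print(f"Socket address: {ip}:{port}")  # e.g. 127.0.0.1:54321
sock.close()
```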


 

Question 4) What is a port?

Answer) A port is used to indicate the end-point of the communication link. Each port is given a number called the port number. The port number is a logical entity which is used by the transport layer to identify the target process in the destination machine. Ports are virtual places in the Operating System where the network connections begin and end.

For example, if process A in the source wants to communicate with process B in the destination machine, then the processes A and B must be bound to the same port. Only then, a communication link can be established between the two processes.


 

Question 5) Difference between connection oriented and connectionless protocols.

Answer)


| Connection-oriented protocols | Connectionless protocols |
| --- | --- |
| There exists a dedicated link between the source and the destination. | There is no dedicated path between the source and the destination. |
| All the data packets of a message follow the same path. | The data packets of a message may follow different paths. |
| The data packets are transmitted in a sequential manner. | The data packets are transmitted randomly. |
| The link between source and destination has to be established beforehand. | The link between source and destination need not be established beforehand. |
| It provides reliable transmission of data. | Data transmission is unreliable. |
| The data packets reach the destination in order. | The data packets may reach the destination out of order. |
| It guarantees data delivery. | It does not guarantee data delivery. |
| Data transmission is slower. | Data transmission is faster. |
| More expensive. | Less expensive. |


 

Question 6) What are the protocols used in the transport layer?

Answer) The three important protocols used in the transport layer are:

  1. TCP (Transmission Control Protocol)

  2. UDP (User Datagram Protocol)

  3. SCTP (Stream Control Transmission Protocol)


 

Question 7) What is UDP? what are the fields in UDP header? explain the operation of UDP.

Answer) UDP- User Datagram Protocol

UDP is a transport layer protocol. It is connectionless and unreliable, but it provides limited error checking. UDP is used when a small message has to be sent and reliability is not important. UDP packets are called ‘datagrams’.

The UDP header has a minimum size of 8 bytes. The following are the different fields in the UDP header:

a. Source port number

  • This field is 2 bytes long.

  • It contains the port number of the sending application.

b. Destination port number

  • This field is 2 bytes long.

  • It contains the port number of the receiving application.

c. Total length

  • This field is 2 bytes long.

  • The total length is the length of the header plus the length of the encapsulated data.

d. Checksum

  • This field is 2 bytes long.

  • The checksum field is used for error control.
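As a minimal sketch, the four 2-byte fields can be packed into (and recovered from) an 8-byte header with Python's struct module; the field values below are made up for illustration:

```python
import struct

# Four 2-byte fields in network (big-endian) byte order: source port,
# destination port, total length, checksum. Values are illustrative.
src_port, dst_port, total_length, checksum = 5000, 53, 8 + 12, 0x1C46

header = struct.pack("!HHHH", src_port, dst_port, total_length, checksum)
print(len(header))                     # 8 -- the minimum UDP header size

# Unpacking recovers the four fields from the raw bytes.
print(struct.unpack("!HHHH", header))  # (5000, 53, 20, 7238)
```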


[Figure: UDP header]

UDP OPERATION

In UDP, each datagram is independent of the others. The datagrams are not numbered, and each datagram may travel along a different path.

At the sender, the sending process sends the messages to the outgoing queue. The UDP removes the messages from the outgoing queue one by one and adds the UDP header to them. Then, it sends the datagram to the network layer.

At the receiver, UDP checks whether an incoming queue has been created for the destination port. If such a queue exists, UDP appends the received datagram to it. If not, UDP discards the datagram and asks ICMP to send a ‘port unreachable’ message to the sending process. If the incoming queue overflows, UDP simply drops the datagram.
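This queue-based operation can be sketched with Python's standard socket module on localhost (addresses and message contents are illustrative): binding a port creates the receiver's incoming queue, and each sendto() call produces one independent datagram.

```python
import socket

# Receiver: binding to a port creates the incoming queue for that port.
receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", 0))       # port 0: let the OS pick a port
port = receiver.getsockname()[1]

# Sender: each sendto() call produces one independent datagram.
sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"hello", ("127.0.0.1", port))

data, addr = receiver.recvfrom(1024)  # read one datagram from the queue
print(data)                           # b'hello'

sender.close()
receiver.close()
```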


 

Question 8) What is TCP? explain TCP segment structure with diagram.

Answer) TCP (Transmission Control Protocol) is a connection-oriented and reliable transport layer protocol. It offers process-to-process, full-duplex communication and provides flow control, error control, and congestion control.

TCP Segment Structure

A TCP segment contains a 20 to 60 byte header and the encapsulated data. The TCP header format is given below:


[Figure: TCP header format]

HEADER FIELDS

a. Source port address

This field contains the port number of the sending process.

b. Destination port address

This field contains the port number of the receiving process.

c. Sequence number

The sequence number identifies the position of the segment's data in the byte stream. It is used for reordering and reassembling the data segments at the receiver.

d. Acknowledgement number

The acknowledgement number is the sequence number of the next byte that the sender of the acknowledgement expects to receive.

e. Header length

This field gives the length of the TCP header in 4-byte words; the header is 20 to 60 bytes long.

f. Reserved

This field is reserved for future use.

g. Control

  • This field defines 6 control bits or flags.

  • The 6 flags are SYN, ACK, URG, PSH, RST, and FIN.

  • These control bits enable connection establishment and termination, flow control, and the mode of data transfer.

h. Window size

This field specifies the size of the window that the sender of the segment is willing to receive; it is used for flow control.

i. Checksum

Checksum field is used for error control.

j. Urgent pointer

The urgent pointer is used to indicate urgent data in the segment that has to be delivered immediately. It is valid only when the URG flag is set.


 

Question 9) What are the features of TCP?

Answer) The following are the features of TCP:

  1. TCP provides connection oriented communication between the sender and receiver.

  2. It provides full duplex communication between the sender and receiver.

  3. TCP numbers the segments using the ‘sequence number’ and ‘acknowledgement number’. The sequence number identifies the data carried in each segment; the acknowledgement number specifies the sequence number of the next byte expected.

  4. TCP keeps track of the data sent in each connection by numbering the bytes (‘byte numbering’).

  5. TCP provides error control and flow control.


 

Question 10) Explain the working of TCP.

Answer) A TCP connection is established using the three-way handshake method, which is explained using the diagram given below:


[Figure: Three-way handshake]

There are three messages involved in the three-way handshake method of connection establishment. They are:

  • SYN

The client sends the SYN segment to the server. This is the first step in connection establishment. The SYN segment does not carry any data; in it, only the SYN flag in the TCP header is set.

  • SYN+ACK

In response to the SYN message from the client, the server sends a SYN+ACK segment to the client. This segment indicates that the server is ready to participate in the connection. It does not carry any data. In the SYN+ACK segment, both the SYN and ACK flags are set.

  • ACK

The ACK segment is sent by the client in response to the SYN+ACK segment. The client can send data along with this ACK segment.

In TCP, connection termination is also done using a three-way handshake. The three messages involved in this process are FIN, FIN+ACK, and ACK.

  • FIN

The client sends the FIN (Finish) segment to the server if it wants to end the connection. The FIN segment may contain the last segment of data. In the FIN segment, the FIN flag is set.

  • FIN+ACK

The server sends the FIN+ACK segment to the client to acknowledge the receipt of the FIN segment. This segment might contain the last segment of data.

  • ACK

The ACK segment, sent by the client, confirms the receipt of the FIN+ACK segment from the server. This segment cannot carry data.
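In practice, both handshakes are carried out by the operating system's TCP implementation; an application merely triggers them. A minimal localhost sketch (the addresses are illustrative):

```python
import socket

# Server side: listen() prepares the socket; accept() completes the
# server's half of the three-way handshake.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))         # port 0: let the OS pick a port
server.listen(1)
port = server.getsockname()[1]

# Client side: connect() sends SYN, receives SYN+ACK, and replies
# with ACK -- the whole three-way handshake in one call.
client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(("127.0.0.1", port))
conn, addr = server.accept()
print(addr[0])                        # 127.0.0.1

# close() begins connection termination (the FIN exchange).
client.close()
conn.close()
server.close()
```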



Question 11) Define congestion control.

Answer) Congestion occurs when the network traffic is too heavy, i.e. when the number of packets sent into the network is much greater than the capacity of the network. The mechanisms used to control or handle congestion in a network are called congestion control mechanisms. The main objective of congestion control is to reduce congestion in the network and to ensure the delivery of packets to their destinations.



 

Question 12) Why does congestion occur?

Answer) Congestion occurs when the network traffic is too heavy, i.e. when the number of packets sent into the network is much greater than the capacity of the network, or when the available bandwidth is insufficient for the data being sent. Congestion leads to the loss of data packets and a decrease in transmission speed.



 

Question 13) What are the methods of congestion control?

Answer) There are two ways of congestion control:

  • Open loop congestion control (congestion prevention)

  • Closed loop congestion control (congestion removal)


[Figure: Congestion control methods]

Open loop congestion control

  • Retransmission Policy

This method concerns the retransmission of data lost or corrupted in transit. Retransmission can itself increase congestion in the network, so a good retransmission policy, with well-tuned retransmission timers, is needed to keep congestion low.

  • Window policy

The Go-Back-N window policy leads to the retransmission of duplicate packets, which increases the chance of congestion in the network. So the Selective Repeat window should be preferred at the sender stations, because it resends only the lost or corrupted data packets (not all the packets in the window).

  • Acknowledgement policy

Acknowledgements themselves add traffic and can contribute to congestion. So an acknowledgement should be sent for every N packets rather than for every packet received. This reduces the chance of congestion in the network.

  • Discarding policy

Discarding less sensitive packets (for example, in audio transmission) does not affect the integrity of the data. So the less sensitive packets may be discarded to reduce congestion.

  • Admission policy

In this method, a router does not allow a station to transmit data into the network if there is a possibility of congestion. It is a QoS mechanism.


Closed loop congestion control

  • Backpressure

It is a node-to-node congestion control mechanism that starts at a congested node and propagates, in the direction opposite to the data flow, towards the source. The congested node stops receiving data from its immediate upstream node, which in turn causes congestion at the upstream nodes, and so on back to the source.

This can be applied only to networks in which each node knows the upstream node from which its data flow is coming.


[Figure: Backpressure (node-to-node congestion control)]
  • Choke packet

In this method, the congested node sends a choke packet directly to the source to inform it of the congestion and to ask it to reduce its traffic. Unlike backpressure, the warning goes from the congested router straight to the source station; the intermediate nodes are not warned.

  • Implicit signaling

In this method, there is no communication between the congested node and the source. The source infers that there is congestion in the network when it does not receive acknowledgements for several of its packets for a while, and it slows down in response.

  • Explicit signaling

In this method, a congested node (or a node that detects congestion) sends a signal to the source or the destination to inform it about the congestion. The signal is included in packets that already carry data, rather than in a separate packet, so the signaling itself does not add traffic. Explicit signaling can occur in either the backward or the forward direction.

a. Backward signaling

In this method, a bit is set in a packet moving in the direction opposite to the congestion. This bit warns the source of congestion and asks it to slow down.

b. Forward signaling

In this method, a bit is set in a packet moving in the direction of the congestion (towards the destination). This bit warns the destination about the congestion; the destination can then respond, for example by slowing down the acknowledgements.


 

Question 14) Define quality of service.

Answer) Quality of Service (QoS) is a measure of the performance of a network. It refers to the set of technologies that work on a network to reduce data loss, latency, and jitter. QoS manages network resources by setting priorities for data in the network, and it manages the available bandwidth to deliver data in a consistent and predictable way.


 

Question 15) Explain the techniques used to improve QoS.

Answer) The techniques used to improve QoS are:

  1. Scheduling

  2. Traffic shaping

  3. Resource reservation

  4. Admission control


SCHEDULING

The scheduling technique is used to improve the QoS. A good scheduling technique treats different flows in a fair manner. The following are the different scheduling techniques:

  • FIFO queuing

In FIFO queuing, the packets which arrive first are processed first. A queue is used for this method.


[Figure: FIFO queuing]

  • Priority queuing

In this method, the data packets are processed according to their priority. Packets with the highest priority are processed first. Each priority level has a separate queue.

  • Weighted fair queuing

It is a modified version of priority queuing. Each priority level has a separate queue, and each queue is assigned a weight: queues with higher priority are assigned larger weights than queues with lower priority. The system processes the queues in round-robin fashion, with the number of packets taken from each queue in a round depending on the weight of the queue.

Example:

3 packets from the first queue, 2 from the second queue, 1 from the third, and so on.
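A minimal sketch of this weighted round-robin selection, with hypothetical queue names and weights (3, 2, 1):

```python
from collections import deque

# Hypothetical queues as (name, weight, packets). Higher-priority
# queues get more packets per round-robin pass.
queues = [
    ("high",   3, deque(["h1", "h2", "h3", "h4"])),
    ("medium", 2, deque(["m1", "m2", "m3"])),
    ("low",    1, deque(["l1", "l2"])),
]

sent = []
while any(q for _, _, q in queues):
    for name, weight, q in queues:
        # Take up to `weight` packets from this queue per round.
        for _ in range(weight):
            if q:
                sent.append(q.popleft())

print(sent)  # ['h1', 'h2', 'h3', 'm1', 'm2', 'l1', 'h4', 'm3', 'l2']
```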


TRAFFIC SHAPING

Traffic shaping mechanism is used to control the amount of traffic sent to the network. There are two methods of traffic shaping: leaky bucket and token bucket.

  • LEAKY BUCKET

This algorithm shapes the bursty traffic into fixed-rate traffic by averaging the data rate. It may drop the packets if the bucket becomes full.

If the traffic consists of fixed-size packets, the process removes a fixed number of packets from the queue at each tick of the clock. If the traffic consists of variable-length packets, the fixed output rate must be based on the number of bytes or bits.

The leaky bucket algorithm is shown in the diagram given below:


[Figure: Leaky bucket algorithm]

Algorithm

  1. Initialize a counter to n at the tick of the clock.

  2. If the counter is greater than the size of the packet at the front of the queue, send the packet and decrement the counter by the packet size. Repeat this step until the counter is smaller than the size of the packet at the front of the queue.

  3. Reset the counter and go to step 1.
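The three steps above can be sketched in Python (a minimal illustration for variable-length packets, with sizes in bytes and a made-up output rate):

```python
from collections import deque

def leaky_bucket_tick(queue, n):
    """One clock tick: send packets while the counter covers their size.

    `queue` holds packet sizes (bytes); `n` is the fixed number of
    bytes the bucket may leak per tick.
    """
    sent = []
    counter = n                        # step 1: initialize the counter
    while queue and queue[0] <= counter:
        size = queue.popleft()         # step 2: send and decrement
        counter -= size
        sent.append(size)
    return sent                        # step 3: counter resets next tick

# Bursty arrivals of variable-length packets (sizes in bytes).
q = deque([200, 400, 100, 700, 300])
print(leaky_bucket_tick(q, 1000))  # [200, 400, 100] -- 700 does not fit
print(leaky_bucket_tick(q, 1000))  # [700, 300]
```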

  • TOKEN BUCKET

The token bucket algorithm allows bursty traffic at a regulated maximum rate. The token bucket algorithm allows idle hosts to accumulate credit for the future in the form of tokens. For each tick of the clock, the system sends n tokens to the bucket. The system removes one token for every cell (or byte) of data sent.


[Figure: Token bucket]

RESOURCE RESERVATION

QoS can be improved if the resources (bandwidth, CPU time, buffer etc.,) needed by the data are reserved beforehand. Thus, resource reservation can be used to improve QoS.


ADMISSION CONTROL

Admission control mechanism is used by the router to accept or reject a flow based on predefined parameters called flow specifications. This can be used to improve the QoS.



 

Question 16) Explain Resource Reservation Protocol (RSVP) with its working.

Answer) The Resource Reservation Protocol (RSVP) is a transport layer protocol. It is used to reserve network resources under the integrated services model, in order to deliver specific levels of quality of service for data streams to users. It is responsible for reserving bandwidth and assigning priority to users.

RSVP is a receiver-initiated protocol: the receiver requests the reservation for a data flow.

RSVP WORKING

The messages used in the RSVP protocol are:

  • Path Messages

The Path message is a multicast message sent by the server to multiple clients. It is used to set up the path from the server to the clients along which resources will later be reserved.

  • Resv Messages

The reservation (Resv) message is sent by the clients after receiving the Path message from the server. It is used to make resource reservations on the routers that support RSVP.

  • PathTear Messages

The PathTear message is sent by the server or the routers. It is used to remove the reservation of path or resources.

  • ResvTear Messages

The ResvTear message is sent by the clients. It is used to remove the reservation state. It is not mandatory, but the ResvTear message enhances network performance.

  • PathErr Messages

When there is an error in the path, the router sends the PathErr message to the server to indicate the error in the path. This message does not remove the path reservation, rather it just informs the server about the error in the path.

  • ResvErr Messages

When the reservation request fails, the ResvErr message is sent to all the clients.

  • ResvConfirm Messages

The ResvConfirm message is used to confirm the acceptance of reservation request.


 

Question 17) Which type of addressing is used in transport layer?

Answer) The port address and socket address are used in the transport layer.

A port address is used to identify a specific process in a device. The port number is a 16-bit integer, so it can take values between 0 and 65535.

A socket address is a combination of both IP address and a port address. It is used to identify the communication end-point.


 

Question 18) Explain leaky bucket algorithm?

Answer) The leaky bucket algorithm shapes the bursty traffic into fixed-rate traffic by averaging the data rate. It may drop the packets if the bucket becomes full.

If the traffic consists of fixed-size packets, the process removes a fixed number of packets from the queue at each tick of the clock. If the traffic consists of variable-length packets, the fixed output rate must be based on the number of bytes or bits.

The leaky bucket algorithm is shown in the diagram given below:


[Figure: Leaky bucket algorithm]


Algorithm

  1. Initialize a counter to n at the tick of the clock.

  2. If the counter is greater than the size of the packet at the front of the queue, send the packet and decrement the counter by the packet size. Repeat this step until the counter is smaller than the size of the packet at the front of the queue.

  3. Reset the counter and go to step 1.


 

Question 19) Explain token bucket algorithm.

Answer) The token bucket algorithm allows bursty traffic at a regulated maximum rate. The token bucket algorithm allows idle hosts to accumulate credit for the future in the form of tokens. For each tick of the clock, the system sends n tokens to the bucket. The system removes one token for every cell (or byte) of data sent.
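A minimal sketch of this behaviour (the rate and capacity values below are made up for illustration):

```python
class TokenBucket:
    """Token bucket sketch: `rate` tokens arrive per clock tick, up to
    `capacity`; sending one unit of data consumes one token."""

    def __init__(self, rate, capacity):
        self.rate = rate
        self.capacity = capacity
        self.tokens = 0

    def tick(self):
        # An idle host accumulates credit, capped by the bucket size.
        self.tokens = min(self.capacity, self.tokens + self.rate)

    def send(self, units):
        # A burst may go out as long as saved tokens cover it.
        if units <= self.tokens:
            self.tokens -= units
            return True
        return False

bucket = TokenBucket(rate=2, capacity=10)
for _ in range(5):       # the host stays idle for five ticks...
    bucket.tick()
print(bucket.send(8))    # True  -- accumulated credit allows a burst
print(bucket.send(8))    # False -- only 2 tokens remain
```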


[Figure: Token bucket algorithm]

 

Question 20) What do you understand by jitter control?

Answer) Jitter refers to the variation in the latency of arriving data packets: some packets take a shorter time to reach the destination, while others take longer. It is caused by congestion or by changes in the route. To control jitter, an expected transit time is computed for each hop along the path. When a packet arrives at a router, the router checks whether the packet is ahead of or behind its schedule; this information is stored in the packet and updated at each hop. If the packet is ahead of schedule, the router holds it back until it is on schedule; if it is behind schedule, the router forwards it as quickly as possible. In this way, jitter can be controlled effectively.
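One common per-hop strategy (a sketch under the assumption that early packets are buffered rather than forwarded immediately) compares each arrival against its expected schedule:

```python
def hold_time(arrival, expected):
    """How long a router should hold a packet to smooth jitter:
    packets ahead of schedule wait until their expected time,
    late packets are forwarded immediately (no extra delay)."""
    return max(0.0, expected - arrival)

# Simulated arrival times (ms) against a 10 ms-per-hop schedule.
expected = [10.0, 20.0, 30.0, 40.0]
arrivals = [8.0, 23.0, 29.5, 40.0]

departures = [a + hold_time(a, e) for a, e in zip(arrivals, expected)]
print(departures)  # [10.0, 23.0, 30.0, 40.0]
```

Note how the departure times are far more evenly spaced than the arrivals: the early packets absorb the variation, at the cost of a little added delay.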



