
Many machines simultaneously sending datagrams through a single gateway or destination can also jam things up pretty badly. In the latter case, a gateway or destination can become congested even though no single source caused the problem. Either way, the problem is basically akin to a freeway bottleneck – too much traffic for too small a capacity. It’s not usually one car that’s the problem; it’s just that there are way too many cars on that freeway at once!

      But what actually happens when a machine receives a flood of datagrams too quickly for it to process? It stores them in a memory section called a buffer. Sounds great, but this buffering action can solve the problem only if the datagrams are part of a small burst. If the datagram deluge continues, eventually exhausting the device’s memory, its flood capacity will be exceeded and it will dump any additional datagrams it receives, just like an overflowing bucket!
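To picture that overflow behavior, here’s a minimal sketch (my own illustration, not from the book) of a receiver whose buffer holds only a few datagrams: a small burst fits in memory, but once the buffer is exhausted, every additional datagram is simply dropped.

```python
# Sketch only: a bounded receive buffer that drops datagrams once full.
from collections import deque

class ReceiveBuffer:
    def __init__(self, capacity):
        self.capacity = capacity          # how many datagrams fit in memory
        self.queue = deque()              # datagrams waiting to be processed
        self.dropped = 0                  # datagrams discarded due to overflow

    def receive(self, datagram):
        if len(self.queue) < self.capacity:
            self.queue.append(datagram)   # buffered: a small burst survives
            return True
        self.dropped += 1                 # buffer full: the "bucket" overflows
        return False

    def process_one(self):
        return self.queue.popleft() if self.queue else None

buf = ReceiveBuffer(capacity=3)
for n in range(10):                       # a burst of 10 datagrams arrives at once
    buf.receive(f"datagram-{n}")
print(f"buffered={len(buf.queue)}, dropped={buf.dropped}")   # buffered=3, dropped=7
```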

      Flow Control

      Since floods and losing data can both be tragic, we have a fail-safe solution in place known as flow control. Its job is to ensure data integrity at the Transport layer by allowing applications to request reliable data transport between systems. Flow control prevents a sending host on one side of the connection from overflowing the buffers in the receiving host. Reliable data transport employs a connection-oriented communications session between systems, and the protocols involved ensure that the following will be achieved:

      ■ The segments delivered are acknowledged back to the sender upon their reception.

      ■ Any segments not acknowledged are retransmitted.

      ■ Segments are sequenced back into their proper order upon arrival at their destination.

      ■ A manageable data flow is maintained in order to avoid congestion, overloading, or worse, data loss.

       The purpose of flow control is to provide a way for the receiving device to control the amount of data sent by the sender.

Because of the transport function, network flood control systems really work well. Instead of dumping and losing data, the Transport layer can issue a “not ready” indicator to the sender, or potential source of the flood. This mechanism works kind of like a stoplight, signaling the sending device to stop transmitting segment traffic to its overwhelmed peer. After the peer receiver processes the segments already in its memory reservoir – its buffer – it sends out a “ready” transport indicator. When the machine waiting to transmit the rest of its datagrams receives this “go” indicator, it resumes its transmission. The process is pictured in Figure 1.11.

Diagram shows the sender transmitting data to the receiver; the receiver sends messages back to the sender such as “Buffer full, not ready – STOP” and “Segments processed – GO,” after which the sender resumes transmitting data to the receiver.

FIGURE 1.11 Transmitting segments with flow control
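If you’d like to see the stop/go signaling of Figure 1.11 in a more concrete form, here’s a minimal sketch (hypothetical names, assuming a simple in-memory receiver) in which the receiver answers each segment with a “ready” or “not ready” indicator, and the sender pauses until the buffered segments have been processed.

```python
# Sketch only: stop/go flow control between a sender and a receiver.
class Receiver:
    def __init__(self, capacity):
        self.capacity = capacity
        self.buffer = []

    def accept(self, segment):
        self.buffer.append(segment)
        # "Buffer full, not ready - STOP" once capacity is reached.
        return "NOT_READY" if len(self.buffer) >= self.capacity else "READY"

    def drain(self):
        processed, self.buffer = self.buffer, []
        # "Segments processed - GO": the sender may transmit again.
        return "READY", processed

def send_all(segments, receiver):
    sent = []
    for seg in segments:
        status = receiver.accept(seg)
        sent.append(seg)
        if status == "NOT_READY":            # sender stops transmitting...
            status, done = receiver.drain()  # ...until the receiver catches up
            print(f"paused after {seg}; receiver processed {done}")
    return sent

receiver = Receiver(capacity=2)
send_all([f"segment-{i}" for i in range(1, 7)], receiver)
```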

      In a reliable, connection-oriented data transfer, datagrams are delivered to the receiving host in the same sequence they’re transmitted. A failure will occur if any data segments are lost, duplicated, or damaged along the way – a problem solved by having the receiving host acknowledge that it has received each and every data segment.

      A service is considered connection-oriented if it has the following characteristics:

      ■ A virtual circuit, or “three-way handshake,” is set up.

      ■ It uses sequencing.

      ■ It uses acknowledgments.

      ■ It uses flow control.

       The types of flow control are buffering, windowing, and congestion avoidance.
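Those characteristics start with the virtual-circuit setup, and the classic example is TCP’s three-way handshake. Here’s a minimal sketch (my own illustration; the sequence numbers are made up) of the exchange that establishes the connection before any application data flows.

```python
# Sketch only: the three messages that set up a virtual circuit (TCP-style).
def three_way_handshake():
    exchange = []

    # Step 1: the sender asks for a connection and proposes a starting
    # sequence number.
    exchange.append(("sender -> receiver", "SYN, seq=100"))

    # Step 2: the receiver agrees, acknowledges the sender's sequence
    # number, and proposes its own.
    exchange.append(("receiver -> sender", "SYN-ACK, seq=300, ack=101"))

    # Step 3: the sender acknowledges the receiver's sequence number;
    # the virtual circuit is established and data transfer can begin.
    exchange.append(("sender -> receiver", "ACK, ack=301"))

    return exchange

for direction, message in three_way_handshake():
    print(f"{direction}: {message}")
```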

      Windowing

      Ideally, data throughput happens quickly and efficiently. And as you can imagine, it would be painfully slow if the transmitting machine had to actually wait for an acknowledgment after sending each and every segment! The quantity of data segments, measured in bytes, that the transmitting machine is allowed to send without receiving an acknowledgment is called a window.

       Windows are used to control the amount of outstanding, unacknowledged data segments.

      The size of the window controls how much information is transferred from one end to the other before an acknowledgment is required. While some protocols measure information by counting packets, TCP/IP measures it by counting the number of bytes.

As you can see in Figure 1.12, there are two window sizes – one set to 1 and one set to 3.

Diagram shows a sender with window size 1 transmitting segment 1 and waiting for the receiver’s acknowledgment, and a sender with window size 3 transmitting segments 1, 2, and 3 before the receiver acknowledges.

FIGURE 1.12 Windowing

      If you’ve configured a window size of 1, the sending machine will wait for an acknowledgment for each data segment it transmits before transmitting another one. If the window size is set to 3, it’s allowed to transmit three data segments before an acknowledgment is received.

      In this simplified example, both the sending and receiving machines are workstations. Remember that in reality, the transmission isn’t based on simple segment counts but on the number of bytes that can be sent!

       If a receiving host fails to receive all the bytes that it should acknowledge, the host can improve the communication session by decreasing the window size.
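Here’s a minimal sketch (my own, counting whole segments rather than bytes to mirror the simplified figure) of how the window size changes the number of times a sender has to stop and wait: with a window of 1 it waits after every segment, while with a window of 3 it can have three unacknowledged segments outstanding before pausing.

```python
# Sketch only: how window size affects the number of stop-and-wait pauses.
def transmit(segments, window_size):
    acks_waited_for = 0
    outstanding = []                        # sent but not yet acknowledged
    for seg in segments:
        outstanding.append(seg)             # transmit the segment
        if len(outstanding) == window_size:
            # Window is full: wait for an acknowledgment covering
            # everything currently outstanding.
            acks_waited_for += 1
            outstanding.clear()
    if outstanding:                         # a final partial window still needs an ack
        acks_waited_for += 1
    return acks_waited_for

segments = [f"segment-{i}" for i in range(1, 7)]
print(transmit(segments, window_size=1))    # 6 waits: one per segment
print(transmit(segments, window_size=3))    # 2 waits: one per group of three
```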

      Acknowledgments

Reliable data delivery ensures the integrity of a stream of data sent from one machine to the other through a fully functional data link. It guarantees that the data won’t be duplicated or lost. This is achieved through something called positive acknowledgment with retransmission – a technique that requires a receiving machine to communicate with the transmitting source by sending an acknowledgment message back to the sender when it receives data. The sender documents each segment measured in bytes, then sends and waits for this acknowledgment before sending the next segment. Also important is that when it sends a segment, the transmitting machine starts a timer and will retransmit if it expires before it gets an acknowledgment back from the receiving end. Figure 1.13 shows the process I just described.

Diagram shows the sender sending segments 1, 2, and 3; the receiver acknowledges; the sender sends segments 4, 5, and 6, but segment 5 is lost in transit; the receiver acknowledges, and the sender re-sends segment 5.

FIGURE 1.13 Transport layer reliable delivery

      In the figure, the sending machine transmits segments 1, 2, and 3. The receiving node acknowledges that it has received them by requesting segment 4 (what it is expecting next). When it receives the acknowledgment, the sender then transmits segments 4, 5, and 6. If segment 5 doesn’t make it to the destination, the receiving node acknowledges that event with a request for the segment to be re-sent. The sending machine will then resend the lost segment and wait for an acknowledgment, which it must receive in order to move on to the transmission of segment 7.
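Here’s a minimal sketch (my own, not the book’s code) of that positive-acknowledgment-with-retransmission idea: the sender treats a missing acknowledgment, whether reported by the receiver or signaled by an expired timer, as its cue to retransmit the segment before moving on.

```python
# Sketch only: resend any segment whose acknowledgment never arrives.
def reliable_transfer(segments, lost_on_first_try):
    delivered = []
    for seg in segments:
        attempt = 1
        while True:
            # The simulated link drops the segment on its first attempt if it
            # is in lost_on_first_try; every retransmission succeeds.
            acked = not (attempt == 1 and seg in lost_on_first_try)
            if acked:
                delivered.append(seg)       # acknowledgment received: move on
                break
            print(f"no acknowledgment for {seg}; retransmitting")
            attempt += 1
    return delivered

segments = [f"segment-{i}" for i in range(1, 7)]
print(reliable_transfer(segments, lost_on_first_try={"segment-5"}))
```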

      The Transport layer, working in tandem with the Session layer, also separates the data from different applications, an activity known as session multiplexing, and it happens when a client connects to a server with multiple browser sessions open. This is exactly what’s taking place when you go someplace online like Amazon and click multiple links, opening them simultaneously to get information when comparison shopping. The client data from each browser session must be separate when the server application receives it, which is pretty slick technologically speaking, and it’s the Transport layer to the rescue for that juggling act!
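To make that juggling act a little more concrete, here’s a minimal sketch (hypothetical session identifiers; real TCP uses port numbers for this) of how the receiving side can separate segments that arrive interleaved from several browser sessions.

```python
# Sketch only: demultiplexing interleaved segments back into their sessions.
from collections import defaultdict

def demultiplex(segments):
    """Group incoming segments by the session they belong to."""
    sessions = defaultdict(list)
    for session_id, data in segments:
        sessions[session_id].append(data)
    return dict(sessions)

incoming = [
    (1, "GET /laptops"),        # first browser tab
    (2, "GET /monitors"),       # second browser tab
    (1, "GET /laptops?page=2"),
    (3, "GET /keyboards"),      # third browser tab
]
print(demultiplex(incoming))
# {1: ['GET /laptops', 'GET /laptops?page=2'], 2: ['GET /monitors'], 3: ['GET /keyboards']}
```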

      The Network Layer

      The
