Solutions: Homework #12 (Compendium of student submissions)

11.13

a. Video on demand:

packet latency: If the video is not live, low packet latency is not a strict requirement. With sufficient buffering, video on demand can tolerate a certain amount of latency without impacting the quality of the video.

packet loss: The importance of packet loss to video depends on the packet size. If each packet carries only a small proportion of the data used to represent a frame of the video, packet loss is not a serious problem: the video need only be a sufficiently convincing representation of the subject to satisfy the human eye, so limited data loss is acceptable. If each packet carries a large proportion of a frame's data, however, packet loss may seriously detract from video quality.

packet corruption: Packet corruption, like packet loss, is tolerable if the corruption is not substantial. If a few bits in a packet representing a video image are flipped, the human eye is unlikely to notice the difference caused by such a small corruption.

b. Remote conferencing:

packet latency: Packet latency is likely to be very detrimental to the experience of remote conferencing users. Propagation and system processing delays already create latency problems for this very latency-sensitive application, so further latency from network factors must be minimized.

packet loss: The impact of packet loss on remote conferencing is likely to be the same as the impact on video on demand (above).

packet corruption: The impact of packet corruption on remote conferencing is likely to be the same as the impact on video on demand (above).

c. Email:

packet latency: As long as the packets eventually arrive, packet latency is not an issue for email. This is a deferred style of communication, and users understand and expect slight delays in the delivery of email.

packet loss: Packet loss requires that packets be retransmitted for email messages. The text messages contained in email do not withstand data loss well.
The loss of even a couple of words or characters can be detrimental to the value of an email message and should therefore be guarded against.

packet corruption: For the same reasons that packet loss cannot be accepted in email, packet corruption is not tolerable.

d. Chatroom:

packet latency: A certain amount of latency in chat applications is acceptable. Still, the acceptable period of latency is probably less than thirty seconds and should average less than about five seconds. Within these approximate limits, packet latency is tolerable.

packet loss: Packet loss requires that packets be retransmitted for chat messages. The written text that comprises chat discussions cannot withstand data loss well. The loss of even a couple of words or characters can be detrimental to the value of chat and should therefore be guarded against.

packet corruption: For the same reasons that packet loss cannot be accepted in chat, packet corruption is not tolerable.

e. Web:

packet latency: Considering that most Web users wait a maximum of ten or fifteen seconds for a page to load before going on to something different, packet latency for web pages must be kept to a minimum. While some users will wait ten or fifteen seconds, many will not, and speedy page loads give website visitors a good impression and experience. Therefore, latency on the Web can be very damaging.
packet loss: Web content (especially HTML) can be very sensitive to data loss. Pages may not load properly or be intelligible to users if packets are lost and not resent. Therefore, unchecked packet loss is not acceptable.

packet corruption: As is true for packet loss, corruption of web content, even if only very small, can destroy the communication or impair it so as to be unusable. Packet corruption is not acceptable on the Web.

11.14

Application QoS -- Completion time:
  Clothing sales: Not so important. The price is fixed and the duration of goods relatively long until sold out.
  Stock trading: Very important. The price fluctuates very frequently; the purchase or selling order must be handled immediately.
  Music playback on demand: Not important. The music file can be deferred; reducing artifacts in the music is more important.

Network QoS -- Throughput:
  Clothing sales: Important. The service should be able to deal with a number of simultaneous orders from multiple consumers.
  Stock trading: Very important. A large number of orders must be handled simultaneously.
  Music playback on demand: Very important. Multiple users must be able to log on to the service simultaneously.

Network QoS -- Latency:
  Clothing sales: Tolerable. The price and orders would not fluctuate or change during the latency.
  Stock trading: Not tolerable for accepting the purchase or selling orders; tolerable for delivering the transaction confirmation to customers.
  Music playback on demand: Tolerable. The service need not be immediate.

Network QoS -- Loss:
  Clothing sales: The content must be delivered in its integrity.
  Stock trading: The content must be delivered in its integrity.
  Music playback on demand: Loss must be limited to prevent interruption of the music.

Network QoS -- Corruption:
  Clothing sales: Not only the content but also the security-related messages can be affected.
  Stock trading: Not only the content but also the security-related messages can be affected; corruption can cause a critical distortion of the security-related information for an order.
  Music playback on demand: Corruption must be limited to prevent noise in the music.

11.22

IIOP supports fully distributed computing through the establishment of a single TCP connection between two Object Request Brokers (ORBs), with the features of message-with-reply services and
support for remote method invocation (RMI), thus enabling communication between ORBs in the course of distributed object management.

Real-time Transport Protocol (RTP) and Real-Time Streaming Protocol (RTSP) support real-time multimedia streaming. RTP provides basic support for real-time streaming and defines mixers and translators that can operate with the local network to adjust coding or penetrate firewalls. RTSP supports the application's ability to control multiple RTP sessions, thus enabling complex multimedia broadcasts over the network.

Hypertext Transfer Protocol (HTTP) is a request/response protocol that governs the interaction of a Web browser and a Web server. The protocol involves four steps: 1) the user activates a URL, 2) an HTTP request is sent to the server, 3) an HTTP response is returned from the server, and 4) the browser displays the document.

11.24

a. If a return acknowledgement message is lost by IP, the source can detect that failure by the absence of an ACK. After waiting in vain for the ACK for a period of time called the time-out, the source will retransmit the TCP packet, again using IP. In fact, the source will repeatedly retransmit the TCP packet until it eventually receives an ACK. The loss can therefore result in increased latency.

b. If the source does not receive an acknowledgement message for a packet, it retransmits the packet while continuing to transmit normal packets. Mixing the retransmitted packet with normal packets causes the TCP packets to be delivered out of order.

c. The flaw can be solved by the use of a message-with-reply service, such as RMI. A message-with-reply service has implicit flow control, since the reply from the recipient provides an implicit acknowledgement that it is ready to receive another message. With this implicit acknowledgement, the source can send another packet to the recipient, even if the number of MaxUnackedPackets is zero.

d. Yes. Reducing MaxUnackedPackets (via congestion detection) would reduce offered traffic and the size of queues in packet switches.
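The time-out-and-retransmit behavior described in part (a) can be sketched in a few lines of Python. This is an illustrative simulation, not real TCP: the names `stop_and_wait_send`, `loss_rate`, and `max_tries` are our assumptions, and loss of a packet and loss of its ACK are collapsed into one random event, since the source cannot distinguish them.

```python
import random

def stop_and_wait_send(packets, loss_rate=0.3, max_tries=100, seed=1):
    """Simulate the behavior in (a): after a time-out with no ACK,
    the source retransmits the packet until an ACK finally arrives."""
    rng = random.Random(seed)
    transmissions = 0
    for seq, _payload in enumerate(packets):
        for _attempt in range(max_tries):
            transmissions += 1
            # Either the packet or its ACK may be lost by IP; in both
            # cases the source sees no ACK before the time-out.
            ack_received = rng.random() > loss_rate
            if ack_received:
                break
        else:
            raise RuntimeError(f"packet {seq} never acknowledged")
    return transmissions
```

On a lossy channel the total number of transmissions exceeds the number of packets, which is exactly the artificial extra traffic discussed in problem 11.29(b).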
Reducing MaxUnackedPackets in this way is flow control, i.e., reducing throughput.

11.26

Feature                                    Old TCP (TCP)      New TCP (NTCP)
Transport Protocol Data Unit (TPDU) types  1                  9
Connection collision                       1 connection       2 connections
Quality of service                         Specific options   Open ended
Important data                             Urgent             Expedited
Explicit flow control                      Always             Sometimes

1. NTCP uses nine different TPDU types whereas TCP has only one. This difference makes TCP simpler, but it also requires a larger header, because all fields must be present in all TPDUs. The minimum size of the TCP header is 20 bytes; the minimum size of the NTCP header is 5 bytes.
2. A second difference concerns what happens if two processes simultaneously attempt to set up connections between the same two TSAPs (i.e., a connection collision). With NTCP, two independent, full-duplex connections are established. With TCP, a connection is identified by a pair of TSAPs, so only one connection is established.

3. The issue of quality of service is also handled differently in the two protocols. NTCP has a rather elaborate and open-ended mechanism for a three-way negotiation of the quality of service. This negotiation involves the calling process, the called process, and the transport service itself. Many parameters can be specified, and both target and minimum acceptable values can be given. TCP, in contrast, does not have a quality-of-service field at all, but the underlying IP service has an 8-bit field that allows a choice to be made from a limited combination of speed and reliability.

4. A fourth difference concerns how important data that need special processing are dealt with. NTCP has two independent message streams, regular data and expedited data, multiplexed together. Only one expedited message may be outstanding at any instant. TCP uses the Urgent field to indicate that some number of bytes within the current TPDU are special and should be processed out of order.

5. TCP always uses an explicit flow control mechanism, with the window size specified in each TPDU. NTCP can use a credit scheme, but it can also rely on the window scheme of the network layer to regulate the flow. There may be problems if the grant of a large window and its subsequent retraction arrive in the wrong order. In TCP there is no solution to this problem. In NTCP it is solved by the sequence number included in the retraction, which allows the sender to determine that the small window followed, rather than preceded, the large one.

11.29

a. The amount of space in a queue is limited. If a packet arrives at a host whose buffer is full, then the packet must be discarded.
The larger the buffer, the longer the queue can grow and the fewer packets will be discarded. Similarly, faster processing of received packets, and the consequent faster emptying of the buffer, will also reduce the number of packets lost.

b. When a packet is discarded because the queue has reached its maximum length, that packet must be retransmitted. Each such retransmission artificially increases the traffic offered to the system.

c. The queue size affects the maximum latency of packets delivered during periods of congestion because latency is defined to include the time that packets spend waiting in a queue. As the length of the queue increases, so does the message latency.

d. If the queue is not sufficiently large to handle all packets, then some packets will be discarded and will need to be retransmitted. This is a situation to be avoided because it reduces the overall efficiency of the network and, as a result, creates a cascading effect of even further congestion. Packets received and stored in a queue do not need to be retransmitted, so message latency is reduced because network congestion is reduced. The message latency is still affected by the packets waiting in the queue for processing, however.

12.10

Bitrate = (2 x 10^8 meters/second) x (1 bit / 3 meters) = 6.67 x 10^7 bits/second = 66.67 megabits/second
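The 12.10 arithmetic can be double-checked with a short Python snippet (this just restates the calculation from the problem; the variable names are ours):

```python
# A bit occupies 3 meters of the medium and propagates at 2 x 10^8 m/s,
# so the number of bits passing any point per second is speed / bit length.
propagation_speed = 2e8   # meters per second
bit_length = 3            # meters occupied by one bit
bitrate = propagation_speed / bit_length   # bits per second
print(bitrate / 1e6)      # megabits per second, about 66.67
```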
12.11

a.

Problem area: Computer
Possible problem: The computer hardware and software are slow and cannot read or send at speeds acceptable to the modem and ISP.
Solutions: 1. Upgrade the software, which will optimize your connection speeds. 2. Control the number of applications being run on the machine. An increase in the number of processes decreases the time allocated to each process per time slice, so on a slow processor the send/receive process will receive relatively few CPU cycles.

Problem area: Modem
Possible problem: The modem is sending and receiving data too slowly.
Solution: Upgrade the modem. This is not a complete solution, since modems have a relatively low bandwidth upper limit of 56 kbps.

Problem area: Internet Service Provider (ISP)
Possible problems: The ISP's software is slow to reply to your requests due to congestion, or the ISP does not have enough modems for the number of calls.
Solution: Change service providers, or pay more to the current service provider to guarantee a better connection.

Problem area: Distance to service provider
Possible problem: The propagation delay of the signal is a hard upper limit regardless of the protocol.
Solution: Move closer to the ISP.

Problem area: Protocol used with service provider
Possible problem: The connection to the ISP may be slow, such as an ordinary phone line.
Solution: Upgrading to cable/DSL or fiber optics will speed up the connection; the protocol used with each of these connection types will determine the speed. Note that upgrading could cause significant price increases and introduce large setup costs for laying new wires.

b.
Problem area: Internet
Possible problem: The Internet in general is congested.

Problem area: Remote site
Possible problem: The requested remote website cannot handle the number of requests it receives, or it is using slow technology.

Problem area: Unreasonable expectations
Possible problem: Technology can only go so fast. You might have to readjust expectations, since there are hard limits to this communication.

c.

Without spending money: Try the communication at different times of the day. If the speed remains the same, then the problem is most likely on your end, since speed problems outside your control are usually intermittent and dependent on the time of day. Try the communication on a machine with a supposedly faster connection (like from SIMS). If the communication is about the same, then the problem is most likely the remote web server.

Spending money: Upgrade the modem (if this does not fix the problem, you can always return it!). Try a different service provider. Buy a new computer.

12.13

Case  Bitrate (bits/sec)  Delay (sec)  Bitrate-delay product (bits)  Msg size (bits)  Limited by         Comments
a)    2.88E+04            4.00E-02     1.15E+03                      8.00E+03         bitrate            message is greater than the bitrate-delay product
b)    2.88E+04            4.00E-02     1.15E+03                      3.20E+02         propagation delay  message is smaller than the bitrate-delay product
c)    4.50E+07            1.00E-02     4.50E+05                      7.20E+06         bitrate            same as a)
d)    4.50E+07            1.00E-02     4.50E+05                      1.12E+07         bitrate            same as a)
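The "Limited by" column in 12.13 follows mechanically from comparing the message size with the bitrate-delay product, and the table can be reproduced with a short sketch (function and variable names are ours, not from the text):

```python
def limiting_factor(bitrate_bps, delay_s, msg_bits):
    """If the message is larger than the bitrate-delay product, the
    transfer time is dominated by the bitrate; otherwise the one-way
    propagation delay dominates."""
    product = bitrate_bps * delay_s
    return "bitrate" if msg_bits > product else "propagation delay"

# (bitrate, delay, message size) for cases a) through d) of 12.13
cases = {
    "a": (2.88e4, 4.00e-2, 8.00e3),
    "b": (2.88e4, 4.00e-2, 3.20e2),
    "c": (4.50e7, 1.00e-2, 7.20e6),
    "d": (4.50e7, 1.00e-2, 1.12e7),
}
results = {case: limiting_factor(*args) for case, args in cases.items()}
```

Only case b), with its 320-bit message against a 1150-bit product, is limited by the propagation delay.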