
521261A Computer Networks I (spring 2008) Problem set #5

Problem solving session #5 Time: Fri 25.4.2008 at 12:15-13:45 Location: TS101
Intermediate exam #5 Time: Mon 28.4.2008 at 10:15-11:45 Location: L2 Material: lectures #9-#13
Remember to bring a photo ID and a calculator complying with the department's rules

Problem #1 (Q) A web page consisting of a base HTML page (10 kB in size) and ten JPEG images (each 50 kB in size) is requested by a browser in a 10 Mbps network, where the RTT between the browser and the server is 100 ms. Compute the total HTTP response time for: (a) non-persistent HTTP with no parallel connections; (b) persistent HTTP with pipelining. For simplicity, you can ignore any server idle times, network congestion and TCP slow start (i.e. TCP is assumed to transmit at full speed immediately after the connection is set up).

Problem #1 (A) (a) Non-persistent HTTP with no parallel connections:
Total HTTP response time = response time for the HTML page + response times for the ten images
= connection setup (RTT) + request for HTML page (RTT) + transmission delay for HTML page + 10 x (connection setup (RTT) + request for image (RTT) + transmission delay for image)
= 2 x RTT + (10 x 10^3 x 8) / (10 x 10^6) + 10 x (2 x RTT + (50 x 10^3 x 8) / (10 x 10^6))
= 208 ms + 10 x (200 ms + 40 ms) = 208 ms + 2400 ms = 2.608 s
(b) Persistent HTTP with pipelining:
Total HTTP response time = response time for the HTML page + response times for the ten images
= 2 x RTT + transmission delay to request and receive the base HTML file + 1 x RTT + transmission delay to request and receive the ten images
= 208 ms + 100 ms + 400 ms = 0.708 s
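The same arithmetic can be scripted; the following is a minimal Python sketch (variable names are mine), interpreting the object sizes as kilobytes, as the solution above does.

```python
# Problem #1 response times, ignoring slow start and server idle time
# as the problem statement allows.
RTT = 0.100             # round-trip time, seconds
RATE = 10e6             # link rate, bits per second
HTML_BITS = 10e3 * 8    # 10 kB base page
IMG_BITS = 50e3 * 8     # each of the ten 50 kB images

# (a) Non-persistent HTTP, no parallel connections: every object costs one
#     connection-setup RTT, one request RTT, and its transmission delay.
t_nonpersistent = (2 * RTT + HTML_BITS / RATE) + 10 * (2 * RTT + IMG_BITS / RATE)

# (b) Persistent HTTP with pipelining: fetch the base page as above, then one
#     extra RTT for the pipelined image requests plus their transmission delays.
t_persistent = (2 * RTT + HTML_BITS / RATE) + RTT + 10 * IMG_BITS / RATE

print(f"(a) {t_nonpersistent:.3f} s   (b) {t_persistent:.3f} s")
# (a) 2.608 s   (b) 0.708 s
```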

Problem #2 (Q) Consider the figure right, where an institutional network is connected to the Internet. Suppose that the average object size is 900000 bits and that the average request rate from the institution's browsers to the origin servers is in total 1.5 requests per second. Suppose that the amount of time from when the router on the Internet side of the access link forwards an HTTP request to the origin servers till it receives the response is two seconds on average. Model the total average response time as the sum of the average access delay (that is, the delay from the Internet router to the institution router) and the average Internet delay. For the average access delay use Δ/(1 − Δβ), where Δ is the average time required to send an object over the access link and β is the arrival rate of objects to the access link. (a) Compute the total average response time. (b) Now suppose a web proxy server (cache) is installed in the institutional LAN. Suppose that the hit rate of the proxy server (that is, the proportion of the HTTP requests satisfied by the proxy server) is 0.4. Compute the total average response time, assuming that the response time is approximately zero when the request is satisfied by the proxy server.

Problem #2 (A) (a) The average time to transmit an object of 900000 bits over the 1.5 Mbps link: Δ = (0.9 x 10^6 bits) / (1.5 x 10^6 bits/s) = 0.6 s
Arrival rate of objects to the access link: β = 1.5 /s
Average access delay = Δ/(1 − Δβ) = 0.6 s / (1 − 0.6 s x 1.5 /s) = 0.6 s / (1 − 0.9) = 6.0 s
Average Internet delay = 2.0 s
Total average response time = average access delay + average Internet delay = 6.0 s + 2.0 s = 8.0 s
[Δβ = 0.6 s x 1.5 /s = 0.9 is the traffic intensity on the access link]

Problem #2 (A) (cont.) (b) 60% of requests are satisfied by the origin servers (i.e. 40% by the proxy server), hence the arrival rate of objects to the access link is now: β = 60% x 1.5 /s = 0.9 /s
Average access delay = Δ/(1 − Δβ) = 0.6 s / (1 − 0.6 s x 0.9 /s) = 0.6 s / (1 − 0.54) = 1.30 s
Average response time for the 60% of requests satisfied by the origin servers = average access delay + average Internet delay = 1.30 s + 2.0 s = 3.30 s
Average response time for the 40% of requests satisfied by the proxy server = 0.0 s
Total average response time = 0.6 x 3.30 s + 0.4 x 0.0 s = 1.98 s
[Now the traffic intensity on the access link is 0.54]
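For reference, a short Python sketch of the same calculation (the function and variable names are mine); it reproduces both parts of the answer.

```python
# Problem #2: average response time with the access-delay model delta/(1 - delta*beta).
OBJECT_BITS = 900_000
LINK_RATE = 1.5e6        # access link rate, bits per second
REQ_RATE = 1.5           # requests per second from the institution's browsers
INTERNET_DELAY = 2.0     # average Internet-side delay, seconds

delta = OBJECT_BITS / LINK_RATE              # 0.6 s to send one object over the access link

def total_response_time(hit_rate: float) -> float:
    beta = (1 - hit_rate) * REQ_RATE         # only cache misses cross the access link
    access_delay = delta / (1 - delta * beta)
    miss_time = access_delay + INTERNET_DELAY
    return (1 - hit_rate) * miss_time + hit_rate * 0.0   # cache hits take ~0 s

print(f"(a) no cache:     {total_response_time(0.0):.2f} s")   # 8.00 s
print(f"(b) hit rate 0.4: {total_response_time(0.4):.2f} s")   # 1.98 s
```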

Problem #3 (Q) (a) Consider the two modes of communication between a managing entity and a managed device, request-response and trapping. What are the pros and cons of these two approaches in terms of: (i) overhead; (ii) notification time when exceptional events occur; and (iii) robustness with respect to lost messages between the managing entity and the device? (b) Why do you think the designers of SNMP chose UDP rather than TCP as the transport protocol of choice for SNMP? (c) Because SNMP uses two different port numbers (UDP ports 161 and 162), a single system can easily run both a manager and an agent. What would happen if the same port number were used for both?

Problem #3 (A) (a) (i) Overhead: Request-response mode will generally have more overhead (measured in terms of the number of messages exchanged) for several reasons. First, each piece of information received by the manager requires two messages: the poll and the response. Trapping generates only a single message, from the managed device to the manager. If the manager really only wants to be notified when a condition occurs, polling has more overhead, since many of the polling messages may indicate that the waited-for condition has not yet occurred. Trapping generates a message only when the condition occurs. (ii) Notification time: Trapping will immediately notify the manager when an event occurs. With polling, the manager will need to wait for half a polling cycle (on average) between when the event occurs and when the manager discovers (via its poll message) that the event has occurred. (iii) Robustness with respect to lost messages: If a trap message is lost, the managed device will not send another copy. If a poll message, or its response, is lost, the manager would know there has been a lost message (since the reply never arrives) and could repoll, if needed.

Problem #3 (A) (cont.) (b) Often, the time when network management is most needed is in times of stress, when the network may be severely congested and packets are being lost. With SNMP running over TCP, TCP's congestion control would cause SNMP to back off and stop sending messages at precisely the time when the network manager needs to send SNMP messages. (c) If the same port were used for both traps and requests, separating the manager from the agent in the same system would be difficult.

Problem #4 (Q) Consider the following two FEC schemes. Scheme A generates a redundant chunk for every four original chunks, by exclusive-ORing the four original chunks. Scheme B piggybacks onto the original stream a lower bit-rate stream whose bit rate is 25% of the bit rate of the original stream. (a) How much additional bandwidth does each scheme require? (b) How much playback delay does each scheme add? (c) Which scheme will provide better audio quality if the first packet is lost in every group of five packets in transmission? (d) Which scheme will provide better audio quality if the first packet is lost in every group of two packets in transmission?

Problem #4 (A) (a) Scheme A requires 25% more bandwidth (one redundant chunk is generated for every four original chunks). Scheme B requires 25% more bandwidth as well. (b) Scheme A has a playback delay of 5 packets, to receive all 5 (4+1) packets. Scheme B has a playback delay of 2 packets, to receive the next packet containing the lower bit-rate chunk. (c) Scheme A. Scheme A will be able to reconstruct the original high-quality audio encoding. Scheme B will use the low-quality audio encoding for the lost packets and will therefore have lower overall quality. (d) Scheme B. In scheme A many of the original packets will be lost and audio quality will be very poor. In scheme B every audio chunk will be available at the receiver, although only the low-quality version will be available for every other chunk; audio quality will still be acceptable.

Problem #5 (Q) Consider the figure below, where a sender begins sending packetized audio at t=1. The first packet arrives at the receiver at t=8. Each vertical/horizontal line segment has a length of 1, 2, or 3 time units. (a) What are the delays (from sender to receiver, ignoring any playout delays) of the 2nd, 3rd, 4th and 5th packets sent? (b) If audio playout begins as soon as the 1st packet arrives at the receiver at t=8, which of the first 8 packets will not arrive in time for playout? (c) Repeat (b) for playout beginning at t=9. (d) What is the minimum playout delay at the receiver that results in all of the first 8 packets arriving in time for playout?

Problem #5 (A) (a) Delays in slots (time units): 2nd packet 8, 3rd packet 8, 4th packet 7, 5th packet 9 (b) Packets 2, 3, 5, 6, 7 and 8 will not be received in time for playout, if playout begins at t=8. (c) Packets 5 and 6 will not be received in time for playout, if playout begins at t=9. (d) If playout begins at t=10, no packets will arrive after their playout time.
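The playout bookkeeping for the packets whose delays are listed above can be checked with a few lines of Python. This is a sketch under the assumption that packet i is sent at t=i and is scheduled to play one time unit after its predecessor; only the delays of packets 1-5 are given in the answer, so packets 6-8 from the figure are omitted.

```python
# Problem #5: which packets miss their playout time, for the delays given above.
delays = {1: 7, 2: 8, 3: 8, 4: 7, 5: 9}      # packet number -> sender-to-receiver delay

def late_packets(playout_start: int):
    # Packet i is sent at t = i, arrives at t = i + delay, and is scheduled to
    # play at t = playout_start + (i - 1).
    return [i for i, d in delays.items() if i + d > playout_start + (i - 1)]

print(late_packets(8))                 # [2, 3, 5]  (packets 6-8 not checked here)
print(late_packets(9))                 # [5]
print(max(delays.values()) + 1)        # 10 = earliest playout start that works for packets 1-5
```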

Problem #6 (Q) Consider the system on the next slide, where a token bucket polices a stream of packets. The token bucket can hold at most two tokens and it is initially full at t=0. New tokens arrive at a rate of 1 token per time slot. The output link speed is such that if two packets obtain tokens at the beginning of a time slot, they can both pass to the output link in the same slot. The timing details of the system are as follows: a. Packets (if any) arrive at the beginning of the slot. Thus, in the diagram below, packets 1 and 2 arrive in slot 0. If there are already packets in the queue, then the arriving packets join the end of the queue. Packets proceed toward the front of the queue in FIFO manner. b. If, after the arriving packets (if any) have been added to the queue, there are any queued packets, one or two of those packets, depending on the number of available tokens, will each remove a token from the token buffer and pass to the output link during that slot. Thus, as shown in the diagram below, packets 1 and 2 each remove a token from the buffer (since there are initially two tokens) and pass to the output link during slot 0. c. A new token is added to the token bucket, if it is not full, at the token generation rate of 1 token/slot. d. Time then advances to the next time slot, and steps a-d are repeated.

Problem #6 (Q) (cont.) In the form of a table, show for each time slot from t=0 to t=8: (i) The packets that are in the queue and the number of tokens in the bucket, immediately after the arrivals have been processed (see step a above) but before any of the packets have passed through the queue and removed a token. Thus, for the t=0 time slot in the diagram, packets 1 and 2 are in the queue, and there are two tokens in the buffer. (ii) The packets that appear on the output after the token(s) have been removed from the queue. Thus, for the t=0 time slot in the diagram, packets 1 and 2 appear on the output link from the token bucket during slot 0. Thus, the first row in the table is: slot 0; packets 1 and 2 in queue; 2 tokens in bucket before output; packets 1 and 2 on output.

Problem #6 (A)

Time slot | Packets in queue | Tokens before output | Packets on output
0         | 1, 2             | 2                    | 1, 2
1         | 3                | 1                    | 3
2         | 4, 5             | 1                    | 4
3         | 5, 6             | 1                    | 5
4         | 6                | 1                    | 6
5         | none             | 1                    | none
6         | 7                | 2                    | 7
7         | 8, 9, 10         | 2                    | 8, 9
8         | 10               | 1                    | 10
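A slot-by-slot Python sketch of the policer reproduces this table. The arrival pattern below is the one implied by the figure and the solution table; variable names are mine.

```python
from collections import deque

# Problem #6: packets arriving at the start of each slot (read off the figure/table).
arrivals = {0: [1, 2], 1: [3], 2: [4, 5], 3: [6], 6: [7], 7: [8, 9, 10]}
queue, tokens = deque(), 2          # FIFO queue; token bucket of capacity 2, initially full

print("slot  packets in queue  tokens  output")
for slot in range(9):
    queue.extend(arrivals.get(slot, []))            # step a: arrivals join the queue
    in_queue, before = list(queue), tokens
    sent = []
    while tokens > 0 and queue:                     # step b: up to 'tokens' packets pass
        sent.append(queue.popleft())
        tokens -= 1
    tokens = min(2, tokens + 1)                     # step c: add one token, capped at 2
    q_str = ", ".join(map(str, in_queue)) if in_queue else "none"
    out_str = ", ".join(map(str, sent)) if sent else "none"
    print(f"{slot:<5} {q_str:<17} {before:<7} {out_str}")
```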

Problem #7 (Q) Consider the figure below. In questions (a)-(d), assuming the designated service, show in a table the time at which each of packets 2 through 12 leaves the queue. Further, for each packet also show the delay between its arrival and the beginning of the slot in which it is transmitted, and compute the average delay over the 12 packets. (a) FIFO service. (b) Priority service, so that odd-numbered (1, 3, ...) packets are high priority and even-numbered (2, 4, ...) packets are low priority. (c) Round robin service, so that packets 1, 2, 3, 6, 11, and 12 are from class 1, and the other packets are from class 2. (d) WFQ service, so that odd-numbered packets are from class 1, and even-numbered packets are from class 2. Class 1 has a WFQ weight of 2, while class 2 has a WFQ weight of 1. Note that it may not be possible to achieve an ideal WFQ schedule for each packet.

Problem #7 (A) (a) FIFO service.

Packet | Arrival slot | Transmission slot | Delay
1      | 0            | 0                 | 0
2      | 1            | 1                 | 0
3      | 1            | 2                 | 1
4      | 2            | 3                 | 1
5      | 3            | 4                 | 1
6      | 3            | 5                 | 2
7      | 3            | 6                 | 3
8      | 5            | 7                 | 2
9      | 6            | 8                 | 2
10     | 7            | 9                 | 2
11     | 8            | 10                | 2
12     | 8            | 11                | 3

Average delay = 19/12 = 1.583 slots

Problem #7 (A) (cont.) (b) Priority service, so that odd-numbered (1, 3, ...) packets are high priority and even-numbered (2, 4, ...) packets are low priority.

Packet | Class | Arrival slot | Transmission slot | Delay
1      | H     | 0            | 0                 | 0
2      | L     | 1            | 2                 | 1
3      | H     | 1            | 1                 | 0
4      | L     | 2            | 5                 | 3
5      | H     | 3            | 3                 | 0
6      | L     | 3            | 7                 | 4
7      | H     | 3            | 4                 | 1
8      | L     | 5            | 9                 | 4
9      | H     | 6            | 6                 | 0
10     | L     | 7            | 10                | 3
11     | H     | 8            | 8                 | 0
12     | L     | 8            | 11                | 3

Average delay = 19/12 = 1.583 slots

Problem #7 (A) (cont.) (c) Round robin service, so that packets 1, 2, 3, 6, 11, and 12 are from class 1, and the other packets are from class 2.

Packet | Class | Arrival slot | Transmission slot | Delay
1      | 1     | 0            | 0                 | 0
2      | 1     | 1            | 1                 | 0
3      | 1     | 1            | 3                 | 2
4      | 2     | 2            | 2                 | 0
5      | 2     | 3            | 4                 | 1
6      | 1     | 3            | 5                 | 2
7      | 2     | 3            | 6                 | 3
8      | 2     | 5            | 7                 | 2
9      | 2     | 6            | 9                 | 3
10     | 2     | 7            | 11                | 4
11     | 1     | 8            | 8                 | 0
12     | 1     | 8            | 10                | 2

Average delay = 19/12 = 1.583 slots

Problem #7 (A) (cont.) (d) WFQ service, so that odd-numbered packets are from class 1, and even-numbered packets are from class 2. Class 1 has a WFQ weight of 2, while class 2 has a WFQ weight of 1. Note that it may not be possible to achieve an ideal WFQ schedule for each packet. The WFQ weights effectively mean that for a set of three arrival slots we would want to transmit two packets from class 1 and one packet from class 2, with class 1 packets going before class 2 packets. We implement WFQ by dividing time into disjoint sets of three arrival slots (0-2, 3-5, 6-8, 9-11) and for each set of arrival slots we consider the packets that are available for transmission during those three slots. Slots 0-2: it is possible to send two class 1 packets (packets 1, 3) and one class 2 packet (packet 2). Slots 3-5: it is possible to send two class 1 packets (packets 5, 7) and one class 2 packet (packet 4). Slots 6-8: it is possible to send only one class 1 packet (packet 9), so we send two class 2 packets (packets 6, 8) after sending packet 9. Slots 9-11: there is only one more class 1 packet (11) to send, followed by the last two class 2 packets (10, 12).

Problem #7 (A) (cont.)

Packet | Class | Arrival slot | Transmission slot | Delay
1      | 1     | 0            | 0                 | 0
2      | 2     | 1            | 2                 | 1
3      | 1     | 1            | 1                 | 0
4      | 2     | 2            | 5                 | 3
5      | 1     | 3            | 3                 | 0
6      | 2     | 3            | 7                 | 4
7      | 1     | 3            | 4                 | 1
8      | 2     | 5            | 8                 | 3
9      | 1     | 6            | 6                 | 0
10     | 2     | 7            | 10                | 3
11     | 1     | 8            | 9                 | 1
12     | 2     | 8            | 11                | 3

Average delay = 19/12 = 1.583 slots

We see that the average delay of a packet is the same in all cases! This illustrates an important conservation law of queuing systems: as long as the queue is kept busy whenever there is a packet queued, the average packet delay will be the same, regardless of the scheduling discipline. Of course, specific packets will suffer higher or lower delays under different scheduling disciplines, but the average will always be the same.
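The conservation-law remark can be checked with a short simulation. Below is a minimal Python sketch (arrival slots taken from the tables above; function and variable names are mine) comparing FIFO and priority service on the same arrival pattern.

```python
# Problem #7: the average delay is the same for any work-conserving discipline.
arrival = dict(zip(range(1, 13), [0, 1, 1, 2, 3, 3, 3, 5, 6, 7, 8, 8]))

def average_delay(pick):
    """One packet transmitted per slot; pick() chooses among already-arrived packets."""
    pending, tx, slot = set(arrival), {}, 0
    while pending:
        ready = [p for p in pending if arrival[p] <= slot]
        if ready:
            p = pick(ready)            # scheduling discipline decides who goes next
            tx[p] = slot
            pending.remove(p)
        slot += 1
    return sum(tx[p] - arrival[p] for p in arrival) / len(arrival)

fifo = average_delay(min)                                                   # lowest packet number first
priority = average_delay(lambda r: min(r, key=lambda p: (p % 2 == 0, p)))   # odd packets = high priority
print(fifo, priority)    # both print 1.5833...
```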

Problem #8 (Q) A host produces packets at a rate of 30 MB/s for a duration of 100 ms. Show the data rate of the resulting packet flow on a timeline, when the traffic generated by the host is controlled with: (a) a token bucket, which is filled to its original capacity of 1 MB and which has a token arrival rate of 5 MB/s; (b) the token bucket of (a) followed by a 10 MB/s leaky bucket.

Problem #8 (A) (a) The amount of data produced by the host: D = 30 MB/s x 100 ms = 3.0 MB
Let's first compute how long the token bucket can support the 30 MB/s burst rate:
orig_cap + token_rate x burst_len = burst_rate x burst_len
1 MB + 5 MB/s x T1 = 30 MB/s x T1
T1 = 1 MB / (30 MB/s - 5 MB/s) = 40 ms
The amount of data transferred during T1: D1 = 30 MB/s x 40 ms = 1.2 MB
This means that 1.8 MB of data (D - D1) remains to be transferred at the token arrival rate of 5 MB/s: T2 = 1.8 MB / 5 MB/s = 360 ms
The data rate of the output of the token bucket on a timeline:
0-40 ms: 30 MB/s
40-400 ms: 5 MB/s

Problem #8 (A) (cont.) (b) The cumulative output C(t) of the token bucket is:
C(t) = 30 MB/s x t, for 0 < t <= 40 ms
C(t) = 1.2 MB + 5 MB/s x (t - 40 ms), for 40 ms < t <= 400 ms
C(t) = 3.0 MB, for t > 400 ms
C(t) is fed to the leaky bucket, which processes L(t) = 10 MB/s x t amount of data in time t. The output of the leaky bucket is the maximum 10 MB/s as long as there is data in the leaky bucket. The amount of data left in the leaky bucket is D(t) = C(t) - L(t):
[D(t) = 20 MB/s x t, for 0 < t <= 40 ms]
D(t) = 1.2 MB + 5 MB/s x (t - 40 ms) - 10 MB/s x t, for 40 ms < t <= 400 ms
[D(t) = 3.0 MB - 10 MB/s x t, for t > 400 ms]
D(t) >= 0 requires 1.2 MB + 5 MB/s x (t - 40 ms) - 10 MB/s x t >= 0, i.e. t <= (1.2 MB - 0.2 MB) / 5 MB/s = 200 ms
Hence, the output of the leaky bucket on a timeline:
0-200 ms: 10 MB/s
200-400 ms: 5 MB/s
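The two timelines can also be computed directly; here is a small Python sketch of the arithmetic (constant names are mine).

```python
# Problem #8: output timelines of the token bucket and the downstream leaky bucket.
BURST_RATE = 30.0    # MB/s, host output during the burst
BURST_LEN = 0.100    # s
BUCKET_CAP = 1.0     # MB, token bucket capacity (initially full)
TOKEN_RATE = 5.0     # MB/s
LEAK_RATE = 10.0     # MB/s, leaky bucket output rate

# (a) How long the token bucket sustains the full 30 MB/s burst, and when the flow ends.
t1 = BUCKET_CAP / (BURST_RATE - TOKEN_RATE)                  # 0.040 s
total = BURST_RATE * BURST_LEN                               # 3.0 MB produced by the host
t_end = t1 + (total - BURST_RATE * t1) / TOKEN_RATE          # 0.040 + 1.8/5 = 0.400 s
print(f"(a) 0-{t1*1e3:.0f} ms: {BURST_RATE:.0f} MB/s, "
      f"{t1*1e3:.0f}-{t_end*1e3:.0f} ms: {TOKEN_RATE:.0f} MB/s")

# (b) The leaky bucket drains at 10 MB/s until its backlog D(t) = C(t) - L(t) hits zero.
backlog_at_t1 = (BURST_RATE - LEAK_RATE) * t1                # 0.8 MB queued at 40 ms
t_empty = t1 + backlog_at_t1 / (LEAK_RATE - TOKEN_RATE)      # 0.040 + 0.8/5 = 0.200 s
print(f"(b) 0-{t_empty*1e3:.0f} ms: {LEAK_RATE:.0f} MB/s, "
      f"{t_empty*1e3:.0f}-{t_end*1e3:.0f} ms: {TOKEN_RATE:.0f} MB/s")
```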

Problem #9 (Q) RTP is used to transmit CD-quality audio, which takes a pair of 16-bit samples 44100 times per second, one sample for each of the two stereo channels. (a) How many packets per second must RTP transmit, if the audio data is transmitted in 1024-byte chunks? (b) What is the bit rate (in bps) of the resulting IP traffic?

Problem #9 (A) (a) Each sample pair occupies 4 bytes (2 x 16-bit samples). This gives a total of 1024/4 = 256 sample pairs per packet. There are 44100 sample pairs per second, so with 256 sample pairs per packet it takes 44100/256 ≈ 172.3 packets to transmit one second's worth of music. (b) One IP datagram = 1024 bytes (data) + 12 bytes (RTP header) + 8 bytes (UDP header) + 20 bytes (IP header) = 1064 bytes. Data rate = 44100/256 x 1064 bytes x 8 bits/byte ≈ 1.47 Mbps
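The same numbers fall out of a few lines of Python (a sketch; constant names are mine).

```python
# Problem #9: RTP packet rate and resulting IP bit rate for CD-quality stereo audio.
SAMPLE_RATE = 44_100          # stereo sample pairs per second
BYTES_PER_PAIR = 4            # 2 channels x 16 bits
CHUNK_BYTES = 1024            # audio payload per RTP packet
HEADER_BYTES = 12 + 8 + 20    # RTP + UDP + IP headers

pairs_per_packet = CHUNK_BYTES // BYTES_PER_PAIR            # 256
packets_per_sec = SAMPLE_RATE / pairs_per_packet            # ~172.3
bit_rate = packets_per_sec * (CHUNK_BYTES + HEADER_BYTES) * 8
print(f"{packets_per_sec:.1f} packets/s, {bit_rate/1e6:.2f} Mbps")   # 172.3 packets/s, 1.47 Mbps
```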

Problem #10 (Q) Consider the network below. The correspondent node sends a UDP segment to the mobile node using Mobile IP. Consider the IP datagrams A, B and C. What are the source and destination IP addresses, and the payload, of these datagrams?

Problem #10 (A)

Datagram | Source IP   | Destination IP | Payload
A        | 102.67.7.8  | 128.119.40.186 | UDP segment
B        | HA's IP     | 79.129.13.2    | datagram A
C        | 79.129.13.2 | 102.67.7.8     | mobile host's reply

Problem #11 (Q) Consider the principle of using a KDC shown below. Is it necessary for the KDC to know for sure it is talking to Alice when it receives a request for a secret key that Alice can share with Bob? Why or why not?

Problem #11 (A) No. Suppose that Trudy had sent the message "I'm Alice and I want to talk to Bob." The KDC would just return K_A,KDC(K_A,B), which can be decrypted only by Alice, because she is the only other entity holding the secret key K_A,KDC.

Problem #12 (Q) The diagram below shows how Alice uses symmetric key cryptography, public key cryptography and a digital signature to provide secrecy, sender authentication and message integrity. With a similar diagram, show the corresponding operations that Bob must perform on the package received from Alice.

Problem #12 (A)
1. Bob deconcatenates (divides) the incoming package into K_B+(K_S) and K_S(m, K_A-(H(m)))
2. Bob decrypts K_B+(K_S) with his private key K_B- to obtain the session key K_S
3. Bob decrypts K_S(m, K_A-(H(m))) with the session key K_S to obtain m, K_A-(H(m))
4. Bob deconcatenates m, K_A-(H(m)) into m and K_A-(H(m))
5. Bob decrypts K_A-(H(m)) with Alice's public key K_A+ to obtain H(m)
6. Bob applies the hash function H to m and compares the outcome to the H(m) obtained in step 5
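As a concrete illustration of these six steps, here is a minimal sketch using the third-party Python "cryptography" package to stand in for the generic primitives on the slide: Fernet for the symmetric session cipher K_S, RSA-OAEP for K_B+/K_B-, and an RSA-PSS signature for K_A-/K_A+ (with this choice, steps 5 and 6 collapse into a single verify() call that recomputes H(m) internally). All names and the packet framing are illustrative, not part of the original problem.

```python
# Illustrative sketch only: Fernet plays K_S, RSA-OAEP plays K_B+/K_B-, RSA-PSS plays K_A-/K_A+.
import json
from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

OAEP = padding.OAEP(mgf=padding.MGF1(hashes.SHA256()), algorithm=hashes.SHA256(), label=None)
PSS = padding.PSS(mgf=padding.MGF1(hashes.SHA256()), salt_length=padding.PSS.MAX_LENGTH)

def alice_send(m: bytes, alice_private, bob_public) -> bytes:
    session_key = Fernet.generate_key()                        # K_S
    signature = alice_private.sign(m, PSS, hashes.SHA256())    # K_A-(H(m))
    payload = json.dumps({"m": m.hex(), "sig": signature.hex()}).encode()
    enc_payload = Fernet(session_key).encrypt(payload)         # K_S(m, K_A-(H(m)))
    enc_key = bob_public.encrypt(session_key, OAEP)            # K_B+(K_S)
    return json.dumps({"enc_key": enc_key.hex(), "enc_payload": enc_payload.hex()}).encode()

def bob_receive(package: bytes, bob_private, alice_public) -> bytes:
    parts = json.loads(package)                                                  # 1. deconcatenate
    session_key = bob_private.decrypt(bytes.fromhex(parts["enc_key"]), OAEP)     # 2. recover K_S
    payload = json.loads(Fernet(session_key).decrypt(bytes.fromhex(parts["enc_payload"])))  # 3.
    m, sig = bytes.fromhex(payload["m"]), bytes.fromhex(payload["sig"])          # 4. split m, K_A-(H(m))
    alice_public.verify(sig, m, PSS, hashes.SHA256())          # 5.+6. check H(m); raises on mismatch
    return m

alice_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
bob_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
pkg = alice_send(b"hello Bob", alice_key, bob_key.public_key())
assert bob_receive(pkg, bob_key, alice_key.public_key()) == b"hello Bob"
```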