CHAPTER 1: OVERVIEW OF COMPUTER NETWORKS
Topics: The Internet, Network Edge, Network Core, Internet Backbone, Loss & Delay, Internet Protocol Stack
THE INTERNET
The Internet is a computer network that connects hosts, or end systems, throughout the world. These hosts could be PCs, mobiles or fridges that run applications. The end systems are connected together by a network of communication links and packet switches. Each of these communication links can have a different transmission rate, which may be physically limited by the medium of transmission. The Internet is radically different from the traditional telephony network and bears a remarkable resemblance to the postal system:
It provides a best effort service.
Data transmitted across the network is sent in segments of data called packets, much like envelopes or packages sent in the postal system. This type of transfer is known as packet switching, whereas a telephone call runs on a circuit-switched network.
The design is based on minimalism: the network equipment (such as a router or a switch) is mostly stateless.
It is decentralised.
The old telephony system used to work by setting up a physical connection from the source to the destination. Using an old rotary telephone, the number of clicks would determine how far a lever would turn at a particular exchange. For example, the first digit, 9, may rotate the lever 9 places in the first exchange and then connect it to another telephone exchange. The next digit, 3, may rotate the lever in the second exchange 3 places, and so on, until all the digits have been entered and a circuit has been established. This setup has one fundamental problem: if one of the exchanges fails, the telephone call cannot take place. A postal system, however, does not experience this problem. The failure of one sorting facility does not inhibit the flow of other messages; they are merely rerouted through another facility. This resilience is a significant part of the rationale behind the Internet. Historically, the Internet was developed for the U.S. military to ensure that communication systems would not fail in the event of a nuclear attack by the Soviet Union.
NUTS & BOLTS VIEW VS SERVICES VIEW OF THE INTERNET
Any means of communication requires a set of rules, and the Internet, which is a network of networks, is no different. These rules are known as protocols: a protocol defines the format and the order of messages exchanged between two or more communicating entities, as well as the actions taken on the transmission and/or receipt of a message or other event. There are many network protocols used to handle different types of data transmission, including TCP, HTTP and UDP. Given the number of protocols in use, it is important that standards are set. This is done through the IETF standards documents called Requests for Comments (RFCs). The Internet can also be thought of as an infrastructure that provides services to (distributed) applications through the use of various access media and protocols. These distributed applications could include web surfing, instant messaging, VoIP, P2P or video streaming.
NETWORK EDGE
The network edge consists of hosts. Hosts can be loosely categorised as clients and servers: a client contains a client program that requests and receives a service from a server program running on another end system, while a server handles the requests for a service. Applications which operate on hosts in this way are said to follow the client-server model; e-mail and newsgroups are typically based on the client-server model. There has also been an increasing use of peer-to-peer applications, in which end systems interact and run programs that perform both client and server functions, as seen in the likes of BitTorrent and KaZaA.

ACCESS NETWORKS
There are different ways for the network edge, or the hosts, to connect to the Internet. Different access networks have different transmission speeds and characteristics, such as being shared or dedicated.

DIAL-UP
Dial-up uses a traditional telephone line to access the Internet at speeds up to 56 kbps. Because this method of access uses the whole telephone line, it is not possible to surf the Internet and be on the telephone simultaneously. Dial-up requires a modem which physically dials the ISP's number over a traditional telephone line. Since the telephone line is built for analog signals, the modems on both ends must perform analog-to-digital and digital-to-analog conversions.

DSL
The most prevalent broadband residential access is the digital subscriber line. DSL works by hooking onto the existing telephone line using a modem. The telephone exchange is terminated with a filter which separates out the phone signals, while the other signals are fed into a digital subscriber line access multiplexer (DSLAM). Using this mechanism (frequency division multiplexing), it is possible to carry both telephone and data signals simultaneously. ADSL (Asymmetric DSL) is commonly deployed to residential users, who generally prefer fast download speeds over fast upload speeds (hence the asymmetric nature of the link). This is done by assigning the downstream channel a larger frequency band:
A high speed downstream channel: 50 kHz-1 MHz
An upstream channel: 4 kHz-50 kHz
Ordinary voice channel: 0-4 kHz
Since the channels are side-by-side in the frequency spectrum, a splitter is often used on the customer's end to separate the data and the telephone signals properly. The actual downstream and upstream transmission rates available to the residence are a function of the distance between the home and the exchange, the gauge of the twisted pair line and the degree of electrical interference. To boost the data rates, DSL relies on advanced signal processing technology and error correction algorithms, which can lead to high packet delays.

CABLE
While DSL and dial-up make use of the existing telephone infrastructure, cable Internet makes use of the cable television infrastructure. Hybrid fiber coax (HFC) is often used: fiber optics connect the cable head end to neighbourhood-level junctions, from which traditional coaxial cable is then used to reach individual houses and apartments. Cable modems divide the HFC network into two asymmetric channels, a downstream and an upstream channel. The downstream rate is usually faster than the upstream rate.
As shown in the figure, cable Internet is a shared broadcast medium (whereas DSL is a point-to-point connection and hence a dedicated rather than shared transmission link). Every packet sent by a home travels on the upstream channel to the head end. For this reason, if several users are simultaneously downloading a video file on the downstream channel, the actual rate at which each user receives its video file will be significantly lower than the aggregate cable downstream rate. Conversely, with only a few simultaneously active users, each can receive close to the full downstream rate.

ETHERNET
Ethernet users use twisted-pair copper wire to connect to an Ethernet switch. It is possible to achieve data transmission rates of up to 10 Gbps. These connections are typically found in small home networks, and on corporate and university campuses.

WIRELESS ACCESS NETWORKS
This access method is a shared network which connects hosts to an access point. When using 802.11 wireless LAN technology, the 802.11n standard provides Internet access speeds of up to 150 Mbps over a distance of a few tens of metres. 3G networks have also been deployed to provide packet-switched wide-area wireless Internet access. WiMAX is a long distance cousin of the WiFi protocol. It operates independently of the cellular network and delivers speeds of up to 10 Mbps over tens of kilometres.
PHYSICAL MEDIA
All physical media (the link that lies between the transmitter and receiver) have bandwidth limits. If we are to transmit digital signals, Fourier analysis tells us that a perfect square wave is the sum of an infinite number of sinusoidal harmonics. Since we cannot generate an infinite number of sinusoids, we must make approximations, and with approximations come uncertainties; an undesirable property for equipment dealing with digital signals. This is the reason why different physical media have different bandwidth limitations, and some are more error prone than others!

TWISTED PAIR
The least expensive and most commonly used guided transmission medium. Twisted pair consists of two insulated copper wires, each about 1 mm thick, arranged in a regular spiral pattern to reduce the electrical interference from similar pairs close by. Typically, a number of pairs are bundled together in a cable by wrapping the pairs in a protective shield. Note that each pair constitutes a single communication link, since we must have a complete circuit and a reference voltage against which we can set up a potential difference and measure the signal. The typical phone line is a category 3 twisted pair, whereas Ethernet cables are usually unshielded twisted pair of category 5 or 6, and allow data rates of 1 Gbps for distances up to a hundred metres. The difference between the categories depends on the number of twists per unit length: Cat 5 cable typically has three twists per inch in each twisted pair of 24 gauge (AWG) copper wires within the cable. This affects the data rate and the degree of crosstalk.

COAXIAL CABLE
A coaxial cable consists of two concentric copper conductors. With this construction and special insulation and shielding, these cables can achieve high bit rates. In cable television and Internet access, the transmitter shifts the digital signal to a specific frequency band, and the resulting analog signal is sent from the transmitter to one or more receivers. Coaxial cable can be used as a guided, bi-directional, shared medium.
FIBER OPTICS
An optical fiber is a thin, flexible medium that conducts pulses of light, with each pulse representing a bit. Each fiber can support rates of up to 50 Gbps, is immune to electromagnetic interference, and has low signal attenuation for up to 100 kilometres; the signal can also be boosted using an optical amplifier. Fiber optics rely on total internal reflection to direct the light through the link. This requires the pulses of light to strike the core-cladding boundary at an angle greater than the critical angle. There are different grades of fiber, each of which requires a different transmission wavelength.

WIRELESS
Wireless media encode a signal using the electromagnetic spectrum. The medium is bidirectional and its propagation is affected by obstructions, interference and reflections. There are several types of wireless links:
Terrestrial microwave
WiFi
Wide-Area
Satellite
RADIO TRANSMISSION
This is an attractive medium because it requires no physical cabling, is capable of penetrating walls and provides connectivity to mobile users over a potentially long distance. The characteristics of a radio signal depend significantly on the propagation environment and the distance over which the signal is to be carried. Environmental considerations determine path loss and shadow fading, multipath fading due to signal reflection, and interference. In the VLF, LF and MF bands, radio waves follow the curvature of the earth, whereas in the HF band they bounce off the ionosphere.

FREE-SPACE OPTICS
Free-space optics involves the use of laser communication systems in free air space, such as across building roof tops. Again, the characteristics of the transmission depend on environmental considerations, such as convection currents that can interfere with the laser beam.

SATELLITE
Satellites receive transmissions on one frequency band, regenerate the signal using a repeater and retransmit the signal on another frequency. Depending on the type of communication, satellites are located at different altitudes. This affects the number of satellites needed for global coverage and the round trip delay time. Satellite communication has largely been unsuccessful commercially due to the significant upfront cost of building satellites and launching them into space.
MODULATION & DEMODULATION
All signals must be modulated and demodulated, especially if we are to represent digital data using analog signals. A binary signal may be frequency modulated, amplitude modulated or phase modulated. Depending on the modulation, the relevant characteristic is demodulated to obtain the original binary signal again. There are more complex methods of modulation/demodulation that increase the bit rate (baud rate x bits/symbol) despite a low baud rate (symbols/sec). This is done by encoding multiple bits in one particular symbol. For example, in the figure below, each point represents one possible combination of amplitude and phase that could stand for a particular binary combination. These methods of modulation were first widely implemented on 56 kbps dial-up modems as QPSK, QAM-16 and QAM-64.

FREQUENCY DIVISION MULTIPLEXING
Each of the channels originally occupies the same frequency range, but by multiplexing we shift the frequency band of each channel so that the channels operate in different frequency bands.

TIME DIVISION MULTIPLEXING
With TDM, each channel is assigned a time slot in which transmission or reception can be performed.
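Returning to the modulation idea above, a rough worked example (the figures here are assumed, not taken from the notes): QPSK encodes 2 bits per symbol, QAM-16 encodes 4 and QAM-64 encodes 6, so a line running at 2,400 symbols per second (2,400 baud) carries 2400 x 2 = 4,800 bps with QPSK, 2400 x 4 = 9,600 bps with QAM-16 and 2400 x 6 = 14,400 bps with QAM-64. This illustrates how the bit rate can exceed the baud rate when each symbol carries several bits.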
WAVELENGTH DIVISION MULTIPLEXING
This is much like FDM, but is given a different name since it is implemented mostly in fibers. By using different wavelengths of light, it is possible to send multiple signals simultaneously down a single fiber. This allows the fiber to be used closer to its full capacity, since no single electronic device is capable of driving the entire link capacity. Unlike the other types of multiplexing, this type of multiplexing requires no electrical power and can be done using prisms. This is part of what makes fiber so attractive.
NETWORK CORE
CIRCUIT SWITCHING VS PACKET SWITCHING
In circuit-switched networks, the resources needed along a path to provide for communication between the end systems are reserved for the duration of the communication session between the hosts. In packet-switched networks, these resources are not reserved; a session's messages use the resources on demand and, as a consequence, may have to wait for access to a communication link. Because resources are assigned as needed, an active user can make use of capacity that would otherwise be hogged, and left idle, by an inactive user holding a reservation. This also gives a single user the opportunity to use the link's bandwidth to its full capacity. Previously we used the postal service and telephone analogy. The telephone service was described as a circuit-switched network because:
It is a bona fide connection, for which the switches on the path between the sender and receiver maintain connection state for that connection.
It has circuit-like guaranteed performance.
It reserves a constant transmission rate in the network's links for the duration of the connection.
The diagram shows the principle behind circuit switching, in which data can only take one path once the circuit is set up. Since the bandwidth has been reserved for this sender-to-receiver connection, the sender can transfer the data to the receiver at the guaranteed constant rate. If we are to have n circuits on one link, then we must divide the network resources by either frequency or time. Each of the n circuits then reserves 1/n of the link's resources, and hence 1/n of the link's bandwidth, for the duration of the connection. Given this situation, an idle user wastes resources, and saturation of the link by a single active user is not possible.
Example
Consider a 640,000-bit file sent over a circuit-switched network that uses TDM with 24 slots and a link bit rate of 1.536 Mbps. If it takes 500 msec to establish an end-to-end circuit, how long will the transmission take? Each circuit has a transmission rate of 1.536 Mbps / 24 = 64 kbps. This means that it takes 640,000 / 64,000 = 10 seconds to transmit the file. The whole transmission, including establishing the connection, takes 10.5 seconds.
For packet-switched networks, each end-to-end data stream is divided into packets. Packet switches tend to use store-and-forward transmission: a packet must be received in its entirety before it can be transmitted on the outbound link. If there are Q links between two hosts, each of rate R bps, then to send a packet of L bits the total delay due to the store-and-forward transmission mechanism is t = QL/R. Since each output link also has a buffer (queue), a packet may experience queuing delays when the output link is busy. These delays are variable and depend on the level of congestion in the network. In the case of a full buffer, packet loss will occur: either the arriving packet or one of the already-queued packets will be dropped.
Let us see why packet-switched networks are more efficient thanks to statistical multiplexing. Consider a 1 Mbps link where each active user requires 100 kbps and is active 10% of the time. With a circuit-switched network it is only possible to service 1 Mbps / 100 kbps = 10 users, since 100 kbps must be reserved for each user at all times. For a packet-switched network with the same properties, the probability that more than 10 of a possible 35 users are active at any one time is less than 0.004:
P(X > 10) = sum over k = 11 to 35 of C(35, k) (0.1)^k (0.9)^(35-k) < 0.004
For more than 99% of the time there are 10 or fewer simultaneously active users, so the aggregate arrival rate of data is less than 1 Mbps, the output rate of the link. This essentially means that packets flow through the link without delay. If there are more than 10 simultaneous users, then the aggregate arrival rate of packets exceeds the output capacity of the link, and the output queue will begin to grow until the aggregate input rate falls back below 1 Mbps. This indicates that packet-switched networks are more efficient for situations in which the data is bursty. But for applications requiring a steady stream of data, such as audio and video applications, this could be an issue, since we no longer have circuit-like guarantees.
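The two worked examples above can be checked numerically. This is a minimal Python sketch, not part of the notes; the 640,000-bit file and 1.536 Mbps link are the figures implied by the 64 kbps per-circuit rate and the 10-second result.

from math import comb

# Circuit switching: 640,000 bit file, 1.536 Mbps link shared by 24 TDM slots,
# plus 500 ms of circuit setup.
file_bits = 640_000
link_rate = 1_536_000            # bps
slots = 24
circuit_rate = link_rate / slots            # 64 kbps per circuit
total_time = 0.5 + file_bits / circuit_rate
print(total_time)                           # 10.5 seconds

# Statistical multiplexing: 35 users, each active 10% of the time.
# Probability that more than 10 are active simultaneously (binomial tail).
n, p = 35, 0.1
p_more_than_10 = sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(11, n + 1))
print(p_more_than_10)                       # roughly 0.0004, i.e. well under 0.004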
INTERNET BACKBONE
The Internet is roughly hierarchical, which means that a packet is generally routed through several Internet Service Providers before reaching its destination. On the outer edge of the Internet, access ISPs connect to the rest of the Internet through a tiered hierarchy of ISPs, with the access ISPs themselves at the bottom. At the very top are the tier-1 ISPs.
Tier 1 ISPs
Are characterised by peering with other tier-1 ISPs, being connected to a large number of tier-2 ISPs and other customer networks, and having national or international coverage. The likes of Sprint, Verizon, MCI and Level 3 are tier-1 ISPs.
Tier 2 ISPs
Typically have regional or national coverage. They may be multi-homed, such that they are connected to a few tier-1 ISPs. To reach a large portion of the public Internet, a tier-2 ISP must route most of its traffic through one of its tier-1 ISPs, hence the name tier-2. To this end, a tier-2 ISP is said to be a customer of its tier-1 ISP provider. Tier-2 ISPs may also choose to connect (peer) with other tier-2 ISPs; this is usually a business strategy.
Tier 3 & Local ISPs
Are at the edge of the Internet and provide access to end users. They are generally customers of either tier-2 or tier-1 ISPs.
Within an ISP's network, the points at which the ISP connects to other ISPs are known as Points of Presence (POPs). A POP is simply a group of routers in the ISP's network to which routers in other ISPs, or in the networks belonging to the ISP's customers, can connect.
LOSS & DELAY
As a packet travels to its destination, it suffers from several types of delay at each node along its path:
Nodal processing delay
Queuing delay
Transmission delay
Propagation delay
These delays accumulate to give a total nodal delay. For the purpose of explanation, we shall consider the router as the node of interest.

NODAL PROCESSING
Nodal processing is the time required to examine each packet's header and determine where to direct the packet next. Nodal processing may also involve bit-level error checking. Typical processing delays are of the order of microseconds or less. Once the router has finished its processing, it will forward the packet to an output queue that precedes the outgoing link.

QUEUEING DELAY
Sometimes the transmission link is busy and packets experience a queuing delay. The length of the queuing delay of a specific packet will depend on the number of earlier-arriving packets that are queued and waiting for transmission across the link. In the absence of any queue, there will be no queuing delay, whereas a congested network will result in a long queuing delay.

TRANSMISSION DELAY
The transmission delay (store-and-forward delay) is the time required to push a complete packet onto a link; it is a function of the packet's length and the link's transmission rate. Normally, nodes only begin transmitting a packet after the whole packet has been received.
Assuming FIFO service, our packet can be transmitted only after all the previously arrived packets have been transmitted. If the packet length is L bits and the transmission rate is R bits/sec, then the transmission delay is given by:
d_trans = L / R

PROPAGATION DELAY
Once the packet is on the link, it needs to propagate down the link. The propagation delay is the time required to travel from one end of the link to the other, and is dependent on the physical medium of the link. It is given by:
d_prop = d / s
where d is the length of the link and s is the propagation speed in the medium (in m/s). Once the last bit of the packet propagates to the adjacent node, it and all the preceding bits of the packet are stored at that node for nodal processing. We thus see that the total nodal delay is given by:
d_nodal = d_proc + d_queue + d_trans + d_prop

PACKET LOSS
In reality, a queue preceding a link has finite capacity. This means that queuing delays do not actually approach infinity as the traffic intensity approaches 1. Instead, a packet can arrive to find a full queue, and either it or another packet already in the queue will be dropped: that packet will have been transmitted into the network core, but will never emerge from the network at the destination. To rectify this, a lost packet may be retransmitted.
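To make the delay components concrete, here is a small Python sketch (not from the notes; all the numbers are assumed example values) that evaluates d_nodal = d_proc + d_queue + d_trans + d_prop for one hop.

# Example figures: a 1500-byte packet over a 10 Mbps, 5 km link.
L = 1500 * 8          # packet length in bits
R = 10e6              # transmission rate: 10 Mbps
d = 5_000             # link length in metres
s = 2e8               # propagation speed in the medium, m/s

d_proc = 2e-6         # assumed processing delay: 2 microseconds
d_queue = 0.0         # assume an empty queue
d_trans = L / R       # 1.2 ms to push the packet onto the link
d_prop = d / s        # 25 microseconds to reach the far end

d_nodal = d_proc + d_queue + d_trans + d_prop
print(d_nodal)        # about 0.00123 s, dominated here by the transmission delay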
INTERNET PROTOCOL STACK
To organise the anarchy of network design, network designers have chosen to organise the protocols in layers. Each protocol belongs to one of the layers, which roughly describes the services it offers. The aggregate of these layers forms a protocol stack, and in the case of the Internet there are five layers:
The Physical Layer: consists of protocols that define how individual bits within the frame move from one node to the next.
Data Link Layer: consists of protocols that define how packets should be transferred between neighbouring nodes (the movement of a packet from one node to the next node in the route).
o PPP, Ethernet
Network Layer: consists of protocols responsible for moving network-layer packets, known as datagrams, from the source to the destination.
o IP, routing protocols such as OSPF
Transport Layer: consists of protocols that transport application-layer messages between application endpoints.
o TCP, UDP
Application Layer: the layer in which network applications and their application-layer protocols reside.
o HTTP, FTP, SMTP, DNS
CHAPTER 2: DATALINK LAYER
Topics: Link Layer Functionality, Framing, Error Control, Multiple Access Control, Flow Control, Reliable Transmission, Link Layer Technologies, Wireless, Ethernet
THE LINK LAYER
The link-layer protocol is used to move a datagram over an individual link. It is mostly implemented on the network interface card (NIC) using a link-layer controller. A link-layer protocol must define the format of the packets exchanged between the nodes at the ends of the link, as well as the actions taken by these nodes when packets are sent and received:
Framing: Data bits are encapsulated at each level of the Internet protocol stack with headers and a data payload, in line with the packet-based nature of the Internet. In the link layer, almost all link-layer protocols encapsulate each network-layer datagram within a link-layer frame prior to transmission.
Error Control: Due to signal attenuation and electromagnetic noise, bit errors can occur, causing the receiving node to incorrectly decipher the frames. Since it is pointless to forward erroneous datagrams, link-layer protocols provide mechanisms to detect (and possibly correct) such bit errors.
Multiple Access Control: In a shared medium, it is important for the medium access control (MAC) protocol to define how the single broadcast link can be shared between a number of nodes; in essence, to coordinate the frame transmissions of the many nodes.
Flow Control: The nodes on each side of a link have a limited amount of frame buffering capacity. This is a concern when a receiving node receives frames at a rate faster than it can process them. Without flow control, the receiver's buffer can overflow and frames can get lost. Note that flow control is often performed in the transport layer.
Reliable Delivery: When a link-layer protocol provides a reliable delivery service, it guarantees to move each network-layer datagram across the link without error. This is often done with acknowledgements and retransmissions. Reliable delivery service is often used for error-prone media such as wireless, with the goal of correcting an error locally rather than forcing an end-to-end retransmission, which may itself be erroneous. This feature may be deemed to introduce unnecessary overhead for low bit-error links and hence may not be provided for media such as wired coax.
These are just some of the actions that can be taken by a link-layer protocol such as Ethernet, PPP or WiFi. Since a link-layer protocol is only responsible for transporting datagrams over an individual link, it is not unusual for a datagram to be carried by different link-layer protocols before it reaches its destination.
FRAMING
Just as we form words using the alphabet, computers must encapsulate segments of data bits into frames. One of the fundamental issues with framing is determining where one frame ends and the next one starts. With words, more often than not, spaces act as a kind of flag that separates one word from another. In a similar fashion, frames can be separated by flags. But what happens when the data itself contains the same bit sequence that represents the flag? As an analogy, consider printing a special character in C with a printf statement: we use the backslash \ as an escape (ESC) sequence. With framing we do exactly the same thing, and insert an escape sequence before the would-be flag. The example above shows how the flag can be part of the original payload, and that when the data is encapsulated into a frame, an escape sequence is used to indicate that the flag sequence in the payload should not be treated as a flag.

FLOW CONTROL
Recall that it is possible for the sender and receiver to be operating at different speeds. When this occurs, the receiver may simply be unable to handle the frames as they arrive, since its buffers are full and it cannot process the frames quickly enough. This results in a loss of frames. To rectify this issue we can implement:
Feedback-based flow control: the receiver sends information back to the sender, giving it permission to send more data or at least telling the sender how the receiver is doing.
Rate-based flow control: the protocol has a built-in mechanism that limits the rate at which senders may transmit data, without the use of feedback.
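The escape-sequence framing described above can be sketched as simple byte stuffing. This is a minimal illustration rather than any particular protocol's framing rules; the FLAG and ESC byte values below are assumed for the example.

FLAG = b'\x7e'   # marks the start and end of a frame
ESC = b'\x7d'    # escape byte inserted before any FLAG or ESC found in the payload

def stuff(payload: bytes) -> bytes:
    out = bytearray(FLAG)
    for byte in payload:
        if bytes([byte]) in (FLAG, ESC):
            out += ESC              # escape the special byte so it is not read as a flag
        out.append(byte)
    out += FLAG
    return bytes(out)

def unstuff(frame: bytes) -> bytes:
    body = frame[1:-1]              # drop the delimiting flags
    out, skip = bytearray(), False
    for byte in body:
        if not skip and bytes([byte]) == ESC:
            skip = True             # the next byte is data, even if it looks like a flag
            continue
        out.append(byte)
        skip = False
    return bytes(out)

payload = b'data \x7e with a flag inside'
assert unstuff(stuff(payload)) == payload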
ERROR CONTROL
With error detection and correction, redundant information is added to the original information such that we can either detect that an error has occurred (but not be able to correct it), or, at the cost of more overhead, be able to correct the error as well using forward error correction techniques. Before we continue, we must define the hamming distance: the number of bit positions in which two codewords differ. With a hamming distance of d, it will require d single-bit errors to convert one codeword into the other. For example, two codewords that differ in exactly three bit positions have a hamming distance of 3. Consider a message size of m bits, with r redundant bits, giving a total of n = m + r bits. This means that 2^m of the possible combinations of bits are valid codewords, and that 2^n - 2^m combinations are invalid.

ERROR DETECTION
To detect d errors, you need a distance d + 1 code, because with such a code there is no way that d single-bit errors can change one valid codeword into another. Consider the situation in which you wish to detect a 1-bit error. In this case, around every valid codeword, all the other codewords which are only 1 bit different (hamming distance of 1) must not be valid. Hence, only codewords which are at least a hamming distance of 2 apart are valid. This shows that a distance d + 1 code is the minimum needed to detect d errors.

ERROR CORRECTION
To correct d errors, you need a distance 2d + 1 code, because then the legal codewords are far enough apart that even with d changes, the original codeword is still closer than any other codeword, and so can be uniquely determined. Let us derive the minimum number of redundant bits required to correct k-bit errors in a total message size of n = m + r bits. Each of the 2^m legal messages has n illegal codewords at a distance of 1 from it, formed by systematically inverting each of the n bits. Thus, to correct 1-bit errors, for each of the 2^m legal messages its n neighbours cannot be valid. We can represent this with a shell, the centre of which is one of the 2^m valid messages and whose radius is a hamming distance of 1. Hence, for 1-bit errors, the condition imposed is:
(n + 1) 2^m <= 2^n, which gives (m + r + 1) <= 2^r
For k-bit errors, the shell must have a radius equal to a hamming distance of k:
(1 + n + C(n,2) + ... + C(n,k)) 2^m <= 2^n
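A small Python sketch (helper names assumed, not from the notes) of the two ideas above: computing a hamming distance, and finding the smallest r that satisfies the 1-bit-error-correction bound (m + r + 1) <= 2^r.

def hamming_distance(a: str, b: str) -> int:
    # number of bit positions in which two equal-length codewords differ
    return sum(x != y for x, y in zip(a, b))

print(hamming_distance("10001001", "10110001"))   # 3 (codewords chosen for illustration)

def min_redundant_bits(m: int) -> int:
    # smallest r with m + r + 1 <= 2**r, i.e. enough shells to correct any 1-bit error
    r = 1
    while m + r + 1 > 2 ** r:
        r += 1
    return r

print(min_redundant_bits(7))   # 4 redundant bits are needed for a 7-bit message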
ERROR DETECTING CODES
PARITY CHECKING
In an even parity scheme, the sender includes one additional bit and chooses its value such that the total number of 1s in the entire message is even. The same idea can be applied with an odd parity scheme. The receiver need only count the number of 1s in the received message: if there is an odd number of 1s, an even parity scheme will flag an error. If, however, an even number of bit errors occurs, the error will go undetected. This method of error checking is rather weak, given that errors are not independent of each other but tend to be clustered together in bursts. Under burst error conditions, the probability of undetected errors in a frame protected by single-bit parity can approach 50%. The following figure shows a two-dimensional generalisation (i message rows and j message columns) of the single-bit parity scheme. A parity value is computed for each row and for each column, resulting in i + j + 1 parity bits. This scheme allows the detection of single and two-bit errors, and by isolating the row and column of a one-bit error, it is possible to correct the error. Since the probability that any one of the j columns has the correct parity by accident is 0.5, the probability of a bad block being accepted when it should not be is 2^(-j).

CHECKSUM
In checksumming techniques, the d bits of data are treated as a sequence of k-bit integers. The simplest approach is to sum these k-bit integers and use the resulting sum as the error detection bits. The Internet checksum is based on this approach: bytes of data are treated as 16-bit integers and summed. The 1s complement of this sum then forms the Internet checksum that is carried in the segment header. The receiver takes the 1s complement of the sum of the received data (including the checksum). If the result is all 1 bits, no error is detected, while any 0 bit indicates that an error has occurred.

CYCLIC REDUNDANCY CHECK (CRC)
CRC codes are also known as polynomial codes, since it is possible to view the bit string to be sent as a polynomial whose coefficients are the 0 and 1 values in the bit string, with operations on the bit string interpreted as polynomial arithmetic.
Example
We can represent 110001 as a six-term polynomial: 1x^5 + 1x^4 + 0x^3 + 0x^2 + 0x^1 + 1x^0
1. Let r be the degree of G(x), the generator polynomial. This is agreed upon by the sender and receiver.
2. For a given piece of data, D(x), with d bits, the sender will choose r additional bits, R, and append them to D(x) such that the resulting d + r bit pattern is exactly divisible by G(x) using modulo-2 arithmetic, with no carries in addition or borrows in subtraction. This means that addition and subtraction are identical, and both are just the bitwise XOR operation. To append the r bits, we bit-shift the d bits: D * 2^r. The resulting r low-order bits are zero padded; thus D * 2^r XOR R gives us the d + r bit message we should send.
3. The receiver divides the d + r received bits by G. Multiplication and division work as in base-2 arithmetic, except that any additions and subtractions are done without carries or borrows. If the remainder is nonzero, the receiver knows that an error has occurred.
The crucial question is how to compute R. The condition is that we want to choose R such that G divides D * 2^r XOR R. This implies that:
D * 2^r XOR R = nG
R = remainder of (D * 2^r) / G
Example
Consider the case in which D = 101110, d = 6 and G = 1001. This indicates that r = 3, since the remainder has one bit fewer than the generator. We do the long division, and the remainder R = 011 is what we append onto the message. Thus the message we send is:
D * 2^r XOR R = 101110011
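The mod-2 long division can be sketched in a few lines of Python. This is an illustration (not from the notes) that reproduces the worked example D = 101110, G = 1001 above.

def crc_remainder(data: str, generator: str) -> str:
    r = len(generator) - 1                 # degree of G(x): number of CRC bits
    dividend = list(data + "0" * r)        # D * 2^r, i.e. D shifted left and zero padded
    for i in range(len(data)):
        if dividend[i] == "1":             # only divide where there is a leading 1
            for j, g in enumerate(generator):
                # subtraction in mod-2 arithmetic is just XOR
                dividend[i + j] = str(int(dividend[i + j]) ^ int(g))
    return "".join(dividend[-r:])          # the last r bits are the remainder R

D, G = "101110", "1001"
R = crc_remainder(D, G)
print(R)           # 011
print(D + R)       # 101110011 -- the d + r bits actually sent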
MULTIPLE ACCESS CONTROL
Multiple access control protocols are not very useful on a point-to-point link in which there is only a single sender and receiver at either end. If, however, we have a broadcast link, then we can have multiple sending and receiving nodes all connected to the same, single, shared broadcast channel. Simultaneous transmission by nodes can result in collisions or interference. The coordination of the shared broadcast medium is thus a problem that must be resolved by multiple access control (MAC) protocols: rules that regulate the transmission of data by nodes on a shared broadcast channel. Regardless of the type of multiple access protocol, whether it be channel partitioning, random access or taking turns, there are several desirable characteristics:
When only one node sends data, the node throughput should be the maximum the broadcast channel supports, say R bps.
When M nodes have data to send, the average throughput for each node should be R/M bps.
The protocol should be decentralised, so that there is no single point of failure.
The protocol should be simple and inexpensive to implement.

CHANNEL PARTITIONING
These types of multiple access mechanisms include FDM and TDM. For a channel that supports N nodes with a transmission rate of R bps, TDM divides time into frames, each further divided into N time slots, one for each of the N nodes. TDM is appealing in the sense that it eliminates collisions and is perfectly fair, but each node can only send at a maximum rate of R/N bps and must wait for its turn in the transmission sequence. For FDM, a single channel is divided into different frequencies, each with a bandwidth of R/N. The result is N smaller channels of R/N bps out of the single larger R bps channel. Again, it prevents collisions and bandwidth is fairly distributed, but it limits any single node to R/N bps even when it is the only active node. CDMA (code division multiple access) assigns a different code to each node, with which the node encodes the data bits it sends. Good CDMA protocols allow simultaneous transmissions to succeed in spite of interfering transmissions by other nodes.
RANDOM ACCESS PROTOCOLS
In a random access protocol, a transmitting node always transmits at the full rate of the channel. When there is a collision, each node involved in the collision repeatedly retransmits (at random intervals) the frame until it is sent successfully.

SLOTTED ALOHA
We shall assume that all frames are L bits, and that time is divided into slots of size L/R seconds. For simplicity, we shall assume that nodes start to transmit frames only at the beginning of slots (at the full rate of the channel) and are synchronised so that they know when the slots begin. When a collision occurs (denoted C on the diagram), the nodes detect the collision before the end of the slot. Each node then retransmits in a subsequent slot with probability p, independently of the other nodes. The probability that exactly one of the N active nodes successfully transmits its data in a slot is:
N p (1 - p)^(N - 1)
There are N possible nodes that can transmit successfully, each with probability p, while the other N - 1 must refrain, each with probability 1 - p. Maximising over p and letting N tend to infinity, the maximum efficiency of the protocol is 1/e. There are three inefficiencies here:
Synchronisation is required.
There are empty slots (denoted E on the diagram), since all the active nodes may refrain from transmitting after a collision as a result of the probabilistic transmission policy.
There are wasted slots due to the collisions themselves.
In pure ALOHA there is no synchronisation, so it is fully decentralised. It is, however, only half as efficient as slotted ALOHA, since there is a higher chance of a collision occurring: a frame can collide with transmissions that begin either shortly before it or during it.
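A quick numerical check (Python sketch, not from the notes) of the slotted ALOHA efficiency claim: maximising N p (1 - p)^(N - 1) over p approaches 1/e as N grows.

from math import e

def slot_success_prob(N: int, p: float) -> float:
    # probability that exactly one of the N active nodes transmits in a slot
    return N * p * (1 - p) ** (N - 1)

for N in (10, 100, 1000):
    best = max(slot_success_prob(N, k / 10000) for k in range(1, 10000))
    print(N, round(best, 4))     # 0.387, 0.370, 0.368: approaching 1/e from above

print(round(1 / e, 4))           # 0.3679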
CSMA
Carrier Sense Multiple Access (CSMA) is an improvement over ALOHA in which the nodes listen to the channel to determine whether it is busy. If it is busy, the node waits for a random amount of time before trying again. Collisions can still occur, since it takes time for signals to propagate through the channel (the channel propagation delay). This can be seen in the space-time diagram. At time t0, B senses that the channel is idle and its transmission propagates in both directions along the medium. At time t1, D still senses an idle channel, since B's transmission hasn't reached it yet, and thus starts transmitting. A short time later, B's transmission begins to interfere with D's transmission at D. If collision detection is implemented, then B and D will not transmit their frames in their entirety.

TAKING TURNS
The polling protocol requires one node to be the master node, which polls each of the other nodes in a round-robin fashion. The master node transmits a message to node 1, telling it that it can begin transmission; after this, it does the same for node 2, and so on. Although this prevents collisions and empty slots, it has polling overhead and a single point of failure. The token-passing protocol is decentralised: a special-purpose frame known as a token is exchanged amongst the nodes in some fixed order, and holding the token indicates that a node has the right to transmit. If a node has nothing to transmit, it passes the token on immediately to the next node. Note that the failure of one node can cause the whole network to fail, and that latency is incurred in passing the token.
LINK LAYER TECHNOLOGIES
WIFI
With WiFi, a wireless host connects to a base station or to another wireless host through a wireless communication link. A base station is responsible for sending and receiving data between wireless hosts and is often used to coordinate the transmission of the multiple wireless hosts with which it is associated. It is also possible to have an ad-hoc mode network in which the hosts communicate with each other directly, by discovering and communicating with hosts in range in a peer-to-peer fashion without the use of base stations. WiFi (802.11) has four main standards: 802.11b, 802.11a, 802.11g and, more recently, 802.11n.
Standard / Frequency / Data Rate
802.11b / 2.4 GHz / 11 Mbps
802.11a / 5 GHz / 54 Mbps
802.11g / 2.4 GHz / 54 Mbps
The above three standards all use CSMA/CA as their MAC protocol and have the ability to reduce their transmission rate in order to reach out over greater distances.

WIRELESS CHARACTERISTICS
There are other inherent problems when we use a wireless medium for communication. The most prominent issues include:
Decreasing signal strength: electromagnetic radiation attenuates as it passes through matter and spreads out with distance according to the inverse square law.
Interference from other sources: many cordless phones also operate in the same 2.4 GHz band, which can cause interference.
Multipath propagation: multipath propagation occurs when portions of the electromagnetic wave reflect off objects and the ground, taking paths of different lengths between a sender and receiver. This results in blurring of the signal at the receiver.
Hidden terminal problem: suppose that hosts A and C both wish to transmit to B, but A and C can't hear each other's transmissions. If both A and C transmit, they will be unaware of their interference at B.
MAC PROTOCOL
The 802.11 standards use carrier sense multiple access to prevent collisions, but instead of collision detection they use collision avoidance: CSMA/CA. The reasoning for not implementing collision detection is the difficulty of:
Dealing with the hidden terminal problem and attenuation.
Separating weak received signals from the node's own transmission.
The protocol follows a set of procedures:
1. If the sender initially senses the channel as idle, it transmits its frame after a short period of time known as the Distributed Inter-frame Space (DIFS).
2. Otherwise, the sender chooses a random backoff value and counts down this value while the channel is sensed idle. While the channel is sensed busy, the counter value remains frozen.
3. When the counter reaches zero, the station transmits the entire frame and then waits for an acknowledgement (ACK).
4. The receiver returns an ACK after a SIFS (Short Inter-frame Spacing) if the data has passed the CRC check.
5. If an acknowledgement is received by the source, the transmitting station knows that its frame has been correctly received at the destination. If the station has another frame to send, it begins the CSMA/CA protocol at step 2. If the acknowledgement isn't received, the transmitting station re-enters the backoff phase in step 2, with the random value chosen from a larger interval.
As we have just seen, collisions can still occur. We can further reduce the waste due to collisions by allowing the sender to reserve the channel. The sender first transmits a small request to send (RTS) packet to the base station using CSMA. If RTS packets collide, this is detected, but the cost is negligible since they are short. If the RTS packet is received by the base station, it broadcasts a clear to send (CTS) response addressed to the sender. Since this is heard by all other stations, all stations except the sender refrain from transmission. In this way, collisions of actual data packets are avoided.
ETHERNET
Ethernet is the most prominent LAN technology due to its wide deployment, simplicity and continuous technological improvement. First developed by Metcalfe, the Ethernet LAN consisted of a coaxial bus into which the nodes would tap. Hence, Ethernet was said to have a bus topology and is a broadcast LAN. Later improvements made it possible to introduce a hub-based star topology. A hub is a physical-layer repeater which relays bits coming in from one interface onto every other interface at the same rate, with no frame buffering and no ability to perform collision detection. Hence, this topology is also a broadcast LAN. At the turn of the millennium, switches began to replace hubs: a layer-2 device which stores and forwards packets.

ETHERNET FRAME STRUCTURE
Any information from higher-layer protocols is encapsulated by the lower-layer protocol. An Ethernet frame encapsulates network-layer datagrams, usually IP.

PREAMBLE
Each frame starts with an 8-byte preamble: 7 bytes of 10101010 followed by one byte of 10101011. This is used to synchronise the clock rates between the sender and receiver.

DESTINATION & SOURCE ADDRESS
The destination and source addresses are 6 bytes long (giving around 281 trillion combinations) and unique to each network adapter. A typical MAC address could be 08-0b-db-e4-b1-02, with the upper 24 bits interpreted as the organisationally unique identifier and the lower 24 bits being the organisation-assigned portion. Hence, a manufacturer such as Cisco or TP-LINK will have the first 24 bits the same across all its products, with the lower 24 bits individually assigned to each adapter. There are also special conditions on these MAC addresses:
Unicast transmissions have addresses with the lowest bit of the first byte being 0
Multicast transmissions have addresses with the lowest bit of the first byte being 1
Broadcast transmissions use the MAC address ff-ff-ff-ff-ff-ff
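A minimal Python sketch (not from the notes; the rule simply follows the bullet list above) that classifies a MAC address as unicast, multicast or broadcast from the low-order bit of its first byte.

def classify_mac(mac: str) -> str:
    octets = bytes(int(part, 16) for part in mac.replace(":", "-").split("-"))
    if octets == b"\xff" * 6:
        return "broadcast"                         # all-ones address
    return "multicast" if octets[0] & 0x01 else "unicast"

print(classify_mac("08-0b-db-e4-b1-02"))   # unicast (lowest bit of 0x08 is 0)
print(classify_mac("01:80:c2:00:00:00"))   # multicast (the STP address used later)
print(classify_mac("ff-ff-ff-ff-ff-ff"))   # broadcast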
As mentioned above, Ethernet was broadcast based, so the Ethernet bus can carry frames which are meant for many different devices. Obviously, it would not be very useful to accept every Ethernet frame and pass it up to the network layer. Instead, the adapter only accepts a frame if the destination address matches the adapter's own address, if the destination is the broadcast address, or if the destination address is a multicast group which the adapter has been configured to accept.

TYPE FIELD
The type field is 2 bytes long and permits Ethernet to multiplex network-layer protocols. This allows the adapter to forward the data to the associated network-layer protocol, such as IP, ARP, Novell IPX or AppleTalk.

CRC
The purpose of the CRC field is to allow the receiving adapter to detect bit errors in the frame. If an error is detected, the frame is simply dropped.

AN UNRELIABLE CONNECTIONLESS SERVICE
Ethernet provides a connectionless service to the network layer, since no handshaking is done between the sending and receiving adapters. It is also unreliable, since it does not send acknowledgements or NACKs. This allows Ethernet to be simple and cheap, at the expense of introducing gaps in the stream of datagrams passed to the network layer (unless we use TCP, in which case retransmissions fill the gaps).

ETHERNET'S MAC PROTOCOL: CSMA/CD
As we have already seen, CSMA/CD will:
1. Allow adapters to begin transmission at any time (no slots as in slotted ALOHA) after sensing (from the voltage level) that the channel is idle.
2. Abort a transmission if it detects that another adapter is also transmitting, that is, collision detection.
3. Before reattempting a transmission, make the adapter wait a random time that is typically small compared with the time to transmit a frame.
If many nodes have frames to transmit, the effective transmission rate of the channel can be much less than if only one node has a frame to send. The efficiency of Ethernet is defined to be the long-run fraction of time during which frames are being transmitted on the channel without collisions when there is a large number of active nodes with large frames in their buffers; it can be approximated as:
Efficiency = 1 / (1 + 5 d_prop / d_trans)
CSMA/CD ALGORITHM
1. The adapter obtains a datagram from the network layer, prepares an Ethernet frame and puts the frame in an adapter buffer.
2. If the adapter senses that the channel is idle (there is no signal energy entering the adapter from the channel for 96 bit times, i.e. the time to transmit 96 bits), it starts to transmit the frame. If the adapter senses that the channel is busy, it waits until it senses no signal energy and then starts to transmit the frame.
3. While transmitting, the adapter monitors for the presence of signal energy coming from other adapters. If the adapter transmits the entire frame without detecting signal energy from other adapters, the adapter is finished with the frame.
4. If the adapter detects signal energy from other adapters while transmitting, it stops transmitting its frame and instead transmits a 48-bit jam signal. This ensures that all other transmitting adapters become aware of the collision.
5. After transmitting the jam signal and aborting the transmission, the adapter enters an exponential backoff phase. After the nth collision in a row for a particular frame, the adapter chooses a value K at random from {0, 1, 2, ..., 2^m - 1} where m = min(n, 10). The adapter then waits K * 512 bit times and returns to step 2.
The goal of the exponential backoff is to adapt retransmission attempts to the estimated current load. If there are only a small number of colliding adapters, it makes sense to choose K from a small set of values, whereas a heavy load demands that K be chosen from a larger, more dispersed set of values.

ETHERNET TECHNOLOGY
Despite the changes in speed and media, and despite the fact that (as we shall see) the MAC protocol is unnecessary in a switch-based LAN, the enduring constant of Ethernet has been its frame format. It is thus no surprise that, despite the huge differences between Metcalfe's original Ethernet and today's Ethernets, we still call this link-layer protocol Ethernet.

10BASE2
The name of this technology comes from its specification: 10 Mbps, 200 metres maximum cable length. It uses a thin coaxial cable in a bus topology, with repeaters connecting multiple segments together.

10BASE-T/100BASE-T FAST ETHERNET
This technology uses twisted pair and can provide speeds of 10 or 100 Mbps. The nodes are connected to either a hub or a switch in a star topology. Modern switches are full duplex: a switch and a node can each send frames to the other at the same time without interference. In essence then, a switch-based LAN with point-to-point links has no collisions, and removes the need for a MAC protocol.
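As a small illustration of step 5 of the CSMA/CD algorithm above, the following Python sketch (not from the notes; the 10 Mbps bit-time figure in the comment is an assumed example) picks an exponential-backoff wait.

import random

def backoff_bit_times(n: int) -> int:
    # after the nth collision in a row, choose K from {0, 1, ..., 2**m - 1}, m = min(n, 10),
    # and wait K * 512 bit times before returning to carrier sensing
    m = min(n, 10)
    K = random.randint(0, 2 ** m - 1)
    return K * 512

# On a 10 Mbps Ethernet one bit time is 0.1 microseconds, so after the 3rd collision
# the wait ranges from 0 up to 7 * 512 * 0.1 = roughly 358 microseconds.
print(backoff_bit_times(3))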
GIGABIT ETHERNET IEEE 802.3z
Gigabit Ethernet is an extension of Fast Ethernet and offers full-duplex operation at a raw data rate of 1000 Mbps whilst maintaining backward compatibility with 10BASE-T and 100BASE-T. It also allows for point-to-point links using switches, as well as shared broadcast channels using hubs or buffered distributors.

CONNECTING LAN SEGMENTS
HUBS
As we have already seen, hubs are physical-layer repeaters which have no frame buffering and no ability to detect collisions. Since a frame that arrives at one interface is reproduced on every other interface, interconnecting LAN segments with hubs results in a large collision domain: no two nodes on the network shown can transmit at the same time. Another issue is that, since hubs have no frame buffering, it is not possible to interconnect 10BASE-T and 100BASE-T nodes with one another unless there is backward compatibility.

SWITCHES
A switch is a self-learning link-layer device that stores and forwards Ethernet frames. Since it is a link-layer device, CSMA/CD is implemented in it. Most switches are plug-and-play and are transparent to the nodes. Since a switch is a store-and-forward device, it isolates traffic to only the necessary interfaces. It thus breaks an otherwise large collision domain (the LAN) into multiple smaller collision domains (LAN segments) if hubs are still present; if the network is made only of switches, it will be collisionless. The result is:
Increased total maximum throughput
No hard limit on the number of nodes or geographical coverage
The ability to connect different Ethernet types
Transparent, plug-and-play operation of link-layer switches
SWITCHES: SELF LEARNING
The crux of a switch's ability to forward Ethernet frames is its capability to self-learn: to automatically create its switch table. The switch table is structured with:
MAC address
Interface
Time stamp
The switch table is initially empty and is gradually populated, and pruned of stale entries, using a time-to-live time stamp. Population occurs when an incoming frame is received on one of the switch's interfaces: the switch stores in its table the MAC address in the frame's source address field, the interface on which the frame arrived, and the current time. If every node in the LAN sends a frame, then every node will eventually be recorded in the table. It is important to prune the table, since stale entries cause issues:
The memory in a switch is limited, and a large table increases the time taken before a packet is forwarded.
A node that has left the network is unreachable, yet a stale entry would still direct frames towards it.

FILTERING & FORWARDING
Filtering is the switch function that determines whether a frame should be forwarded to some interface or should just be dropped. Forwarding is the switch function that determines the interfaces to which a frame should be directed, and then moves the frame to those interfaces.
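A toy Python sketch (not from the notes; the class, interface numbers and addresses are made up) of the self-learning, filtering and forwarding behaviour just described.

import time

class LearningSwitch:
    def __init__(self, num_interfaces: int, ttl: float = 60.0):
        self.table = {}                                # MAC address -> (interface, timestamp)
        self.interfaces = range(1, num_interfaces + 1)
        self.ttl = ttl

    def receive(self, src: str, dst: str, in_if: int):
        now = time.time()
        self.table[src] = (in_if, now)                 # learn where the sender lives
        # prune stale entries so departed nodes eventually disappear from the table
        self.table = {m: (i, t) for m, (i, t) in self.table.items() if now - t < self.ttl}
        if dst in self.table:
            out_if, _ = self.table[dst]
            return [] if out_if == in_if else [out_if]   # filter, or forward selectively
        return [i for i in self.interfaces if i != in_if]  # unknown destination: flood

sw = LearningSwitch(3)
print(sw.receive("C", "D", in_if=1))   # [2, 3]: D unknown, so flood
print(sw.receive("D", "C", in_if=2))   # [1]: C was already learned on interface 1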
Example
Suppose that host C sends a frame to D through the switch. The switch will make a note in its table that C is on interface 1. Since D is not yet in the table, it will flood the frame out of interfaces 2 and 3 so that it is received by D. Suppose that D now sends an ACK back to C. Since D's MAC address is not yet in the switch table, the switch will note that D is on interface 2. The destination address C is known to the switch, so the switch will forward the frame only to interface 1.

VLAN
Most LANs are configured hierarchically, with each workgroup having its own switched LAN connected to the switched LANs of other groups via a switch hierarchy. This brings three major drawbacks:
Lack of traffic isolation: broadcast traffic still traverses the entire network. It is desirable to limit the scope of such broadcast traffic, both to improve LAN performance and for security/privacy reasons.
Inefficient use of switches: the lowest-level switches are not used to their full capability.
Managing users is inefficient: a user who wishes to move to another workgroup must be physically reconnected.
These drawbacks assume that we have switches without support for VLANs, virtual local area networks. A switch that supports VLANs allows multiple virtual local area networks to be defined over a single physical local area network infrastructure. Hosts within a VLAN communicate with each other as if they were connected to their own switch.

PORT BASED VLAN
A port-based VLAN works by dynamically grouping ports so that a single physical switch operates as multiple virtual switches. Each group constitutes a VLAN, with the ports in each VLAN forming a broadcast domain. VLANs can communicate with each other via routing; most vendors build a router into the switch.
Example
Suppose that one electrical engineering computer is to be reconfigured to belong to the computer science department. We can simply configure port 2 to belong to the same VLAN as ports 9-15 on the same switch. In this example, an external router is used to interconnect the VLANs (ports 7 and 11). In practice this is rarely done, since commercial switches have built-in routers.

VLAN TRUNKING
In a situation where multiple workgroups span multiple buildings, we need to interconnect physical switches and yet maintain the VLAN structure. One rather primitive method would be to connect a port belonging to VLAN A on the first switch to a VLAN A port on the second switch. This solution, however, does not scale. A more scalable approach, called VLAN trunking, is used instead. In the figure shown, port 16 of one switch and port 1 of the other are used as trunk ports to interconnect the two VLAN switches. A trunk port is designed to carry frames between VLANs defined over multiple physical switches, using an extended Ethernet frame format: 802.1Q. This standard appends a 4-byte VLAN tag between the source address and type fields, so that a switch can determine which VLAN a particular frame belongs to.
SPANNING TREE PROTOCOL
Let us consider a network with loops (which could be inadvertent, or intentional if we wish to have a redundant link). What will happen? Without any intervention, frames will loop forever. Consider sending an Ethernet frame to a destination host: the switch realises that the destination host can be reached on port 2, and the host receives the frame correctly. Recall, however, that each LAN segment associated with a port on the switch is a broadcast domain. This means that the right-most switch will receive the frame as well, and will forward it, so that it is received by the original switch again, and so on. As you can see, loops can be disastrous and chew up bandwidth. Moreover, the self-learning switches become muddled, since without any further protocol each switch will see that it can reach the destination along two possible paths. This problem is resolved by the spanning tree protocol, in which switches converge on a single topological answer: some switch interfaces are disabled.

IEEE 802.1D
The spanning tree protocol (IEEE 802.1D) is a link-layer network protocol that ensures a loop-free topology for any bridged LAN. To begin with, each bridge or switch has an 8-byte bridge ID:
Bridge ID = 2-byte priority + 6-byte MAC address
Each port of a bridge also has a port ID and an associated port cost, which is inversely proportional to the link speed:
Port ID = 1-byte priority + 1-byte port number
The algorithm follows a number of steps:
1. Select a root bridge: the root bridge of the spanning tree is the bridge with the smallest bridge ID. The 2-byte priority is compared first, and if two bridges have the same priority, their MAC addresses are used to break the tie. For example, if switches A and B both have a priority of 10, then the switch with the lower MAC address, say A, will be selected as the root bridge. If the network administrator would like switch B to become the root bridge instead, they must set its priority to be less than A's.
2. Determine the least-cost paths to the root bridge: on each LAN segment, the bridge with the lowest path cost to the root is the designated bridge. This bridge is responsible for forwarding frames for that particular LAN segment. If the path costs are the same for two bridges, the bridge ID and port ID are compared to break the tie.
3. Disable all other root paths: any active port that is not a root port or a designated port is a blocked port.
38 Ports are given particular names depending on their roles: Root port: the switch port leading towards the root (each switch must have a root port, except for the root bridge) Designated port: the forwarding port on a LAN segment, belonging to the designated bridge for that segment Alternate/backup port Disabled Ports can also be in a number of states: Blocking: the port neither sends nor receives data, except for STP Bridge Protocol Data Units (BPDUs). Listening: the switch processes BPDUs and awaits possible new information that would cause it to return to the blocking state. Learning: while the port does not yet forward frames (packets), it does learn source addresses from frames received and adds them to the filtering database (switching database). Forwarding: the port is able to send and receive frames. This algorithm would not be possible without BPDUs, which are frames sent and received by all bridges in the network. A bridge sends a BPDU frame using the unique MAC address of the port itself as the source address, and a destination address of the STP multicast address 01:80:C2:00:00:00. The configuration BPDU 1 is a 4-tuple: <Root ID, Root cost, Bridge ID, Port ID>. BPDUs are exchanged on every port of every bridge; if the BPDU a bridge would send on a port is better than the one it receives there, that port becomes the designated port for its segment. 1 There are three types of BPDUs: configuration (used for the spanning tree computation), Topology Change Notification (TCN) and Topology Change Notification Acknowledgement (TCA). 38
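A minimal sketch of how a bridge might compare configuration BPDUs, following the ordering implied above: a smaller root ID wins, then a smaller root cost, then a smaller sender bridge ID, then a smaller port ID. The tuple layout mirrors the 4-tuple just given; the priorities and MAC addresses are made up.

from collections import namedtuple

# Configuration BPDU as the 4-tuple described above.
BPDU = namedtuple("BPDU", ["root_id", "root_cost", "bridge_id", "port_id"])

def better_bpdu(a, b):
    # Lexicographic comparison: lower root ID, then lower cost,
    # then lower sender bridge ID, then lower port ID.
    return min(a, b)

received = BPDU(root_id=(10, "00:00:00:00:00:0b"), root_cost=4,
                bridge_id=(10, "00:00:00:00:00:0b"), port_id=(128, 1))
own      = BPDU(root_id=(10, "00:00:00:00:00:0a"), root_cost=0,
                bridge_id=(10, "00:00:00:00:00:0a"), port_id=(128, 2))
print(better_bpdu(own, received))  # 'own' wins: same priority, smaller root MAC address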
39 Topics: Why IP? IP Forwarding ARP To switch or route? Network Configuration IPv4 Datagram IP Fragmentation IPv6 CHAPTER 3: NETWORK LAYER 39
40 WHY IP? Due to the vast number of LAN protocols that exist and the difficulty of scaling a non-hierarchical MAC address space, IP seems to be necessary if we wish to: Have global addressing which is geographically assigned Scale to WANs Maximise interoperability and minimise the service interface A narrow IP layer ensures that only the necessary components of IP have been defined, and any other functionality required is left to the designers of the other layers. IP FUNCTIONALITY Since IP is in the network layer, it is responsible for transporting packets from the source to the destination. It also introduces mechanisms for: Assigning addresses to hosts and routers Determining the route Forwarding the packets from the source to the destination One may wonder about the difference between routing and forwarding. Routing is the network-wide process of choosing a path that is to be taken by packets as they flow from a sender to a receiver. These paths are calculated using routing algorithms. Forwarding however is the choice a router must make in determining which output link a packet should be put on. 40
41 NETWORK SERVICE MODELS The network service model defines the characteristics of end-to-end transport of packets between sending and receiving end systems. The models could be: Connection oriented network service: o Virtual circuits Connectionless network service: o Datagrams Some of the services we might be interested in implementing include: Guaranteed delivery Guaranteed delivery with bounded delay In-order packet delivery Guaranteed minimal bandwidth Guaranteed maximum jitter Security services When we talk about IP, we say that it is a best effort service. VIRTUAL CIRCUITS Computer networks that provide a connection service at the network layer are called virtual circuit networks. The behaviour is much like a telephone circuit, but now the routers must maintain connection state information for the ongoing connections. There are three identifiable phases in a virtual circuit: VC setup: The transport layer specifies the receiver's address and passes it down for the network layer to set up the VC. The network layer will determine the path and the VC number (VC identifier) for each link along the path. The network layer then adds an entry in the forwarding table in each router along the path. The network layer may also reserve resources such as bandwidth and buffers along the path of the VC. Data transfer VC teardown: This is initiated when the sender or receiver informs the network layer of its desire to terminate the VC. It will update the forwarding tables in each of the packet routers on the path to indicate that the VC no longer exists. ATM is a prime example of a VC network, which evolved from telephony. A VC network generally has strict time and reliability requirements that are used to meet the goal of a guaranteed service. Most of the complexity in the system is moved inside the network rather than onto the edges. DATAGRAM NETWORKS In datagram networks, there is no need to set up the transmission and thus the routers are stateless in terms of the end to end connection. In short, there is no network-level concept of a connection. Packets are forwarded by a series of routers along the path using the destination address. This destination address is 41
42 compared with the addresses in the forwarding table using the longest prefix matching rule. Unlike VC networks, the complexity of the system is moved to the end systems (which perform control and error recovery), and datagram networks have no strict timing requirements and are best effort services. INTERNET NETWORK LAYER ADDRESSING & SUBNETTING IPv4 addresses are 4 bytes long and are denoted in dotted decimal notation, with each byte written as a decimal number separated by dots. Each IP address is globally unique and is located somewhere in the hierarchy (unless we are talking about private LAN IP addresses). A host will either have its IP address hard coded in a file or, more often than not, will be assigned one by a Dynamic Host Configuration Protocol (DHCP) server. The IP address cannot be arbitrarily chosen, but is dependent on geography and the address block your ISP bought from ICANN (in the U.S.) or APNIC (in the Asia-Pacific region). Prior to the introduction of a classless interdomain routing address scheme, IP addresses were classed into one of four classes (with fixed subnets): Classful addressing is inefficient, as history has shown that class A and B address blocks were bought but not used to their full potential: an organisation with 2000 hosts could not fit into a class C block with 2^8 = 256 addresses, while it would waste most of the addresses available in a class B block. As a result, Classless InterDomain Routing (CIDR) addressing was introduced. 42
43 With the introduction of CIDR, an IP address could be broken down into two parts of arbitrary length: Subnet part (high order bits) Host part (low order bits) A subnet is a part of the network in which all the device interfaces share the same subnet part of the IP address. This also indicates that they can physically reach each other without using a router. With the new CIDR addressing, addresses are written as the IP address together with a subnet mask, which determines the number of bits in the subnet portion of the address. The use of subnets greatly reduces the size of the router's forwarding table, since a single entry of the form a.b.c.d/x is sufficient to forward packets to any destination which matches the first x bits of the address. Example Consider the network shown. Fly-By-Night-ISP has an address block with a /20 prefix while ISPs-R-Us has an address block with a /16 prefix. Hence a packet whose destination matches the first 20 bits of Fly-By-Night-ISP's block will be forwarded to Fly-By-Night-ISP. Suppose Organisation 1 moves ISPs and has decided to take its IP address block (a /23) with it. The routing table is now updated for the changes that have occurred at ISPs-R-Us, such that addresses matching either the /16 prefix or the /23 prefix are forwarded to ISPs-R-Us. A packet destined for Organisation 1 undergoes longest prefix matching at the Internet routers. Note that its destination address matches both the /20 and the /23 prefix. To ensure that the packet arrives at the correct destination, the router must route the packet to the interface associated with the /23 prefix, the one with the longest matching prefix. 43
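A minimal sketch of the longest prefix matching rule described above, using Python's standard ipaddress module. The prefixes and next hops in the table are made-up placeholders, not the exact values from the figure.

import ipaddress

# Hypothetical forwarding table: (prefix, next hop)
forwarding_table = [
    (ipaddress.ip_network("200.23.16.0/20"), "Fly-By-Night-ISP"),
    (ipaddress.ip_network("199.31.0.0/16"),  "ISPs-R-Us"),
    (ipaddress.ip_network("200.23.18.0/23"), "ISPs-R-Us"),
]

def lookup(dest):
    # Return the next hop of the matching entry with the longest prefix.
    addr = ipaddress.ip_address(dest)
    matches = [(net.prefixlen, hop) for net, hop in forwarding_table if addr in net]
    return max(matches)[1] if matches else None

print(lookup("200.23.18.5"))   # matches both the /20 and the /23: the /23 entry wins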
44 NAT Network Address Translation (NAT) partially solves the problem of limited IPv4 addresses. The idea is to use private addresses (a /8 block reserved for private use, for example) within the LAN and a single public gateway address for communication with the wider Internet. A NAT-based router appears to the world as a single device with a single IP address. This means that the LAN appears invisible, which is often a selling gimmick labelled as a NAT firewall. All traffic leaving the home router for the Internet carries the router's public IP address as its source, and all traffic entering the LAN carries that address as its destination. NAT uses a NAT translation table that contains a list of port numbers and IP addresses on the WAN and LAN side. The router handles all requests in the LAN and stores in the NAT translation table the IP address and port number of the client. The router then rewrites the datagram with the public IP address and assigns a new source port; these details are kept in the same record of the NAT translation table. When the server responds by sending a datagram back, the NAT router will attempt to match the destination address and port number to an entry in the table. If it finds a match, the router rewrites the destination IP address and port number back to the client node's LAN IP address and port. If a match is not found, the datagram is simply dropped. Given that the port number is only 16 bits, there can be a maximum of roughly 65,000 (2^16) simultaneous connections. Moreover, NAT has been controversial: Port numbers are meant to be used for addressing processes, not for addressing hosts Routers are supposed to process packets only up to layer 3 NAT violates the so-called end-to-end argument o Hosts behind a NAT router are unable to accept initial requests and hence cannot act as a server. This problem is solved using port forwarding. IPv6 should be used to solve the shortage of IP addresses instead. 44
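A rough sketch of the translation logic described above. The public address, port numbers and table layout are assumptions for illustration; a real NAT also tracks the protocol and connection state, which is omitted here.

# Hypothetical public address on the NAT router's WAN interface.
PUBLIC_IP = "138.76.29.7"

nat_table = {}          # (public IP, public port) -> (LAN IP, LAN port)
next_port = 5001        # next free WAN-side source port

def outbound(lan_ip, lan_port):
    # Rewrite the source of an outgoing datagram and record the mapping.
    global next_port
    public = (PUBLIC_IP, next_port)
    nat_table[public] = (lan_ip, lan_port)
    next_port += 1
    return public                                # the new (source IP, source port)

def inbound(dst_ip, dst_port):
    # Rewrite the destination of an incoming datagram, or drop it (None = no match).
    return nat_table.get((dst_ip, dst_port))

wan_src = outbound("10.0.0.1", 3345)             # a LAN host opens a connection
print(inbound(*wan_src))                         # the reply maps back to ('10.0.0.1', 3345)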
45 ARP: ADDRESS RESOLUTION PROTOCOL This section focuses on a protocol that works on a LAN. Recall that IP addresses are situated in layer 3 while MAC addresses are in layer 2. Moreover, since layer 3 is responsible for getting a datagram from the source to the destination and layer 2 is responsible for getting a datagram between nodes, there needs to be a way in which a layer 3 protocol can package the datagram so that it can be handled by the layer 2 protocol. Neither layer on its own speaks both languages, that is, knows both the destination IP and MAC addresses. The solution is to use the Address Resolution Protocol (ARP), which is responsible for creating an ARP table on each node of the network (both hosts and routers). The ARP table is an IP/MAC address mapping for nodes on the same LAN, and each entry is a 3-tuple: <IP address, MAC address, TTL>. ARP is a plug-and-play protocol since the ARP tables are created without user intervention. ALGORITHM Suppose A wants to send a datagram to B and knows the IP address of B. A cannot simply put a layer 3 datagram on the wire; it must pass it down to layer 2. The problem is that A does not have the MAC address of B and hence cannot fill out the destination address field in an Ethernet frame. A broadcasts an ARP query packet containing B's IP address. This broadcast is heard by all machines on the LAN. B receives the ARP packet and replies (in a unicast fashion) to A with its MAC address. All other nodes on the network ignore this broadcast and do not reply to A. A caches the IP-MAC address pair in its ARP table until it times out, at which point it is discarded unless refreshed. It is important to have a TTL since the IP-MAC mapping can change quite often (IP addresses are assigned dynamically). Without the timeout, stale IP-MAC pairs would linger and bandwidth would simply be wasted. 45
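A simplified sketch of the look-up-then-query behaviour described above. The cache structure, TTL value and function names are illustrative assumptions, and retransmission details are ignored.

import time

ARP_TTL = 20 * 60                       # assume cache entries live for 20 minutes
arp_cache = {}                          # IP address -> (MAC address, expiry time)

def resolve(ip, broadcast_query, now=time.time):
    # Return the MAC address for ip, querying the LAN by broadcast on a cache miss.
    entry = arp_cache.get(ip)
    if entry and entry[1] > now():      # fresh cache hit
        return entry[0]
    mac = broadcast_query(ip)           # ARP request heard by all; the target replies by unicast
    arp_cache[ip] = (mac, now() + ARP_TTL)
    return mac

# Example with a stubbed-out LAN (hypothetical addresses):
print(resolve("192.168.0.7", lambda ip: "aa:bb:cc:dd:ee:07"))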
46 IP FORWARDING Recall that forwarding is associated with the switch or router process of deciding which output port a particular packet should take. There are two main cases which can occur: Hosts communicating with each other are in the same LAN (e.g. A to B) Hosts communicating with each other are in different LANs (e.g. A to E) HOSTS IN THE SAME LAN Every node on the network has its own routing table which determines the route a datagram should take depending on its destination address (layer 3). Suppose A wishes to send an IP packet to B. The first step is to let the IP protocol do its routing work: 1. A will use the longest prefix match to determine what the next hop will be. Since the subnet mask is /24, B falls under the first entry. The L denotes that nodes matching this entry are in the same LAN and that the next hop is the node itself, an indicator that the task will be passed down to layer 2. 2. In order to fill out the destination MAC address, the ARP protocol must be used. If the IP-MAC address pair of B is available, then A can complete the Ethernet frame without any ARP query broadcast. Otherwise, it will need to request B's MAC address using an ARP query broadcast. HOSTS IN DIFFERENT LANS Suppose that A is sending an IP packet to E. 1. A will look up the network address of E in the routing table. Only the default gateway entry is matched, hence E is not on the same LAN; the next hop is the gateway router. 2. If A knows the gateway's IP-MAC address pair, then there is no need to broadcast an ARP query packet. If not, then it must do so. 3. The link layer will send the datagram to the router in an Ethernet frame. The MAC source and destination addresses for this Ethernet frame correspond to A's and the router's respectively. 4. At the router, the IP packet is passed up to layer 3 again. The router will look up the address of E in its routing table. It will find that E is on the same network as one of the router's interfaces. 5. The IP packet is passed down to the link layer and again it is packaged into an Ethernet frame, using ARP to get the right MAC address of E. The Ethernet frame's source and destination addresses now correspond to the router's and E's MAC addresses respectively. 6. The datagram arrives at E. Note that the IP source and destination addresses never change! 46
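A compact sketch of the host-side decision just described: if the destination is on the same subnet, ARP for the destination directly; otherwise ARP for the default gateway. The addresses and the stubbed ARP resolver are assumptions for illustration; in either case the IP header still carries the final destination's address.

import ipaddress

def next_hop_mac(local_subnet, dest_ip, gateway_ip, arp_resolve):
    # Pick the layer 2 destination for an outgoing IP packet.
    if ipaddress.ip_address(dest_ip) in ipaddress.ip_network(local_subnet):
        return arp_resolve(dest_ip)        # same LAN: frame goes straight to the destination
    return arp_resolve(gateway_ip)         # different LAN: frame goes to the default gateway

fake_arp = {"192.168.1.20": "aa:aa:aa:aa:aa:20", "192.168.1.1": "aa:aa:aa:aa:aa:01"}
print(next_hop_mac("192.168.1.0/24", "192.168.1.20", "192.168.1.1", fake_arp.get))  # direct
print(next_hop_mac("192.168.1.0/24", "8.8.8.8", "192.168.1.1", fake_arp.get))       # via gateway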
47 TO SWITCH OR TO ROUTE? Most routers have an embedded switch, which raises the question of whether the device should perform routing or switching for a given frame. Modifications in the algorithm allow the device to correctly determine when it should switch (the devices are on the same LAN) and when it should route (the devices are on different networks). 1. The device looks up the destination MAC in the MAC table. 2. If the destination MAC of the Ethernet frame is not that of the interface's MAC, then the device will switch the frame as is onto the learnt port. 3. If the destination MAC of the Ethernet frame is that of the interface's MAC, then the upper layer (IP) will be used to route. a. If there is no match in the routing table, then the datagram is dropped. Otherwise it can determine the next hop MAC address, and hence the datagram can be sent with a new Ethernet header containing the next hop MAC address. The switch-router MAC table shown indicates that the router has IP interfaces on VLAN 3 and on another VLAN. This means that the device can perform both switching and routing tasks. The device will only behave as a transparent switch for VLAN 76. The switch-router routing table is shown below. The next hop is given as an IP address since it is a routing table (layer 3); it is up to the MAC table to provide the corresponding next hop MAC address. 47
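The decision procedure above can be summarised in a few lines; the table structures, addresses and the stubbed ARP resolver below are assumptions made purely for illustration.

import ipaddress

def handle_frame(dst_mac, dest_ip, my_mac, mac_table, routes, arp_resolve):
    # Switch if the frame is not addressed to one of our interfaces; otherwise route it.
    if dst_mac != my_mac:
        return ("switch", mac_table.get(dst_mac, "flood"))       # stay at layer 2
    addr = ipaddress.ip_address(dest_ip)
    matches = [(net.prefixlen, hop) for net, hop in routes.items() if addr in net]
    if not matches:
        return ("drop", None)                                    # no route: discard
    next_hop = max(matches)[1] or dest_ip                        # None means directly connected
    return ("route", arp_resolve(next_hop))                      # re-frame with next hop's MAC

routes = {ipaddress.ip_network("10.1.0.0/16"): "10.0.0.2",
          ipaddress.ip_network("10.0.0.0/24"): None}
print(handle_frame("02:00:00:00:00:01", "10.1.2.3", "02:00:00:00:00:01",
                   {}, routes, lambda ip: "02:00:00:00:00:02"))   # ('route', '02:00:00:00:00:02')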
48 48
49 IP/ETHERNET CONFIGURATION In this section, we deal with the idea that incorrect subnet configurations can lead to problems in a network. If two devices are on the same LAN but have seemingly different subnets, then there is a chance of a problem. If two devices on different LANs, connected via a router, have the same subnet, then communication between the devices will not work. Suppose a node has the following routing table. For two computers which are connected directly to each other: the first configuration will work since both are on the same subnet, whereas the second configuration violates this principle. A message destined for the other computer will not match the first routing table entry, and hence will be forwarded to the default gateway. The gateway, however, does not exist. Not all mis-configurations cause problems. Consider a star topology with a switch in the middle. Messages between the /24 host and the /16 host will not have problems. This is because A's routing table will have a first entry with the /24 subnet, which will match C's address; in C's routing table, A's address will match the /16 entry. Both entries have a local next hop. 49
50 In this configuration, C can communicate with B due to its more general subnet configuration, whereas B cannot communicate with either C or A. This is because neither A's nor C's address will match B's /24 entry. The default gateway will then be chosen, but in this situation there is no gateway. Let us now consider a more complicated case: A cannot talk to D because its subnet mask of /16 causes A to think it is on the same LAN as D. B can talk to no one, since the hosts on its LAN do not match its /24 subnet, nor is the default route correct: the router will not respond to requests from B because its left-hand interface has a different address. By the same token, C cannot talk to B since C believes it is not on the same subnet. C will thus send datagrams towards the router using ARP. The router will think that B is on the right side (because of the routing table entry with the /24 prefix), and will broadcast an ARP query packet and get no response. D cannot talk to B, since it thinks B is on the same subnet and will simply send datagrams down to the link layer to process using ARP. This however fails, since broadcasts are not forwarded by the router. 50
51 IPV4 PACKET FORMAT Version number: 4 bits used to specify the IP protocol version of the datagram. Header length: Since the datagram can contain a variable number of options, these 4 bits are used to determine where in the IP datagram the data actually begins. The minimum size of the header is 20 bytes. Type of Service: Used to distinguish different types of IP datagrams from each other such that QoS can be provided. Datagram length: The maximum length is 65,535 bytes (the field is 16 bits). Since Ethernet frames can carry only 1500 bytes, IP datagrams are rarely sized anywhere near this maximum. Identifier, flags, fragmentation offset: The identifier indicates which original datagram a fragment belongs to, the flag indicates whether further fragments follow, and the fragmentation offset indicates where the fragment's data sits within the original payload. These fields are used to reassemble fragmented IP packets. For example, an IP datagram may be 3000 bytes long. Since Ethernet frames are only 1500 bytes in size, it takes more than one Ethernet frame to package the datagram. TTL: included to ensure that datagrams do not circulate forever. The field is decremented each time the datagram is processed by a router; a TTL of 0 indicates that the datagram must be dropped. Protocol: The upper layer protocol (TCP, UDP, etc.) Header checksum: the checksum aids a router in detecting bit errors in a received IP datagram. Source and destination IP addresses Options Payload 51
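The header checksum mentioned above is the standard Internet checksum: the 16-bit one's complement of the one's complement sum of the header's 16-bit words. A minimal sketch follows; the sample header bytes are made up for the demonstration.

def internet_checksum(header: bytes) -> int:
    # 16-bit one's complement of the one's complement sum of 16-bit words.
    if len(header) % 2:
        header += b"\x00"                          # pad to whole 16-bit words
    total = 0
    for i in range(0, len(header), 2):
        total += (header[i] << 8) | header[i + 1]
        total = (total & 0xFFFF) + (total >> 16)   # fold any carry back in
    return ~total & 0xFFFF

# Build a 20-byte header with the checksum field (bytes 10-11) zeroed, then fill it in.
hdr = bytes.fromhex("45000073000040004011") + b"\x00\x00" + bytes.fromhex("c0a80001c0a800c7")
csum = internet_checksum(hdr)
full = hdr[:10] + csum.to_bytes(2, "big") + hdr[12:]
print(hex(csum), internet_checksum(full) == 0)     # re-checking a valid header yields 0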
52 IP FRAGMENTATION The maximum amount of data that a link layer frame can carry is called the maximum transmission unit (MTU). Because each IP datagram is encapsulated within a link layer frame for transport from one router to the next, there is a chance that the length of the IP datagram is larger than the MTU. This means that an oversized IP datagram must be fragmented into two or more smaller IP datagrams, each of which is encapsulated in an individual link layer frame. If IP datagrams are fragmented, they must be reassembled before they reach the transport layer at the destination: TCP and UDP expect complete, unfragmented segments from the network layer. The designers of IPv4 use the identification, flag and fragmentation offset fields in the IP datagram header to assist in the reassembly of IP datagrams by the hosts. If the datagram is to be fragmented, the identification number of the original packet is duplicated in the identification field of each of the fragments. This allows the receiver to determine which datagrams should be combined together to form the original datagram. Since IP is an unreliable service, the destination host needs to be able to tell that it has received the last fragment. This is achieved by using the flag field, which is set to 1 if there is a fragment following it, or 0 if it is the last fragment. The fragmentation offset (specified in units of 8 bytes) is also used to determine whether a fragment is missing and to reassemble the fragments in the proper order. The offset specifies where the fragment fits within the original IP datagram payload (after all the headers). Example Consider an original 4000 byte IP datagram (20 byte header, 3980 byte payload), with id=777, which must be forwarded onto a link with an MTU of 1500 bytes. This means three fragments are required, with a maximum fragment payload size of 1480 bytes. The first fragment contains 1480 bytes of the original payload and has id=777, flag=1 and offset=0, since it is the beginning of the data payload. The next fragment is much the same, but its offset is 1480/8 = 185. The third fragment has a payload of 3980 - 2960 = 1020 bytes, id=777, an offset of 2960/8 = 370 and a flag of 0 to specify that it is the last fragment.
Fragment  Length  Payload size  ID   Offset  Flag
1         1500    1480          777  0       1
2         1500    1480          777  185     1
3         1040    1020          777  370     0
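A small sketch of the fragmentation arithmetic worked through above; the function only computes the per-fragment fields rather than building real packets, and a 20-byte header with no options is assumed.

def fragment(total_len, mtu, ident, header_len=20):
    # Return (length, payload size, id, offset, flag) for each fragment of a datagram.
    payload = total_len - header_len
    max_data = (mtu - header_len) // 8 * 8        # per-fragment payload, a multiple of 8 bytes
    frags, sent = [], 0
    while sent < payload:
        chunk = min(max_data, payload - sent)
        more = 1 if sent + chunk < payload else 0  # flag: 1 = more fragments follow, 0 = last
        frags.append((chunk + header_len, chunk, ident, sent // 8, more))
        sent += chunk
    return frags

for f in fragment(4000, 1500, 777):
    print(f)   # (1500, 1480, 777, 0, 1), (1500, 1480, 777, 185, 1), (1040, 1020, 777, 370, 0)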
53 INTERNET CONTROL MESSAGE PROTOCOL ICMP is used by hosts and routers to communicate network-layer information to each other. It is most typically used for error reporting. ICMP is often considered part of IP, but architecturally it lies just above IP, since ICMP messages are carried inside IP datagrams. When a host receives an IP datagram with ICMP specified as the upper layer protocol, it demultiplexes the datagram's contents to ICMP. ICMP messages have a type and a code field. Error messages also contain the header and the first 8 bytes of the IP datagram that caused the ICMP message to be generated in the first place. PING The ping program sends a type 8, code 0 ICMP message to the specified host. Seeing the echo request, the destination host sends back a type 0, code 0 ICMP echo reply. TRACEROUTE This program allows us to trace a route from a host to any host in the world and is implemented using ICMP messages. To determine the names and addresses of each of the routers between the source and destination, Traceroute sends a series of IP datagrams (carrying UDP segments) with an unlikely port number to the destination, each with an incrementally larger TTL. If we consider the datagram sent with TTL=2, the second router will discard it and reply with an ICMP warning message to the source (type 11, code 0). The message includes the router's IP address (from which its name can be looked up), so the source obtains, hop by hop, the addresses of the routers its messages traverse. Traceroute knows that it has reached the destination when it receives the port unreachable ICMP message (type 3, code 3). 53
54 IPV6 The biggest problem with IPv4 is the insufficient number of IP addresses available. Given that we are fast running out of IPv4 addresses, designers have come up with IPv6 to supersede its predecessor. In addition to the much larger address space, IPv6 attempts to: Speed up processing and forwarding by changing the header to a fixed-length 40-byte header with no router fragmentation allowed. Change the header (TOS field) to facilitate QoS. Remove the header checksum. Remove the options field from the base header; options can still be carried using the next header field (which normally identifies the upper layer protocol). Introduce ICMPv6, with additional message types such as Packet Too Big and unrecognised IPv6 options, which also subsumes the functionality of IGMP. 54
55 TRANSITIONING FROM IPV4 TO IPV6 It is not practical to upgrade all routers simultaneously. To ensure that Internet functionality is not disrupted, tunnelling is used to carry IPv6 as a payload in IPv4 datagrams among IPv4 routers. The idea is fairly simple. The IPv6 node on the sending side of the tunnel (the intervening set of IPv4 routers between the IPv6 routers) takes the IPv6 datagram, puts it in the payload field of an IPv4 datagram and addresses that datagram to the IPv6 node on the receiving side of the tunnel. All intervening IPv4 routers in the tunnel route this IPv4 datagram amongst themselves, blissfully unaware that the datagram itself contains a complete IPv6 datagram. The IPv6 node on the receiving side eventually receives the IPv4 datagram destined for it, and extracts the IPv6 datagram so that it can continue to route it. 55
56 Topics: Routing Protocols Routing Algorithms CHAPTER 4: ROUTING 56
57 ROUTING PROTOCOLS So far we have only looked at routing tables and how we can construct them manually. Obviously, static routing can be tedious. Recall the Internet layered structure. The routing protocol sits inside the network layer of the stack and uses the IP protocol to operate. One should note that routing protocols are responsible for producing a routing table, like the one shown in the first figure. In modern router architectures, an optimisation occurs in which the routing table is further condensed into the forwarding table, which contains only the preferred routes a router should use. This reflects the distinction that: Forwarding is responsible for moving data segments from one node to its neighbour Routing is responsible for choosing the desired path a data segment should take ROUTING ALGORITHMS Within an organisation's network, there is the ultimate goal of finding the best path from the source to the destination. The definition of best could be determined by the minimum path cost, which may reflect the delay, the monetary amount used to maintain the link or the congestion level. We model this by creating a graph with nodes that represent the routers and graph edges that represent the physical links. Now that we can model the routing situation, we must choose what type of routing algorithm we want. Routing algorithms are roughly categorised as either distance vector or link state algorithms. Distance Vector Algorithm: Local information only: the router knows its physically connected neighbours and the link costs to them, but does not know the whole view of the network. It operates on neighbour routing table exchanges and the Bellman-Ford computation. Link State Algorithm: Global information: the router knows the complete topology and link costs of the entire network. It operates using 2 components: reliable flooding and Dijkstra's shortest path tree computation. 57
58 DISTANCE VECTOR Protocols such as RIP use the distance vector algorithm. Such an algorithm requires that each node maintain a table of triples <destination, cost, next hop>. This implies that: The process of best path computation is iterative, asynchronous and distributed Directly connected neighbours exchange updates periodically or when triggered Each update is a vector of distances: messages include <destination, next hop, cost> entries On each iterative exchange, the router only updates its local table if it receives a better route (one of lower cost) from one of its neighbours. If however some routes are not refreshed, they are deleted when they time out. As we shall see, the distance vector algorithm presents a situation in which improvements in the best path propagate through the network quickly, but bad news travels slowly. EXAMPLE Consider the network shown. The table at A is initially set as shown. Note that A will receive updates from F, B, C and E. After receiving a distance vector update from C as follows: <A, A, 1> <B, B, 1> <C, -, 0> <D, D, 1> <E, , > <F, D, 3> <G, D, 2> Note that we have assumed that C has been up and running for a while and that only A is the new addition to the network. That is why the route for F still goes through D until A sends an update to C (which we assume it has not yet done). A will update its table to: After receiving an update from F: <A, A, 1> <B, G, 4> <C, G, 3> <D, G, 2> <E, , > <F, -, 0> 58
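A toy sketch of the per-node distance vector update (the Bellman-Ford relaxation described above). The table layout (destination -> (cost, next hop)) and the example values are illustrative; for brevity only the better-route case is handled, and updates that worsen an existing route via the same neighbour are ignored.

INF = float("inf")

def dv_update(my_table, neighbour, neighbour_vector, link_cost):
    # Merge a neighbour's distance vector into our table; return True if anything changed.
    changed = False
    for dest, cost in neighbour_vector.items():
        new_cost = link_cost + cost
        old_cost, _ = my_table.get(dest, (INF, None))
        if new_cost < old_cost:                    # a better route via this neighbour
            my_table[dest] = (new_cost, neighbour)
            changed = True
    return changed

# A's table after hearing C's vector over a cost-1 link (hypothetical costs):
table_A = {"A": (0, None), "C": (1, "C")}
dv_update(table_A, "C", {"A": 1, "B": 1, "C": 0, "D": 1, "F": 3, "G": 2}, link_cost=1)
print(table_A)   # B and D now reachable at cost 2 via C, F at cost 4, G at cost 3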
59 EXAMPLE Suppose that a link fails as shown. The process of recovering from the link failure at A would then be: EXAMPLE Suppose that the link X-Y has a cost decrease. We find that Z will begin to forward packets destined for X via Y within 2 rounds of vector exchanges. EXAMPLE If however the link cost X-Y increases to 14, we come across the count-to-infinity problem, in which looping occurs. Note that when the link cost increases, packets destined from Y to X will now be passed to Z, since Z still thinks it can reach X via Y with a cost of 5. What Z doesn't yet realise is that the link cost has increased. Y also doesn't realise that Z is reaching X through it, since information is exchanged only on the basis of <destination, cost>. We thus find that Y updates its cost to X to 6. After the next update, Z realises that the cost to X is now 6+1=7 and updates its table, since this is still lower than the cost of its direct link to X. The next update makes Y set its cost to X to 8, since it continues to route via Z just to get to X. We thus find a continuous loop of updates until the computed cost finally exceeds the cost of the direct route. 59
60 Note that if the link cost X-Z were set to infinity, the oscillations would continue indefinitely (or for a prolonged period, since infinity is represented as a finite number in a network). A basic solution is to use split horizon or split horizon with poisoned reverse. If the next hop to D is via R then: Split Horizon: do not include the cost to D in the update sent to R Split Horizon with poisoned reverse: include the cost to D in the update sent to R, but set the cost to infinity Although this solution works, it can still take a long time, as indicated by the situation shown below. RIP V2 The specification of the RIP routing protocol (RFC2453) states that: RIP runs as an application level process Updates are sent on UDP port 520 Updates are multicast (to the RIP multicast address) with TTL=1; infinity is represented as 16 hops Distance vectors are only exchanged among neighbours Up to 25 destinations are allowed per RIP update message The update interval is 30 ± 5 seconds, to allow reasonable convergence time and to minimise synchronisation Updates can be triggered, in which case they include only the changed routes Split horizon is mandatory, while poison reverse is optional Routes not refreshed for 180 seconds are timed out 60
61 LINK STATE With link state algorithms such as OSPF, each node learns the complete topology by means of reliable flooding, so that the entire network knows each node's neighbours. A Link State Advertisement (LSA) therefore includes <Node, Links, Sequence number, Age>. Reliable flooding works by periodically sending LSAs (with successively higher sequence numbers) out on all links except the one the LSA arrived on. By using the sequence number, each node can store the most recent LSA and age out previously stored LSAs. DIJKSTRA'S ALGORITHM Dijkstra's algorithm is an iterative link-state routing algorithm, run independently at each node, which requires all nodes to know the full topology and link costs so that each node can work out its own routing table with the least cost paths from itself to all other nodes. Let: c(x, y): the link cost from node x to y D(v): the current value of the cost of the path from the source to destination v p(v): the predecessor node along the path from the source to v N': the set of nodes whose least cost paths are definitively known The algorithm is shown below. It is actually more useful to see how it works in practice. 61
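Since the algorithm listing itself does not reproduce well here, the following is a minimal Python sketch of Dijkstra's shortest path computation, using the c(x, y), D(v), p(v) and N' notation above. The example graph uses the node names from the worked example on the next page, but the link costs are assumed values and may not match the figure exactly.

import heapq

def dijkstra(graph, source):
    # Compute least-cost paths from source; graph[x][y] = c(x, y).
    D = {node: float("inf") for node in graph}    # D(v): current cost estimate
    p = {}                                        # p(v): predecessor on the best path
    D[source] = 0
    N_prime, heap = set(), [(0, source)]
    while heap:
        cost, u = heapq.heappop(heap)
        if u in N_prime:
            continue
        N_prime.add(u)                            # u's least cost path is now definitive
        for v, c_uv in graph[u].items():
            if cost + c_uv < D[v]:                # relax the edge (u, v)
                D[v], p[v] = cost + c_uv, u
                heapq.heappush(heap, (D[v], v))
    return D, p

# Assumed link costs for a 6-node network:
graph = {"u": {"v": 2, "x": 1, "w": 5}, "v": {"u": 2, "x": 2, "w": 3},
         "x": {"u": 1, "v": 2, "w": 3, "y": 1}, "w": {"u": 5, "v": 3, "x": 3, "y": 1, "z": 5},
         "y": {"x": 1, "w": 1, "z": 2}, "z": {"w": 5, "y": 2}}
print(dijkstra(graph, "u"))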
62 EXAMPLE Consider the network with 6 nodes and the link costs shown. Consider node u. Dijkstra's algorithm can be applied at node u so that we can find the shortest path tree from u to all nodes. When we first initialise, we have step 0, in which all nodes are set to a cost of infinity except for u's neighbours. The next step is to choose the minimum cost path from u to one of its neighbours. In this case, the minimum cost path is u-x. We thus add x to N'. At step 1, we update the costs to x's neighbours (noting that we take the minimum of the new cost via x and the previous cost), adding the cost from u to x, which is 1. Thus, the distance to w is 3+1=4 via the predecessor x. The next step is to choose the node not yet in N' with the lowest cost so far, which in this case is y. We then add y to N'. This process continues until all nodes have been added to N'. By looking at each column and noting the predecessor, we can come up with the shortest path tree which has its root at u: OSPF The OSPF standard, which details the link state routing protocol, is given in RFC2328. Its main features include: Use of hello packets to check that links are operational LSAs (with sequence number, age and checksum) reliably flooded over the entire AS o Router LSA: set of nodes o Network LSA: set of links o Summary LSA: inter-area networks o Summary LSA: area-border-routers o External LSA: external to the AS OSPF messages are carried over IP Hierarchical OSPF allows for scaling In addition to those features, OSPF also implements other features not available in RIP, including: Authentication to prevent malicious intrusion Load balancing: equal cost multi-path Extensions to support multicast (MOSPF) and traffic engineering (OSPF-TE) 62
63 HIERARCHICAL OSPF In Hierarchical OSPF, we can partition the network such that there are local areas and a backbone. Suppose that we have local areas denoted as Areas 1, 2 and 3. In this situation, the LSAs in each of those areas do not traverse across the backbone. The advantage of this is that the LSA reliable flooding does not need to propagate through the whole network, which could introduce security and performance issues in converging to a best path. Instead, each node only needs a detailed view of its own area topology, which includes an area border router that handles the routing between the areas. The area border routers, which connect Areas 1, 2 and 3 to the backbone, summarise distances to networks in their own area and advertise them to the other area border routers. The backbone routers also run OSPF within the backbone network and have no direct view of Areas 1, 2 and 3. Finally, the boundary routers connect the whole network to other ASs. LINK STATE VS DISTANCE VECTOR ALGORITHMS
Feature | Distance Vector (e.g. RIP) | Link State (e.g. OSPF)
Messaging | The entire routing table is sent in the messages, but only between neighbours | Small messages which are flooded throughout the network
Robustness | One wrongly configured router can lead to network failure | One wrongly configured router can lead to network failure
Convergence | Multiple iterations, with changes requiring recomputation and retransmission | Messages are flooded and recalculation can be done in one go, resulting in faster convergence
Depending on the situation, one may use a distance vector or a link state algorithm. However, link state algorithms are often preferred due to faster convergence times. 63
64 HIERARCHICAL ROUTING ON THE INTERNET In practice, we prefer a hierarchical model of the Internet because it allows for: Scalability: a flat network would lead to millions of routing table entries and links swamped with routing protocol messages Administrative autonomy: each network administrator may wish to control routing in their own network The solution is to organise routers into autonomous systems (ASs), with each AS consisting of a group of routers that are typically under the same administrative control and running the same routing algorithm, such that the routers have information about each other. We define: Intra-autonomous system routing protocol (or IGP): the routing algorithm running within an autonomous system. Examples include OSPF and RIP, which were mentioned previously. Gateway routers: routers which connect ASs together. The gateway routers run inter-AS routing protocols amongst each other, but must also run intra-AS protocols with the other routers in their AS. Inter-AS routing protocol (or EGP): the protocol used to obtain reachability information from neighbouring ASs and to propagate reachability information to all routers internal to the AS. The Internet uses BGP4 for this. The process of adding an outside-AS destination to a router's forwarding table can thus be summarised as follows: Learn from the inter-AS protocol that subnet x is reachable via multiple gateways Use routing info from the intra-AS protocol to determine the costs of the least cost paths to each of the gateways Choose the gateway used to route packets between ASs, either using hot potato routing or policy-based routing Determine from the forwarding table the interface I that leads to the gateway, and enter (x, I) in the forwarding table INTER-AS ROUTING BGP The Border Gateway Protocol (BGP) provides each AS a means to: Obtain subnet reachability information from neighbouring ASs Propagate the reachability information to all routers internal to the AS Determine good routes to subnets based on the reachability information and on AS policy BGP is a path vector protocol and bears some similarity to the distance vector protocols. Each border gateway advertises to its neighbours (peers) the entire path (sequence of autonomous system numbers) to each destination: Path(X,Z) = X, Y1, Y2, ..., Z. When a gateway router receives route advertisements, it applies its import policy to either accept or decline the use of those paths. 64
65 BGP FUNDAMENTALS In BGP, pairs of routers (BGP peers) exchange routing information over semi-permanent TCP connections (BGP sessions) using port 179. There is typically one such BGP TCP connection for each link that directly connects two routers in two different ASs: an external BGP session (eBGP). There may also be semi-permanent TCP connections between routers within an AS: internal BGP sessions (iBGP). These form a mesh of TCP connections within each AS. Note that there is generally no correlation between BGP sessions and physical links. TCP connections are used due to the necessity of reliability in the exchange of routing information: OPEN: opens a TCP connection to the peer and authenticates the sender UPDATE: advertises a new path (or withdraws an old one) KEEPALIVE: keeps the connection alive in the absence of updates; it is also used to ACK OPEN requests NOTIFICATION: reports errors in a previous message; this message is also used to close the connection Consider the network shown. As mentioned previously, routers 3a, 1c, 1b and 2a operate both iBGP and eBGP since they have been explicitly configured as BGP peers. When AS2 advertises a prefix to AS1, AS2 is promising it will forward any datagrams destined to that prefix towards the destination (prefix). For example, suppose that AS2 advertises 3 subnets and AS3 has a single /24 subnet. Since routers use longest prefix matching when forwarding datagrams, AS3 could advertise its /24 prefix to AS1, while AS2 can advertise to AS1 a single aggregated /22 prefix covering its subnets. Note that, as mentioned in the last section, the route prefixes received from a peer are filtered and selected based on policies before installation in the Routing Information Base. 65
66 PATH ATTRIBUTES & BGP ROUTES In BGP, an autonomous system is identified by its globally unique autonomous system number (ASN). This ensures that routing to destinations can occur correctly. When a router advertises a route across a BGP session, it includes the prefix and BGP attributes: AS-PATH: This attribute contains the ASs through which the advertisement for the prefix has passed. When a prefix is passed into an AS, the AS adds its ASN to the AS-PATH. The AS-PATH allows routers to detect and prevent looping advertisements. NEXT-HOP: the router interface that begins the AS-PATH. This allows the routers within an AS to determine the shortest path to the first hop out of the AS using intra-AS routing. BGP also allows routers to assign preference metrics to routes, and an attribute that indicates how the prefix was inserted into BGP at the origin AS. BGP ATTRIBUTE: AS-PATH Suppose that AS1 advertises a /16 prefix to AS2. The AS-PATH advertised is thus AS1. AS2 then advertises the AS-PATH for that prefix to AS3 as AS2-AS1. Finally, AS3 may readvertise to AS1 the AS-PATH to the same prefix as AS3-AS2-AS1. Obviously, a loop now exists. By using its import policies, AS1 will reject the route advertised by AS3 since it contains AS1 along its path. In some rare cases where AS1's network has been partitioned due to a network failure, AS1 may indeed choose to accept this loop. BGP ATTRIBUTE: MULTI-EXIT DISCRIMINATOR This attribute is used when two ASs connect to each other in more than one place. To allow the routers to choose their routes, the Multi-Exit Discriminator (MED) allows each AS to advertise the degree of preference of each link for reaching a particular prefix. EXAMPLE AS1 and AS2 have 2 BGP sessions. AS2 advertises prefixes of AS3 to AS1 on both BGP sessions. Link A, however, may be advertised with a better MED than link B, such that link A becomes the preferred link. 66
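A tiny sketch of the AS-PATH handling just described: an AS rejects any advertisement whose AS-PATH already contains its own ASN (loop prevention) and prepends its ASN when re-advertising. The ASNs and prefix are made up for illustration.

MY_ASN = 1

def import_route(prefix, as_path):
    # Accept a received route unless our own ASN already appears in its AS-PATH.
    if MY_ASN in as_path:
        return None                        # loop detected: reject the advertisement
    return (prefix, as_path)

def export_route(prefix, as_path):
    # Prepend our ASN before advertising the route to an eBGP peer.
    return (prefix, [MY_ASN] + as_path)

print(import_route("138.16.64.0/22", [3, 2]))     # accepted
print(import_route("138.16.64.0/22", [3, 2, 1]))  # rejected: AS1 is already on the path
print(export_route("138.16.64.0/22", [2]))        # re-advertised with AS-PATH [1, 2]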
67 In some cases, MED is set up such that the path taken by traffic between two destinations differs depending on the direction. This is often the case when ISPs choose to peer with other ISPs in such a way that their agreement requires them to carry the traffic most of the way to the customer, as shown. This is achieved by letting ISP 2 advertise the prefixes of customer 2 with a better MED on link A than on link B. Conversely, ISP 1 will advertise to ISP 2's routers that, to reach customer 1, the MED is better on link B than on link A. Note that this arrangement only works if both ISPs set and accept MED. BGP ATTRIBUTE: LOCAL-PREF This is the most commonly used attribute, since it determines the local preference among received routes. EXAMPLE Suppose AS3 provides a better service than AS2 to AS4. AS4 can configure the local-pref of routes heard from AS3 to be higher (better) than those heard from AS2. Thus if AS1 advertises a /16 prefix to AS2 and AS3, AS4 will receive that same prefix from both AS3 and AS2 but will choose the AS-PATH AS3-AS1, since it has the better local-pref. BGP POLICIES Policies are often complex but allow flexibility and control of inter-AS routing. Policies do not always aim to find the shortest path to the destination. Rather, policies are designed to: Avoid using a competitor's network Avoid transit service, i.e. do not carry any traffic that has neither source nor destination within the AS Let another ISP carry most of the cross-country load Keep the network topology invisible to the subscriber (subscriber connection) 67
68 SUBSCRIBER CONNECTION SINGLE-HOMED SUBSCRIBER Consider a single-homed subscriber. To configure the network, it is easiest to statically define the routes. Obviously, including the customer in the ISP's IGP is too risky, and hence the ISP may choose to: Run an isolated IGP on the R1-R2 link and leak that into BGP Run a single BGP session MULTI-HOMED SUBSCRIBER Let us consider a situation in which a subscriber is multi-homed for reliability and performance. In this case, static routing often does not suffice since we wish to use both links in the best possible way. Consider the case in which a customer multi-homes to the same provider using 2 border gateway routers. It is possible to configure 2 static routes such that nodes connected directly to R2 prefer a path via R2 to the ISP, and those connected to R3 use R3. In this case, the ISP's router will receive the MED from the customer so that it chooses the link via R2 for traffic to one /16 prefix and the link via R3 for the other. If however the customer's routers are connected to two ISP routers, then traffic to the customer via the ISP is steered using local-pref; traffic from the customer to the ISP may require the customer to receive the BGP prefixes from the ISP. In the case of multi-homing to multiple providers, we would ideally want load sharing. If the customer uses two different prefixes, this allows for good load sharing but is poor in terms of aggregation. Suppose however that the customer has a single /24 block and tells ISP2 to advertise it. If ISP3 hears the covering /16 prefix as well as ISP2's /24, traffic from ISP3 tends to travel via ISP2, since the /24 gives the longest prefix match. This is poor for load balancing. A slightly better solution, which load balances to some extent, is to break the /24 block into two /25s, one of which is advertised via ISP2. In this case, half the traffic will still go through ISP1. There is however a subtle problem: if the link between the customer and ISP1 breaks, half the customer's address block will not be reachable, since ISP2 only advertises the routes with the /25 prefix. 68
69 Topics: Multicast Source Based vs Shared Trees Shortest path tree, Reverse path forwarding Minimal spanning, center based trees CHAPTER 5: MULTICAST & MOBILITY 69
70 FUNDAMENTALS OF NETWORK MULTICAST The Internet is fast becoming a medium on which we can watch TV and radio content. We have so far only looked at unicast Internet traffic, which has one source and one destination. Consider an example in which multiple people are watching an Internet TV program simultaneously. It is not sensible for the server to send the same datagrams multiple times to different destinations, since this would dramatically increase bandwidth consumption and limit scalability. This is where multicast comes in. Multicast is the act of sending a datagram to multiple receivers with a single transmit operation at the source. Routers in a network actively participate in multicasting by making copies of packets on the interfaces used to forward towards the multicast receivers. MULTICAST GROUPS A key concept in the Internet multicast service model is the use of indirection. A multicast packet is addressed to a multicast group, identified by a class D address. In order for multicast receivers to receive these packets, the hosts must be connected to a router that joins the multicast group on their behalf. With multicast: Anyone can join the multicast group Anyone can send to the multicast group There is no network layer identification of group members to hosts Recall that multicast requires the infrastructure to support it, namely the routers must be multicast capable. To join a multicast group, there are 2 steps involved: 1. Local: The host informs the local multicast router of its desire to join a particular group using the Internet Group Management Protocol (IGMP). 2. Wide area: The local router, on behalf of the end hosts, interacts with other routers to receive the multicast datagram flow. The routers tend to use the DVMRP, MOSPF or PIM protocols. 70
71 MULTICAST ROUTING The goal in multicast routing is to find a tree (or trees) connecting the routers that have local multicast group members. Note that a network may be connected in a certain way, but not all routers will participate in multicasting; the tree is the set of paths between the routers that do. A tree can be: Source based: a tree whose root is at the sender, with one tree per source. This means that there is a different tree for each sender to its receivers in the multicast group. Consider the blue and green source based tree structures, which have different sources. Shared tree: a single tree structure which is used by all group members. There are different ways in which a multicast tree can be built, depending on the type of tree: Source based: shortest path trees, reverse path forwarding. Group shared: minimal spanning (Steiner) trees, center-based trees. SOURCE BASED: SHORTEST PATH TREE This tree is formed by using Dijkstra's algorithm (we then need to know the whole topology of the network). Since this creates a source based tree, each source will compute the shortest path tree from itself to all receivers. SOURCE BASED: REVERSE PATH FORWARDING Reverse path forwarding relies on the router's knowledge of the unicast shortest path from it back to the sender. Each router in the multicast group then has a simple forwarding behaviour which assists in the formation of the source based tree: if (mcast datagram received on the incoming link on the shortest path back to the source) then flood the datagram onto all outgoing links, else ignore the datagram. 71
72 The result is a source-specific reverse spanning tree. It should be noted that where paths are asymmetric, this method of forming a tree may be a bad choice. EXAMPLE Consider the source S. Since the link on which R1 receives the multicast datagram is on the shortest path back to S, R1 will forward it on all outgoing links. R2 and R4 also receive it on their shortest paths, so they forward it on all outgoing links. Note that R2 will receive R4's forwarded multicast datagram, and R4 will receive R2's; these however will be ignored, since they do not arrive on the shortest unicast path back to the source. The same applies to routers R3 and R6. Suppose however that router R7 had no multicast group members; then the particular subtree connecting R4-R5-R7 is not needed. Hence a prune message is sent upstream by the routers that have no downstream group members. SHARED TREE: STEINER TREE The Steiner (minimal spanning) tree is a minimum cost tree connecting all routers with attached group members. The Steiner tree is not used in practice because: It requires excellent heuristics (even though they do exist) It is computationally complex Information about the entire network is needed It is monolithic: the tree must be recomputed whenever a router needs to join or leave. 72
73 SHARED TREE: CENTER-BASED TREES Center-based trees have a single delivery tree that is shared by all. One router is first identified as the center of the tree. Hence, to join: 1. An edge router acts on behalf of a multicast group receiver and sends a unicast join message addressed to the center router. 2. The join message is processed by intermediate routers and forwarded towards the center. 3. Once the join message reaches an existing tree branch for this center, or arrives at the center itself, the node is grafted onto the tree: the path taken becomes a new branch of the tree for this router. INTERNET MULTICAST ROUTING So far we have looked at general ways of building multicast trees for both source based and shared trees. We now investigate how multicast routers connect together in a sea of unicast routers through the use of tunnelling, and then explore the multicast routing protocols available on the Internet. TUNNELLING An issue arises when some routers are only unicast capable. We must find a way to connect the islands of multicast routers together. This is done by tunnelling: Multicast datagrams are encapsulated inside normal (non-multicast-addressed) datagrams The normal IP datagrams are sent through a tunnel via regular IP unicast routing mechanisms to the receiving multicast router The receiving multicast router unpacks the unicast datagram to get the multicast datagram. 73
74 DVMRP The Distance Vector Multicast Routing Protocol (DVMRP) uses the reverse path forwarding technique to build source-based trees from distance vector routing information. The tree is based on DVMRP's own routing tables, constructed by communication between DVMRP routers, and hence no assumption is made about the underlying unicast routing. The initial datagram to the multicast group is flooded everywhere using RPF, and routers that do not wish to be part of the group send an upstream prune message. Note however that DVMRP uses soft state, meaning that a router periodically forgets which branches have been pruned; multicast data will then flow down the now-unpruned branch again unless it is re-pruned. Routers can also be quickly regrafted to the tree, since it is possible to join at a leaf. PIM: PROTOCOL INDEPENDENT MULTICAST The PIM protocol is independent of any specific underlying unicast routing algorithm. There are two different multicast distribution scenarios: Dense: Group membership by routers is assumed until routers explicitly prune. This is useful where group members are densely packed in close proximity and the bandwidth wastage is acceptable. o Flood-and-prune RPF is used and the underlying unicast protocol provides RPF information for incoming datagrams. Such a tree is constructed in a data-driven manner. Sparse: Multicast datagrams are propagated only where receivers have asked for them. This is useful where bandwidth is costly, the group members are dispersed, and the number of networks with group members is small relative to the number of interconnected networks. o A center-based tree approach is taken. o The router sends a join message to the rendezvous point and, along the way, the intermediate routers update their state and forward the join message. After joining via the rendezvous point, the router can switch to a source-specific tree to increase performance. o All senders in the group send unicast data to the rendezvous point, which distributes the data down the rendezvous-point-rooted tree. o The rendezvous point can also extend the multicast tree upstream to the source. o The rendezvous point can send a stop message if it has no attached receivers. 74
75 MOBILITY Mobility from the network perspective involves the possibility of moving from place to place while still being able to maintain connectivity with the network. The spectrum shows the degree of mobility. In a network setting: Home network: the permanent home of a mobile node Home agent: the entity within the home network that performs the mobility management functions on behalf of the mobile node Foreign network: the network in which the mobile node is currently residing Foreign agent: the entity within the foreign network that helps the mobile node with the mobility management functions Correspondent: the entity wishing to communicate with the mobile node APPROACHES TO MOBILITY One approach to mobility is to let routing handle the job. This is done by letting the routers advertise the permanent addresses of mobile-nodes-in-residence via the usual routing table exchange. The changes in the routing tables indicate where each mobile node is located, and this does not require changes to the end systems. This approach however is not scalable when we have potentially millions of mobile nodes, each requiring a specific routing entry. 75
76 An alternative approach is to push the mobility functionality from the network core to the network edge. By doing so we can choose to use either: Indirect routing: communication from the correspondent to the mobile node goes through the home agent and is then forwarded to the mobile node's location. This requires the foreign agent to create a care-of address that is associated with the permanent address of the mobile node. This whole process of mobility is transparent to the correspondent, because the mobile node effectively has two addresses: the care-of address and the permanent one. Direct routing: the correspondent obtains the foreign address of the mobile node and then communicates directly with the mobile node. Regardless of the approach taken, the registration process (which lets the foreign and home agent know how to behave) is the same: 1. Following the receipt of a foreign agent advertisement, a mobile node sends a mobile IP registration message to the foreign agent. 2. The foreign agent receives the registration message and records the mobile node's permanent IP address. The foreign agent now knows that it should be looking for datagrams containing an encapsulated datagram whose destination address matches the permanent address of the mobile node. 3. The foreign agent contacts the home agent and supplies the care-of address, the home agent address and the permanent address of the mobile node. 4. The home agent receives the registration and checks the authenticity and correctness of the request. The home agent binds the permanent IP address of the mobile node to the care-of address so that tunnelling can occur. The home agent sends a reply containing the home agent address, the mobile node's permanent address, the registration lifetime and the registration ID. 5. The foreign agent receives the reply and forwards it to the mobile node. 76
77 MOBILITY USING INDIRECT ROUTING 1. Suppose a correspondent sends a datagram to the mobile node. It does so by addressing the datagram to the mobile node's permanent address, and the datagram is routed towards the home network. 2. The home agent intercepts the datagram and then forwards it (after encapsulation) to the foreign agent via the care-of address. This is the same notion as tunnelling. 3. The foreign agent then forwards the datagram to the mobile node. 4. The mobile node can then send datagrams to the correspondent by addressing them directly to the correspondent, with the source being the permanent address of the mobile node. The figure shows the whole process of a correspondent sending a datagram to a mobile node with a given care-of address. If a mobile user moves to another network, the mobile node must register with the new foreign agent, which will subsequently contact the home agent and update the care-of address for the mobile node. Packets thus continue to be forwarded to the mobile node at the new care-of address. 77
78 MOBILITY USING DIRECT ROUTING
Direct routing overcomes the inefficiency of triangle routing: the issue of having packets forwarded from the home agent to the foreign agent before reaching the mobile node. Sometimes a more efficient route exists, especially if the correspondent and the mobile node are actually in the same network. In the direct routing approach, a correspondent agent in the correspondent's network first learns the COA of the mobile node by querying the home agent. The correspondent agent then tunnels datagrams directly to the mobile node's COA. This means that the mobility function is not transparent to the correspondent. There are two additional challenges now:
A mobile-user location protocol is needed
There is an issue when the mobile node moves from one foreign network to another, since in direct routing the correspondent agent queries the home agent for the COA only once, at the beginning of the session.
One way of overcoming the issue of moving between foreign networks is to have an anchor foreign agent: the first foreign agent used in the session. The data is always routed to the anchor foreign agent and is subsequently forwarded to the mobile node via the foreign network the mobile node is currently in. 78
79 Topics: TCP UDP Reliable Delivery Flow Control Congestion Control Fairness CHAPTER 6: TRANSPORT LAYER 79
80 TRANSPORT SERVICES & PROTOCOLS
Recall that the transport layer consists of the services and protocols which:
Provide logical communication between application processes running on different hosts
Break application messages into segments and pass them down to the network layer; at the receiver, the segments are reassembled
Provide multiplexing of data between applications through the use of sockets, allowing multiple applications to communicate with different hosts almost simultaneously
Some of the characteristics may include reliable, in-order delivery (TCP) or the lack thereof (UDP). Delay and bandwidth guarantees, however, cannot be provided at this layer. In the case of UDP, sockets are identified by the destination IP address and port number, while in TCP it is the four-tuple: <source IP, source port, destination IP, destination port>.
DEMULTIPLEXING & MULTIPLEXING
We have just seen that sockets can be used to multiplex and demultiplex data from multiple applications on the same host. The issue then becomes how to demultiplex at the receiver side. Demultiplexing is achieved through the use of the two port fields in the transport layer segment header. At the most basic level, the destination port identifies the socket, and more often than not only one application uses a particular socket. 80
81 UDP: CONNECTIONLESS DEMULTIPLEXING
When a host receives a UDP segment, it checks the destination port number in the segment and directs the segment to the socket with that port number. What we find with UDP's two-tuple socket identifier is that datagrams from different source IP addresses and/or source ports still get directed to the same socket. This is shown in the following diagram.
WHY UDP?
A no-frills transport protocol that provides a best-effort service: segments may be lost or delivered out of order, but each segment is handled independently of the others.
Useful in loss-tolerant, rate-sensitive applications where lost segments are preferable to delays, e.g. video conferencing
Connectionless: no time delay associated with handshaking, no connection state
Small segment header
No congestion control: useful for applications that wish to do their own flow control and/or error correction.
CONNECTION-ORIENTED DEMULTIPLEXING (TCP)
As stated before, the TCP socket is a four-tuple:
Source IP
Source port
Destination IP
Destination port
This means that, unlike UDP, TCP segments with different source IPs and/or ports do not correspond to the same socket. This allows the server host to support many concurrent TCP sockets, as is the case with web servers. Note that the connection is not an end-to-end TDM or FDM channel, nor a virtual circuit, since the connection state resides entirely in the two end systems and nowhere in the intermediate network elements. 81
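As a concrete illustration of the two-tuple UDP demultiplexing described above, the following minimal Python sketch binds one UDP socket to a port and shows that any sender reaching that destination port lands in the same socket. The port number and loopback address are arbitrary choices for the example.

import socket

server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
server.bind(("127.0.0.1", 9999))            # every datagram sent to port 9999 is delivered to this one socket

client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
client.sendto(b"hello", ("127.0.0.1", 9999))

data, sender = server.recvfrom(2048)        # sender identifies the source, but the receiving socket is shared
print(data, sender)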
82 WHY TCP?
Point-to-point, connection-oriented transport protocol: one sender, one receiver
Reliable, in-order byte stream
Pipelined, with TCP congestion and flow control: useful when there is enough memory for send and receive buffers
Full duplex
TCP OPERATION
The TCP client establishes a connection with the TCP in the server through a three-way handshake; the first two TCP segments do not carry payload, while the third may carry payload. During this handshake, TCP sets up several variables and their states. After connection setup, the client can pass a stream of data through the socket, and TCP directs this data to the connection's send buffer. From time to time, TCP will grab chunks of data from the send buffer. The size of these payload chunks is limited by the maximum segment size (MSS), which is set based on the largest link layer frame (the maximum transmission unit, MTU) that can be transmitted by the local host, so that a TCP segment will fit into a single link layer frame. TCP pairs each chunk of client data with a TCP header, thereby forming TCP segments. The segments are then passed down to the network layer, where they are encapsulated within network layer IP datagrams. At the other end, TCP segments are placed in the receive buffer before they pass through the socket and are read by the application. 82
83 TCP SEGMENT
SOURCE & DESTINATION PORTS
Used to multiplex/demultiplex data from/to upper layer applications.
SEQUENCE & ACKNOWLEDGEMENT NUMBERS
The sequence and acknowledgement number fields in the TCP header are used to implement reliable data transfer. They are initially set randomly, to minimise the possibility that a segment still present in the network from an earlier, already terminated connection between two hosts is mistaken for a valid segment in a later connection. The sequence number is the byte stream number of the first byte in the segment's data. The acknowledgement number is the sequence number of the next byte expected from the other host; this provides a means of cumulative acknowledgement, whereby all bytes up to (but not including) the acknowledgement number have been received correctly. Note that if B has received bytes 0 through 535 and 900 through 1000, B's segment to A will contain an ACK of 536, since it is still waiting for byte 536 onwards.
RECEIVE WINDOW
Used for flow control.
HEADER LENGTH
Specifies the length of the TCP header in 32-bit words, because the TCP header can be of variable length.
TCP TIME OUTS 83
84 Recall that TCP implements reliable delivery. In some situations, a TCP segment gets lost in the network and never reaches its destination, so the original TCP segment must be resent. The issue is how long the timeout value should be before resending. It should be:
Longer than the RTT, because a shorter value would not allow enough time for the acknowledgement to propagate back and would cause many unnecessary retransmissions.
Not too long, since this would waste time in reacting to segment loss.
To set the timeout value, we need the SampleRTT: the amount of time between a particular segment being sent and an acknowledgement for it being received. Note that TCP never computes the SampleRTT for a retransmitted segment; only for segments which have been transmitted once and are currently unacknowledged. To get an exponentially weighted moving average of the RTT that smooths out the fluctuations, we use the EstimatedRTT. With α = 0.125:
EstimatedRTT = (1 − α)·EstimatedRTT + α·SampleRTT
The deviation, a measure of how much the SampleRTT deviates from the EstimatedRTT, is given by (with β = 0.25):
DevRTT = (1 − β)·DevRTT + β·|SampleRTT − EstimatedRTT|
It would thus make sense for the timeout value to be set around the EstimatedRTT plus some margin for the deviation:
TimeoutInterval = EstimatedRTT + 4·DevRTT
RELIABLE DATA TRANSFER
TCP implements reliable data transfer through the use of:
Cumulative ACKs
A single retransmission timer
There are three main events the TCP sender handles:
1. Data is received from the application above
2. The timer times out
3. An ACK is received 84
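A short Python sketch of the RTT estimation above, using α = 0.125 and β = 0.25; the SampleRTT values fed in are invented for the example.

def update_rtt(estimated_rtt, dev_rtt, sample_rtt, alpha=0.125, beta=0.25):
    # Update the deviation first (using the old estimate), then the EWMA estimate, then the timeout.
    dev_rtt = (1 - beta) * dev_rtt + beta * abs(sample_rtt - estimated_rtt)
    estimated_rtt = (1 - alpha) * estimated_rtt + alpha * sample_rtt
    timeout = estimated_rtt + 4 * dev_rtt
    return estimated_rtt, dev_rtt, timeout

est, dev = 0.100, 0.005                      # seconds; assumed starting values
for sample in (0.110, 0.095, 0.150):         # made-up SampleRTT measurements
    est, dev, timeout = update_rtt(est, dev, sample)
    print(round(est, 4), round(dev, 4), round(timeout, 4))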
85 NextSeqNum = InitialSeqNum
SendBase = InitialSeqNum    /* first unacknowledged byte */

loop (forever) {
    switch(event)

    event: data received from application above
        create TCP segment with sequence number NextSeqNum
        if (timer currently not running)
            start timer
        pass segment to IP
        NextSeqNum = NextSeqNum + length(data)

    event: timer timeout
        retransmit not-yet-acknowledged segment with smallest sequence number
        start timer

    event: ACK received, with ACK field value of y
        if (y > SendBase) {
            SendBase = y
            if (there are currently not-yet-acknowledged segments)
                start timer
        }
        else {
            increment count of duplicate ACKs received for y
            if (count of duplicate ACKs received for y == 3)
                resend segment with sequence number y    /* fast retransmit */
        }
} /* end of loop forever */

Note that the algorithm above shows two triggers for retransmission:
Timeout events: the timer is restarted with the timeout interval set to twice the previous value, and the first unacknowledged segment is resent.
Duplicate ACKs: an ACK that reacknowledges a segment for which the sender has already received an acknowledgement. Since a sender sends a large number of segments back to back, if one segment is lost there will be many back-to-back duplicate ACKs. Thus three duplicate ACKs for the same data indicate that the segment following the segment that has been ACKed three times has been lost, and fast retransmit occurs before that segment's timer expires.
The three situations on the right show:
1. When the ACK is lost, host A has no idea that host B has received the bytes correctly, so the segment with sequence number 92 is retransmitted.
2. The timeout occurs before both of the ACKs have arrived at A. Host A thus resends the first segment with sequence number 92 and restarts the timer. As long as the ACK for the second segment arrives before the new timeout, the second segment will not be retransmitted.
3. Two segments are sent consecutively by A and are acknowledged by B. The first ACK is lost, but since the second ACK arrives before the timeout, no retransmission is done: the cumulative ACK implies that the earlier bytes were also received by B. 85
86 TCP CONNECTION MANAGEMENT
Recall that in the setup process TCP must initialise:
Sequence numbers
Buffers and flow control information (e.g. RcvWindow)
For the client, this is done using the connect() call, while the server does so through accept().
THREE WAY HANDSHAKE
1. The client host sends a TCP SYN segment to the server, specifying the client's initial sequence number; this segment carries no payload.
2. The server host receives the SYN and replies with a SYNACK segment, specifying the server's initial sequence number. The server also allocates buffers.
3. The client receives the SYNACK and replies with an ACK segment, which may carry a payload. At this stage the client allocates buffers and variables for the connection.
A three-way handshake is necessary since both client and server must agree on the initial sequence numbers. Without the third step, the server has no way of knowing whether the client knows its initial sequence number.
TEARDOWN
To close the connection, there are 4 steps:
1. The client sends a TCP FIN control segment to the server and enters the FIN_WAIT_1 state.
2. The server receives the FIN and replies with an ACK. The server closes the connection and sends its own FIN.
3. Upon receiving the ACK, the client enters FIN_WAIT_2. The client then receives the FIN and replies with an ACK. It enters TIME_WAIT and will respond with an ACK to any further FINs received.
4. The server receives the ACK and the connection is closed. 86
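A minimal Python sketch of the application-level view of connection setup: connect() on the client triggers the SYN, and accept() on the server returns once the three-way handshake completes. The port number is an arbitrary choice, and the handshake itself is performed by the kernel rather than by this code.

import socket, threading

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 6000))
srv.listen(1)

def serve():
    conn, addr = srv.accept()                # returns once the three-way handshake has completed
    conn.sendall(b"welcome")
    conn.close()

t = threading.Thread(target=serve)
t.start()

cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
cli.connect(("127.0.0.1", 6000))             # kernel sends SYN, receives SYNACK, sends ACK
print(cli.recv(1024))
cli.close()
t.join()
srv.close()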
87 TCP FLOW CONTROL
TCP flow control is implemented so that the sender won't overflow the receiver's buffer by transmitting too much, too quickly. The operation requires the receiver to advertise the spare room in its buffer, RcvWindow, in its segments:
RcvWindow = RcvBuffer − (LastByteRcvd − LastByteRead)
The sender can thus limit the amount of unacknowledged data to RcvWindow, guaranteeing that there will be no overflow of the receive buffer.
CONGESTION CONTROL
Congestion control is the mechanism which handles the problem of having too many sources sending data too fast for the network to handle, and is fundamentally different from flow control, which addresses the limitations of the receiver. Congestion can lead to:
Lost packets (buffer overflow at routers): since buffers have finite space, when the buffers are full, packets are dropped and retransmission is necessary to compensate for the lost packets.
Long delays (queuing in router buffers): achieving near full link utilisation appears to be good for throughput, but when the queues at routers are nearly full, the queuing delay each packet experiences is substantial.
APPROACHES TOWARDS CONGESTION CONTROL
End to End Congestion Control
No explicit feedback from the network
Congestion is inferred through loss or delay.
Network Assisted Congestion Control
Routers provide feedback to end systems, e.g. using a single bit (SNA, DECbit, TCP/IP ECN, ATM)
Can provide explicit feedback on the rate at which the sender should send
TCP uses end-to-end congestion control since it operates at the transport layer. It enforces:
LastByteSent − LastByteAcked ≤ CWND
The congestion window is dynamic and a function of the perceived network congestion. Thus, it implicitly defines the rate at which the sender should operate:
rate ≈ CWND / RTT bytes/sec
Note that congestion is perceived through a loss event, either a timeout or 3 duplicate ACKs. As noted before, retransmission occurs with the timeout interval doubled and the congestion window reduced. TCP is self-clocking: it uses acknowledgements to pace its transmissions. 87
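A back-of-the-envelope Python sketch of the two quantities above: the advertised receive window and the sending rate implied by the congestion window. All of the numbers are assumed example values.

rcv_buffer = 64 * 1024            # bytes of buffer space at the receiver
last_rcvd  = 48 * 1024            # LastByteRcvd
last_read  = 40 * 1024            # LastByteRead
rcv_window = rcv_buffer - (last_rcvd - last_read)
print("advertised RcvWindow:", rcv_window, "bytes")

cwnd, rtt = 30 * 1460, 0.1        # congestion window (bytes) and RTT (seconds)
print("approximate sending rate:", cwnd / rtt, "bytes/sec")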
88 A lost segment implies congestion, and the TCP sender's rate should be decreased.
An acknowledged segment indicates that the network is delivering the sender's segments to the receiver, and hence the sender's rate can be increased when an ACK arrives for a previously unacknowledged segment.
Bandwidth probing: TCP increases its transmission rate to probe for the rate at which congestion onset begins, backs off from that rate, and then begins probing again to see if the congestion onset rate has changed.
TCP AIMD
TCP congestion control is based on additive increase, multiplicative decrease:
Additive increase: probe for rate by increasing CWND by 1 MSS every RTT in the absence of loss events.
Multiplicative decrease: cut CWND in half after a loss event.
SLOW START & CONGESTION AVOIDANCE
When a TCP connection begins, CWND has a small value of 1 MSS, resulting in a transmission rate of MSS/RTT. In the slow-start state:
The value of CWND begins at 1 MSS and increases by 1 MSS every time a transmitted segment is first acknowledged (an exponential rate of increase). Note that when the first segment is acknowledged, the sender sends out two MSS-sized segments. Once these are acknowledged, the congestion window grows to 4 MSS. Hence, CWND roughly doubles every RTT.
Slow start ends when:
CWND = SSTHRESH: slow start ends and TCP transitions into congestion avoidance mode, increasing the window linearly by 1 MSS every RTT. Note that SSTHRESH is set to ½·CWND, using the congestion window value at the time congestion was last detected.
A loss event is detected by timeout: the TCP sender sets the value of CWND to 1 MSS and begins the slow start process anew. It also sets the value of a second state variable, SSTHRESH, to CWND/2, half the value of the congestion window when congestion was detected.
3 duplicate ACKs occur: TCP exits the slow start state and performs a fast retransmit. Since ACKs have been received, the congestion perceived here is less alarming than a timeout loss event. TCP will halve the value of CWND and record this as SSTHRESH. 88
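The following simplified Python sketch shows CWND (in MSS) growing over successive RTT rounds: roughly doubling during slow start and growing by 1 MSS per RTT in congestion avoidance. The SSTHRESH of 8 MSS is an arbitrary example value, and growth is approximated per RTT rather than per ACK.

cwnd, ssthresh = 1, 8
for rtt_round in range(1, 11):
    print(f"round {rtt_round}: cwnd = {cwnd} MSS")
    if cwnd < ssthresh:
        cwnd *= 2                 # slow start: window roughly doubles every RTT
    else:
        cwnd += 1                 # congestion avoidance: window grows by 1 MSS every RTT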
89 Similarly, congestion avoidance ends when a loss event or 3 duplicate ACKs occur. The behaviour in these events is identical to that in slow start.
When CWND is below the threshold, the sender is in the slow-start phase and the window grows exponentially.
When CWND is above the threshold, the sender is in the congestion-avoidance phase and the window grows linearly.
When a triple duplicate ACK occurs, the threshold is set to CWND/2 and CWND is set to the threshold. The result is that SSTHRESH = CWND_new = ½·CWND_old.
When a timeout occurs, the threshold is set to CWND/2, where CWND is the value at which the loss event occurred, and CWND is set to 1 MSS.
[Figure: congestion window (in segments) per transmission round for TCP Tahoe and TCP Reno]
In the old TCP Tahoe version, CWND was set to 1 MSS even after 3 duplicate ACKs. In TCP Reno, where fast recovery is implemented, 3 duplicate ACKs in the 8th round result in SSTHRESH being set to 6 MSS and CWND being set to 6 MSS.
Example
If the MSS is 1460 B and CWND is 14,600 B (10 segments), by how much will CWND increase for each of the 10 ACKs received for the 10 segments being sent in congestion avoidance mode?
Since 10 segments are transmitted in each RTT, and in each RTT CWND increases by 1 MSS, each ACK will increase CWND by 1/10 of an MSS (146 B). 89
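A quick Python check of the per-ACK increase used in the example: in congestion avoidance each ACK grows CWND by MSS·(MSS/CWND), so the 10 ACKs for the 10 in-flight segments add roughly one full MSS over the RTT.

mss  = 1460
cwnd = 10 * mss                   # 14,600 B, i.e. 10 segments in flight
for _ in range(10):
    cwnd += mss * (mss / cwnd)    # each ACK adds about 1/10 of an MSS (146 B)
print(round(cwnd / mss, 3))       # close to 11 MSS after one RTT's worth of ACKs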
90 TCP THROUGHPUT
TCP throughput is a function of the window size, as shown previously. However, the previous formula ignores the losses that occur. Let W be the window size when loss occurs. At this point the throughput is W/RTT, just like before. But after the loss, the window drops to W/2 and the throughput is then W/(2·RTT). Thus the average throughput is given by:
Average throughput = 0.75·W/RTT
Example
If there are 1500 B segments, a 100 ms RTT and we would like 10 Gbps throughput, calculate the window size required in terms of segments.
10 × 10⁹ bits/s ≈ (W × 1500 × 8 bits) / 0.1 s, which gives W ≈ 125,000,000 B ≈ 83,333 segments.
If we consider throughput in terms of the loss rate L, then:
Throughput = 1.22·MSS / (RTT·√L)
For the above example, the loss rate L would need to be about 2 × 10⁻¹⁰. 90
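The example calculation can be reproduced with a few lines of Python; the figures follow directly from the two formulas above.

target_bps = 10e9                 # desired throughput: 10 Gbps
mss_bytes  = 1500
rtt        = 0.1                  # 100 ms

window_bytes    = target_bps * rtt / 8               # W such that W/RTT gives the target rate
window_segments = window_bytes / mss_bytes
print(round(window_segments))                         # about 83,333 segments

loss_rate = (1.22 * mss_bytes * 8 / (rtt * target_bps)) ** 2   # solve Throughput = 1.22*MSS/(RTT*sqrt(L)) for L
print(loss_rate)                                      # about 2e-10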
91 FAIRNESS
We now investigate the fairness of TCP. Given K TCP sessions sharing the same bottleneck link of bandwidth R, each TCP session should be able to achieve an average rate of R/K.
Consider 2 TCP sessions in congestion avoidance. The 45-degree line is where the TCP sessions share the link bandwidth equally between the 2 connections. Suppose the initial TCP window sizes are as shown in the figure. No loss occurs while the total bandwidth utilisation is less than R, so both connections increase their window size by 1 MSS every RTT. The result is a trajectory parallel to the 45-degree line. When the bandwidth consumption exceeds R, packet loss occurs; suppose this happens at point B. Connections 1 and 2 then decrease their windows by a factor of 2 and arrive at point C. The whole process repeats until the trajectory reaches the 45-degree line. In this idealised scenario, fairness is achieved.
However, in reality many applications use parallel connections between 2 hosts, which means an application can get a larger portion of the bandwidth. For example, consider 9 sessions sharing a link of rate R. If a new application opens 1 TCP connection, then each application gets a rate of R/10. If, however, it opens 11 TCP connections, the new application gets 11/20 of the available bandwidth, which is highly unfair. 91
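The convergence argument above can be sketched numerically. In this small Python simulation, two AIMD flows share a link of rate R; each round they add 1 while the link is not overloaded and halve when a loss occurs. The link rate and starting rates are arbitrary example values.

r = 100.0                          # link capacity (segments per RTT)
x1, x2 = 10.0, 60.0                # deliberately unequal starting rates
for _ in range(200):               # each iteration represents one RTT
    if x1 + x2 > r:
        x1, x2 = x1 / 2, x2 / 2    # loss event: multiplicative decrease for both flows
    else:
        x1, x2 = x1 + 1, x2 + 1    # no loss: additive increase for both flows
print(round(x1, 1), round(x2, 1))  # the two rates end up nearly equal, i.e. a fair split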
92 Topics: HTTP FTP DNS CHAPTER 7: APPLICATION LAYER 92
93 APPLICATION ARCHITECTURE
The client-server model consists of two components:
A server (or server farm) that is always on and has a (set of) permanent IP address(es).
Clients, which communicate with the server, are intermittently connected and may have dynamic IP addresses. Clients do not communicate with other clients.
In pure P2P architectures:
There is no server
Arbitrary end systems communicate directly with each other
Peers are intermittently connected and change IP addresses
Peers run both client and server processes
Hybrid architectures have properties of both:
Napster used a centralised server for file searching and peer registration; file transfer, however, was P2P based.
Instant messaging requires registration with a central server to determine which contacts are online; chatting between two users, however, is done via P2P.
A client process is a program running within a host that initiates communication with another host. A server process is a program that waits to be contacted. Note that two processes on the same host use inter-process communication.
SOCKETS
Recall that sockets are used by the transport layer to multiplex/demultiplex data to the applications. Programs make use of these sockets through APIs which are:
Protocol specific (UDP or TCP)
Able to provide several options to fix parameters
As an application developer, one would need to determine which transport layer protocol to use based on the application's needs:
Data loss: is the application loss-tolerant, or is reliability required?
Timing: is high delay tolerable?
Bandwidth: some applications require a minimum amount of bandwidth to be effective (UDP recommended), while others tend to be elastic (TCP recommended). 93
94 APPLICATION LAYER PROTOCOLS
The application layer protocol defines:
The types of messages exchanged
The syntax of message types
The semantics of the fields, i.e. the meaning of the information in the fields
Rules for when and how processes send and respond to messages
These protocols can be public domain protocols such as HTTP and SMTP, or proprietary protocols such as KaZaA.
WEB & HTTP
A webpage consists of a base HTML file which may include several referenced objects such as:
HTML files
JPEG images
Java/Flash applets
Audio files
Each object is addressable by a URL. HTTP, the hypertext transfer protocol, is the web's application layer protocol. It is based on the client/server model, with clients accessing websites using browsers and web servers responding to requests. HTTP piggybacks on TCP:
The client initiates a connection to the server on port 80
The server accepts the TCP connection from the client
HTTP messages are exchanged between the browser and the web server
The TCP connection is closed
The elegance of HTTP is in its statelessness: servers need not maintain information about past client requests. Instead, state is implemented through the use of sessions and cookies. 94
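To see the protocol in action, the sketch below issues a raw HTTP/1.1 GET over a TCP connection to port 80. It assumes Internet access, and example.com is used purely as a well-known placeholder host.

import socket

s = socket.create_connection(("example.com", 80))
s.sendall(b"GET / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n")
response = b""
while chunk := s.recv(4096):       # read until the server closes the connection
    response += chunk
s.close()
print(response.split(b"\r\n")[0])  # the status line, e.g. b'HTTP/1.1 200 OK'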
95 PERSISTENT VS NON-PERSISTENT
In non-persistent HTTP connections, at most one object is sent over a TCP connection. The old HTTP/1.0 uses non-persistent HTTP. The issue with non-persistent connections is the need to set up a new TCP connection with the server for every additional object that is requested:
Requires 2 RTTs per object: one to initiate the TCP connection, and one for the HTTP request and the first few bytes of the HTTP response to return. Thus the total response time is:
Response time = 2·RTT + file transmission time
The OS must allocate host resources for each TCP connection
Browsers often open parallel TCP connections to fetch referenced objects. Hence more time is spent initiating connections than with persistent HTTP.
Persistent HTTP allows multiple objects to be sent over a single TCP connection between the client and server. By default, HTTP/1.1 uses persistent connections. The server leaves the connection open after sending a response, so subsequent HTTP messages between the same client/server are sent over the same connection. Persistent connections can be used with or without pipelining:
Without pipelining: the client only issues a new request after the previous request has been fulfilled and received entirely. On average it takes 1 RTT for each request-response.
With pipelining: the client sends requests as soon as it encounters a referenced object.
The following figure shows a persistent HTTP connection. 95
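A rough Python comparison of the response-time behaviour above for a page with one base file and 10 referenced objects. The RTT, per-object transmission time and object count are assumed values, and the expressions are the usual back-of-the-envelope approximations rather than exact models.

rtt, transmit, n = 0.1, 0.01, 10

non_persistent       = (1 + n) * (2 * rtt + transmit)             # a fresh connection for every object
persistent_no_pipe   = (2 * rtt + transmit) + n * (rtt + transmit)
persistent_pipelined = (2 * rtt + transmit) + rtt + n * transmit  # all requests sent back to back
print(round(non_persistent, 2), round(persistent_no_pipe, 2), round(persistent_pipelined, 2))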
96 HTTP REQUEST
The HTTP request message is in ASCII format and contains:
The request line: GET, POST, HEAD (asks the server to leave the requested object out of the response), PUT (uploads the file in the entity body to the path specified in the URL field), DELETE
Header lines
A carriage return to indicate the end of the message
HTTP RESPONSE
The HTTP response message contains the version and status code, the header lines and the data. Common status codes:
200 OK
301 Moved Permanently
400 Bad Request
404 Not Found
505 HTTP Version Not Supported
COOKIES: USER-SERVER STATE
Cookies are used to save the user-server state on the client. Cookies consist of 4 components:
A cookie header line in the HTTP response message
A cookie header line in the HTTP request message
A cookies file kept on the user's host and managed by the browser
A back-end database at the web site 96
97 The first time a user accesses a website, the site creates a unique ID and creates an entry in the back-end database for that ID. The web browser then sends the cookie back unchanged each time a request is made. Cookies are commonly used for:
Authorisation
Shopping carts
Recommendations
User session state
Spyware
WEB CACHE PROXY SERVER
In most situations, many users tend to access the same sites on a particular network. It would thus make sense for a proxy server to handle the requests for all the users on that network, such that a single request need only be made to the actual web server, reducing access link utilisation and minimising delay. A proxy server can either be transparent, or it can be explicitly set in the browser such that all HTTP requests are first sent to the proxy server.
If the requested object is in the cache, the proxy server returns the object.
If the requested object is not already in the cache, the proxy server requests the object from the origin server, returns the object to the client and caches it for future requests.
CONDITIONAL GET
Another method of reducing access link utilisation is the conditional GET, in which an object is not sent if the cache already has an up-to-date version. This is achieved by specifying the If-Modified-Since: <date> header in the HTTP request. If the response is HTTP/1.0 304 Not Modified, then the object does not need to be fetched again. 97
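A minimal conditional GET using Python's standard library; example.com and the date are placeholders, and Internet access is assumed.

import http.client

conn = http.client.HTTPConnection("example.com", 80)
conn.request("GET", "/", headers={"If-Modified-Since": "Sat, 01 Jan 2022 00:00:00 GMT"})
resp = conn.getresponse()
print(resp.status, resp.reason)    # 304 Not Modified when the cached copy is still up to date
conn.close()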
98 FTP
The file transfer protocol is used to transfer files to/from remote hosts. It is again based on the client-server model and is stateful, because it retains information about the current directory the user is in and earlier authentication. FTP servers typically run on port 21. The operation is quite intuitive:
1. The FTP client contacts the FTP server on port 21, using TCP as the transport layer protocol
2. The client obtains authorisation over the control connection on port 21
3. The client browses the remote directory by sending commands over the control connection
4. When the server receives a command for a file transfer, the server opens a separate TCP data connection to the client. This is done so that the control connection can still be used to issue commands.
5. After transferring one file, the server closes the TCP data connection.
EMAIL
There are three major components to email:
User agent: the mail reader that provides the ability to compose, edit and read messages.
Mail servers: provide a mailbox for incoming messages for users and manage the message queue for outgoing mail messages
SMTP (simple mail transfer protocol): exchanges email between mail servers
SMTP
Like FTP and HTTP, SMTP uses persistent TCP to reliably transfer email messages from the sender's mail server to the recipient's mail server on port 25. There are three phases during the transfer of messages:
Handshaking: the commands and responses are all in ASCII
Transfer of messages: messages must be in 7-bit ASCII. Attachments are MIME encoded. The end of a message is denoted with a "." on a line of its own.
Closure 98
99 MAIL ACCESS PROTOCOLS
To actually retrieve emails from the mail server we can use:
POP (Post Office Protocol): the client gains authorisation, downloads the messages and deletes them from the server. This means messages cannot be re-read later from the server. POP is stateless across sessions.
IMAP (Internet Mail Access Protocol): offers more features than POP and allows the manipulation of stored messages on the server. IMAP keeps user state across sessions, including the names of folders and the mappings between message IDs and folder names.
Web-based email: the user agent is an ordinary web browser and the mailbox is accessed over HTTP. 99
100 DNS
The Domain Name System allows Internet hosts to be assigned a name that maps to an IP address or a range of IP addresses. The system operates using:
13 clusters of root DNS servers at the top of the hierarchy.
UDP as the transport layer protocol underneath an application layer protocol that is used to resolve names to IP addresses.
DNS provides:
Hostname to IP address translation
Host aliasing
Mail server aliasing
Load distribution
Underneath the root DNS servers are the Top-Level-Domain (TLD) servers, each of which is responsible for one of the TLD extensions such as com, org, net, edu, uk, au etc. Authoritative DNS servers are servers that provide authoritative hostname-to-IP mappings for an organisation's servers. For example, yahoo.com has a set of authoritative DNS servers that map its hostnames to IP addresses. Local name servers do not strictly belong to the hierarchy. By default each ISP runs a nameserver which acts as a proxy for host computers and forwards queries into the hierarchy.
OPERATION
1. A host computer generally queries a local nameserver
2. The local DNS server (if it cannot resolve the name without assistance) will act on behalf of the host computer and query one of the root DNS servers to find the TLD server
3. The local DNS server then queries the TLD server to get the IP address of the nameserver for the domain name
RECURSIVE QUERIES
What we saw before was an iterated query, characterised by a response which may be the complete answer or a referral to another server that may know the answer. In a recursive query, the burden of name resolution is put on the contacted name server, and the complete hostname-to-IP mapping is returned to the requester. 100
101 DNS CACHING & UPDATING RECORDS
Once any name server learns a hostname-to-IP mapping, it caches the mapping. Cache entries time out after some period. To lessen the burden on the root name servers, the TLD server addresses are typically cached in local name servers.
DNS RECORDS
DNS records are distributed across many nameservers, which store the mappings in resource records formatted as:
(name, value, type, TTL)
Common type values include:
A: the name is a hostname and the value is an IP address
NS: the name is a domain name and the value is the hostname of the authoritative name server for this domain
CNAME: the name is an alias for some canonical name, which is given in the value
MX: the value is the name of the mail server associated with name
DNS MESSAGES
The message header contains:
A 16-bit identifier for each query
Flags: determine whether it is a query or (authoritative) reply, and whether recursion is desired or available
Questions: contain the name and type fields for the query
Answers: contain RRs in response to the query
Authority: contains records for authoritative servers
INSERTING RECORDS
To create a domain name, you register with a registrar, which will subsequently insert two RRs into the TLD server: an NS record and an A record for the nameserver. 101
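The resource-record format and a simple lookup can be illustrated with the Python sketch below. The records are illustrative example values, and the final lookup simply asks the operating system's stub resolver (which in turn uses the local name server), so it assumes network access.

import socket

records = [
    ("example.com",     "93.184.216.34",    "A",     3600),
    ("example.com",     "ns1.example.com",  "NS",    86400),
    ("www.example.com", "example.com",      "CNAME", 3600),
    ("example.com",     "mail.example.com", "MX",    3600),
]
for name, value, rtype, ttl in records:
    print(f"{name:17} {rtype:6} {value:18} TTL={ttl}")

addresses = {info[4][0] for info in socket.getaddrinfo("example.com", None)}
print(addresses)                   # IP address(es) returned by the resolver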