Controlling TCP Incast Congestion in Data Centre Networks

Akintomide Adesanmi and Lotfi Mhamdi
School of Electronic and Electrical Engineering, University of Leeds, Leeds, UK

Abstract: The rapid increase in data centre communication in recent years has led to a wave of interest in the problems that affect communication in the data centre environment. Many data centre applications rely on TCP/IP for reliable data transport, which, while successful for many Internet applications, does not perform seamlessly in the data centre environment. One of these problems arises in data centre setups where many servers communicate simultaneously, and effectively in parallel, with one client through a single switch. In this scenario, the servers can experience a large drop in throughput due to severe packet loss, a phenomenon known as Incast congestion. In this paper, we investigate the Incast problem and introduce a variant of TCP, namely M21TCP-A, built on the idea that the switch can inform each parallel sender (server) of how many other senders are communicating simultaneously. Each sender can then limit its congestion window based on this feedback from the switch. M21TCP-A is evaluated against RED, ECN and DCTCP, which are the most successful congestion control proposals for mitigating Incast. Performance results show that M21TCP-A mitigates Incast thoroughly and outperforms previous proposals.

Index Terms: Congestion, TCP, Incast.

I. INTRODUCTION

In recent years, data centres have become instrumental in driving and supporting the new era of Internet cloud computing. Modern data centres host a variety of services and applications such as web search, scientific computing, social networks, distributed file systems, etc. [1]. These data centres usually run TCP/IP over an Ethernet network, since TCP is low cost and easy to use [2]. Over the years, TCP has succeeded in meeting most requirements of Internet-based communication because of its assurance of reliable delivery and its congestion control mechanisms. However, recent research has shown that it falls short in the unique data centre environment, with its advanced network topologies, unique workloads and high throughput requirements [3].

Many data centres run soft real-time applications that employ a Partition/Aggregate pattern; these have been used for applications such as e-commerce, web search, retail, etc. As shown in Figure 1, this pattern usually consists of a multi-layered hierarchical structure where the deadline of computations and responses on one layer affects computations and responses on higher layers. In a simple two-layered structure, for instance, aggregators on the higher layer assign tasks to the workers on the lower layer [4]. When each worker completes a task, it sends the results back to the aggregator (its parent node), which compiles these results into a final response. Latency targets drive the performance of multi-layered partition/aggregate structures: the targets, obtained from customer surveys [5], determine the deadlines of each layer in the algorithm tree. If child nodes (workers) miss their deadlines, their parent nodes (aggregators) send out incomplete responses, leading to lower quality results and thus lower revenue. Therefore, the slowest sending child node limits each parent node.
Partition/Aggregate workflow patterns are just one of the many-to-one communication workload patterns affected by a communication problem usually found in data centres, called Incast congestion. Incast congestion is a catastrophic loss in throughput that occurs when the number of servers sending data increases beyond the ability of an Ethernet switch to buffer packets. It occurs when multiple servers communicate through an Ethernet switch and the limited buffer is overwhelmed by a concurrent flood of highly bursty traffic from the parallel senders, leading to severe packet loss and consequently one or more TCP timeouts. Parallel senders, in this case, are multiple servers that use the same Ethernet switch or router. These timeouts impose hundreds of milliseconds of delay, which almost guarantees that a server (a worker in the partition/aggregate pattern) misses its deadline and that subsequent data is not received in time [6]. Timeouts have been shown to reduce throughput by up to 90% of the link capacity [7].

Considerable research effort has been devoted to addressing the Incast problem. The proposals can be divided into two main categories. The first extends traditional TCP [7], [8]; these approaches inherit TCP's properties and fall short of overcoming the Incast problem. The second category uses rate control and Active Queue Management (AQM) mechanisms, and includes DCTCP [9], D²TCP [10], D³ [11] and pFabric [12]. While these techniques improve data centre transport in some respects, they fail in others, especially with respect to performance (throughput and scalability) and implementation cost.

This article introduces M21TCP-A, a novel approach to solving the Incast problem, in which the router informs each sender of the total number of parallel senders by setting a "number of senders" TCP field in every packet that passes through it. The protocol leverages the idea of router-based flow
and rate control proposals such as pFabric [12] and D³ [11]. It targets the root cause of Incast: buffer overflow due to bursty traffic. The switch allocates a sender count to each flow using an additional field in the TCP or IP header. A maximum congestion window is then calculated at the sending server, such that even if every sender transmits packets concurrently, the switch buffer still has enough memory to handle all of them. M21TCP-A is compatible with any other congestion control algorithm, as the senders only enforce a maximum that the congestion window must not exceed.

The remainder of this article is structured as follows. Section II introduces the Incast problem and characterizes the data centre traffic workload. Section III briefly discusses relevant existing congestion control algorithms for data centres. In Section IV, we introduce the M21TCP-A congestion control algorithm and discuss its properties. Section V presents a performance study of our algorithm with a comparison to existing schemes. Finally, Section VI concludes the paper.

Fig. 1. The Partition/Aggregate pattern.

II. THE TCP INCAST PROBLEM

TCP Incast congestion occurs in barrier-synchronized many-to-one communication patterns, such as the Partition/Aggregate pattern and cluster-based storage workloads, where effectively parallel concurrent senders communicate with a client through a bottleneck link. Figure 2 depicts an example of this scenario.

Fig. 2. The classical Incast scenario: multiple servers communicating with a single client through a bottleneck link.

Many flows running simultaneously can overwhelm a switch by exhausting its memory, leading to severe packet loss. When Incast occurs, a client may observe a TCP throughput drop of one or two orders of magnitude below its link capacity when packets overfill the buffers on the client port of the switch [7]. So far, there is no widely accepted solution to Incast. Application-specific solutions such as that discussed in [13] are tedious because each application must be built and configured according to its own needs and the capabilities of the network. To better describe and analyse the Incast problem, it is worth going through an example data centre environment with representative traffic workloads, described next.

A. Workload characterization

The workload used in this paper is inspired by distributed cluster-based storage systems and by bulk block transfers in batch processing tasks like MapReduce [14]. In these environments, many servers/senders communicate with a single client through a switch (the classic scenario under which Incast occurs). Each server stores a part of a data block, usually referred to as a Server Request Unit (SRU). The client requests a data block by sending request packets to all the servers simultaneously, and the servers reply with their SRUs, producing a simultaneous many-to-one communication pattern. More importantly, the client can only request the next block after it has received all the data for the previously requested block; this is called a barrier-synchronized workflow pattern. The workload is a fixed fragment workload, which means that when the number of senders increases, the size of each SRU remains constant. Therefore, given n servers, if each server sends 256KB of data as shown in Figure 3, the total block size is n × SRU.
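To make the barrier-synchronized, fixed-fragment workload concrete, here is a minimal Python sketch of the client's request loop. The names (request_sru, fetch_block, SRU_SIZE) are illustrative, not from any real storage system, and the stub stands in for an actual network fetch.

```python
# Sketch of a barrier-synchronized, fixed-fragment client.
from concurrent.futures import ThreadPoolExecutor

SRU_SIZE = 256 * 1024          # fixed fragment: 256KB per server (Fig. 3)

def request_sru(server: int, size: int) -> bytes:
    # Placeholder for a network fetch of one Server Request Unit.
    return b"\x00" * size

def fetch_block(num_servers: int) -> bytes:
    # All SRUs for a block are requested simultaneously...
    with ThreadPoolExecutor(max_workers=num_servers) as pool:
        futures = [pool.submit(request_sru, s, SRU_SIZE)
                   for s in range(num_servers)]
        # ...and the barrier: the client must receive *every* SRU
        # before the next block can be requested.
        return b"".join(f.result() for f in futures)

if __name__ == "__main__":
    n = 16
    block = fetch_block(n)       # total block size grows as n * SRU
    assert len(block) == n * SRU_SIZE
```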
It should be noted that in some applications that run partition/aggregate patterns, like those studied in [12], the client has a deadline and sends all the data received before that deadline to its parent node, whether or not it has received all the data it requested. The effect of Incast in such cases is a decreased quality of results. Despite this, we believe that the workload presented here allows a thorough examination of Incast scenarios. The default network parameters used are typical of data centre communications; they were obtained from system administrators and switch specifications, and are detailed in [2], [3].

B. Incast analysis

Figure 4 shows the result of simulating Incast with the default parameters of Figure 3. This plot is consistent with previously obtained Incast patterns [2], [3], [9], [15]. At low server numbers, the total throughput of the whole system is close to the bandwidth of the bottleneck link, i.e. 1Gbps. However, the throughput collapses to less than 400Mbps at 16 servers, and at greater server numbers it is even less than 200Mbps. This behaviour is the signature of Incast congestion.
Fig. 3. Exemplar network communication scenario parameters:

    SRU size                         256KB
    Maximum Segment Size             576 bytes
    Link bandwidth                   1 Gbps
    Link delay                       25 µs
    TCP variant                      NewReno
    Device transmit buffer size      128KB
    Retransmission Time Out (RTO)    200 ms
    Switch buffer size               64KB
    Limited Transmit                 disabled
    Switch queue                     Droptail

Fig. 4. The total throughput of multiple barrier-synchronized connections vs. the number of senders, under a fixed block workload.

Fig. 5. The total throughput of multiple barrier-synchronized connections vs. the number of senders, under a fixed block workload, for increasing switch buffer sizes.

Figure 5 shows that increasing the buffer size, which is equivalent to increasing the switch's memory, improves the performance of the system and delays the onset of Incast: approximately, doubling the buffer size doubles the number of servers at which Incast sets in. One problem with increasing buffer sizes is cost; for instance, the E1200 switch, with a 1024KB buffer, costs $500,000 [7].

TCP timeouts are usually responsible for Incast congestion. When one or more servers experience a timeout as a result of severe packet loss at the queue, the other servers may complete their transfers but do not receive the next request until the timeout(s) expire and all the servers complete their transfers. The bottleneck link therefore remains idle or underutilised for extended periods of time.

III. EXISTING CONGESTION CONTROL ALGORITHMS

As stated earlier, while there have been many proposals for Incast and congestion control in data centres, our focus is on the few most relevant efforts. TCP's state-of-the-art congestion control algorithms assume that the network is a black box: for congestion avoidance, the end hosts gradually probe the network by increasing the congestion window, decreasing it only when a packet is lost (the feedback). This method of congestion avoidance cannot cope with the unique requirements of data centres that run high-throughput, latency-sensitive applications. Previous attempts to solve this problem involve Active Queue Management (AQM) at the switch, Explicit Congestion Notification (ECN) and other proprietary congestion control algorithms.

Random Early Detection (RED) [16] is an AQM scheme which aims to reduce the number of packets dropped by the router, provide lower delay by slowing queue build-up, and ensure that buffer space remains available for incoming packets. RED drops packets probabilistically: the probability that the router drops a packet increases with the estimated average queue size, i.e. the switch is more likely to drop a packet if its transmit buffer was recently full and less likely if it was recently empty. The RED algorithm involves two main functions:

- Estimating the average queue length, using the exponentially weighted moving average described in [17].
- Dropping packets based on the average queue length and two configurable parameters, minth (minimum threshold) and maxth (maximum threshold). No packet is dropped if the average queue size is below the minimum threshold, and every packet is dropped if it is above the maximum threshold; in between, drop decisions are made probabilistically.

Explicit Congestion Notification refers to a router-side action where the switch signals congestion by marking packets that exceed a threshold instead of dropping them. ECN is usually used in conjunction with AQM schemes like RED, marking packets whenever RED (or another scheme) would drop them, as in the sketch below.
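As a rough illustration of the decision logic just described, the following Python sketch combines RED's averaging and thresholds with ECN's mark-instead-of-drop rule. MAX_P and W_Q are assumed values, and the linear probability ramp omits RED's count-based correction, so treat this as a simplified sketch rather than a faithful RED implementation.

```python
import random

# RED parameters (the threshold values reappear in Section V's simulations).
MIN_TH, MAX_TH = 15, 25     # thresholds, in packets
MAX_P = 0.1                 # drop probability at max_th (assumed value)
W_Q = 0.002                 # EWMA weight for the average (assumed value)

avg_queue = 0.0

def on_packet_arrival(queue_len: int, ecn_capable: bool) -> str:
    """Return 'enqueue', 'mark' (ECN) or 'drop' for an arriving packet."""
    global avg_queue
    # 1) Estimate the average queue length with an EWMA [17].
    avg_queue = (1 - W_Q) * avg_queue + W_Q * queue_len

    # 2) Decide based on the average and the two thresholds.
    if avg_queue < MIN_TH:
        return "enqueue"                 # no congestion
    if avg_queue >= MAX_TH:
        action = "drop"                  # persistent congestion
    else:
        # Drop probability ramps up linearly between the thresholds.
        p = MAX_P * (avg_queue - MIN_TH) / (MAX_TH - MIN_TH)
        action = "drop" if random.random() < p else "enqueue"

    # With ECN, packets that RED would drop are marked instead.
    if action == "drop" and ecn_capable:
        return "mark"
    return action
```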
The IP header has two ECN bits, giving four code points. Three can be set by the sending server to indicate whether or not a flow is ECN-capable, while the fourth (the CE code point) is set by the switch to indicate congestion. On receipt of a packet with the CE code point set, the receiver encodes an ECN ECHO in the TCP header of the acknowledgement. The server then responds to the ECHO as it would to a packet loss, halving the congestion window [18].
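For background, the four code points formed by the two ECN bits (per RFC 3168, cited here as [18]) can be written out as follows; router_mark is an illustrative helper, not part of M21TCP-A.

```python
# The two ECN bits in the IP header yield four code points (RFC 3168 [18]).
NOT_ECT = 0b00   # sender: flow is not ECN-capable
ECT_1   = 0b01   # sender: ECN-Capable Transport (1)
ECT_0   = 0b10   # sender: ECN-Capable Transport (0)
CE      = 0b11   # switch: Congestion Experienced

def router_mark(ecn_field: int, congested: bool) -> int:
    """Mark instead of drop, but only for ECN-capable packets."""
    if congested and ecn_field in (ECT_0, ECT_1):
        return CE
    return ecn_field
```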
Data Centre TCP (DCTCP) is a congestion control algorithm designed to achieve high burst tolerance and high throughput with commodity shallow-buffered switches [9]. It uses ECN with a proprietary marking scheme similar to RED. The key contribution of DCTCP is deriving multi-bit feedback from the single-bit ECN marks. The DCTCP AQM scheme and RED have two main differences:

- DCTCP has only one marking threshold, K. Above this threshold, an arriving packet is marked with the CE code point, sending an indication of congestion.
- Marking is based on the current queue length, not the average queue length.

Therefore, a RED queue can easily be repurposed for DCTCP by setting the maximum and minimum thresholds to the same value and marking packets based on the instantaneous queue length, as sketched below.
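Combining the single threshold with the sender-side reaction described in [9], a simplified sketch of DCTCP's marking and window adjustment might look like the following. The function names are assumptions, and K = 20 and g = 0.16 are the values used later in Section V.

```python
K = 20      # DCTCP marking threshold, in packets (Section V)
G = 0.16    # EWMA gain g for the fraction of marked packets (Section V)

def dctcp_mark(queue_len: int) -> bool:
    # Switch side: unlike RED, mark on the *instantaneous* queue length,
    # with a single threshold K (min_th == max_th == K).
    return queue_len > K

def dctcp_update_alpha(alpha: float, marked: int, total: int) -> float:
    # Sender side: the multi-bit feedback is a running estimate alpha of
    # the fraction F of packets marked in the last window of data.
    f = marked / total if total else 0.0
    return (1 - G) * alpha + G * f

def dctcp_cut_cwnd(cwnd: float, alpha: float) -> float:
    # Cut the window in proportion to alpha: a full halving happens only
    # when every packet was marked, rather than on any single mark.
    return cwnd * (1 - alpha / 2)
```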
IV. THE M21TCP ALGORITHM

It has been established that the main cause of Incast congestion is TCP timeouts caused by severe packet loss at the switch's transmission buffer. This severe packet loss occurs during highly bursty transmissions that overflow the switch buffer. In some next-generation data centre congestion control schemes, the routers are actively involved in preventing Incast [19], [11], [12]. M21TCP-A leverages router support to inform each parallel server of the total number of parallel servers communicating through that router. Given that each server knows the router's buffer size, it can calculate a maximum congestion window above which it must not send. The maximum congestion window is calculated such that even if packets from every sender reach the switch simultaneously, the switch buffer will not overflow. M21TCP-A therefore ensures that TCP senders do not exceed a sending rate that could cause a buffer overflow, by encoding the number of concurrently transmitting senders in each packet's header. As with an ECN mark, a packet with the encoded information traverses the routers along the path to the receiver, and the encoded information is returned by the receiver (the client) to the senders in ACK packets. Each router along the path encodes a new value if and only if the value it intends to set is greater than the value already encoded in the header. The M21TCP-A algorithm has three main components (a combined code sketch is given at the end of this section):

1) Router/Switch Operation: A router that supports M21TCP-A allocates a sender count to each flow by counting the number of flows currently traversing the interface. This sender number is encoded in an additional IP or TCP field and is valid for the next RTT. To perform this function, the router must track the number of flows traversing the interface. The M21TCP-A router does this by keeping a list of the distinct flows in its memory and checking for new flows by comparing incoming packets against that list. If a packet's flow parameters are already in the list, the list is unchanged; otherwise, they are added. The flow parameters maintained in the list form the familiar TCP 5-tuple:

- Source address
- Destination address
- Source port
- Destination port
- Protocol name

When multiple switches operate between end hosts, a router may overwrite the number of senders in the packet if and only if the number it intends to set is greater than the number already set. Thus, a packet arriving at the receiver contains the maximum flow/sender count seen at any router on the path the packet traversed.

2) Receiver Operation: The M21TCP-A receiver conveys the flow/sender number received in a packet back to the sender by encoding it in an additional IP or TCP header field of the ACK packet. In the case of delayed ACKs, the flow/sender value from the latest received packet is used in the ACK.

3) Sender Operation: The first difference between a normal TCP sender and the M21TCP-A sender is that the M21TCP-A sender must support an additional 32-bit TCP field or an additional IP field (both are used to obtain the results in this paper). The sender calculates a maximum congestion window from the received value using Equation 1:

    Max Wind = (B − (MHS × N)) / N    (1)

where B is the buffer size; the constant MHS is the minimum header size, representing the combined minimum IP and TCP header size, usually 42 bytes; and N is the number of senders determined by the router. The sender uses this value to limit its congestion window. The operation does not change TCP's congestion control algorithm itself; it simply caps the congestion window with a maximum congestion window assignment, as Equation 2 illustrates:

    cwnd = min{cwnd + 1, maxcwnd}    (2)

It is worth noting that M21TCP-A is designed for single-path TCP. For topologies with multiple paths between end hosts [20], it relies on Equal Cost MultiPath (ECMP), Valiant Load Balancing (VLB) and other existing mechanisms used with TCP to ensure that a given flow follows a single path. For flows that use multiple paths, additional mechanisms will be required for M21TCP-A to function properly. However, this is hardly necessary here, because the main focus of this paper is the classical Incast scenario, with one switch between the sending servers and the client.
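Putting the three components together, here is a minimal Python sketch of how the router-side flow counting and the sender-side cap of Equations 1 and 2 might fit together. FlowKey, M21RouterPort and the helper names are illustrative assumptions rather than the authors' implementation, and a real router would also need to age out finished flows.

```python
from typing import NamedTuple, Set

class FlowKey(NamedTuple):
    # The 5-tuple that identifies a flow at the router.
    src_addr: str
    dst_addr: str
    src_port: int
    dst_port: int
    protocol: str

class M21RouterPort:
    """Tracks flows on one interface and stamps the sender count."""
    def __init__(self) -> None:
        self.flows: Set[FlowKey] = set()

    def on_packet(self, key: FlowKey, senders_in_header: int) -> int:
        self.flows.add(key)              # unseen flows grow the list
        n = len(self.flows)
        # Overwrite only if this router's count exceeds the one already
        # set, so the receiver ends up with the path maximum.
        return max(senders_in_header, n)

B = 64 * 1024    # switch buffer size in bytes (Fig. 3)
MHS = 42         # combined minimum IP + TCP header size, in bytes

def max_cwnd(n_senders: int) -> int:
    # Equation (1): each sender gets an equal share of the buffer, less
    # header overhead, so N simultaneous bursts still fit.
    return (B - MHS * n_senders) // n_senders

def limit_cwnd(cwnd: int, n_senders: int) -> int:
    # Equation (2): normal window growth, capped by the advertised max.
    return min(cwnd + 1, max_cwnd(n_senders))
```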
Fig. 6. The total throughput of Droptail, RED, ECNTCP, DCTCP and M21TCP-A in the Incast scenario under the fixed fragment workload (FFW).

V. PERFORMANCE RESULTS

The performance of RED, ECN with RED (ECNTCP), and DCTCP is evaluated and compared to that of M21TCP-A in the classical Incast scenario using the NS-3 simulator. RED is simulated with minth = 15 and maxth = 25; for DCTCP, K is set to 20 and g to 0.16, as suggested by Alizadeh et al. [9]. We start by evaluating the performance of Droptail, ECNTCP and DCTCP under a fixed fragment workload. The fixed fragment SRU size is 256KB, so from the workload characterization the total block size is n × SRU when the number of servers is n. The two metrics of interest are the throughput and latency of the flows.

Figure 6 shows the throughput of each of the compared solutions. From the plot, RED performs worse than Droptail, ECN performs better than Droptail, and DCTCP performs similarly to ECN, delaying the onset of Incast substantially but not eliminating it. RED is a fair algorithm that was not designed for the uniqueness of highly congested data centre traffic. It is not suited to the Incast scenario because it drops packets prematurely: even at lower traffic volumes there is a probability that a TCP flow will experience timeouts, so fewer senders can cause Incast. In addition, since packet drops occur probabilistically, server timeouts may be staggered such that the total effective timeout from packet drops is much greater than RTOmin. In Droptail queues, packets are dropped at close intervals and are more likely to be from the same server; timeouts therefore occur close together, leading to a smaller effective total timeout than with RED.

ECN and DCTCP show great improvements over Droptail. They also achieve roughly the same throughput before Incast occurs (circa 940Mbps). Since TCP aggressively drops the window size on receipt of an ECN ECHO, some [9] claim that this leads to low link utilisation because of a mismatch between the input rate and the link capacity. The high throughput in Figure 6 shows that this is not the case under fixed fragment DCN workloads; ECN actually causes short flows to complete quickly [21]. Nevertheless, algorithms like RED with ECN that operate on the queue length struggle in situations with low statistical multiplexing where the queue length oscillates rapidly [9]. This causes queue build-up with little room to absorb microbursts, which is why Incast still occurs at 32 servers with ECN.

Fig. 7. The completion time of Droptail, RED, ECNTCP, DCTCP and M21TCP-A in the Incast scenario under the FFW.

Figures 6 and 7 show that for fixed fragment workloads, DCTCP performs slightly better than ECN: Incast occurs at around 48 servers. By the admission of [9], Incast is the most difficult DCN traffic problem to solve. In their research, DCTCP is found to be ineffective when the number of senders is large enough that each sender transmitting around two packets exceeds the static buffer size.
Thus, even at its best, DCTCP still imposes limits on the number of senders for which Incast will not occur. We observe that M21TCP-A achieves and maintains a high throughput close to 900Mbps, while the throughput of the other solutions collapses at some point. The latency of M21TCP-A increases gradually with the number of senders simply because the block size grows. As expected, M21TCP-A performs much better than the other solutions under this workload. There is a drop in throughput at 64 senders, but the decline is slight. At lower sender numbers, servers running M21TCP-A maintain a throughput greater than or equal to that of the other solutions. M21TCP-A prevents queue oscillations and build-up, leading to a consistent, predictable solution which guarantees that the switch's transmission buffer will not overflow and therefore that there will be no timeouts (the main cause of Incast). The request completion times show that M21TCP-A requests have a high probability of completing more quickly than with other solutions, and below the latency targets and
deadlines of partition/aggregate workflow patterns, which can be as low as 10ms.

VI. CONCLUSION

Incast occurs when many parallel senders communicate with one client through a bottleneck link. It is a catastrophic throughput loss that disrupts the high-throughput, low-latency applications in data centre networks. In this paper, the Incast problem was presented and simulated in the NS-3 simulator. It was validated that the root cause of Incast is buffer overflow at the congested switch, which leads to severe packet loss and, consequently, TCP timeouts. M21TCP-A was proposed and tested against normal TCP with Droptail, ECNTCP, and DCTCP. M21TCP-A is a congestion control scheme that informs senders of the number of parallel senders, so that each can compute a maximum congestion window it must not exceed, preventing the switch buffer from overflowing. It was shown to prevent Incast for the maximum number of senders investigated: 64. In general, many-to-one modifications at the transport layer offer an opportunity for data centre networks to be freed from previous limits on the number of concurrent servers involved in barrier-synchronized workloads like MapReduce.

REFERENCES

[1] A. Hammadi and L. Mhamdi, "A survey on architectures and energy efficiency in data center networks," Comput. Commun., vol. 40, pp. 1-21, Mar. 2014.

[2] A. Phanishayee, E. Krevat, V. Vasudevan, D. G. Andersen, G. R. Ganger, G. A. Gibson, and S. Seshan, "Measurement and analysis of TCP throughput collapse in cluster-based storage systems," in Proc. 6th USENIX Conf. File and Storage Technologies (FAST '08), 2008.

[3] Y. Chen, R. Griffith, J. Liu, R. H. Katz, and A. D. Joseph, "Understanding TCP incast throughput collapse in datacenter networks," in Proc. 1st ACM Workshop on Research on Enterprise Networking (WREN '09), 2009.

[4] R. Birke, D. Crisan, K. Barabash, A. Levin, C. DeCusatis, C. Minkenberg, and M. Gusat, "Partition/aggregate in commodity 10G Ethernet software-defined networking," in Proc. IEEE 13th Int. Conf. High Performance Switching and Routing (HPSR), Jun. 2012.

[5] R. Kohavi, R. Longbotham, D. Sommerfield, and R. M. Henne, "Controlled experiments on the web: Survey and practical guide," Data Min. Knowl. Discov., vol. 18, no. 1, Feb. 2009.

[6] V. Vasudevan, A. Phanishayee, H. Shah, E. Krevat, D. G. Andersen, G. R. Ganger, G. A. Gibson, and B. Mueller, "Safe and effective fine-grained TCP retransmissions for datacenter communication," SIGCOMM Comput. Commun. Rev., vol. 39, no. 4, Aug. 2009.

[7] A. Phanishayee, E. Krevat, V. Vasudevan, D. G. Andersen, G. R. Ganger, G. A. Gibson, and S. Seshan, "Measurement and analysis of TCP throughput collapse in cluster-based storage systems," in Proc. 6th USENIX Conf. File and Storage Technologies (FAST '08), Berkeley, CA, USA, 2008, pp. 12:1-12:14.

[8] V. Vasudevan, A. Phanishayee, H. Shah, E. Krevat, D. G. Andersen, G. R. Ganger, G. A. Gibson, and B. Mueller, "Safe and effective fine-grained TCP retransmissions for datacenter communication," in Proc. ACM SIGCOMM 2009, New York, NY, USA, 2009.

[9] M. Alizadeh, A. Greenberg, D. A. Maltz, J. Padhye, P. Patel, B. Prabhakar, S. Sengupta, and M. Sridharan, "Data center TCP (DCTCP)," in Proc. ACM SIGCOMM 2010, New York, NY, USA, 2010.

[10] B. Vamanan, J. Hasan, and T. Vijaykumar, "Deadline-aware datacenter TCP (D2TCP)," in Proc. ACM SIGCOMM 2012, 2012.
[11] C. Wilson, H. Ballani, T. Karagiannis, and A. Rowtron, "Better never than late: Meeting deadlines in datacenter networks," in Proc. ACM SIGCOMM 2011, New York, NY, USA, 2011.

[12] M. Alizadeh, S. Yang, M. Sharif, S. Katti, N. McKeown, B. Prabhakar, and S. Shenker, "pFabric: Minimal near-optimal datacenter transport," in Proc. ACM SIGCOMM 2013, 2013.

[13] M. Podlesny and C. Williamson, "An application-level solution for the TCP-incast problem in data center networks," in Proc. IWQoS, 2011.

[14] J. Dean and S. Ghemawat, "MapReduce: Simplified data processing on large clusters," Commun. ACM, vol. 51, no. 1, Jan. 2008.

[15] H. Wu, Z. Feng, C. Guo, and Y. Zhang, "ICTCP: Incast congestion control for TCP in data-center networks," IEEE/ACM Trans. Netw., vol. 21, no. 2, Apr. 2013.

[16] B. Braden, D. Clark, J. Crowcroft, B. Davie, S. Deering, D. Estrin, S. Floyd, V. Jacobson, G. Minshall, C. Partridge, L. Peterson, K. Ramakrishnan, S. Shenker, J. Wroclawski, and L. Zhang, "Recommendations on queue management and congestion avoidance in the Internet," RFC 2309, Apr. 1998.

[17] V. Jacobson, "Congestion avoidance and control," SIGCOMM Comput. Commun. Rev., vol. 18, no. 4, Aug. 1988.

[18] K. Ramakrishnan, S. Floyd, and D. Black, "The addition of explicit congestion notification (ECN) to IP," RFC 3168, Sep. 2001.

[19] C. L. Liu and J. W. Layland, "Scheduling algorithms for multiprogramming in a hard-real-time environment," J. ACM, vol. 20, no. 1, Jan. 1973.

[20] C. Guo, H. Wu, K. Tan, L. Shi, Y. Zhang, and S. Lu, "DCell: A scalable and fault-tolerant network structure for data centers," SIGCOMM Comput. Commun. Rev., vol. 38, no. 4, Aug. 2008.

[21] F. Kelly, G. Raina, and T. Voice, "Stability and fairness of explicit congestion control with small buffers," SIGCOMM Comput. Commun. Rev., vol. 38, no. 3, Jul. 2008.