Large-scale Network Protocol Emulation on Commodity Cloud
Anirup Dutta, University of Houston
Omprakash Gnawali, University of Houston

ABSTRACT

Network emulation allows us to evaluate network protocol implementations, typically at higher fidelity than simulations. This advantage comes at a cost: emulation often requires far more I/O and computational resources than simulation. As a result, it is common to see research projects run simulations with up to a hundred thousand nodes, while emulations typically scale to a few hundred nodes. In this paper, we present CloudNet, a network protocol emulation platform that leverages commodity cloud computing services to scale emulations to thousands of nodes. CloudNet uses a light-weight virtualization technique, LXC containers, to emulate a single node: the network protocol code and the protocol state for each node are maintained in its respective container. CloudNet then uses properties of the network topology to determine where to place these containers among the physical machines researchers rent from the cloud service. CloudNet's careful mapping of nodes to containers makes network performance more predictable and suitable for emulation even on a shared commodity cloud, which was previously thought to be unsuitable for serious network emulation. Through extensive experiments, we establish that CloudNet scales to thousand-node networks while providing accurate emulation results.

I. INTRODUCTION

The networking research community has a long history of building tools and methodologies to evaluate network protocols. Analysis allows us to understand protocols at the algorithmic level. We can implement the key ideas of a protocol in a simulator and study it with a bit of injected realism, for example, wireless link models and protocol state machines [1]-[3]. Emulation takes this study of protocols a step further: we can evaluate the actual implementation of the protocol, often in the same runtime environment as the deployment target.

Emulation is often resource-intensive: an emulator must run the entire implementation, not just the core idea behind the protocol. Despite this disadvantage, emulations are widely used because they produce results similar to those obtained on testbeds and in target deployments.

Recent advances in node virtualization make it possible to emulate a large number of nodes on a single physical machine. We can run the protocol code for each node in its own virtual machine and create a network of emulated virtual nodes, thereby enabling emulation of a large network on a small network of physical machines. This technique, however, does not scale, because a physical machine cannot host more than a few tens of virtual machines while providing acceptable performance.

Light-weight virtualization can help scale these emulations to very large networks. In light-weight virtualization, containers are the unit of virtualization, and each physical machine can host several hundred containers. Mininet [4], [5] is a recent effort that uses this approach and shows how to emulate a network of several hundred nodes on a single machine. However, this solution does not scale beyond the number of containers that can run on a single machine.

In this paper, we present CloudNet, a network emulator that can emulate networks with thousands of nodes. Our work leverages two technology trends to make this possible.
First, CloudNet uses machines provided by commodity cloud services. Researchers can perform experiments on a large number of machines for the small expense of renting them from cloud service providers such as Amazon. Second, CloudNet leverages light-weight virtualization to further scale the size of the emulation.

The lack of consistent performance in a shared cloud environment, and the challenge it presents to network experiments, has been identified as the key obstacle to building an emulator in the cloud. In a shared cloud environment such as Amazon EC2, one of our emulation instances may share a physical host with an instance running the webserver of a popular website; when the other instances on the same physical machine demand more resources, fewer resources are naturally available to our emulation instances. To address this challenge, CloudNet only uses Cluster instances, which have ample computation and communication resources and come with loose guarantees about resource allocation. This is in contrast to earlier work that used Micro or Small instances, which make large emulations affordable but come with inconsistent and unpredictable performance.

While using Cluster instances with ample resources addresses performance consistency to some extent, emulating a large network with one instance per node becomes prohibitively expensive. CloudNet therefore uses light-weight virtualization within each Cluster instance to scale the emulation economically while providing more consistent networking performance. Thus, CloudNet is as economical as emulation with Micro or Small instances, but with more consistent performance.

In this paper, we make these contributions:
- We show how to achieve partially consistent networking performance for network emulation in a shared cloud environment like Amazon EC2.
- We present the design and implementation of an emulator that economically scales to thousands of nodes.
- We present experiments to evaluate the correctness of such large-scale emulators, and we use those experiments to evaluate CloudNet.
- We present, to our knowledge, the first emulation of a DTN protocol on a 1000-node network, as a case study that exercises the capabilities of CloudNet.

II. CLOUDNET DESIGN

In this section, we present the design of CloudNet, which meets these design goals: scaling to thousand-node networks and beyond, low platform cost, and correctness of emulation results.

A. Network Nodes

CloudNet uses a light-weight virtualization mechanism called Linux Containers (LXC), where each container represents a single node and the network protocol code for that node runs in its respective container. LXC uses Linux kernel mechanisms to provide a light-weight virtual system with full resource isolation and resource control for running an application or a system [6], [7]. Because the containers share the kernel, it is possible to run hundreds of LXC containers, i.e., emulated nodes, on a single machine while ensuring CPU-cycle and memory isolation across the containers. Their light weight combined with sufficient resource isolation makes LXC containers a good choice for representing nodes in our goal of designing a scalable emulator.

B. Network Connectivity

Each LXC container represents a node in the network we wish to emulate, and the connectivity between containers represents the links between nodes. We configure each LXC container with at least two network interfaces: one connected to the default LXC bridge and the other connected to an Open vSwitch bridge. Open vSwitch [8] is a software switch that can be used to connect virtual machines; it supports many standard protocols and management interfaces, allows QoS for each virtual machine interface, and supports the OpenFlow protocol as well as many tunneling protocols such as GRE and IPsec. The default LXC bridge and the Open vSwitch bridge are not connected: we provide NAT out of the box for the containers through the LXC bridge, and we create the logical network for the emulation experiments using Open vSwitch.

[Fig. 1: CloudNet emulation of a 6-node network using LXC containers attached to an Open vSwitch bridge. The different colors represent different networks. To send packets from node 2 to node 6, they must travel via nodes 3, 4, and 5.]

CloudNet provides two ways to define the topology of the emulated network.

1) Creating Topology Using OpenFlow: This approach to defining the emulation topology using OpenFlow [9] is borrowed from Mininet. We attach an OpenFlow controller to Open vSwitch with custom rules on how Open vSwitch must forward packets: when there is a link between two nodes in the topology we wish to emulate, the controller allows forwarding between those nodes, thereby creating an emulated link.

2) Creating Topology Using Subnets: We can use subnets to provide or limit connectivity between nodes in the emulated network. We use the LXC bridge as the default gateway in our containers and set up routing rules such that packets destined for addresses in the same subnets as the container's interfaces go through the Open vSwitch bridge. Packets for any other destination are dropped. Thus, together with our routing rules, we provide a link between two nodes by assigning them IP addresses in the same subnet. The problem of determining the optimal number of subnets required to emulate a large and complex network, and the membership of each subnet, is equivalent to finding the maximal cliques in a graph: every maximal clique constitutes a subnet, and the nodes belonging to the same clique are members of the same subnet. If a node is a member of multiple cliques, it is a member of multiple subnets, which also means it has multiple virtual interfaces connected to Open vSwitch. We find the maximal cliques using the Bron-Kerbosch algorithm [10] as implemented in the igraph library [11], as sketched below.
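The following minimal sketch illustrates the subnet derivation; the paper names igraph [11], whose Python binding exposes the Bron-Kerbosch computation as maximal_cliques(). The example topology and the addressing scheme are our assumptions, not taken from the paper.

```python
# Derive one subnet per maximal clique of the emulated topology.
import igraph as ig

# Hypothetical 6-node chain topology with one triangle (nodes 0-1-2).
edges = [(0, 1), (1, 2), (0, 2), (2, 3), (3, 4), (4, 5)]
g = ig.Graph(n=6, edges=edges)

# maximal_cliques() runs the Bron-Kerbosch algorithm cited as [10].
for subnet_id, clique in enumerate(g.maximal_cliques()):
    # Every node in the clique gets a virtual interface in this subnet;
    # a node that appears in several cliques gets several interfaces.
    print(f"subnet 10.{subnet_id}.0.0/24 -> nodes {sorted(clique)}")
```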
C. Link Attributes

In many network emulations, the topology has links with different loss rates or delays; for example, we may want to emulate links with a propagation delay of several seconds in a DTN emulation. We use Netem to constrain the bandwidth and delay of a link, as sketched below.
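The paper does not list its exact Netem invocations, so the following sketch shows one standard tc recipe in the spirit of Section II-C: netem at the root for delay and jitter, with a token bucket filter as its child for the bandwidth cap. The interface name and parameter values are placeholders.

```python
# Apply delay and bandwidth impairments to one emulated link with tc/netem.
import subprocess

def shape_link(dev, rate, delay, jitter, dist="normal"):
    """Add netem delay whose jitter is drawn from the named distribution
    table, then cap the rate with a child tbf qdisc."""
    subprocess.run(["tc", "qdisc", "add", "dev", dev, "root", "handle", "1:0",
                    "netem", "delay", delay, jitter, "distribution", dist],
                   check=True)
    subprocess.run(["tc", "qdisc", "add", "dev", dev, "parent", "1:1",
                    "handle", "10:", "tbf", "rate", rate,
                    "buffer", "1600", "limit", "3000"], check=True)

# The 100 ms +/- 20 ms normally distributed link used in Section III-B.
shape_link("veth-node2", rate="1mbit", delay="100ms", jitter="20ms")
```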
D. Distributing Nodes Across Instances

Emulating a network with thousands of nodes requires thousands of containers, which will not fit on a single machine. We now describe how CloudNet distributes the containers across multiple machines while providing consistent network performance between the nodes. We run Open vSwitch in each instance, and all the LXC containers in an instance are connected to the Open vSwitch bridge on that instance. We use GRE (Generic Routing Encapsulation) tunnels to connect the Open vSwitch bridges on the different instances. Connecting the bridges using GRE tunnels in Amazon EC2 requires public IP addresses for the instances, possibly because the visibility of private IP addresses is limited to a certain portion of the EC2 cloud. CloudNet requires only a small number of Cluster instances to emulate even a large network, and only the instances (not the individual nodes) need public IP addresses. We use NTP to synchronize time across the instances.
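A sketch of this stitching step follows: each instance adds one GRE port per remote instance to its local Open vSwitch bridge via ovs-vsctl. The bridge name and addresses are placeholders; the paper does not give its exact commands.

```python
# Connect the local OVS bridge to the bridges on the other instances with GRE.
import subprocess

def connect_bridges(bridge, remote_ips):
    for i, remote_ip in enumerate(remote_ips):
        subprocess.run(
            ["ovs-vsctl", "add-port", bridge, f"gre{i}",
             "--", "set", "interface", f"gre{i}",
             "type=gre", f"options:remote_ip={remote_ip}"],
            check=True)

# On one of five instances: tunnel to the other four instances' public IPs
# (public addresses are required on EC2, as noted above).
connect_bridges("br0", ["198.51.100.11", "198.51.100.12",
                        "198.51.100.13", "198.51.100.14"])
```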
E. Choice of Instance Type

Previous work has shown that virtualization in Amazon EC2 causes highly unreliable network performance [12]; however, that work used Small instances. Such inconsistent performance would not meet CloudNet's requirement of correct emulation. CloudNet instead uses a smaller number of instances with more resources, and then uses LXC containers to decrease the number of instances required to emulate a large network. For example, Cluster instances have 60.5 GB of RAM and 88 ECUs (equivalent to 16 cores), and can be launched together in a single placement group to assure high and consistent network performance between the instances. To emulate a 1000-node network with 200 containers per instance, we need only five instances. Because we control the execution environment of the containers in each instance, we can provide more consistent network performance across the emulated nodes than by running the nodes directly on top of Amazon EC2 and leaving it to Amazon to provision the network. Furthermore, network connectivity between Cluster instances is high and consistent, so the emulated nodes within these instances achieve consistent network performance.

F. Difference from Mininet

CloudNet borrows from Mininet the technique of using light-weight virtualization to emulate a node. However, Mininet runs on a single machine, limiting its scalability to the number of containers one machine can host. CloudNet, on the other hand, carefully places the containers on different instances using the maximal-clique organization and connects them according to the topology defined by the user, thereby scaling beyond what a single machine can run.

III. EVALUATION

In this section, we study how CloudNet satisfies its design goals. We use tools such as iperf to validate the correctness of CloudNet, and DTN [13] as a case study to evaluate its usefulness.

A. Network Performance in Amazon EC2

Correctness is the most important requirement of any network emulator. To evaluate CloudNet's correctness, we create a network topology using CloudNet and run iperf over it to measure bandwidth and jitter. If the CloudNet emulation is correct and consistent, we expect the measurements to match the specified properties of the topology; otherwise, CloudNet's results are not correct. In addition, we compare the correctness of emulation on CloudNet against emulation directly on top of Amazon EC2, where an instance represents a node and virtual interfaces connect the instances into the user-specified network topology. This experiment helps us understand whether CloudNet provides any advantage over running the emulation directly on top of Amazon EC2.

In each experiment, we created a small network of 10 nodes over 9 hops and made 50 bandwidth and jitter measurements using iperf. The experiments were performed multiple times a day over five days, to check whether performance consistency holds at different times of the day and different days of the week.

[Fig. 2: Boxplot of iperf throughput, normalized by the nominal throughput, for setups varying from 100 Kbps to 54 Mbps on CloudNet, EC2 Medium, and EC2 Small.]

[Fig. 3: Boxplot of jitter in path latency measured during experiments for setups varying from 100 Kbps to 54 Mbps on CloudNet, EC2 Medium, and EC2 Small.]

Figure 2 shows the throughput achieved by iperf when emulating with CloudNet versus directly on Amazon EC2. The first observation is that iperf achieved constant normalized throughput across multiple experiments of 50 iterations each, for all throughput settings: the measured throughput matches the ground truth. The second observation is that emulation directly on top of Amazon EC2 Medium instances is less consistent (despite a much higher instance rental cost, as we show later), and the results are worse (and unacceptable) with Amazon EC2 Small instances. Thus, we conclude that the throughput available to emulation on CloudNet is consistent and correct.

Figure 3 shows the jitter in path latency measured during the experiments. The topologies used in these experiments were specified to have no jitter, so any considerable jitter would make the emulation results inaccurate. We observe that the jitter with CloudNet is smaller than with emulation on Amazon EC2 Medium instances, and significantly smaller than with Amazon EC2 Small instances. Thus, we find that latency measurements on CloudNet emulations are close to the ground truth.
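As an illustration of this methodology, one iteration of the bandwidth/jitter measurement could be driven as below; the server address, the rate, and the reliance on iperf's UDP report output are our assumptions.

```python
# One iperf UDP measurement against a container already running `iperf -s -u`.
import subprocess

def measure(server_ip, rate="1m", secs=10):
    out = subprocess.run(
        ["iperf", "-c", server_ip, "-u", "-b", rate, "-t", str(secs)],
        capture_output=True, text=True, check=True)
    return out.stdout  # the server report line carries bandwidth, jitter, loss

# 50 measurements per setting, as in Section III-A.
reports = [measure("10.0.3.6") for _ in range(50)]
```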
B. Correctness of Network Impairments

CloudNet can configure links to have certain delays as required by an emulation, for example, a network where certain links have a latency of 1 s. In this section, we describe experiments that test whether these impairments are emulated correctly. We configure links with 100 ms latency and 20 ms jitter drawn from normal, pareto, and uniform distributions. We use the normal and pareto distributions provided by Netem, but inject our own uniform distribution table into the Netem kernel module.

a) One-hop Experiments: We sent 2000 UDP packets between 2 nodes and measured the packet latency between them. If CloudNet correctly emulates the delays, we expect the observed latency to be the same as the intended (i.e., generated) latency. Figure 4 shows the results from this experiment, comparing the observed, generated, and theoretical latencies. The observed delay values conform to the distribution used for introducing delay, with less than 0.6% error, as shown in Table I.

[Fig. 4: Observed vs. generated vs. theoretical delays in a 1-hop network with link latency drawn from normal, uniform, and pareto distributions.]

b) Multi-hop Experiments: We use CloudNet to set up a 10-node, i.e., 9-hop, network and send UDP packets between the nodes at the two ends of the topology. If CloudNet works correctly, we expect the packets to take an average of 900 ms to travel between the end nodes. Figure 5 shows that the observed latency closely matches the generated latency; Table I shows that the error is 0.76%-0.86%.

[Fig. 5: Observed vs. generated vs. theoretical delays in a 9-hop network with link latency drawn from the given distributions.]

[Table I: Earth mover's distance between theoretical and observed delay values (1-hop and 9-hop EMD error, in %) for the normal, uniform, and pareto distributions.]

c) Correctness of Latency Distributions and Averages: Next, we evaluate whether the distribution of latency generated by CloudNet is correct. We configure the nodes to generate normally distributed latency in the 9-hop topology. Since the sum of independent normal distributions is also normal, we expect the 9-hop latency to be normally distributed. We performed QQ tests and a normality test on the observed delay values for the 1-hop and 9-hop experiments; the P value for 1 hop is 0.22 and for 9 hops it is [...]. The mean for the 9-hop network should equal 9 times the mean for the 1-hop network, and the measured 1-hop and 9-hop latencies conform to this ratio with less than 0.77% error, as shown in Table II.

[Table II: Latency analysis for the different distributions: 1-hop delay (sec), 9-hop delay (sec), their ratio, and error (%).]

Through these experiments, we find that CloudNet can provide controlled and correct link properties to network emulation.
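The expectation behind this test can be checked on synthetic data. The sketch below draws per-hop delays matching the configured 100 ms / 20 ms normal impairment (these are not the paper's measurements) and confirms the 9x mean ratio and the preserved normality.

```python
# Sanity check: a sum of nine independent normal per-hop delays is itself
# normal, with nine times the one-hop mean.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
one_hop = rng.normal(loc=100.0, scale=20.0, size=2000)               # ms
nine_hop = rng.normal(loc=100.0, scale=20.0, size=(2000, 9)).sum(axis=1)

print(nine_hop.mean() / one_hop.mean())   # ~9.0, the ratio behind Table II
print(stats.normaltest(nine_hop).pvalue)  # large p: normality not rejected
```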
C. Scalability

To test the scalability of CloudNet, we generate a 1000-node topology with minimum, average, and maximum node degrees of 1, 6.4, and 13, respectively, and set it up using the technique described in Section II. Every instance held 200 containers, so we used 5 Cluster instances. We randomly picked 100 pairs of nodes, with resulting path lengths between 1 hop and 22 hops. We then used iperf to send UDP packets between all the pairs of nodes simultaneously, with each flow capped at 100 kbps. The results are shown in Figure 6: the net throughput achieved per flow is close to the expected value of 100 kbps.

[Fig. 6: iperf throughput with the maximum bandwidth limited to 100 kbps on the 1000-node network. The node pairs are sorted by the mean of their throughput values.]

To test whether the latencies stay close to the ground truth even when the user topologies add delays, we emulate a 1000-node network with a link latency of 1 second on each link. Table III shows the mean RTT and the standard deviation in delay between random pairs of nodes 1-4 hops apart. There is a small base latency between the node pairs: this is a system overhead of CloudNet. The overhead increases, as expected, with the number of hops, but stays small.

[Table III: Mean RTT (ms) and standard deviation for paths of 1-4 hops in a 1000-node network; each link configured with a delay of 1 s.]

To compare CloudNet's scalability with Mininet, we performed a 1000-node experiment on Mininet on an 8-core server with 48 GB of RAM. This experiment stressed Mininet in two ways. First, the experiment setup took more than an hour, and the machine became unresponsive. Second, we were not able to use ping to send data between the nodes: the system dropped the packets because the experiment overloaded the server. This result should not come as a surprise: Mininet uses a single-machine architecture and is designed to work with a few hundred nodes.

Through these experiments, we found that CloudNet scales to 1000-node topologies with predictable and expected performance.
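The paper does not spell out its clique-aware placement heuristic, so the following greedy first-fit is purely illustrative: it tries to keep each maximal clique (subnet) on one instance, so that intra-subnet traffic avoids the inter-instance GRE tunnels.

```python
# Naive clique-aware placement of containers onto instances (illustrative only).
def place(cliques, capacity=200, n_instances=5):
    load = [0] * n_instances
    host = {}  # node -> instance
    for clique in sorted(cliques, key=len, reverse=True):
        new = [n for n in clique if n not in host]
        # first instance with room for the clique's still-unplaced nodes
        inst = next((i for i in range(n_instances)
                     if load[i] + len(new) <= capacity), None)
        if inst is None:
            raise RuntimeError("not enough container capacity")
        for n in new:
            host[n] = inst
        load[inst] += len(new)
    return host
```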
D. Case Study

As a case study of a realistic use of CloudNet, we perform large-scale experiments with DTN2. We randomly chose 100 pairs of nodes from a multi-hop 1000-node topology and used dtnperf and dtnping to send data between the node pairs. The one-hop goodput obtained is similar to what was reported in prior work [14] from experiments with physical machines. The multi-hop goodputs, as expected, decrease with increasing path length (Figure 7). Figure 8 shows the dtnping packet latency between the node pairs; the high variation in latency is due to bundle queuing at the intermediate hops as different flows cross each other. Thus, the throughput and latencies we obtained in our experiment match the expected output, as well as what has been previously reported.

[Fig. 7: Goodput (Mbps) between node pairs using DTN, by hop count (1 hop through 7+ hops), when each link has 1 s latency (top) and without any network impairment (bottom). Payload size is 100 KB.]

Table IV compares the cost of performing a similar experiment using Small or Medium instances, one per node. The cost analysis affirms that CloudNet provides a cheaper alternative to performing the emulation directly on top of EC2 instances, while providing correct results and scaling beyond state-of-the-art solutions such as Mininet.

IV. RELATED WORK

In this section, we overview the types of tools the networking community has built to evaluate network protocols.

Link Emulation: Single-link emulation can be done in hardware (using channel emulators) or in software (using tools such as Netem). Prior work has shown that, when correctly configured, Netem provides a realistic approximation of impaired network conditions and is sufficient for most networking experiments [15].
[Fig. 8: End-to-end dtnping latency (s) for each pair of nodes, by hop count (1 hop through 7+ hops).]

Network Emulation: Mininet [4], [5] uses light-weight virtualization, isolating certain OS resources, to allow emulation of large networks on a single machine; scalability becomes an issue when we want to emulate networks larger than a single physical machine can hold. Emulab [16] uses a light-weight virtualization technique, FreeBSD jails, to set up multiple virtual interfaces per process group, similar to Mininet and CloudNet. CloudNet provides better resource isolation across the emulated nodes than Emulab and shows how such emulation can run on commodity clouds. There is prior work in data centers on jointly optimizing VM placement and routing [17]; CloudNet uses the placement groups of Amazon EC2, in which virtual machines are placed close to one another so that resources can be used efficiently.

Network Emulation Timing: TimeWarp [18] explores the possibility of using time dilation in network emulation experiments; a future version of CloudNet may use this technique to offer added performance consistency for emulations that require very high bandwidth. SliceTime [19] is another effort to provide scalable and accurate network emulation: it makes simulations independent of the real-time constraint, thus allowing simulation of complex and high-performance networks with limited physical resources.

V. DISCUSSION

The experiments in this paper emulated homogeneous networks, but the CloudNet design accommodates emulation of heterogeneous networks. The containers within an instance share the host operating system; however, emulated nodes of different platforms can be placed on different instances running different operating systems. Heterogeneity in CPU and memory resources can be implemented directly with the container mechanism.

A disadvantage of our setup is that we cannot assure complete fidelity while using Amazon EC2, since we have no control over the infrastructure. However, the containers that we launch within an Amazon EC2 instance are completely controllable: we can control the amount of memory, CPU, and bandwidth available to them. Thus, we turn an uncontrolled system into a system that provides predictable performance.

[Table IV: Cost of a 1-hour emulation of a 1000-node network, U.S. East (N. Virginia).]
                      CloudNet   EC2 Medium   EC2 Small
Unit price ($/hr)     2.40       0.12         0.06
Number of instances   5          1000         1000
Total price ($)       12         120          60

VI. CONCLUSIONS

In this paper, we presented CloudNet, an emulator that runs on cloud services such as Amazon EC2. CloudNet uses techniques that achieve consistent network performance despite having little control over the EC2 infrastructure. Through extensive experiments, we showed that the emulator replicates the results of experiments that used physical machines, and we presented results from an emulation of DTN on a 1000-node network. Thus, we showed that CloudNet meets the design requirements of scalability and affordability. As future work, we will implement CloudNet on other cloud service providers.

REFERENCES

[1] P. Levis, N. Lee, M. Welsh, and D. Culler, "TOSSIM: Accurate and scalable simulation of entire TinyOS applications," in ACM SenSys, 2003.
[2] ns-2, "The network simulator."
[3] G. Pongor, "OMNeT: Objective modular network testbed," in MASCOTS.
[4] B. Lantz, B. Heller, and N. McKeown, "A network in a laptop: Rapid prototyping for software-defined networks," in HotNets, 2010.
[5] N. Handigol, B. Heller, V. Jeyakumar, B. Lantz, and N. McKeown, "Reproducible network experiments using container-based emulation," in CoNEXT, 2012.
[6] "lxc: Linux containers."
[7] S. Graber, "LXC containers in Ubuntu."
[8] B. Pfaff, J. Pettit, T. Koponen, K. Amidon, M. Casado, and S. Shenker, "Extending networking into the virtualization layer," in HotNets, Oct. 2009.
[9] N. McKeown, T. Anderson, H. Balakrishnan, G. Parulkar, L. Peterson, J. Rexford, S. Shenker, and J. Turner, "OpenFlow: Enabling innovation in campus networks," ACM SIGCOMM CCR, vol. 38, no. 2, 2008.
[10] C. Bron and J. Kerbosch, "Algorithm 457: Finding all cliques of an undirected graph," CACM, vol. 16, no. 9, 1973.
[11] G. Csardi and T. Nepusz, "The igraph software package for complex network research," InterJournal, Complex Systems, vol. 1695, 2006.
[12] G. Wang and T. E. Ng, "The impact of virtualization on network performance of Amazon EC2 data center," in INFOCOM, 2010.
[13] K. Fall, "A delay-tolerant network architecture for challenged internets," in SIGCOMM, 2003.
[14] W.-B. Pottner, "An empirical performance comparison of DTN bundle protocol implementations," poettner-tr pdf.
[15] E. Kissel and M. Swany, "Validating Linux network emulation," http://damsl.cs.indiana.edu/projects/phoebus/netem validate.pdf.
[16] M. Hibler, R. Ricci, L. Stoller, J. Duerig, S. Guruprasad, T. Stack, K. Webb, and J. Lepreau, "Large-scale virtualization in the Emulab network testbed," in NSDI, 2008.
[17] J. W. Jiang, T. Lan, S. Ha, M. Chen, and M. Chiang, "Joint VM placement and routing for data center traffic engineering," in INFOCOM, 2012.
[18] D. Gupta, K. Yocum, M. McNett, A. C. Snoeren, A. Vahdat, and G. M. Voelker, "To infinity and beyond: Time warped network emulation," in SOSP, 2005.
[19] E. Weingärtner, F. Schmidt, H. vom Lehn, T. Heer, and K. Wehrle, "SliceTime: A platform for scalable and accurate network emulation," in NSDI, 2011.