VMFlow: Leveraging VM Mobility to Reduce Network Power Costs in Data Centers

VMFlow: Leveraging VM Mobility to Reduce Network Power Costs in Data Centers

Vijay Mann 1, Avinash Kumar 2, Partha Dutta 1, and Shivkumar Kalyanaraman 1

1 IBM Research, India
{vijamann,parthdut,shivkumar-k}@in.ibm.com
2 Indian Institute of Technology, Delhi
avinash.ee@student.iitd.ac.in

Abstract. Networking costs play an important role in the overall costs of a modern data center. Network power, for example, has been estimated at 10-20% of the overall data center power consumption. Traditional power saving techniques in data centers focus on server power reduction through Virtual Machine (VM) migration and server consolidation, without taking into account the network topology and the current network traffic. On the other hand, recent techniques to save network power have not yet utilized the various degrees of freedom that current and future data centers will soon provide. These include VM migration capabilities across the entire data center network, on-demand routing through programmable control planes, and high bisection bandwidth networks. This paper presents VMFlow: a framework for placement and migration of VMs that takes into account both the network topology and the network traffic demands, to meet the objective of network power reduction while satisfying as many network demands as possible. We present network-power-aware VM placement and demand routing as an optimization problem. We show that the problem is NP-complete, and present a fast heuristic for it. Next, we present the design of a simulator that implements this heuristic and simulates its execution over a data center network with a CLOS topology. Our simulation results using real data center traces demonstrate that, by applying an intelligent VM placement heuristic, VMFlow can achieve 15-20% additional savings in network power while satisfying 5-6 times more network demands as compared to recently proposed techniques for saving network power.

Keywords: networking aspects in cloud services, energy and power management, green networking, VM placement.

1 Introduction

In current data centers, networking costs contribute significantly to the overall expenditure. These include capital expenditure (CAPEX) costs such as the cost of networking equipment, which has been estimated at 15% of the total costs in a data center [6].

Corresponding author: Vijay Mann, IBM Research, 4, Block C, Institutional Area, Vasant Kunj, New Delhi, India.

J. Domingo-Pascual et al. (Eds.): NETWORKING 2011, Part I, LNCS 6640. © IFIP International Federation for Information Processing 2011

Other networking costs include operational expenditure (OPEX), such as the power consumed by networking equipment. It has been estimated that network power comprises 10-20% of the overall data center power consumption [9]. For example, network power was estimated at 3 billion kWh in the US alone.

Traditionally, data center networks have been built as single-rooted tree topologies, which are ill-suited for large data centers, as the core of the network becomes highly oversubscribed, leading to contention [7]. To overcome some of these performance problems of traditional data center networks, such as poor bisection bandwidth and poor performance isolation, recent research in data center networks has proposed new network architectures such as VL2 [7], Portland [16], and BCube [8]. Data centers are also increasingly adopting virtualization, and comprise physical machines with multiple Virtual Machines (VMs) provisioned on each physical machine. These VMs can be migrated at downtime or at runtime (live). Furthermore, the emergence of programmable control planes in switches, through standardized interfaces such as OpenFlow [3], has enabled on-demand changes in routing paths.

With the above recent advances, opportunities have emerged to save both network CAPEX and OPEX. Network CAPEX is being reduced through several measures, such as increased utilization of networking devices and a move towards a cheaper and faster data plane implemented in merchant silicon, while all the intelligence lies in a separate, sophisticated control plane [3]. Components of OPEX, such as network power costs, are being reduced through techniques that switch off unutilized network devices as data center architectures move to higher levels of redundancy in order to achieve higher bisection bandwidth. However, most of the recent techniques to save network power do not utilize all the features that current and future data centers will soon provide. For instance, the recent techniques to save network power in [9] exploit the time-varying nature of network traffic and the increased redundancy levels in modern network architectures, but do not consider VM migration. On the other hand, current VM placement and migration techniques such as [19] mainly target server power reduction through server consolidation, but do not take into account the network topology and the current network traffic. A recent work by Meng et al. [15] explored network-traffic-aware VM placement for various network architectures. However, their work did not focus on network power reduction.

There are multiple reasons why saving network power is becoming increasingly important in modern data centers [13]. Larger data centers and new networking technologies will require more network switches, as well as switches with higher capacities, which in turn will consume more power. In addition, rapid advances in server consolidation are improving server power savings, and hence network power is becoming a significant fraction of the total data center power. Also, server consolidation results in more idle servers and networking devices, providing more opportunity for turning off these devices. Finally, turning off switches results in less heat, thereby reducing cooling costs.
In this context, this paper presents VMFlow: a framework for placement and migration of VMs that takes into account both the network topology and the network traffic demands, to meet the objective of network power reduction while satisfying as many network demands as possible. We make the following contributions:

1. We formulate the VM placement and routing of traffic demands for reducing network power as an optimization problem.
2. We show that a decision version of the problem is NP-complete, and present a fast heuristic for it.
3. We present the design of a simulator that implements this heuristic and simulates its runs over a data center network with a CLOS topology.
4. We validate our heuristic and compare it to other known techniques through extensive simulation. Our simulation uses real data center traces, and the results demonstrate that, by applying an intelligent VM placement heuristic, VMFlow can achieve 15-20% additional savings in network power while satisfying 5-6 times more network demands as compared to recently proposed techniques for saving network power.
5. We extend our technique to handle server consolidation.

The rest of this paper is organized as follows. We give an overview of related research in Section 2. Section 3 presents the technical problem formulation. Section 4 describes our greedy heuristic. We describe the design and implementation of our simulation framework and demonstrate the efficacy of our approach through simulation experiments in Section 5. Section 6 extends our technique to handle server consolidation. Finally, we conclude the paper in Section 7.

2 Background and Related Work

Data center network architectures. Conventional data centers are typically organized in a multi-tiered network hierarchy [1]. Servers are organized in racks, where each rack has a Top-of-Rack (ToR) switch. ToRs connect to one (or two, for redundancy) aggregation switches. Aggregation switches connect to a few core switches in the top tier. Such a hierarchical design faces multiple problems in scaling up with the number of servers; e.g., the network links in higher tiers become progressively oversubscribed as the number of servers increases. Recently, some new data center network architectures have been proposed to address these issues [7,8,16]. VL2 [7] is one such recently proposed data center network architecture (Figure 1), where the aggregation and the core switches are connected using a CLOS topology, i.e., the links between the two tiers form a complete bipartite graph. Network traffic flows are load balanced across the aggregation and core tiers using Valiant Load Balancing (VLB), where the aggregation switch randomly chooses a core switch to route a given flow. The VL2 architecture provides higher bisection bandwidth and more redundancy in the data center network.

Traffic-aware VM placement. VM placement has been extensively studied for resource provisioning of servers in a data center, including reducing server power usage [19]. Some recent papers have studied VM placement for optimizing network traffic in data centers [12,15]. These approaches differ from VMFlow in two important ways: (a) the techniques do not optimize for network power, and (b) their solutions do not specify the routing paths for the network demands.

Fig. 1. CLOS network topology (ToR, aggregation, and core switch tiers)

Network power optimization. Most of the recent research on data center energy efficiency has focused on reducing the two major components of data center power usage: servers and cooling [6,9]. Recently, some papers have focused on reducing the power consumed by networking elements (which we call network power) in a data center [9,13,17]. In the ElasticTree approach presented in [9], a network power manager dynamically turns the network elements (switches and links) on or off, and routes the flows over the active network elements, based on the current network traffic demands. In the primary technique presented in ElasticTree, called greedy bin-packing, each demand is considered sequentially, and the demand is routed using the leftmost path that has sufficient capacity to route it. Here, a path is considered to be on the left of another path if, at each switch layer of a structured data center topology, the switch on the first path is either to the left of or identical to the switch on the second path. ElasticTree also proposes a topology-aware approach that improves the computation time as compared to greedy bin-packing; however, it does not improve the network power of the solution.

The VMFlow framework that we present in this paper fundamentally differs from ElasticTree [9] and from the work in [13,17] because we exploit the flexibility of VM placement that is available in modern data centers. Also, for a given demand, we jointly perform the VM placement and flow routing by greedily selecting the VM placements as well as the routing paths that require minimum additional network power.
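For a concrete picture of this baseline, the following minimal sketch (our Python rendering, with hypothetical data structures; ElasticTree's actual implementation is not reproduced here) routes each demand on the leftmost candidate path that still has enough residual capacity:

from typing import Dict, List, Tuple

Path = Tuple[str, ...]  # sequence of node identifiers along a routing path

def leftmost_fit(demands: List[Tuple[str, str, float]],
                 candidate_paths: Dict[Tuple[str, str], List[Path]],
                 capacity: Dict[Tuple[str, str], float]):
    """Greedy bin-packing: candidate_paths[(src, dst)] must be ordered
    left-to-right, i.e., by switch position at each layer of the topology."""
    residual = dict(capacity)
    routing = {}
    for src, dst, rate in demands:
        for path in candidate_paths[(src, dst)]:
            links = list(zip(path, path[1:]))
            if all(residual[l] >= rate for l in links):
                for l in links:
                    residual[l] -= rate
                routing[(src, dst)] = path  # leftmost feasible path wins
                break
        # demands with no feasible path remain unrouted (unsatisfied)
    return routing

The network elements touched by the chosen paths are the ones that stay powered on; everything else can be switched off.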

3 Network-Aware VM Placement Problem

In this section we present the Network-Aware Virtual Machine Placement (NAVP) problem. Our problem formulation extends the ElasticTree formulation in [9].

Input. The data center network is composed of network switches and links that connect the hosts (physical servers). The data center network topology is modeled using a weighted directed graph G = (V, E), where V is the set of vertices and E ⊆ V × V is the set of directed edges. A link e = (u, v) has a capacity (maximum supported rate) of C(e). There are three types of vertices in V: the switches, the hosts, and a special vertex v_E. Vertex v_E models the external clients that communicate with the data center. The edges represent the communication links between the switches, and between the switches and the hosts. (We use edges and links interchangeably in this paper.) Let there be n hosts H = {h_1, ..., h_n} and q switches W = {w_1, ..., w_q}. We consider a set of m Virtual Machines (VMs) M = {vm_1, ..., vm_m}, where m ≤ n, and at most one VM can be placed on a host (we consider the case of multiple VMs per server in Section 6). The source or destination of network traffic is one of the m VMs or an external client. We model the traffic to and from any external clients as traffic to and from v_E, respectively. Let M′ be M ∪ {v_E}. We are given a set of K demands (rates) among the nodes in M′, where the j-th demand requires a rate of r_j from source s_j ∈ M′ to destination d_j ∈ M′, and is denoted by (s_j, d_j, r_j). When a switch w_i or a link e is powered on, let P(w_i) and P(e) denote the power required to operate them, respectively. A VM placement is a (one-to-one) mapping Π : M → H that maps VM vm_i to host h_Π(vm_i). In addition, we assume that in all VM placements, v_E is mapped to itself.

Constraints. We model the VM placement problem as a variant of the multicommodity flow problem [4]. A flow assignment specifies the amount of traffic flow on every edge for every source-destination demand. We say that a flow assignment on G satisfies a set of demands if the following three standard constraints hold for the flow assignment: edge capacity, flow conservation, and demand satisfaction. (Please see [14] for the detailed constraints.) In addition, we consider another set of constraints that result from the power requirements: (1) a link can be assigned a non-zero flow only if it is powered on, and (2) a link can be powered on only if the switches that are the link's end nodes are powered on. Thus, the total (network) power required by a flow assignment is the sum of the power required for powering on all links with a non-zero flow assignment, and the power required for powering on all switches that are end nodes of some link with a non-zero flow assignment. Due to the adverse effect of packet reordering on TCP throughput, it is undesirable to split the traffic flow of a source-destination demand [11]. Therefore, the NAVP problem requires that a demand be satisfied using a network flow that uses only one path in the network graph (unsplittable flow constraint).
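For concreteness, the standard constraints and the power coupling can be sketched as follows, in our own notation (the detailed formulation is in [14]): let f_j(e) ≥ 0 be the flow of demand j on edge e, and let x(e), y(w) ∈ {0, 1} indicate whether link e and switch w are powered on.

\begin{align*}
\sum_{j=1}^{K} f_j(e) &\le C(e)\,x(e) \quad \forall e \in E && \text{(edge capacity)}\\
\sum_{u:(u,v)\in E} f_j(u,v) &= \sum_{u:(v,u)\in E} f_j(v,u) \quad \forall j,\ \forall v \notin \{\Pi(s_j), \Pi(d_j)\} && \text{(flow conservation)}\\
\sum_{v:(\Pi(s_j),v)\in E} f_j(\Pi(s_j),v) &= r_j \quad \forall j && \text{(demand satisfaction)}\\
x(e) &\le y(u),\ \ x(e) \le y(v) \quad \forall e=(u,v) \text{ with switch end nodes} && \text{(power coupling)}
\end{align*}

The objective is then \(\min \sum_{w \in W} P(w)\, y(w) + \sum_{e \in E} P(e)\, x(e)\); the unsplittable flow constraint additionally requires each f_j to be non-zero on a single path only.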

The optimization problem. Given the above-mentioned K demands among the nodes in M′, we say that a given VM placement Π is feasible if there is a flow assignment on G that satisfies the following K demands among the hosts: the j-th demand requires a rate of r_j from source host Π(s_j) to destination host Π(d_j). The Network-Aware VM Placement (NAVP) problem is then stated as follows: among all possible feasible VM placements, find a placement and an associated flow assignment that has the minimum total power.

Problem complexity. The following decision version of the NAVP problem is NP-complete: given a constant B, does there exist a feasible VM placement and an associated flow assignment whose total power is less than or equal to B? We show NP-completeness by reduction from the bin packing problem [18].

Theorem 1. The decision version of the Network-Aware VM Placement (NAVP) problem is NP-complete.

Please see [14] for the proof.

4 Heuristic Design

We now present a greedy heuristic for the NAVP problem. The algorithm considers the demands one by one, and for each demand, it greedily chooses a VM placement and a flow assignment (on a path) that needs the minimum increase in the total power of the network. We now describe the algorithm in more detail. (The pseudocode is presented in Figure 2.)

Primary variables. The algorithm maintains four primary variables: W_on and E_on, which are the sets of switches and edges that have already been powered on, respectively; the set H_free of hosts that have not yet been occupied by a VM; and a function RC that gives the residual (free) capacity of the edges. For each VM v, we assume that initially the placement Π(v) is set to ⊥. In each iteration of the main loop (in function VMFlow), the algorithm selects a demand in descending order of the demand rates, and tries to find a feasible VM placement. If a feasible VM placement is found, then the primary variables are updated accordingly; otherwise, the demand is skipped.

Residual graph. To find a feasible VM placement for a demand (s_j, d_j, r_j), the algorithm constructs a residual graph G_res. The residual graph contains all switches, and all hosts where VMs s_j and d_j can possibly be placed (sets Φ(s_j) and Φ(d_j), respectively). G_res also contains every edge among these nodes that has at least r_j residual capacity. Therefore, in G_res, any path between two hosts has enough capacity to route the demand. The algorithm next focuses on finding VM placements for s_j and d_j such that there is a path between the VMs that requires minimum additional power. To that end, the nodes and edges in the residual graph are assigned weights equal to the amount of additional power required to power them on. Thus, the weight of a switch v is set to 0 if it is already powered on, and set to P(v) otherwise. Edges are assigned weights in a similar manner. However, the weights of the hosts are set to ∞, so that they cannot be used as intermediate nodes on a routing path. Next, the weight of a path in the residual graph (pathWt) is defined as the sum of the weights of all intermediate edges and nodes on the path (excluding the weights of the end nodes of the path).

VM placement. In the residual graph G_res, the algorithm considers all possible pairs of hosts for placing s_j and d_j (where the first element in the pair is from Φ(s_j) and the second element is from Φ(d_j)). Among all such pairs of hosts, the algorithm selects a pair that has a minimum weight path. (A minimum weight path between the hosts is found using a variant of Dijkstra's shortest path algorithm, whose description is omitted here due to lack of space.) The source and destination VMs for the demand (s_j and d_j) are placed at the selected hosts, and the flow assignment is done along a minimum weight path.

1: Input: as described in the input part of Section 3
2: function Initialization
3:   W_on ← ∅; E_on ← ∅  {set of switches and edges currently powered on}
4:   H_free ← H  {set of hosts currently not occupied by a VM}
5:   ∀e ∈ E: RC(e) ← C(e)  {current residual capacity of edge e}
6: function VMFlow
7:   select a demand (s_j, d_j, r_j) from the set of demands in descending order of rate
8:   if Π(s_j) = ⊥ then Φ(s_j) ← H_free else Φ(s_j) ← {Π(s_j)}
9:   if Π(d_j) = ⊥ then Φ(d_j) ← H_free else Φ(d_j) ← {Π(d_j)}
10:  V_res ← W ∪ Φ(s_j) ∪ Φ(d_j)  {residual nodes}
11:  E_res ← {e = (u, v) ∈ E : (RC(e) ≥ r_j) ∧ (u, v ∈ V_res)}  {residual edges}
12:  G_res ← (V_res, E_res)  {residual graph}
13:  ∀v ∈ W: if v ∈ W_on then WT_res(v) ← 0 else WT_res(v) ← P(v)  {switch wt. in G_res}
14:  ∀v ∈ Φ(s_j) ∪ Φ(d_j): WT_res(v) ← ∞  {host wt. in G_res}
15:  ∀e ∈ E_res: if e ∈ E_on then WT_res(e) ← 0 else WT_res(e) ← P(e)  {edge wt. in G_res}
16:  minWt ← Min{pathWt(G_res, u, v) : (u ∈ Φ(s_j)) ∧ (v ∈ Φ(d_j)) ∧ (u ≠ v)}  {see Sec. 4}
17:  if minWt < ∞ then  {found a routing for this demand}
18:    (Π(s_j), Π(d_j)) ← any (u, v) s.t. (u ∈ Φ(s_j)) ∧ (v ∈ Φ(d_j)) ∧ (u ≠ v) ∧ (pathWt(G_res, u, v) = minWt)
19:    assign the minimum weight path P from Π(s_j) to Π(d_j) to the flow for (s_j, d_j, r_j)
20:    for all switches v on path P: W_on ← W_on ∪ {v}
21:    for all edges e on path P: E_on ← E_on ∪ {e}; RC(e) ← RC(e) − r_j
22:    H_free ← H_free \ {Π(s_j), Π(d_j)}
23:  else
24:    skip demand (s_j, d_j, r_j)  {cannot find a VM placement for this demand}

Fig. 2. A greedy algorithm for NAVP

Note that G_res may sometimes be disconnected, and each host may lie in a distinct component of G_res. (For instance, this scenario can occur when the residual edge capacities are such that no path can carry a flow of rate r_j.) In this case, there is no path between any pair of hosts, and the algorithm is unable to find a feasible VM placement for this demand. Depending on the data center's service level objectives, the algorithm can either abort the placement algorithm, or continue by considering subsequent demands.

It is easy to see that each iteration selects a placement and flow routing that require the minimum increase in network power. Thus, although the heuristic is sub-optimal, each iteration selects a locally optimal placement and routing.

Lemma 1. In each iteration, for a given traffic demand, the heuristic selects source and destination VM placements and a flow routing that need the minimum increase in network power.
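The core of each iteration (lines 13-18 of Fig. 2) can be sketched in Python as follows. This is a minimal sketch under our own naming: building G_res from the topology and updating W_on, E_on, RC, and H_free are left to the caller, and the Dijkstra variant simply folds the switch weights into edge relaxation.

import heapq
from math import inf

def path_weight(adj, node_wt, edge_wt, src, dst):
    """pathWt of Section 4: the cheapest additional power needed to route
    from src to dst in G_res, summing intermediate node and edge weights
    (end-node weights are excluded)."""
    dist = {src: 0.0}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == dst:
            return d
        if d > dist.get(u, inf):
            continue  # stale queue entry
        for v in adj.get(u, ()):
            # node_wt is 0 for powered-on switches, P(v) otherwise, and inf
            # for hosts, so a host can never appear as an intermediate node.
            w = d + edge_wt[(u, v)] + (0.0 if v == dst else node_wt[v])
            if w < dist.get(v, inf):
                dist[v] = w
                heapq.heappush(pq, (w, v))
    return inf

def best_placement(adj, node_wt, edge_wt, phi_s, phi_d):
    """Lines 16-18 of Fig. 2: scan all candidate (source, destination) host
    pairs and return the cheapest pair, or None if the demand is infeasible."""
    best = min(
        ((path_weight(adj, node_wt, edge_wt, u, v), u, v)
         for u in phi_s for v in phi_d if u != v),
        default=(inf, None, None),
    )
    return None if best[0] == inf else best  # None => skip this demand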

A practical simplification. We now present a simple observation that reduces the computation time of our heuristic. We observe that in most conventional and modern data center network architectures, multiple hosts are placed under a Top-of-Rack (ToR) switch. Thus, for a given demand, if a source (or destination) VM is placed on a host, then the parent ToR of the host needs to be turned on (if it is not already on) to route the demand. Therefore, for VM placement, we first try to place both the source and destination VMs under the same ToR. If such a placement is possible, then only the common parent ToR needs to be turned on to route the demand. If no such ToR is available, then we compute a minimum weight path between all possible pairs of ToRs (where path weights include the end ToR weights), instead of computing minimum weight paths between all possible pairs of hosts. This simplification significantly reduces the computation time because the number of ToRs is much lower than the number of hosts.

5 Experimental Evaluation

We developed a simulator to evaluate the effectiveness of the VMFlow algorithm. It simulates a network topology with VMs, switches, and links as a discrete event simulation. Each server (also called host) is assumed to host at most one VM (the case of multiple VMs per host is discussed in Section 6). VMs run applications that generate network traffic to other VMs, and VMs can migrate from one node to another. Switches have predefined power curves; most of the power is static power (i.e., power used to turn on the switch). At each time step, the network traffic generated by VMs (denoted by an entry in the input traffic matrix) is read, and the corresponding VMs are mapped to hosts as per the algorithm described in Section 4. The corresponding route between the hosts is also calculated, based on the consumed network power, the network topology, and the available link capacities. At each time step of the simulation, we compute the total power consumed by the switches and the fraction of the total number of network demands that can be satisfied. A new input traffic matrix (described below), representing the network traffic at that time stamp, is used at each time step of the simulation.

Network topology. The simulator creates a network based on the given topology. Our network topology consists of 3 layers of switches: the top-of-rack (ToR) switches, which connect to a layer of aggregate switches, which in turn connect to the core switches. Each ToR has 20 servers connected to it. At most one virtual machine (VM) can be mapped to a server (the case of multiple VMs per server is discussed in Section 6). We assume a total of 1000 servers (and VMs). There are 50 ToR switches, and each server connects to a ToR over a 1 Gbps link. Each ToR connects to two aggregate switches over 1 or 10 Gbps links. We assume a CLOS topology between the aggregate and core switches, similar to VL2 [7], with 1 or 10 Gbps links, as shown in Figure 1. All switches are assumed to have a static power of 100 watts. This is because current networking devices are not energy proportional, and even completely idle switches (0% utilization) consume 70-80% of their fully loaded power (100% utilization) [9]. Our simulator has a topology generator component that generates the required links between servers and ToRs, ToRs and aggregate switches, and aggregate switches and core switches. It takes as input the number of servers and the static power values for each kind of switch. It then generates the number of ToR switches (assuming 20 servers under one ToR) and the number of aggregate and core switches using the formulae given in [7] for a CLOS topology: for d²/4 ToRs, the number of aggregate switches is d and the number of core switches is d/2. For 1000 servers and 50 ToRs, this results in around 15 aggregate switches and around 8 core switches.
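As a quick check of these formulae, a small sketch of the topology generator's switch-count computation (function name ours):

import math

def clos_switch_counts(num_servers: int, servers_per_tor: int = 20):
    """Size a VL2-style CLOS: d^2/4 ToRs need d aggregate and d/2 core switches."""
    tors = math.ceil(num_servers / servers_per_tor)
    d = 2 * math.sqrt(tors)          # invert tors = d^2 / 4
    return tors, math.ceil(d), math.ceil(d / 2)

# 1000 servers -> 50 ToRs, d = 2*sqrt(50) ~ 14.1, i.e., around 15 aggregate
# and 8 core switches, as used in the simulation.
print(clos_switch_counts(1000))      # (50, 15, 8)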

Fig. 3. Histogram of the number of VMs vs. their percentage of constant data

Input data. To drive the simulator, we needed a traffic matrix (i.e., which VM sends how much data to which VM). Such fine-grained data is typically hard to obtain from real data centers [5,10] because of the required server-level instrumentation. We obtained data from a real data center with 17,000 VMs, with aggregate incoming and outgoing network traffic for each VM. The data had snapshots of aggregate network traffic over 15-minute intervals, spread over a duration of 5 days. To understand the variance in the data, we compared the discrete time differential (Δ) with the standard deviation (σ) of the data. Data that satisfied the following condition was considered constant: −σ/2 ≤ Δ ≤ +σ/2. A histogram of the percentage of data that was constant for all VMs is given in Figure 3. It can be observed that most of the VMs have a very large percentage of data that is constant and does not show much variance. A very large variance can be bad for VMFlow, as it would mean too frequent VM migrations, whereas a low variance means that the migration frequency can be kept low.

Our first task was to calculate a traffic matrix out of this aggregate per-VM data. Given the huge data size, we chose data for a single day. Various methods have been proposed in the literature for calculating traffic matrices. We used the simple gravity model [20] to calculate the traffic matrices for all 17,000 VMs at each time stamp on the given day. The simple gravity model uses the following equation to calculate the network traffic from VM i to VM j: D_ij = (D_i^out · D_j^in) / (Σ_k D_k^in), where D_i^out is the total outgoing traffic at VM i, and D_j^in is the total incoming traffic at VM j. Although existing literature [10] points out that traffic matrices generated by the simple gravity model tend to be too dense, and those generated by sparsity maximization algorithms tend to be too sparse, compared to real data center traffic matrices, the simple gravity model is still widely used in the literature to generate traffic matrices for data centers [15].

After generating the traffic matrices for each time stamp for the entire data center (17,000 VMs), we used the traffic matrices for the first 1000 VMs in order to reduce the required simulation time. Since the gravity model tends to distribute all the observed traffic over all the VMs in some proportion, it resulted in a large number of very low network demands. In order to make these demands significant, we used a scale-up factor of 50 for all the data (i.e., each entry in all the input traffic matrices was multiplied by 50).
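A sketch of this traffic matrix construction (assuming NumPy arrays of per-VM aggregate rates; zeroing the diagonal is our assumption, since self-traffic is meaningless here):

import numpy as np

def gravity_traffic_matrix(d_out: np.ndarray, d_in: np.ndarray,
                           scale: float = 50.0) -> np.ndarray:
    """Simple gravity model: D[i, j] = D_out[i] * D_in[j] / sum_k D_in[k],
    with the scale-up factor applied to every entry."""
    D = np.outer(d_out, d_in) / d_in.sum()
    np.fill_diagonal(D, 0.0)   # assumption: a VM sends no traffic to itself
    return scale * D

# e.g., per-VM aggregate rates from one 15-minute snapshot (hypothetical values)
tm = gravity_traffic_matrix(np.array([3.0, 1.0, 2.0]), np.array([2.0, 2.0, 2.0]))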

Simulation results. The simulator compares the VMFlow approach with ElasticTree's greedy bin-packing approach, proposed by Heller et al. [9] to save network power. Recall that ElasticTree's bin-packing approach chooses, out of all the possible paths in a deterministic left-to-right order, the leftmost path in a given layer that satisfies the network demand. These paths are then used to calculate the total network power at that time instance. For placing VMs for the ElasticTree approach, we followed a strategy that placed each VM on the node with the same id as its VM id. We assume that all the network power consists of the power for turning on the switches, and that the power required for turning on each network link (i.e., for the ports on the end switches of the link) is zero. We calculated the total power consumed by the network and the proportion of network demands that were unsatisfied at a given time stamp, for both the VMFlow and ElasticTree approaches.

In the first set of experiments, we simulated an oversubscribed network where the ToR-aggregate switch links and aggregate-core switch links had 1 Gbps capacity. This resulted in a 10:1 oversubscription ratio in the ToR-aggregate switch link layer. The results for the oversubscribed network are shown in Figure 4(a) and Figure 4(b), respectively. VMFlow outperforms the ElasticTree approach by around 15%, and the baseline (i.e., all switches on) by around 20%, in terms of network power at any given time instance. More importantly, one can see the effectiveness of VMFlow in the percentage of network demands that remain unsatisfied: VMFlow achieves these network power savings while satisfying 5-6 times more network demands than the ElasticTree approach.

Since the input data had very little variation over time, we conducted an experiment to compare VMFlow with and without any migration after the first placement was done using the VMFlow algorithm. In order to simulate no migration, the VMFlow simulator calculated an optimal placement of VMs based on the first input traffic matrix (representing the traffic for the first time stamp) and only calculated the route at each subsequent time step. Figure 4(a) and Figure 4(b) show the performance of the VMFlow approach without any migration.

Fig. 4. Comparison at different time steps in the simulation (VMFlow, VMFlow without migration, ElasticTree, and all-switches-on, at scale-up 50): (a) network power; (b) unsatisfied network demands

Fig. 5. Comparison at various scale factors in the simulation (VMFlow and ElasticTree, at 1G and 10G): (a) network power; (b) unsatisfied network demands

One can note that even with low-variance input data, VMFlow with migration slightly outperforms VMFlow without migration, by a margin of 5% in terms of power. Both approaches perform roughly the same with respect to the percentage of unsatisfied network demands. This indicates that the migration frequency has to be properly tuned, keeping in mind the variance in network traffic. Each VM migration has a cost associated with it that depends on the size of the VM, its current load, the nature of its workload, and the Service Level Agreement (SLA) associated with it. We are currently working on a placement framework that will take into account this cost and the potential benefit a migration can achieve in terms of network power and network demand satisfaction.

We also compared the effect of using various scale-up factors on the input network traffic data. In this set of experiments, we used a single input traffic matrix (representing the traffic at the first time stamp) and simulated both an oversubscribed network (ToR-aggregate and aggregate-core switch links have 1 Gbps capacity) and a network with no oversubscription (ToR-aggregate and aggregate-core switch links have 10 Gbps capacity). The results are shown in Figure 5(a) and Figure 5(b). VMFlow outperforms the ElasticTree approach consistently, both in terms of network power reduction and in the percentage of unsatisfied demands, in the oversubscribed network (1G). However, in the case with no oversubscription (10G), the performance difference between VMFlow and ElasticTree is relatively lower. This is along expected lines, since VMFlow is expected to outperform mainly in cases where the top network layers are oversubscribed.

6 Handling Server Consolidation

In the earlier sections, we have assumed that at most one VM can be mapped to a host, i.e., the VM placement mapping Π is one-to-one. It is, however, easy to modify our algorithm to handle the case when multiple VMs are mapped to a host, i.e., the case of server consolidation. In modern data centers, server consolidation is often employed to save both on capital expenditure (e.g., by consolidating several VMs onto a small number of powerful hosts) and on operational expenditure (e.g., any unused host can be turned off to save power) [19]. Although the basic idea of server consolidation is simple, deciding which VMs are co-located on a host is a complex exercise that depends on various factors, such as fault and performance isolation and application SLAs. Thus, a group of VMs co-located during server consolidation may not be migrated to different hosts during network power optimization. Nevertheless, a group of VMs mapped to a single host can be migrated together to a different host, provided the new host has enough resources for that group of VMs.

We now describe how our greedy algorithm can easily be extended to handle server consolidation. We assume that we have an initial server consolidation phase that maps zero, one, or more VMs to each host. A set of VMs that is mapped to the same host during server consolidation is called a VMset. We make the following two assumptions: (1) while placing VMs on the hosts, the only resource constraint we need to satisfy is on the compute resource, and (2) a group of VMs consolidated on a host is not separated (i.e., migrated to different hosts) during the network power optimization phase. Now, our greedy algorithm requires two simple modifications to handle server consolidation. First, instead of mapping a VM to a host, the algorithm maps a VMset to a host, and the traffic demand between two VMsets is the sum of the demands of the individual VMs comprising the two VMsets. Second, in each iteration, for placing the source and destination VMsets of a traffic demand, the set of possible hosts (Φ(s_j) and Φ(d_j)) is defined as the set of free hosts that have equal or more compute resources than the respective VMsets. We omit the details of these simple modifications from this paper.
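A minimal sketch of the consolidation and demand-aggregation steps (helper names ours; CPU is the only packed resource, per assumption (1), and intra-VMset traffic is dropped since it never crosses the network, which is our reading of the aggregation step):

from collections import defaultdict

def ffd_consolidate(cpu_demand: dict, host_capacity: float) -> list:
    """First-fit-decreasing packing of VMs into VMsets by CPU demand."""
    vmsets, load = [], []
    for vm in sorted(cpu_demand, key=cpu_demand.get, reverse=True):
        for i, used in enumerate(load):
            if used + cpu_demand[vm] <= host_capacity:
                vmsets[i].append(vm)
                load[i] += cpu_demand[vm]
                break
        else:                       # no existing VMset fits: open a new one
            vmsets.append([vm])
            load.append(cpu_demand[vm])
    return vmsets

def aggregate_demands(demands: dict, vmsets: list) -> dict:
    """Traffic between two VMsets = sum of demands of their member VMs."""
    group = {vm: i for i, vs in enumerate(vmsets) for vm in vs}
    agg = defaultdict(float)
    for (s, d), rate in demands.items():
        gs, gd = group[s], group[d]
        if gs != gd:                # intra-VMset traffic never leaves the host
            agg[(gs, gd)] += rate
    return dict(agg)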

We evaluated the performance of VMFlow in the presence of server consolidation using the same simulation framework described in Section 5. Typically, server consolidation is done based on the CPU and memory demands of the VMs and the individual host CPU and memory capacities. Since we did not have access to the CPU data for the 1000 VMs used in the earlier experiments, we generated synthetic CPU demands that were normally distributed with a given mean and standard deviation. We used the methodology given in [12] to generate the mean and standard deviation of the CPU demands. We assumed the servers to be IBM HS22v blades [2], which go into a BladeCenter H chassis (each chassis can host 14 blades). We simulated an oversubscribed network where the server-ToR links, the ToR-aggregate switch links, and the aggregate-core switch links had 1 Gbps capacity. We assumed 3 different configurations of HS22v blades and created a VM packing scheme based on CPU demands using the first-fit decreasing (FFD) bin-packing algorithm. The following 3 configurations of HS22v blades (with increasing CPU capacity) were used in the simulation:

1. 1 Intel Xeon E5503 with 2 cores at 2.0 GHz: assuming a mean of 1.6 GHz and a std. dev. of 0.6, 1000 VMs with normally distributed CPU demands were packed into 612 servers (a packing ratio of 1.63 VMs per server). This required a network with 44 ToRs, 14 aggregate, and 7 core switches.
2. 1 Intel Xeon X5570 with 4 cores at 2.93 GHz: assuming a mean of 2.34 GHz and a std. dev. of 0.93, 1000 VMs with normally distributed CPU demands were packed into 265 servers (a packing ratio of 3.76 VMs per server). This required a network with 19 ToRs, 9 aggregate, and 5 core switches.
3. 2 Intel Xeon X5570 with 4 cores at 2.93 GHz: assuming a mean of 2.34 GHz and a std. dev. of 0.93, 1000 VMs with normally distributed CPU demands were packed into 119 servers (a packing ratio of 8.39 VMs per server). This required a network with 9 ToRs, 6 aggregate, and 3 core switches.

The input network traffic matrix was modified such that the input and output network demands of VMs that were packed together into a VMset were added. The VMsets resulting from the VM packing step were then used by the simulator to compare VMFlow and ElasticTree at various network traffic scale factors. Results for network power and the percentage of unsatisfied demands are shown in Figures 6(a) and 6(b), respectively.

Fig. 6. Comparison at various scale factors and configurations in the server consolidation scenario (VMFlow and ElasticTree at 1.63, 3.76, and 8.39 packing ratios): (a) network power; (b) unsatisfied network demands

VMFlow continues to outperform ElasticTree even in the presence of server consolidation. While the network power savings are consistently in the range of 13-20% at higher scale factors, the gains in the percentage of unsatisfied demands decrease (5x at a 1.63 packing ratio, 3x at a 3.76 packing ratio, and 2x at an 8.39 packing ratio) as more VMs are packed into a single server and the network traffic matrix becomes denser.

7 Future Work

This paper presented VMFlow, a framework for reducing the power used by network elements in data centers. VMFlow uses the flexibility provided by VM placement and programmable flow-based routing, available in modern data centers, to reduce network power while satisfying a large fraction of the network traffic demands. We formulated Network-Aware VM Placement as an optimization problem, proved that it is NP-complete, and proposed a greedy heuristic for the problem. Our technique showed reasonable improvement (15-20%) in network power and significant improvement (5-6 times) in the fraction of satisfied demands as compared to previous network power optimization techniques.

Each VM migration in a data center has a cost associated with it. Similarly, the benefit that a VM migration can achieve, in terms of network power reduction and meeting more network demands, depends on the variance in network traffic over time. In future work, we plan to build a placement framework that takes into account this cost and the potential benefit a migration can achieve in terms of network power and network demand satisfaction. We also plan to apply our technique to other network topologies and evaluate its benefits.

References

1. Cisco: Data center: Load balancing data center services (2004)
2. IBM BladeCenter HS22V, /bc_hs22v_7871aag.html
3. The OpenFlow Switch,
4. Ahuja, R., Magnanti, T., Orlin, J.: Network Flows: Theory, Algorithms and Applications. Prentice-Hall, Englewood Cliffs (1993)
5. Benson, T., Anand, A., Akella, A., Zhang, M.: Understanding Data Center Traffic Characteristics. In: ACM SIGCOMM WREN Workshop (2009)
6. Greenberg, A., Hamilton, J., Maltz, D., Patel, P.: The Cost of a Cloud: Research Problems in Data Center Networks. In: ACM SIGCOMM CCR (January 2009)
7. Greenberg, A., Jain, N., Kandula, S., Kim, C., Lahiri, P., Maltz, D., Patel, P., Sengupta, S.: VL2: A Scalable and Flexible Data Center Network. In: ACM SIGCOMM (August 2009)
8. Guo, C., Lu, G., Li, D., Wu, H., Zhang, X., Shi, Y., Tian, C., Zhang, Y., Lu, S.: BCube: A High Performance, Server-centric Network Architecture for Modular Data Centers. In: ACM SIGCOMM (August 2009)
9. Heller, B., Seetharaman, S., Mahadevan, P., Yiakoumis, Y., Sharma, P., Banerjee, S., McKeown, N.: ElasticTree: Saving Energy in Data Center Networks. In: USENIX NSDI (April 2010)
10. Kandula, S., Sengupta, S., Greenberg, A., Patel, P., Chaiken, R.: The Nature of Data Center Traffic: Measurements and Analysis. In: ACM SIGCOMM Internet Measurement Conference (IMC) (2009)
11. Kandula, S., Katabi, D., Sinha, S., Berger, A.: Dynamic Load Balancing Without Packet Reordering. Computer Communication Review 37(2) (2007)
12. Korupolu, M.R., Singh, A., Bamba, B.: Coupled Placement in Modern Data Centers. In: IEEE IPDPS (2009)
13. Mahadevan, P., Banerjee, S., Sharma, P.: Energy Proportionality of an Enterprise Network. In: 1st ACM SIGCOMM Workshop on Green Networking (2010)
14. Mann, V., Kumar, A., Dutta, P., Kalyanaraman, S.: VMFlow: Leveraging VM Mobility to Reduce Network Power Costs in Data Centers. Tech. rep. (2010)
15. Meng, X., Pappas, V., Zhang, L.: Improving the Scalability of Data Center Networks with Traffic-aware Virtual Machine Placement. In: IEEE INFOCOM (2010)
16. Mysore, R., Pamboris, A., Farrington, N., Huang, N., Miri, P., Radhakrishnan, S., Subramanya, V., Vahdat, A.: PortLand: A Scalable Fault-Tolerant Layer 2 Data Center Network Fabric. In: ACM SIGCOMM (August 2009)
17. Shang, Y., Li, D., Xu, M.: Energy-aware Routing in Data Center Network. In: 1st ACM SIGCOMM Workshop on Green Networking (2010)
18. Vazirani, V.: Approximation Algorithms. Addison-Wesley, Reading (2001)
19. Verma, A., Ahuja, P., Neogi, A.: pMapper: Power and Migration Cost Aware Application Placement in Virtualized Systems. In: ACM/IFIP/USENIX Middleware (2008)
20. Zhang, Y., Roughan, M., Duffield, N., Greenberg, A.: Fast Accurate Computation of Large-scale IP Traffic Matrices from Link Loads. In: ACM SIGMETRICS (2003)


Data Center Network Structure using Hybrid Optoelectronic Routers Data Center Network Structure using Hybrid Optoelectronic Routers Yuichi Ohsita, and Masayuki Murata Graduate School of Information Science and Technology, Osaka University Osaka, Japan {y-ohsita, murata}@ist.osaka-u.ac.jp

More information

Applying NOX to the Datacenter

Applying NOX to the Datacenter Applying NOX to the Datacenter Arsalan Tavakoli UC Berkeley Martin Casado and Teemu Koponen Nicira Networks Scott Shenker UC Berkeley, ICSI 1 Introduction Internet datacenters offer unprecedented computing

More information

2004 Networks UK Publishers. Reprinted with permission.

2004 Networks UK Publishers. Reprinted with permission. Riikka Susitaival and Samuli Aalto. Adaptive load balancing with OSPF. In Proceedings of the Second International Working Conference on Performance Modelling and Evaluation of Heterogeneous Networks (HET

More information

Leveraging Radware s ADC-VX to Reduce Data Center TCO An ROI Paper on Radware s Industry-First ADC Hypervisor

Leveraging Radware s ADC-VX to Reduce Data Center TCO An ROI Paper on Radware s Industry-First ADC Hypervisor Leveraging Radware s ADC-VX to Reduce Data Center TCO An ROI Paper on Radware s Industry-First ADC Hypervisor Table of Contents Executive Summary... 3 Virtual Data Center Trends... 3 Reducing Data Center

More information

Coordinating Virtual Machine Migrations in Enterprise Data Centers and Clouds

Coordinating Virtual Machine Migrations in Enterprise Data Centers and Clouds Coordinating Virtual Machine Migrations in Enterprise Data Centers and Clouds Haifeng Chen (1) Hui Kang (2) Guofei Jiang (1) Yueping Zhang (1) (1) NEC Laboratories America, Inc. (2) SUNY Stony Brook University

More information

Lecture 7: Data Center Networks"

Lecture 7: Data Center Networks Lecture 7: Data Center Networks" CSE 222A: Computer Communication Networks Alex C. Snoeren Thanks: Nick Feamster Lecture 7 Overview" Project discussion Data Centers overview Fat Tree paper discussion CSE

More information

The Case for Fine-Grained Traffic Engineering in Data Centers

The Case for Fine-Grained Traffic Engineering in Data Centers The Case for Fine-Grained Traffic Engineering in Data Centers Theophilus Benson, Ashok Anand, Aditya Akella and Ming Zhang University of Wisconsin-Madison; Microsoft Research Abstract In recent years,

More information

Beyond the Stars: Revisiting Virtual Cluster Embeddings

Beyond the Stars: Revisiting Virtual Cluster Embeddings Beyond the Stars: Revisiting Virtual Cluster Embeddings Matthias Rost Technische Universität Berlin September 7th, 2015, Télécom-ParisTech Joint work with Carlo Fuerst, Stefan Schmid Published in ACM SIGCOMM

More information

International Journal of Computer Science Trends and Technology (IJCST) Volume 3 Issue 3, May-June 2015

International Journal of Computer Science Trends and Technology (IJCST) Volume 3 Issue 3, May-June 2015 RESEARCH ARTICLE OPEN ACCESS Ensuring Reliability and High Availability in Cloud by Employing a Fault Tolerance Enabled Load Balancing Algorithm G.Gayathri [1], N.Prabakaran [2] Department of Computer

More information

Topology Switching for Data Center Networks

Topology Switching for Data Center Networks Topology Switching for Data Center Networks Kevin C. Webb, Alex C. Snoeren, and Kenneth Yocum UC San Diego Abstract Emerging data-center network designs seek to provide physical topologies with high bandwidth,

More information

A Hybrid Electrical and Optical Networking Topology of Data Center for Big Data Network

A Hybrid Electrical and Optical Networking Topology of Data Center for Big Data Network ASEE 2014 Zone I Conference, April 3-5, 2014, University of Bridgeport, Bridgpeort, CT, USA A Hybrid Electrical and Optical Networking Topology of Data Center for Big Data Network Mohammad Naimur Rahman

More information

A Security State Transfer Model for Virtual Machine Migration in Cloud Infrastructure

A Security State Transfer Model for Virtual Machine Migration in Cloud Infrastructure A Security State Transfer Model for Virtual Machine Migration in Cloud Infrastructure Santosh Kumar Majhi Department of Computer Science and Engineering VSS University of Technology, Burla, India Sunil

More information

Joint Virtual Machine Assignment and Traffic Engineering for Green Data Center Networks

Joint Virtual Machine Assignment and Traffic Engineering for Green Data Center Networks Joint Virtual Machine Assignment and Traffic Engineering for Green Data Center Networks Lin Wang, Fa Zhang, Athanasios V. Vasilakos, Chenying Hou, Zhiyong Liu Institute of Computing Technology, Chinese

More information

How To Improve Traffic Engineering

How To Improve Traffic Engineering Software Defined Networking-based Traffic Engineering for Data Center Networks Yoonseon Han, Sin-seok Seo, Jian Li, Jonghwan Hyun, Jae-Hyoung Yoo, James Won-Ki Hong Division of IT Convergence Engineering,

More information

Impact of Ethernet Multipath Routing on Data Center Network Consolidations

Impact of Ethernet Multipath Routing on Data Center Network Consolidations Impact of Ethernet Multipath Routing on Data Center Network Consolidations Dallal Belabed, Stefano Secci, Guy Pujolle, Deep Medhi Sorbonne Universities, UPMC Univ Paris 6, UMR 766, LIP6, F-7, Paris, France.

More information

International Journal of Advance Research in Computer Science and Management Studies

International Journal of Advance Research in Computer Science and Management Studies Volume 3, Issue 6, June 2015 ISSN: 2321 7782 (Online) International Journal of Advance Research in Computer Science and Management Studies Research Article / Survey Paper / Case Study Available online

More information

Improving the Scalability of Data Center Networks with Traffic-aware Virtual Machine Placement

Improving the Scalability of Data Center Networks with Traffic-aware Virtual Machine Placement Improving the Scalability of Data Center Networs with Traffic-aware Virtual Machine Placement Xiaoqiao Meng, Vasileios Pappas, Li Zhang IBM T.J. Watson Research Center 19 Syline Drive, Hawthorne, NY 1532

More information

Joint Host-Network Optimization for Energy-Efficient Data Center Networking

Joint Host-Network Optimization for Energy-Efficient Data Center Networking Host-Network Optimization for Energy-Efficient Data Center Networking Hao Jin, Tosmate Cheocherngngarn, Dmita Levy, Alex Smith, Deng Pan, Jason Liu, and Niki Pissinou Florida International University,

More information

Approximation Algorithms

Approximation Algorithms Approximation Algorithms or: How I Learned to Stop Worrying and Deal with NP-Completeness Ong Jit Sheng, Jonathan (A0073924B) March, 2012 Overview Key Results (I) General techniques: Greedy algorithms

More information

Payment minimization and Error-tolerant Resource Allocation for Cloud System Using equally spread current execution load

Payment minimization and Error-tolerant Resource Allocation for Cloud System Using equally spread current execution load Payment minimization and Error-tolerant Resource Allocation for Cloud System Using equally spread current execution load Pooja.B. Jewargi Prof. Jyoti.Patil Department of computer science and engineering,

More information

Juniper Networks QFabric: Scaling for the Modern Data Center

Juniper Networks QFabric: Scaling for the Modern Data Center Juniper Networks QFabric: Scaling for the Modern Data Center Executive Summary The modern data center has undergone a series of changes that have significantly impacted business operations. Applications

More information

Radware ADC-VX Solution. The Agility of Virtual; The Predictability of Physical

Radware ADC-VX Solution. The Agility of Virtual; The Predictability of Physical Radware ADC-VX Solution The Agility of Virtual; The Predictability of Physical Table of Contents General... 3 Virtualization and consolidation trends in the data centers... 3 How virtualization and consolidation

More information

Poisson Shot-Noise Process Based Flow-Level Traffic Matrix Generation for Data Center Networks

Poisson Shot-Noise Process Based Flow-Level Traffic Matrix Generation for Data Center Networks Poisson Shot-Noise Process Based Flow-Level Traffic Matrix Generation for Data Center Networks Yoonseon Han, Jae-Hyoung Yoo, James Won-Ki Hong Division of IT Convergence Engineering, POSTECH {seon54, jwkhong}@postech.ac.kr

More information

A SLA-Based Method for Big-Data Transfers with Multi-Criteria Optimization Constraints for IaaS

A SLA-Based Method for Big-Data Transfers with Multi-Criteria Optimization Constraints for IaaS A SLA-Based Method for Big-Data Transfers with Multi-Criteria Optimization Constraints for IaaS Mihaela-Catalina Nita, Cristian Chilipirea Faculty of Automatic Control and Computers University Politehnica

More information

Data Center Networking with Multipath TCP

Data Center Networking with Multipath TCP Data Center Networking with Multipath TCP Costin Raiciu, Christopher Pluntke, Sebastien Barre, Adam Greenhalgh, Damon Wischik, Mark Handley Hotnets 2010 報 告 者 : 莊 延 安 Outline Introduction Analysis Conclusion

More information

Scaling 10Gb/s Clustering at Wire-Speed

Scaling 10Gb/s Clustering at Wire-Speed Scaling 10Gb/s Clustering at Wire-Speed InfiniBand offers cost-effective wire-speed scaling with deterministic performance Mellanox Technologies Inc. 2900 Stender Way, Santa Clara, CA 95054 Tel: 408-970-3400

More information

Networking in the Big Data Era

Networking in the Big Data Era Networking in the Big Data Era Nelson L. S. da Fonseca Institute of Computing, State University of Campinas, Brazil e-mail: nfonseca@ic.unicamp.br IFIP/IEEE NOMS, Krakow May 7th, 2014 Outline What is Big

More information

Optical interconnection networks for data centers

Optical interconnection networks for data centers Optical interconnection networks for data centers The 17th International Conference on Optical Network Design and Modeling Brest, France, April 2013 Christoforos Kachris and Ioannis Tomkos Athens Information

More information

Demand-Aware Flow Allocation in Data Center Networks

Demand-Aware Flow Allocation in Data Center Networks Demand-Aware Flow Allocation in Data Center Networks Dmitriy Kuptsov Aalto University/HIIT Espoo, Finland dmitriy.kuptsov@hiit.fi Boris Nechaev Aalto University/HIIT Espoo, Finland boris.nechaev@hiit.fi

More information

VDC Planner: Dynamic Migration-Aware Virtual Data Center Embedding for Clouds

VDC Planner: Dynamic Migration-Aware Virtual Data Center Embedding for Clouds VDC Planner: Dynamic Migration-Aware Virtual Data Center Embedding for Clouds Mohamed Faten Zhani, Qi Zhang, Gwendal Simon and Raouf Boutaba David R. Cheriton School of Computer Science, University of

More information

Testing Network Virtualization For Data Center and Cloud VERYX TECHNOLOGIES

Testing Network Virtualization For Data Center and Cloud VERYX TECHNOLOGIES Testing Network Virtualization For Data Center and Cloud VERYX TECHNOLOGIES Table of Contents Introduction... 1 Network Virtualization Overview... 1 Network Virtualization Key Requirements to be validated...

More information

BURSTING DATA BETWEEN DATA CENTERS CASE FOR TRANSPORT SDN

BURSTING DATA BETWEEN DATA CENTERS CASE FOR TRANSPORT SDN BURSTING DATA BETWEEN DATA CENTERS CASE FOR TRANSPORT SDN Abhinava Sadasivarao, Sharfuddin Syed, Ping Pan, Chris Liou (Infinera) Inder Monga, Andrew Lake, Chin Guok Energy Sciences Network (ESnet) IEEE

More information

Data Center Networking with Multipath TCP

Data Center Networking with Multipath TCP Data Center Networking with Costin Raiciu, Christopher Pluntke, Sebastien Barre, Adam Greenhalgh, Damon Wischik, Mark Handley University College London, Universite Catholique de Louvain ABSTRACT Recently

More information

Mega Data Center for Elastic Internet Applications

Mega Data Center for Elastic Internet Applications 1 Mega Data Center for Elastic Internet Applications Hangwei Qian VMWare qianhangwei@gmail.com Michael Rabinovich Case Western Reserve University misha@case.edu Abstract This paper outlines a scalable

More information

Outline. VL2: A Scalable and Flexible Data Center Network. Problem. Introduction 11/26/2012

Outline. VL2: A Scalable and Flexible Data Center Network. Problem. Introduction 11/26/2012 VL2: A Scalable and Flexible Data Center Network 15744: Computer Networks, Fall 2012 Presented by Naveen Chekuri Outline Introduction Solution Approach Design Decisions Addressing and Routing Evaluation

More information

3D On-chip Data Center Networks Using Circuit Switches and Packet Switches

3D On-chip Data Center Networks Using Circuit Switches and Packet Switches 3D On-chip Data Center Networks Using Circuit Switches and Packet Switches Takahide Ikeda Yuichi Ohsita, and Masayuki Murata Graduate School of Information Science and Technology, Osaka University Osaka,

More information

Network Aware Virtual Machine and Image Placement in a Cloud

Network Aware Virtual Machine and Image Placement in a Cloud Network Aware Virtual Machine and Image Placement in a Cloud David Breitgand Amir Epstein Alex Glikson Virtualization Technologies, System Technologies & Services IBM Research - Haifa, Israel Email: {davidbr,

More information

Network Virtualization for Large-Scale Data Centers

Network Virtualization for Large-Scale Data Centers Network Virtualization for Large-Scale Data Centers Tatsuhiro Ando Osamu Shimokuni Katsuhito Asano The growing use of cloud technology by large enterprises to support their business continuity planning

More information

VMDC 3.0 Design Overview

VMDC 3.0 Design Overview CHAPTER 2 The Virtual Multiservice Data Center architecture is based on foundation principles of design in modularity, high availability, differentiated service support, secure multi-tenancy, and automated

More information

A Network Flow Approach in Cloud Computing

A Network Flow Approach in Cloud Computing 1 A Network Flow Approach in Cloud Computing Soheil Feizi, Amy Zhang, Muriel Médard RLE at MIT Abstract In this paper, by using network flow principles, we propose algorithms to address various challenges

More information

5 Performance Management for Web Services. Rolf Stadler School of Electrical Engineering KTH Royal Institute of Technology. stadler@ee.kth.

5 Performance Management for Web Services. Rolf Stadler School of Electrical Engineering KTH Royal Institute of Technology. stadler@ee.kth. 5 Performance Management for Web Services Rolf Stadler School of Electrical Engineering KTH Royal Institute of Technology stadler@ee.kth.se April 2008 Overview Service Management Performance Mgt QoS Mgt

More information

Dynamic Load Balancing of Virtual Machines using QEMU-KVM

Dynamic Load Balancing of Virtual Machines using QEMU-KVM Dynamic Load Balancing of Virtual Machines using QEMU-KVM Akshay Chandak Krishnakant Jaju Technology, College of Engineering, Pune. Maharashtra, India. Akshay Kanfade Pushkar Lohiya Technology, College

More information

S-CORE: Scalable Communication Cost Reduction in Data Center Environments

S-CORE: Scalable Communication Cost Reduction in Data Center Environments S-CORE: Scalable Communication Cost Reduction in Data Center Environments Fung Po Tso, Konstantinos Oikonomou, Eleni Kavvadia, Gregg Hamilton and Dimitrios P. Pezaros School of Computing Science, University

More information

Advanced Load Balancing Mechanism on Mixed Batch and Transactional Workloads

Advanced Load Balancing Mechanism on Mixed Batch and Transactional Workloads Advanced Load Balancing Mechanism on Mixed Batch and Transactional Workloads G. Suganthi (Member, IEEE), K. N. Vimal Shankar, Department of Computer Science and Engineering, V.S.B. Engineering College,

More information

Adaptive Probing: A Monitoring-Based Probing Approach for Fault Localization in Networks

Adaptive Probing: A Monitoring-Based Probing Approach for Fault Localization in Networks Adaptive Probing: A Monitoring-Based Probing Approach for Fault Localization in Networks Akshay Kumar *, R. K. Ghosh, Maitreya Natu *Student author Indian Institute of Technology, Kanpur, India Tata Research

More information

Hedera: Dynamic Flow Scheduling for Data Center Networks

Hedera: Dynamic Flow Scheduling for Data Center Networks Hedera: Dynamic Flow Scheduling for Data Center Networks Mohammad Al-Fares Sivasankar Radhakrishnan Barath Raghavan * Nelson Huang Amin Vahdat UC San Diego * Williams College - USENIX NSDI 2010 - Motivation!"#$%&'($)*

More information

Diamond: An Improved Fat-tree Architecture for Largescale

Diamond: An Improved Fat-tree Architecture for Largescale Diamond: An Improved Fat-tree Architecture for Largescale Data Centers Yantao Sun, Jing Chen, Qiang Liu, and Weiwei Fang School of Computer and Information Technology, Beijing Jiaotong University, Beijng

More information

Simulation of Heuristic Usage for Load Balancing In Routing Efficiency

Simulation of Heuristic Usage for Load Balancing In Routing Efficiency Simulation of Heuristic Usage for Load Balancing In Routing Efficiency Nor Musliza Mustafa Fakulti Sains dan Teknologi Maklumat, Kolej Universiti Islam Antarabangsa Selangor normusliza@kuis.edu.my Abstract.

More information