Energy-aware joint management of networks and cloud infrastructures


Bernardetta Addis 1, Danilo Ardagna 2, Antonio Capone 2, Giuliana Carello 2
1 LORIA, Université de Lorraine, France
2 Dipartimento di Elettronica, Informazione e Bioingegneria, Politecnico di Milano, Italy

Abstract

Fueled by the massive adoption of Cloud services, the CO2 emissions of Information and Communication Technology (ICT) systems are rapidly increasing. Overall, service centers and networks account for 2-4% of global CO2 emissions, and this share is expected to reach up to 10% in 5-10 years. Service centers and communication networks have been managed independently so far, but the new generation of Cloud systems can be based on a strict integration of service centers and networking infrastructures. Moreover, geographically-distributed service centers are being widely adopted to keep Cloud services close to end users and to guarantee high performance. The geographical distribution of the computing facilities offers many opportunities for optimizing energy consumption and costs by means of a clever distribution of the computational workload, exploiting the different availability of renewable energy sources as well as different time zones and hourly energy pricing. Energy and cost savings can be pursued by dynamically allocating computing resources to applications at a global level, while communication networks make it possible to flexibly assign load requests and to move data. Even though a considerable research effort has been devoted in recent years to the energy efficiency of service centers and communication networks, limited work has been done to explore integrated approaches able to exploit possible synergies between geographically distributed service centers and the networks for accessing and interconnecting them. In this paper we propose an optimization framework able to jointly manage the use of brown and green energy in an integrated system while guaranteeing quality requirements. We propose an efficient and accurate problem formulation that can be solved to optimality for real-size instances in a few minutes. Numerical results, on a set of randomly generated instances and on a case study representative of a large Cloud provider, show that the availability of green energy has a big impact on the optimal energy management policies and that the contribution of the network is far from negligible.

Keywords: Green IT; Cloud service; Energy management; Green Networking; Optimization; Mathematical Programming; Renewable energy.

1 Introduction

The debate on climate change and carbon dioxide emission reduction is fostering the development of new green policies aimed at decreasing the environmental impact of human activities. How to contrast global warming and how to enhance energy efficiency have been put on top of the list of the world's global challenges. ICT plays a key role in this greening process. Since the beginning, ICT applications have been considered part of the solution, as they can greatly improve the environmental performance of all the other sectors of the world economy. More recently, the awareness of the potential impact of the carbon emissions of the ICT sector itself has rapidly increased. Overall, the combined energy consumption of service centers and communication networks accounts for 2-4% of global CO2 emissions (comparable, e.g., to the emissions due to global air traffic) and it is projected to reach up to 10% in 5-10 years, fueled by the expected massive adoption of Cloud computing [1, 2]. Investment in service centers grew by 22.1% during 2012 and is expected to grow by a further 14.5% in 2013 [1]. Thus, one of the main challenges for Cloud computing is to reduce its carbon and energy footprints while keeping up with the high growth rate of storage, server, and communication infrastructures. Even if the computing and networking components of the system have been designed and managed quite independently so far, the current trend is to integrate them more strongly in order to improve the performance and efficiency of the Cloud services offered to end users [3]. The integration of computing and networking components into a new generation of Cloud systems can be used not only to provide service flexibility to end users, but also to manage in a flexible way the resources available in geographically-distributed computing centers and in the network interconnecting them. A key enabler of Cloud/network cooperation is the use of geographically distributed service centers. Distributed Cloud service provisioning makes it possible to better balance the traffic and computing workload

and to bring computing and storage services closer to the end users, for a better application experience [4]. Moreover, from an energy point of view, geographically distributed service centers allow Cloud providers to optimize energy consumption by exploiting load variations and energy cost variations over time in different locations. The level of flexibility in the use of resources in different service centers depends on the application domains and basically comes from the geographic distribution of the virtual machines hosting service applications, the use of dynamic geographical load redirection mechanisms, and the intelligent use of storage systems with data partitioning and replication techniques. This flexibility in service center management also has a relevant impact on the communication network, mainly for two reasons. First, since the service requests from users are delivered through an access network to the service centers hosting the virtual machines, dynamically moving the workload among service centers can completely change the traffic pattern observed by the network. Second, an interconnection network is used to internally connect service centers and redirect end-user requests or even move virtual machines and data with, again, a non-negligible impact on the traffic load when reconfiguration decisions are taken by the Cloud management system. Access and interconnection networks can actually be implemented in several different ways, as private Intranets or using the public Internet, depending on the Cloud provider policies and the specific application domains. In any case, there is a large number of possible interaction models between telecommunication and Cloud providers that regulate their service agreements from a technical and economic perspective. The main contribution of this paper is to explore the possibility of jointly and optimally managing service centers and the network connecting them, with the aim of reducing the Cloud energy cost and consumption. We develop an optimization framework that is able to show the potential savings that can be achieved with the joint management

and to point out the relevant parameters that impact the overall system performance. The resource utilization and load allocation scheduling is performed on a daily basis, assuming a central decision point (one of the service centers) and the availability of traffic patterns for the different time periods. However, since the computational time is quite small (on the order of a few minutes), the time period length can be decreased and the granularity made finer, so as to better follow unpredictable traffic variations. The proposed approach is based on a MILP (Mixed Integer Linear Programming) model which is solved to optimality with a state-of-the-art solver. The model assumes a Cloud service provider adopting the PaaS (Platform as a Service) approach and optimizes the load allocation to a set of geographically distributed service centers (SCs) where virtual machines (VMs) are assigned to physical servers in order to serve requests belonging to different classes. The goal is to minimize the total energy cost, considering the time-varying nature of energy costs and the availability of green energy at different locations. The traffic can be routed to SCs using a geographical network whose capacity constraints and energy consumption are accounted for. We formally prove that the problem is NP-hard since it is equivalent to a Set Covering problem. We present a set of numerical results on a realistic case study that considers the real worldwide geographical distribution of the SCs of a Cloud provider and the variations of energy cost and green energy availability in different world regions. Moreover, we show some results that characterize the scalability of the proposed optimization framework and its sensitivity to workload and green energy prediction errors. Even if a large literature on improving the energy efficiency of service centers (see, e.g., [5]) and communication networks (see, e.g., [6]) exists, very few studies have so far considered the cooperation between Cloud and network providers for joint energy management, and, to the best of our knowledge, no one has proposed a joint optimization framework able to exploit low energy modes in physical servers and networking devices

(see Section 6 for a detailed analysis of related work). From a practical point of view, we assume that the economic advantages coming from the energy savings can be managed through service agreements between Cloud and network operators, taking into account the contributions of the different system components, which can be quantified by the proposed model. The paper is organized as follows. Section 2 describes the integrated approach and the problem addressed. Section 3 describes the proposed MILP model. Section 4 reports on the experimental tests and the obtained results. Section 6 overviews other literature approaches. Finally, conclusions are drawn in Section 7.

2 The integrated framework for sustainable Clouds

In this work we consider a PaaS provider operating a virtualized service infrastructure comprising multiple Service Centers (SCs) distributed over multiple physical sites. This scenario is frequent nowadays. Indeed, Cloud providers own multiple geographically distributed SCs, each including thousands of physical servers. For example, Amazon offers EC2 services in nine worldwide regions (located in USA, South America, Europe, Australia, and Asia) and each region is further divided into different availability zones [7]. We denote with N the set of SCs. Physical sites are connected by a wide area network operated by a partner network provider (see Figure 1). The network is modeled considering virtual paths connecting the SCs (a full mesh of end-to-end paths is assumed). The PaaS provider supports the execution of multiple transactional services, each representing a different customer application. For the sake of simplicity, we assume that each VM hosts a single service, and multiple VMs implementing the same service can be run in parallel on different servers. The hosted services can be heterogeneous with respect to computing demands, workload intensities, and bandwidth requirements. Services with different computing and

workload profiles are categorized into independent request classes, where the system overall serves a set K of request classes. Given the application workload profiles, the aim is to decide which SC serves each request, taking into account performance and energy as primary factors. Since SC energy consumption is considered, the application workload can be forwarded from one SC to another so as to exploit cheaper or green energy where it is available. However, this has an impact on the network energy consumption. The overall problem consists in assigning the application workload to service centers in order to jointly minimize SC and network energy costs. The problem is solved on a daily basis: one day in advance, a prediction of the application workload is considered (as in [8, 9, 10, 11]) and requests are assigned to service centers for the next 24 hours. The considered 24-hour horizon is discretized and divided into time bands. We denote with T the set of time bands. Each SC is characterized by a specific traffic profile for each time band: the local arrival rate for requests of class $k \in K$ at SC $i \in N$ at time $t \in T$ is denoted with $\lambda_{ik}^t$, and we assume that the workload profile is periodic [12] (see Footnote 1). Furthermore, as in [13, 14, 15], we assume that the SCs implement a distributed file system and that data are replicated in multiple geographical locations for availability reasons. Each SC has a set of different types of VMs L that can serve incoming requests. Among the many resources, we focus on the CPU and bandwidth as representative resources for the resource allocation problem, consistently with [16, 17, 18, 19, 20]. Different request classes need different kinds of VMs and have different bandwidth and CPU requirements.

Footnote 1: It is worth noting that the computational times, discussed in Section 4, are such that even a finer granularity planning can be performed with our model. Besides, it is possible to solve the model several times during the day, thus taking into account unexpected traffic variations that can modify the predicted 24-hour profile. However, the analysis of more fine-grained time scales and their interrelationship with the one-day-ahead planning is out of the scope of this paper.

Figure 1: Reference Cloud System Architecture.

Three parameters are used to model such features. Parameter $m_{kl}$ is equal to 1 if a request of class $k$ can be served by a VM of type $l$, parameter $b_k$ represents the bandwidth requirement of class $k$ requests, while $D_k$ represents the overall CPU demanding time [21] for serving a request of class $k$ on a VM of capacity 1. VMs and SCs have capacity limits: $P_{il}$ is the capacity of a VM of type $l$ at SC $i$, while $C_i$ is the overall computing capacity of SC $i$. Energy consumption varies for different VMs. Furthermore, switching a VM on or off with respect to the previous time band also requires a certain amount of energy. Parameter $\alpha_{il}$ denotes the energy consumption for running a type $l$ VM in SC $i$ at peak load (in kWh), while $\eta_{il}$ and $\theta_{il}$ are the energy consumption for turning a VM on and off, respectively. Concerning the network parameters, we assume that the energy consumption of an end-to-end path is zero if there is no traffic, while in the presence of traffic it has a fixed component and a load-proportional component. In order to better estimate

these components, we consider the number of physical hops along the path and the energy consumption of the routers interconnecting them (obviously the mapping of paths to physical links and routers depends on the network operator policies). We consider that the number of routers $R_{ij}$ in the path connecting each pair of SCs $i$ and $j$ is given, which affects the energy consumption of connecting $i$ to $j$. Furthermore, a maximum amount of traffic $Q_{ij}$ can be sent from $i$ to $j$. In fact, a maximum bandwidth is available on the links belonging to the path connecting $i$ and $j$, which cannot handle additional traffic when at full capacity. The energy consumption, in kWh, for running and for keeping idle a single router is denoted with $\gamma_{ij}$ and $\delta_{ij}$, respectively. Switching a path on and off consumes energy as well: $\tau_{ij}$ and $\xi_{ij}$, respectively. For each unit of energy consumed by the path connecting $i$ to $j$ at time $t$, a cost $f_{ij}^t$ must be paid. In order to take into account service center cooling energy costs, power distribution, and uninterruptible power supply efficiency, we consider the Power Usage Effectiveness of SC $i$, denoted by $\rho_i$. The Power Usage Effectiveness is defined as the total service center power divided by the IT equipment power. Renewable energy is considered as well, with cost and availability depending on the sites. We denote with $c_i^t$ and $g_i^t$ the cost for brown and green energy at each SC in time band $t$, respectively. $\Gamma_i^t$ is the renewable energy available at SC $i$ at time $t$. As for the application workload, green energy sources can be evaluated by relying on prediction techniques [22, 23, 24]. The overall problem consists of assigning request classes to service centers so as to guarantee that all requests are served by a suitable VM, by minimizing SC and network energy costs while not exceeding service center and network capacities. The problem solution provides the fraction of the incoming workload served at each SC and redirected by the load managers to other SCs (see Figure 1), and the number of VMs at each SC to be allocated to each application. The main idea behind the approach is that requests

arriving at SC $i$ should be forwarded to another SC if redirecting the requests is more efficient in terms of energy costs than serving them locally. Note that energy costs at another SC might be lower thanks to time zone differences. The problem parameters we will use in this paper are summarized in Table 1. We assume to solve the problem on a daily basis at a central decision point (any SC). Alternatively, the solution can also be computed in parallel at different SCs by sharing the incoming workload prediction, without adding a significant overhead to the system (the information is shared once a day).

3 An optimization approach for energy management

The load management problem is formulated as a Mixed Integer Linear Programming (MILP) optimization model which takes into account many important features, such as energy consumption, bandwidth and capacity constraints, and green energy generation.

3.1 Variables

The problem can be formulated using several sets of variables. The continuous non-negative variable $x_{ijkl}^t$ represents the arrival rate of requests of class $k$ at SC $i$ which are served in SC $j$ by a VM of type $l$, at time $t$. In other terms, the $x_{ijkl}^t$ variables represent the optimal configuration of request forwarding or in-site serving of the Cloud system in the time interval $t$. Note that $x_{iikl}^t$ indicates the request rate that is originated and served locally, i.e., $\sum_{l \in L} x_{iikl}^t \le \lambda_{ik}^t$, and $\sum_{l \in L} x_{iikl}^t = \lambda_{ik}^t$ if requests are not redirected. The integer variable $w_{il}^t$ represents the number of VMs of type $l$ used in SC $i$ at time band $t$. Two further integer variables represent the number of VMs of type $l$ to be turned on (and off) in each SC $i$ at time band $t$: $\overline{w}_{il}^t$ and $\underline{w}_{il}^t$, respectively.

N: set of Service Centers
T: set of time bands
L: set of VM types
K: set of request classes
$\lambda_{ik}^t$: local incoming workload arrival rate for request class k at SC i at time band t (req/hour)
$m_{kl}$: VM requirement parameter, equal to 1 if class k in K can be served by a type l VM, 0 otherwise
$b_k$: bandwidth requirement for request class k
$D_k$: demanding time for serving request class k on a VM of capacity 1
$P_{il}$: capacity of a VM of type l at SC i
$C_i$: SC i overall computing capacity
U: average VM utilization
$\alpha_{il}$: energy consumption for running a type l VM in SC i
$\eta_{il}$: energy consumption for switching on a type l VM in SC i
$\theta_{il}$: energy consumption for switching off a type l VM in SC i
$\rho_i$: PUE value for SC i
$c_i^t$: cost for energy in SC i at time band t
$R_{ij}$: total number of routers in the link (i, j)
$Q_{ij}$: maximum bandwidth available on path (i, j)
$\gamma_{ij}$: energy consumption for a router in path (i, j)
$\delta_{ij}$: energy consumption for a router in path (i, j) in idle state
$\tau_{ij}$: energy consumption for switching on a router in path (i, j)
$\xi_{ij}$: energy consumption for switching off a router in path (i, j)
$f_{ij}^t$: cost for energy in path (i, j) at time band t
$w_{il}^0$: number of active VMs of type l in SC i at time 0
$z_{ij}^0$: link (i, j) status at time 0
$\Gamma_i^t$: green energy available in SC i at time band t
$g_i^t$: cost for green energy in SC i at time band t

Table 1: Sets and parameters.

Other variables are associated with the network: $z_{ij}^t$ is a binary variable which is equal to 1 if the path connecting $i$ to $j$ is active at time $t$, and 0 otherwise. Similarly to the SC variables, $\overline{z}_{ij}^t$ and $\underline{z}_{ij}^t$ indicate whether each path has to be turned on or off with respect to time $t-1$. Finally, the continuous variable $y_i^t$ models the amount of green energy used in SC $i$ at time $t$. The decision variables are summarized in Table 2.

$x_{ijkl}^t$: arrival rate for class k requests redirected from SC i to SC j and served with a type l VM
$w_{il}^t$: number of type l VMs running in SC i at time band t
$\overline{w}_{il}^t$: number of VMs of type l turned on with respect to time t-1 in SC i
$\underline{w}_{il}^t$: number of VMs of type l turned off with respect to time t-1 in SC i
$z_{ij}^t$: whether the link (i, j) is active in time band t (binary)
$\overline{z}_{ij}^t$: whether the link (i, j) has to be turned on with respect to time t-1 (binary)
$\underline{z}_{ij}^t$: whether the link (i, j) has to be turned off with respect to time t-1 (binary)
$y_i^t$: green energy used in SC i at time t

Table 2: Decision Variables.

3.2 Objective function

As mentioned above, the aim of the problem is to reduce the energy cost along the considered time horizon and to exploit green energy where available. The following objective function accounts for these goals.

$$ \min \; \sum_{t \in T} \sum_{i \in N} \left\{ c_i^t \left[ \rho_i \sum_{l \in L} \left( \alpha_{il} w_{il}^t + \eta_{il} \overline{w}_{il}^t + \theta_{il} \underline{w}_{il}^t \right) - y_i^t \right] + g_i^t \, y_i^t \right\} + \sum_{t \in T} \sum_{i,j \in N} f_{ij}^t \, R_{ij} \left[ \delta_{ij} z_{ij}^t + \tau_{ij} \overline{z}_{ij}^t + \xi_{ij} \underline{z}_{ij}^t + (\gamma_{ij} - \delta_{ij}) \, \frac{\sum_{k \in K} b_k \sum_{l \in L} x_{ijkl}^t}{Q_{ij}} \right] \quad (1) $$
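To make the structure of the formulation more concrete, the following is a minimal sketch, not part of the original paper, of how the decision variables of Table 2 and objective (1) could be assembled with the open-source PuLP modeling library in Python. The index sets and all parameter values are made-up placeholders, and the paper itself relies on a state-of-the-art commercial MILP solver rather than this toolchain.

```python
import pulp

# Hypothetical toy instance (placeholder data, not the paper's test cases).
N = range(2)    # service centers
T = range(3)    # time bands
L = range(1)    # VM types
K = range(2)    # request classes

c     = {i: {t: 50.0 for t in T} for i in N}                 # brown energy cost
g     = {i: {t: 0.0  for t in T} for i in N}                 # green energy cost
rho   = {i: 1.5 for i in N}                                  # PUE
alpha = {i: {l: 0.08  for l in L} for i in N}                # kWh per running VM
eta   = {i: {l: 0.003 for l in L} for i in N}                # kWh to switch a VM on
theta = {i: {l: 0.003 for l in L} for i in N}                # kWh to switch a VM off
f     = {i: {j: {t: 40.0 for t in T} for j in N} for i in N} # path energy cost
R     = {i: {j: 5 for j in N} for i in N}                    # routers on path (i, j)
gamma = {i: {j: 3.84 for j in N} for i in N}                 # router energy, full load
delta = {i: {j: 1.92 for j in N} for i in N}                 # router energy, idle
tau   = {i: {j: 0.02 for j in N} for i in N}                 # path switch-on energy
xi    = {i: {j: 0.02 for j in N} for i in N}                 # path switch-off energy
Q     = {i: {j: 1e6 for j in N} for i in N}                  # path capacity
b     = {k: 100.0 for k in K}                                # per-request bandwidth

prob = pulp.LpProblem("energy_aware_cloud_network", pulp.LpMinimize)

# Decision variables of Table 2 (w_on/w_off and z_on/z_off play the role of the
# barred variables for switching VMs and paths on/off).
x     = pulp.LpVariable.dicts("x", (N, N, K, L, T), lowBound=0)
w     = pulp.LpVariable.dicts("w", (N, L, T), lowBound=0, cat="Integer")
w_on  = pulp.LpVariable.dicts("w_on", (N, L, T), lowBound=0, cat="Integer")
w_off = pulp.LpVariable.dicts("w_off", (N, L, T), lowBound=0, cat="Integer")
z     = pulp.LpVariable.dicts("z", (N, N, T), cat="Binary")
z_on  = pulp.LpVariable.dicts("z_on", (N, N, T), cat="Binary")
z_off = pulp.LpVariable.dicts("z_off", (N, N, T), cat="Binary")
y     = pulp.LpVariable.dicts("y", (N, T), lowBound=0)

# Objective (1): service-center brown plus green energy cost, plus network energy cost.
sc_cost = pulp.lpSum(
    c[i][t] * (rho[i] * pulp.lpSum(alpha[i][l] * w[i][l][t]
                                   + eta[i][l] * w_on[i][l][t]
                                   + theta[i][l] * w_off[i][l][t] for l in L)
               - y[i][t])
    + g[i][t] * y[i][t]
    for i in N for t in T)
net_cost = pulp.lpSum(
    f[i][j][t] * R[i][j] * (delta[i][j] * z[i][j][t]
                            + tau[i][j] * z_on[i][j][t]
                            + xi[i][j] * z_off[i][j][t]
                            + (gamma[i][j] - delta[i][j]) / Q[i][j]
                              * pulp.lpSum(b[k] * x[i][j][k][l][t]
                                           for k in K for l in L))
    for i in N for j in N for t in T)
prob += sc_cost + net_cost
```

The constraints discussed next would be added with further prob += statements in the same style; prob.solve() would then invoke the CBC solver bundled with PuLP, while a commercial MILP solver can be plugged in through PuLP's solver interfaces.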

The total energy required at SC $i$ in time band $t$ is represented by the term $\rho_i \sum_{l \in L} (\alpha_{il} w_{il}^t + \eta_{il} \overline{w}_{il}^t + \theta_{il} \underline{w}_{il}^t)$, namely the sum of the energy needed to keep the VMs running and the energy needed to switch them on and off. The cost of the total energy is split into green and brown energy costs. The amount of green energy used is simply given by the variable $y_i^t$, while the brown energy is the difference between the total energy needed in the SC and the green energy actually used. Concerning the network, the total energy cost to be minimized is computed by summing each path's energy cost, which depends on the cost of keeping the routers working and the cost of turning the routers on and off. Besides, the cost of the energy consumed by the routers is considered, which is proportional to the total bandwidth in use, namely the fraction with respect to the total bandwidth available. The network is assumed not to use green energy. The energy costs of the SCs (first term of the objective function) and of the network (second term) are jointly minimized.

3.3 Constraints

Several constraints are needed to guarantee a proper description of the service centers' and the network's behavior.

3.3.1 Workload assignment constraints

First of all, we need to guarantee that all the requests are served, either by the SC at which they occur or by another one to which they are forwarded. Further, they must be served by a suitable class of VM. The following constraints guarantee that these requirements are satisfied.

$$ \sum_{j \in N} \sum_{l \in L} x_{ijkl}^t = \lambda_{ik}^t \qquad \forall i \in N, k \in K, t \in T \quad (2) $$

$$ x_{ijkl}^t \le \lambda_{ik}^t \, m_{kl} \qquad \forall i, j \in N, k \in K, l \in L, t \in T \quad (3) $$

13 In details, constraints (2) make sure that all the incoming traffic is served by any of the SC. Equations (3) ensure that each type of request in each SC is served by the suitable type of VM Service center capacity constraints The following capacity constraints are added to ensure that service centers capacity is not exceeded. wil t D k x t jikl j N k K U i N, l L, t T (4) P il wil t C i i N (5) l L Constraints (4) determines the number of VMs of type l required to serve the overall incoming workload at site i. As in other literature approaches [8, 25] and currently implemented by Cloud providers (see, e.g., AWS Elastic Beanstalk [26]), these constraints force a suitable number of VMs to be active in i in time slot t, guaranteeing that the aver- D k x t jikl age VM utilization is less or equal to a threshold U. Note that j N k K equals to wil t VMs utilization under workload sharing. In practice U is usually set around 60%. Since w t il must be integer while the right hand side may be not, the constraints may not be tight, and therefore inequalities are needed. Finally, constraints (5) ensure that the number of VMs running at SC i in time t is lower than the available resources. 13

3.3.3 Network constraints

A description of the network must be included in the model, guaranteeing that the network capacity is not exceeded and that the network energy consumption is properly computed.

$$ \sum_{k \in K} b_k \sum_{l \in L} x_{ijkl}^t \le Q_{ij} \, z_{ij}^t \qquad \forall i, j \in N, t \in T \quad (6) $$

$$ z_{ij}^t = z_{ji}^t \qquad \forall i, j \in N, t \in T \quad (7) $$

The expression $\sum_{k \in K} b_k \sum_{l \in L} x_{ijkl}^t$ computes the portion of link bandwidth used by the traffic between SC $i$ and SC $j$, thus determining the bandwidth used on each path of the network in each time slot. Constraints (6) ensure that the traffic on each path does not exceed the available capacity and force $z_{ij}^t$ to be one if the amount of bandwidth used on $(i, j)$ is strictly positive, thus determining the active paths at time $t$. Furthermore, constraints (7) guarantee that if a path is active in one direction, it is also active in the other, as paths are bidirectional. Note that the parameters $Q_{ij}$ may not be the actual bandwidth along the path between $i$ and $j$, but a reduced value guaranteeing that the capacity available for the traffic load is just a fraction of the total one. This is one of the typical traffic engineering instruments adopted by network operators to control the quality of service experienced by traffic flows.

3.3.4 Time continuity constraints

The switching on and off of VMs and paths must be computed along the considered time horizon. The following constraints represent the relations between consecutive time bands.

$$ \overline{w}_{il}^t \ge w_{il}^t - w_{il}^{t-1} \qquad \forall i \in N, l \in L, t \in T \quad (8) $$

$$ \underline{w}_{il}^t \ge w_{il}^{t-1} - w_{il}^t \qquad \forall i \in N, l \in L, t \in T \quad (9) $$

$$ \overline{z}_{ij}^t \ge z_{ij}^t - z_{ij}^{t-1} \qquad \forall i, j \in N, t \in T \quad (10) $$

$$ \underline{z}_{ij}^t \ge z_{ij}^{t-1} - z_{ij}^t \qquad \forall i, j \in N, t \in T \quad (11) $$

Concerning the SCs, (8) and (9) identify how many VMs have to be switched on or off with respect to time band $t-1$. Concerning the network, equations (10) and (11) define which paths have to be turned on or switched to idle mode with respect to the previous time band.

3.3.5 Green energy

The following constraints concern the green energy consumed.

$$ y_i^t \le \rho_i \sum_{l \in L} \left( \alpha_{il} w_{il}^t + \eta_{il} \overline{w}_{il}^t + \theta_{il} \underline{w}_{il}^t \right) \qquad \forall i \in N, t \in T \quad (12) $$

$$ y_i^t \le \Gamma_i^t \qquad \forall i \in N, t \in T \quad (13) $$

Constraints (12) ensure that the green energy used in each SC does not exceed the total energy needed. Constraints (13) guarantee that the availability of green energy sources in each SC is not exceeded. The lower cost of green energy (we assume $g_i^t < c_i^t$ for all $i$ and $t$) leads the optimal solution to prefer those SCs which can exploit green energy produced on site, and to use all the green energy available at a site before starting to use brown energy. Therefore, requests will be forwarded to those sites where green energy is available, provided that the network cost is lower than the savings that would be obtained.
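To see how the time continuity constraints (8)-(9) and the green energy constraints (12)-(13) interact with the service-center part of objective (1), here is a small self-contained sketch for a single SC and a single VM type, again in PuLP with made-up data (an illustration under these assumptions, not the formulation solved in the paper). Since the switching and running terms have nonnegative cost coefficients, the switch-on/off variables take exactly the positive part of the change between consecutive time bands at the optimum, and green energy is used up to the minimum of the energy needed and the availability.

```python
import pulp

# Toy single-SC, single-VM-type instance (made-up data).
T = range(4)
demand = {0: 10, 1: 25, 2: 25, 3: 5}      # VMs required per time band (stand-in for (4))
w0 = 0                                     # w^0: VMs active before the horizon starts
rho = 1.5                                  # PUE
alpha, eta, theta = 0.08, 0.003, 0.003     # kWh per running / switched-on / switched-off VM
c = {0: 60.0, 1: 50.0, 2: 40.0, 3: 55.0}   # brown energy cost per time band
g = 0.0                                    # green energy cost (assumed zero, as in Section 4)
Gamma = {0: 0.0, 1: 2.0, 2: 2.0, 3: 0.0}   # green energy available per time band

prob = pulp.LpProblem("single_sc_energy", pulp.LpMinimize)
w     = pulp.LpVariable.dicts("w", T, lowBound=0, cat="Integer")
w_on  = pulp.LpVariable.dicts("w_on", T, lowBound=0, cat="Integer")
w_off = pulp.LpVariable.dicts("w_off", T, lowBound=0, cat="Integer")
y     = pulp.LpVariable.dicts("y", T, lowBound=0)

# Total SC energy per band, as in the first term of (1).
energy = {t: rho * (alpha * w[t] + eta * w_on[t] + theta * w_off[t]) for t in T}

# Objective: brown energy pays c, green energy pays g.
prob += pulp.lpSum(c[t] * (energy[t] - y[t]) + g * y[t] for t in T)

for t in T:
    prev = w0 if t == 0 else w[t - 1]
    prob += w[t] >= demand[t]        # stand-in for capacity constraint (4)
    prob += w_on[t] >= w[t] - prev   # constraint (8)
    prob += w_off[t] >= prev - w[t]  # constraint (9)
    prob += y[t] <= energy[t]        # constraint (12)
    prob += y[t] <= Gamma[t]         # constraint (13)

prob.solve(pulp.PULP_CBC_CMD(msg=False))
for t in T:
    print(t, int(w[t].value()), round(y[t].value(), 3))
```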

3.3.6 Domain constraints

Finally, the model is completed by defining the variable domains.

$$ x_{ijkl}^t \in \mathbb{R}^+ \qquad \forall i, j \in N, k \in K, l \in L, t \in T \quad (14) $$

$$ w_{il}^t, \overline{w}_{il}^t, \underline{w}_{il}^t \in \mathbb{N} \qquad \forall i \in N, l \in L, t \in T \quad (15) $$

$$ z_{ij}^t, \overline{z}_{ij}^t, \underline{z}_{ij}^t \in \{0, 1\} \qquad \forall i, j \in N, t \in T \quad (16) $$

$$ y_i^t \in \mathbb{R}^+ \qquad \forall i \in N, t \in T \quad (17) $$

In Appendix B we show that this problem is NP-hard since it is equivalent to a Set Covering Problem.

4 Experimental analysis

Our resource management model has been evaluated under a variety of system and workload configurations. Section 4.1 presents the settings for our experiments, while Section 4.2 reports on the scalability results we achieved when solving our models with commercial MILP solvers. Finally, Section 4.3 presents a cost-benefit evaluation of our solution, which is compared to the case where resource allocation is performed locally and independently at each SC, without workload redirection and without green energy.

4.1 Experimental settings

Our approach is evaluated considering a large set of randomly generated instances. The model parameters were generated uniformly at random in the ranges reported in Table 3. The number of service centers varies between 9 and 36, considering a worldwide distribution, while the number of request classes varies between 120 and 600. In particular, SCs are located in four macro-regions: West USA, East USA, Europe, and Asia, where

most of the SCs belonging to each area have the same time band. In order to evaluate the impact of the interaction among different geographical areas, each instance includes at least one SC in several macro-regions. The SCs' location has an impact on the energy costs and on the availability of green energy sources. The latter also depends on the size of the SC if solar energy is considered. According to [27], we consider SCs with size in the range ,000 sqm. Furthermore, the overhead of the cooling system is included in the energy costs by considering the service centers' PUE, according to the values reported in [2]. As for the applications' incoming workload, we consider 24 time bands and we have built synthetic workloads from the trace of requests registered at the Web site of a large university in Italy. The trace contains the number of sessions registered over 100 physical servers for a one-year period in 2006, on an hourly basis. Realistic workloads are built assuming that the request arrivals follow non-homogeneous Poisson processes and extracting the request traces corresponding to the days with the highest workloads experienced during the observation period. Some noise is also added, as in [28, 29, 11]. Figure 2 shows a representative instance of the normalized local arrival rate of a given class. In order to also vary the applications' peak workload, we consider the total number of Internet users [30] for each world region where SCs are located and set the $\lambda_{ik}^t$ peak values proportionally to this number. Overall, the peak values vary between 100 and 1000 req/sec. We considered 9 types of VMs, varying their capacity in the range between 1 and 20, according to current IaaS/PaaS provider offers [7, 31]. The computing performance parameters of the applications have been randomly generated uniformly in the ranges reported in Table 3, as in [32, 29, 11, 28]. The applications' network requirements have been generated considering the SPECWeb2009 industry benchmark results [33].
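As a hedged illustration of this kind of synthetic workload generation (the actual traces come from a real university web site and are not reproduced here), the following sketch draws hourly request counts from a non-homogeneous Poisson process with a piecewise-constant, made-up daily profile, scales them by an assumed regional peak rate, and adds multiplicative noise as a stand-in for the perturbation mentioned in the text.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical normalized daily profile (24 hourly fractions of the peak);
# in the paper this shape comes from a real web trace, here it is made up.
profile = 0.5 + 0.5 * np.sin(np.linspace(0, 2 * np.pi, 24, endpoint=False))
peak_rate = 500.0  # req/sec at peak for one class/region (assumed value)

# Piecewise-constant intensity per time band, in requests per hour.
lam = profile * peak_rate * 3600.0

# With a piecewise-constant rate, hourly counts of a non-homogeneous Poisson
# process are independent Poisson(lam[t]) draws.
counts = rng.poisson(lam)

# Multiplicative Gaussian noise, truncated at zero.
noisy = counts * rng.normal(loc=1.0, scale=0.05, size=counts.shape)
arrival_rate_per_hour = np.clip(noisy, 0.0, None)

print(arrival_rate_per_hour.round(0))
```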

Figure 2: Normalized local arrival rate over a day (UTC+1).

The switching energy consumption of VMs has been evaluated as in [34], while the VMs' power consumption has been generated according to the results reported for the SPECVint industry benchmark [35], considering the peak power consumption of the reference physical servers and averaging the power consumption among the VMs running during a test. The overall SC capacity has been determined by varying the number of physical servers between 5,000 and 16,000 and assuming a 1:8 ratio for the physical-to-virtual resource assignment (i.e., 1 physical core is assigned to 8 virtual cores of equal capacity), according to recent SPECVint results [35]. As for the wide area network connecting remote SCs, path capacities vary between 0.5 and 1 Gbps. A typical path connecting SCs is built up of both physical lines (such as optical fibers) and network components (such as routers and switches). Thus, we estimate the energy consumption of each path as the energy consumed by its routers, proportionally to the bandwidth in use. We have estimated the number of routers connecting different world regions through the traceroute application. The results (reported in Table 11 of the Appendix) show that the number of hops within the same region is often one, while the largest number (18) is reached when connecting Europe with West USA

and Asia with East USA. Due to the difficulty of obtaining detailed energy consumption profiles of commercial routers and the heterogeneity of the devices used by network providers, we have considered a simplified scenario taking into account a single reference router. The model considered is the Juniper E320 [36], which is quite popular and whose power parameters are publicly available and reported in Table 3. Obviously, the model proposed is general and the results can be easily extended to other router energy profiles. Finally, from the cost point of view, we assumed that for each path (i, j) the energy cost is equal to the average of the energy costs at service centers i and j. The cost of energy varies between 10 and 65 Euro/MWh. Energy cost data have been obtained by considering several energy Market Managers responsible for the countries in which the SCs are located, taking into account also the energy price variability depending on the time of the day. The list of energy Market Managers we have considered is reported in Table 12 of the Appendix; for China, where there is no data disclosure on energy prices, we have considered an unofficial source [37]. Figure 3 shows the energy cost trend in the four geographical macro-regions with respect to time zone UTC+1.

Figure 3: Service centers energy cost during each time band (UTC+1).

Regarding green energy, we have considered a subset of SCs able to produce green energy or located close to clean energy facilities. For example, Google and other Cloud market

leaders have launched initiatives aiming at increasing the amount of electricity derived from renewable sources, rather than relying on electricity produced only with coal [38]. In [39], Google stated that energy coming from renewable sources represented 25% of Google's overall electricity use in 2010, and it is argued that this percentage could grow up to 35% in . According to [40], [41], [42], we have derived the availability of renewable energy sources (namely, solar, wind, and geothermal energy) in the four macro-regions. Details are reported in Figure 4.

Figure 4: Green SCs location.

Finally, as for green energy costs, we assume that the energy is produced locally within the SC, so that the only cost to be considered is the capital cost required to realize the facilities. Consequently, the parameters $g_i^t$ are set to zero.

$b_k$: [10, 210] KB
$D_k$: [0.1, 1] s
$\alpha_{il}$: [60, 90] Wh
$\eta_{il}$: [2, 3] Wh
$\theta_{il}$: [0.28, 14.11] Wh
$\gamma_{ij}$: 3.84 kWh
$\delta_{ij}$: kWh
$\tau_{ij}$: kWh
$\xi_{ij}$: kWh
$\rho_i$: [1.2, 1.7]

Table 3: Performance and Energy parameters.


More information

Muse Server Sizing. 18 June 2012. Document Version 0.0.1.9 Muse 2.7.0.0

Muse Server Sizing. 18 June 2012. Document Version 0.0.1.9 Muse 2.7.0.0 Muse Server Sizing 18 June 2012 Document Version 0.0.1.9 Muse 2.7.0.0 Notice No part of this publication may be reproduced stored in a retrieval system, or transmitted, in any form or by any means, without

More information

White Paper. Recording Server Virtualization

White Paper. Recording Server Virtualization White Paper Recording Server Virtualization Prepared by: Mike Sherwood, Senior Solutions Engineer Milestone Systems 23 March 2011 Table of Contents Introduction... 3 Target audience and white paper purpose...

More information

Optimizing Shared Resource Contention in HPC Clusters

Optimizing Shared Resource Contention in HPC Clusters Optimizing Shared Resource Contention in HPC Clusters Sergey Blagodurov Simon Fraser University Alexandra Fedorova Simon Fraser University Abstract Contention for shared resources in HPC clusters occurs

More information

can you effectively plan for the migration and management of systems and applications on Vblock Platforms?

can you effectively plan for the migration and management of systems and applications on Vblock Platforms? SOLUTION BRIEF CA Capacity Management and Reporting Suite for Vblock Platforms can you effectively plan for the migration and management of systems and applications on Vblock Platforms? agility made possible

More information

ACANO SOLUTION VIRTUALIZED DEPLOYMENTS. White Paper. Simon Evans, Acano Chief Scientist

ACANO SOLUTION VIRTUALIZED DEPLOYMENTS. White Paper. Simon Evans, Acano Chief Scientist ACANO SOLUTION VIRTUALIZED DEPLOYMENTS White Paper Simon Evans, Acano Chief Scientist Updated April 2015 CONTENTS Introduction... 3 Host Requirements... 5 Sizing a VM... 6 Call Bridge VM... 7 Acano Edge

More information

Cloud Federation to Elastically Increase MapReduce Processing Resources

Cloud Federation to Elastically Increase MapReduce Processing Resources Cloud Federation to Elastically Increase MapReduce Processing Resources A.Panarello, A.Celesti, M. Villari, M. Fazio and A. Puliafito {apanarello,acelesti, mfazio, mvillari, apuliafito}@unime.it DICIEAMA,

More information

Adaptive Tolerance Algorithm for Distributed Top-K Monitoring with Bandwidth Constraints

Adaptive Tolerance Algorithm for Distributed Top-K Monitoring with Bandwidth Constraints Adaptive Tolerance Algorithm for Distributed Top-K Monitoring with Bandwidth Constraints Michael Bauer, Srinivasan Ravichandran University of Wisconsin-Madison Department of Computer Sciences {bauer, srini}@cs.wisc.edu

More information

SAN Conceptual and Design Basics

SAN Conceptual and Design Basics TECHNICAL NOTE VMware Infrastructure 3 SAN Conceptual and Design Basics VMware ESX Server can be used in conjunction with a SAN (storage area network), a specialized high speed network that connects computer

More information

Zadara Storage Cloud A whitepaper. @ZadaraStorage

Zadara Storage Cloud A whitepaper. @ZadaraStorage Zadara Storage Cloud A whitepaper @ZadaraStorage Zadara delivers two solutions to its customers: On- premises storage arrays Storage as a service from 31 locations globally (and counting) Some Zadara customers

More information

CloudAnalyst: A CloudSim-based Visual Modeller for Analysing Cloud Computing Environments and Applications

CloudAnalyst: A CloudSim-based Visual Modeller for Analysing Cloud Computing Environments and Applications CloudAnalyst: A CloudSim-based Visual Modeller for Analysing Cloud Computing Environments and Applications Bhathiya Wickremasinghe 1, Rodrigo N. Calheiros 2, and Rajkumar Buyya 1 1 The Cloud Computing

More information

Low-Carbon Routing Algorithms For Cloud Computing Services in IP-over-WDM Networks

Low-Carbon Routing Algorithms For Cloud Computing Services in IP-over-WDM Networks Low-Carbon Routing Algorithms For Cloud Computing Services in IP-over-WDM Networks TREND Plenary Meeting, Volos-Greece 01-05/10/2012 TD3.3: Energy-efficient service provisioning and content distribution

More information

Empirical Examination of a Collaborative Web Application

Empirical Examination of a Collaborative Web Application Empirical Examination of a Collaborative Web Application Christopher Stewart (stewart@cs.rochester.edu), Matthew Leventi (mlventi@gmail.com), and Kai Shen (kshen@cs.rochester.edu) University of Rochester

More information

Relational Databases in the Cloud

Relational Databases in the Cloud Contact Information: February 2011 zimory scale White Paper Relational Databases in the Cloud Target audience CIO/CTOs/Architects with medium to large IT installations looking to reduce IT costs by creating

More information

Chao He he.chao@wustl.edu (A paper written under the guidance of Prof.

Chao He he.chao@wustl.edu (A paper written under the guidance of Prof. 1 of 10 5/4/2011 4:47 PM Chao He he.chao@wustl.edu (A paper written under the guidance of Prof. Raj Jain) Download Cloud computing is recognized as a revolution in the computing area, meanwhile, it also

More information

Technical Investigation of Computational Resource Interdependencies

Technical Investigation of Computational Resource Interdependencies Technical Investigation of Computational Resource Interdependencies By Lars-Eric Windhab Table of Contents 1. Introduction and Motivation... 2 2. Problem to be solved... 2 3. Discussion of design choices...

More information

When Does Colocation Become Competitive With The Public Cloud? WHITE PAPER SEPTEMBER 2014

When Does Colocation Become Competitive With The Public Cloud? WHITE PAPER SEPTEMBER 2014 When Does Colocation Become Competitive With The Public Cloud? WHITE PAPER SEPTEMBER 2014 Table of Contents Executive Summary... 2 Case Study: Amazon Ec2 Vs In-House Private Cloud... 3 Aim... 3 Participants...

More information

The International Journal Of Science & Technoledge (ISSN 2321 919X) www.theijst.com

The International Journal Of Science & Technoledge (ISSN 2321 919X) www.theijst.com THE INTERNATIONAL JOURNAL OF SCIENCE & TECHNOLEDGE Efficient Parallel Processing on Public Cloud Servers using Load Balancing Manjunath K. C. M.Tech IV Sem, Department of CSE, SEA College of Engineering

More information

R u t c o r Research R e p o r t. A Method to Schedule Both Transportation and Production at the Same Time in a Special FMS.

R u t c o r Research R e p o r t. A Method to Schedule Both Transportation and Production at the Same Time in a Special FMS. R u t c o r Research R e p o r t A Method to Schedule Both Transportation and Production at the Same Time in a Special FMS Navid Hashemian a Béla Vizvári b RRR 3-2011, February 21, 2011 RUTCOR Rutgers

More information

When Does Colocation Become Competitive With The Public Cloud?

When Does Colocation Become Competitive With The Public Cloud? When Does Colocation Become Competitive With The Public Cloud? PLEXXI WHITE PAPER Affinity Networking for Data Centers and Clouds Table of Contents EXECUTIVE SUMMARY... 2 CASE STUDY: AMAZON EC2 vs IN-HOUSE

More information

FPGA area allocation for parallel C applications

FPGA area allocation for parallel C applications 1 FPGA area allocation for parallel C applications Vlad-Mihai Sima, Elena Moscu Panainte, Koen Bertels Computer Engineering Faculty of Electrical Engineering, Mathematics and Computer Science Delft University

More information

Average and Marginal Reductions in Greenhouse Gas Emissions from Data Center Power Optimization

Average and Marginal Reductions in Greenhouse Gas Emissions from Data Center Power Optimization Average and Marginal Reductions in Greenhouse Gas Emissions from Data Center Power Optimization Sabrina Spatari, Civil, Arch., Environmental Engineering Nagarajan Kandasamy, Electrical and Computer Engineering

More information

Comparing major cloud-service providers: virtual processor performance. A Cloud Report by Danny Gee, and Kenny Li

Comparing major cloud-service providers: virtual processor performance. A Cloud Report by Danny Gee, and Kenny Li Comparing major cloud-service providers: virtual processor performance A Cloud Report by Danny Gee, and Kenny Li Comparing major cloud-service providers: virtual processor performance 09/03/2014 Table

More information

Load balancing model for Cloud Data Center ABSTRACT:

Load balancing model for Cloud Data Center ABSTRACT: Load balancing model for Cloud Data Center ABSTRACT: Cloud data center management is a key problem due to the numerous and heterogeneous strategies that can be applied, ranging from the VM placement to

More information

END-TO-END CLOUD White Paper, Mar 2013

END-TO-END CLOUD White Paper, Mar 2013 END-TO-END CLOUD White Paper, Mar 2013 TABLE OF CONTENTS Abstract... 2 Introduction and Problem Description... 2 Enterprise Challenges... 3 Setup Alternatives... 4 Solution Details... 6 Conclusion and

More information

Joint Optimization of Overlapping Phases in MapReduce

Joint Optimization of Overlapping Phases in MapReduce Joint Optimization of Overlapping Phases in MapReduce Minghong Lin, Li Zhang, Adam Wierman, Jian Tan Abstract MapReduce is a scalable parallel computing framework for big data processing. It exhibits multiple

More information

Cost-effective Resource Provisioning for MapReduce in a Cloud

Cost-effective Resource Provisioning for MapReduce in a Cloud 1 -effective Resource Provisioning for MapReduce in a Cloud Balaji Palanisamy, Member, IEEE, Aameek Singh, Member, IEEE Ling Liu, Senior Member, IEEE Abstract This paper presents a new MapReduce cloud

More information

Multilevel Communication Aware Approach for Load Balancing

Multilevel Communication Aware Approach for Load Balancing Multilevel Communication Aware Approach for Load Balancing 1 Dipti Patel, 2 Ashil Patel Department of Information Technology, L.D. College of Engineering, Gujarat Technological University, Ahmedabad 1

More information

princeton univ. F 13 cos 521: Advanced Algorithm Design Lecture 6: Provable Approximation via Linear Programming Lecturer: Sanjeev Arora

princeton univ. F 13 cos 521: Advanced Algorithm Design Lecture 6: Provable Approximation via Linear Programming Lecturer: Sanjeev Arora princeton univ. F 13 cos 521: Advanced Algorithm Design Lecture 6: Provable Approximation via Linear Programming Lecturer: Sanjeev Arora Scribe: One of the running themes in this course is the notion of

More information

International Journal of Computer Science Trends and Technology (IJCST) Volume 2 Issue 4, July-Aug 2014

International Journal of Computer Science Trends and Technology (IJCST) Volume 2 Issue 4, July-Aug 2014 RESEARCH ARTICLE An Efficient Service Broker Policy for Cloud Computing Environment Kunal Kishor 1, Vivek Thapar 2 Research Scholar 1, Assistant Professor 2 Department of Computer Science and Engineering,

More information