SLA-aware virtual resource management for cloud infrastructures
SLA-aware virtual resource management for cloud infrastructures. Hien Nguyen Van, Frédéric Dang Tran, Jean-Marc Menaud. 9th IEEE International Conference on Computer and Information Technology (CIT 09), Oct 2009, Xiamen, China. pp. 1-8. Deposited in the HAL open-access archive (https://hal.archives-ouvertes.fr/), submitted on 20 Apr 2010.
SLA-aware virtual resource management for cloud infrastructures*

Hien Nguyen Van, Frédéric Dang Tran (Orange Labs, rue du Général Leclerc, Issy Les Moulineaux Cedex 9, France) and Jean-Marc Menaud (Ascola, École des Mines de Nantes / INRIA, LINA, 4 rue Alfred Kastler, Nantes Cedex 3, France)

Abstract. Cloud platforms host several independent applications on a shared resource pool, with the ability to allocate computing power to applications on a per-demand basis. The use of server virtualization techniques for such platforms provides great flexibility, with the ability to consolidate several virtual machines on the same physical server, to resize a virtual machine's capacity, and to migrate virtual machines across physical servers. A key challenge for cloud providers is to automate the management of virtual servers while taking into account both the high-level QoS requirements of hosted applications and resource management costs. This paper proposes an autonomic resource manager that controls the virtualized environment and decouples the provisioning of resources from the dynamic placement of virtual machines. This manager aims to optimize a global utility function which integrates both the degree of SLA fulfillment and the operating costs. We resort to a Constraint Programming approach to formulate and solve the optimization problem. Results obtained through simulations validate our approach.

1 Introduction

Corporate data centers are in the process of adopting a cloud computing architecture where computing resources are provisioned on a per-demand basis, notably to handle peak loads, instead of being statically allocated. Such cloud infrastructures should improve the average utilization rates of IT resources, which are currently in the 15-20% range. A key enabling technology of cloud systems is server virtualization, which decouples applications and services from the physical server infrastructure.
Server virtualization makes it possible to execute several virtual machines (VMs) concurrently on top of a single physical machine (PM), each VM hosting a complete software stack (operating system, middleware, applications) and being given a partition of the underlying resource capacity (notably CPU power and RAM size). On top of that, the live migration capability of hypervisors allows a virtual machine to be migrated from one physical host to another with little or no interruption of service. The downside of the flexibility brought by virtualization is the added system management complexity for IT managers. Two levels of mapping must be managed (Figure 1): the provisioning stage is responsible for allocating resource capacity, in the form of virtual machines, to applications. This stage is driven by performance goals associated with the business-level SLAs of the hosted applications (e.g. average response time, number of jobs completed per unit of time). Virtual machines must then be mapped to physical machines. This VM placement problem is driven by data center policies related to resource management costs. A typical example is to lower energy consumption by minimizing the number of active physical servers. This paper presents an autonomic resource management system which aims at fulfilling the following requirements:

- the ability to automate the dynamic provisioning and placement of VMs, taking into account both application-level SLAs and resource exploitation costs, with high-level handles for the administrator to specify trade-offs between the two;
- support for heterogeneous applications and workloads, including both enterprise online applications with stringent QoS requirements and batch-oriented CPU-intensive applications;

*This research is supported by the French Agence Nationale de la Recherche under contract ANR-08-SEGI-017.
- support for arbitrary application topologies: n-tier, single cluster, monolithic;
- the capacity to scale, either in a scale-up fashion by adding more resources to a single server, or in a scale-out fashion by adding more servers.

Figure 1. Dynamic resource allocation (Application Environments A, B and C, each with SLA goals on response time R and throughput D, are mapped to VMs, which are in turn mapped to physical machines)

Our proposed management system relies on a two-level architecture with a clear separation between application-specific functions and a generic global decision level. We resort to utility functions to map the current state of each application (workload, resource capacity, SLA) to a scalar value that quantifies the satisfaction of each application with regard to its performance goals. These utility functions are also the means of communication with the global decision layer, which constructs a global utility function including resource management costs. We separate the VM provisioning stage from the VM placement stage within the global decision layer's autonomic loop and formulate both problems as Constraint Satisfaction Problems (CSP). Both problems are instances of the NP-hard knapsack problem, for which a Constraint Programming approach is a good fit. The idea of Constraint Programming is to solve a problem by stating relations between variables in the form of constraints which must be satisfied by the solution. The remainder of this paper is organized as follows. Section 2 presents the architecture of the autonomic virtual resource management system. Next, we show some simulation results for different application environments in Section 3. We cover related work in Section 4. Finally, Section 5 concludes and presents future work.

2 Autonomic Virtual Resource Management

2.1 System Architecture

Our management system architecture is shown in Figure 2.
The datacenter consists of a set of physical machines (PMs), each hosting multiple VMs through a hypervisor. We assume that the number of physical machines is fixed and that they all belong to the same cluster, with the possibility of performing a live migration of a VM between two arbitrary PMs. An Application Environment (AE) encapsulates an application hosted by the cloud system. An AE is associated with specific performance goals specified in an SLA contract. An AE can embed an arbitrary application topology which can span one or multiple VMs (e.g. a multi-tier Web application or a master-worker grid application). We assume that resource allocation is performed with the granularity of a VM; in other words, a running VM is associated with one and only one AE. Applications cannot request a VM with an arbitrary resource capacity in terms of CPU power and memory size (in the remainder of this paper, we focus primarily on CPU and RAM capacity, but our system could be extended to cope with other resource dimensions such as network I/O). The VMs available to an application must be chosen from a set of pre-defined VM classes. Each VM class comes with a specific CPU and memory capacity, e.g. 2 GHz of CPU capacity and 1 GB of memory. An application-specific Local Decision Module (LDM) is associated with each AE. Each LDM evaluates the opportunity of allocating more VMs to, or releasing existing VMs from, the AE on the basis of the current workload, using service-level metrics (response time, number of requests per second, etc.) coming from application-specific monitoring. The main job of the LDM is to compute a utility function which gives a measure of application satisfaction with a specific resource allocation (CPU, RAM), given its current workload and SLA goal. LDMs interact with a Global Decision Module (GDM), which is the decision-making entity within the autonomic control loop.
The GDM is responsible for arbitrating resource requirements coming from every AE and treats each LDM as a black box, without being aware of the nature of the application or the way the LDM computes its utility function. The GDM receives as input (i) the utility functions from every LDM and (ii) system-level performance metrics (e.g. CPU load) from virtual and physical servers. The output of the GDM consists of management actions directed to the server hypervisors and notifications sent to LDMs. The latter notify an LDM that (i) a new VM with a specific resource capacity has been allocated to the application, (ii) an existing VM has been upgraded or downgraded, i.e. its class and resource capacity have been changed, or (iii) a VM belonging to the application is being preempted and the application should relinquish it promptly. Management actions include the lifecycle management of VMs (starting and stopping VMs) and the triggering of a live migration of a running VM, the latter operation being transparent as far as the hosted applications are concerned. We now formalize in more detail the inner workings of the local and global decision modules. Let A = (a_1, a_2, ..., a_i, ..., a_m) denote the set of AEs and P =
Figure 2. System architecture (Application Environments with their Local Decision Modules and utility functions u_i, the Global Decision Module with its constraint solver, and the monitored VMs and physical machines)

Figure 3. Constraint solving: the VM Provisioning phase produces the allocation vectors N_i, which the VM Packing phase maps to placement vectors H_j

(p_1, p_2, ..., p_j, ..., p_q) denote the set of PMs in the datacenter. There are c classes of VM available in the set S = (s_1, s_2, ..., s_k, ..., s_c), where s_k = (s_k^cpu, s_k^ram) specifies the CPU capacity of the VM expressed in MHz and its memory capacity expressed in megabytes.

2.2 Local Decision Module

The LDM is associated with an application-specific performance model. Our architecture does not make any assumption on the nature of this model, whether it is analytical or purely empirical. A performance model is used by the LDM to assess the level of service achieved with a given capacity of resources (processing unit, memory) and the current application workload. The LDM is associated with two utility functions: a fixed service-level utility function that maps the service level to a utility value, and a dynamic resource-level utility function that maps a resource capacity to a utility value. The latter is communicated to the GDM on every iteration of the autonomic control loop. The resource-level utility function u_i for application a_i is defined as u_i = f_i(N_i), where N_i is the VM allocation vector of application a_i: N_i = (n_i1, n_i2, ..., n_ik, ..., n_ic), and n_ik is the number of VMs of class s_k attributed to application a_i. We require each application to provide upper bounds on the number of VMs of each class (N_i^max = (n_i1^max, n_i2^max, ..., n_ik^max, ..., n_ic^max)) and on the total number of VMs (T_i^max) that it is willing to accept.
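As a concrete illustration of this notation, the sketch below (Python) encodes a set of VM classes s_k, an allocation vector N_i, and a resource-level utility derived from a performance model and an SLA target. The class capacities, the toy performance model, and the piecewise-linear utility shape are all illustrative assumptions, not values taken from the paper.

```python
# Sketch of the notation: VM classes s_k = (cpu MHz, ram MB), an allocation
# vector N_i per application, and a resource-level utility u_i = f_i(N_i).
# Class capacities and the utility shape are illustrative assumptions.
VM_CLASSES = [(500, 512), (1000, 1024), (2000, 2048), (4000, 4096)]

def allocated_cpu(n_i):
    """Total CPU (MHz) granted by allocation vector N_i = (n_i1, ..., n_ic)."""
    return sum(n * cpu for n, (cpu, _ram) in zip(n_i, VM_CLASSES))

def utility(n_i, perf_model, sla_target_ms):
    """Resource-level utility: 1 while the modelled response time meets the
    SLA target, dropping linearly to 0 at twice the target (an assumed
    shape, loosely analogous to Figure 4(b))."""
    rt = perf_model(allocated_cpu(n_i))
    if rt <= sla_target_ms:
        return 1.0
    return max(0.0, 1.0 - (rt - sla_target_ms) / sla_target_ms)

# A toy performance model: response time shrinks as allocated CPU grows.
model = lambda cpu_mhz: 200000.0 / max(cpu_mhz, 1)

print(utility((0, 0, 1, 0), model, 100))  # 2000 MHz -> 100 ms -> utility 1.0
```

The LDM would recompute such a function on every control-loop iteration and ship it to the GDM as its resource-level utility.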
These application constraints are expressed as follows:

n_ik <= n_ik^max for 1 <= i <= m and 1 <= k <= c (1)

Σ_{k=1}^{c} n_ik <= T_i^max for 1 <= i <= m (2)

Note that these bounds make it possible to specify a scale-up application, hosted by a single VM of varying capacity, by setting T_i^max to 1.

2.3 Global Decision Module

The GDM is responsible for two main tasks: determining the VM allocation vectors N_i for each application a_i (VM Provisioning), and placing these VMs on PMs in order to minimize the number of active PMs (VM Packing). These two phases are expressed as two Constraint Satisfaction Problems (CSP) which are handled by a constraint solver (Figure 3).

2.3.1 VM Provisioning

In this phase, we aim at finding the VM allocation vectors N_i for each application a_i while maximizing a global utility value U_global. The VMs allocated to all applications are constrained by the total capacity (CPU and RAM) of the physical servers:

Σ_{i=1}^{m} Σ_{k=1}^{c} n_ik · s_k^cpu <= Σ_{j=1}^{q} C_j^cpu and Σ_{i=1}^{m} Σ_{k=1}^{c} n_ik · s_k^ram <= Σ_{j=1}^{q} C_j^ram (3)

where C_j^cpu and C_j^ram are the CPU and RAM capacity of PM p_j. The VM allocation vectors N_i must maximize a global utility function expressed as a weighted sum of the application-provided resource-level utility functions and an operating cost function:

U_global = max Σ_{i=1}^{m} (α_i · u_i − ε · cost(N_i)) (4)

where 0 < α_i < 1 and Σ_{i=1}^{m} α_i = 1. ε is a coefficient that allows the administrator to trade off the fulfillment of the performance goals of the hosted applications against the cost of operating the required resources. cost(N_i) is a function of the VM allocation vector N_i and must share the same scale as the application utility functions, i.e. (0, 1). This cost function is not hardwired into the GDM but can be specified arbitrarily. The output of the VM Provisioning phase is a set of vectors N_i satisfying constraints (1), (2) and (3) and maximizing U_global.
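To make the provisioning problem concrete, the brute-force sketch below enumerates all allocation vectors satisfying constraints (1), (2) and (3) and returns the one maximizing the global utility (4). The paper solves this as a CSP with the Choco constraint solver; this exhaustive stand-in, with illustrative VM classes, capacities, and a CPU-ratio cost function, only works for tiny instances.

```python
from itertools import product

# Brute-force stand-in for the VM Provisioning CSP (the paper uses the Choco
# solver). VM classes and datacenter capacity are illustrative assumptions.
VM_CLASSES = [(1000, 1024), (2000, 2048)]   # s_k = (cpu MHz, ram MB)
PM_CPU, PM_RAM = 8000, 8192                 # summed capacity of all PMs

def provision(utilities, alphas, n_max, t_max, eps):
    """Maximize U_global = sum_i (alpha_i * u_i(N_i) - eps * cost(N_i))
    subject to constraints (1), (2) and (3)."""
    c, m = len(VM_CLASSES), len(utilities)
    ranges = [range(b + 1) for bounds in n_max for b in bounds]  # constraint (1)
    best, best_u = None, float("-inf")
    for flat in product(*ranges):
        alloc = [flat[i * c:(i + 1) * c] for i in range(m)]
        if any(sum(n_i) > t_max[i] for i, n_i in enumerate(alloc)):
            continue                                             # constraint (2)
        cpu = sum(n * s[0] for n_i in alloc for n, s in zip(n_i, VM_CLASSES))
        ram = sum(n * s[1] for n_i in alloc for n, s in zip(n_i, VM_CLASSES))
        if cpu > PM_CPU or ram > PM_RAM:
            continue                                             # constraint (3)
        u = 0.0
        for i, n_i in enumerate(alloc):
            cpu_i = sum(n * s[0] for n, s in zip(n_i, VM_CLASSES))
            u += alphas[i] * utilities[i](n_i) - eps * cpu_i / PM_CPU
        if u > best_u:
            best, best_u = alloc, u
    return best, best_u

# Toy utilities: satisfaction grows with allocated CPU, saturating at 4000 MHz.
u = lambda n_i: min(1.0, sum(n * s[0] for n, s in zip(n_i, VM_CLASSES)) / 4000)
alloc, u_glob = provision([u, u], [0.5, 0.5], [[2, 2], [2, 2]], [4, 4], 0.05)
```

With a small ε both applications are driven to their saturation point (4000 MHz each), exactly filling the 8000 MHz cluster; raising ε shifts the optimum toward cheaper allocations.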
By comparing these allocation vectors with those computed during the previous iteration, the GDM can determine which VMs must be created, destroyed
or resized. The placement of the newly created VMs, as well as the possible migration of existing VMs, is handled by the VM Packing phase described next.

2.3.2 VM Packing

The VM Packing phase takes as input the VM allocation vectors N_i and collapses them into the single vector V = (vm_1, vm_2, ..., vm_l, ..., vm_v), which lists all VMs running at the current time. For each PM p_j ∈ P, the bit vector H_j = (h_j1, h_j2, ..., h_jl, ..., h_jv) denotes the set of VMs assigned to p_j (i.e. h_jl = 1 if p_j is hosting vm_l). Let R = (r_1, r_2, ..., r_l, ..., r_v) be the resource capacities (CPU, RAM) of all VMs, where r_l = (r_l^cpu, r_l^ram). We express the physical resource constraints as follows:

Σ_{l=1}^{v} r_l^cpu · h_jl <= C_j^cpu and Σ_{l=1}^{v} r_l^ram · h_jl <= C_j^ram for 1 <= j <= q (5)

The goal is to minimize the number of active PMs X:

X = Σ_{j=1}^{q} u_j, where u_j = 1 if there exists vm_l ∈ V with h_jl = 1, and u_j = 0 otherwise (6)

Solving the VM Packing CSP produces the VM placement vectors H_j, which are used to place VMs on PMs. Since the GDM runs on a periodic basis, it computes the difference with the VM placement produced as a result of the previous iteration and determines which VMs need to be migrated. An optimal migration plan is produced, as described in [2], to minimize the number of migrations required to reach the new VM-to-PM assignment. Minimizing the cost of a reconfiguration yields a plan with few migrations and steps and a maximum degree of parallelism, thus reducing the duration and impact of the reconfiguration. The migration cost of a VM is approximated as proportional to the amount of memory allocated to the VM.

3 Validation and Results

To illustrate and validate our architecture and algorithms, we present simulation results which show how the system attributes resources to multiple applications with different utility functions, and how the resource arbitration process can be controlled through each application's weight α_i and the ε factor.
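Before turning to the experiments, the packing phase of Section 2.3.2 can be illustrated in miniature. The paper solves packing as a CSP (with Entropy-style migration-plan optimization [2]); the first-fit-decreasing heuristic below is only an illustrative stand-in that shows the inputs (VM resource vectors r_l, PM capacities C_j) and outputs (a VM-to-PM placement and the number of active PMs X), plus the placement diff used to count migrations. All capacities and demands in the example are assumptions.

```python
# First-fit-decreasing sketch of the VM Packing phase. This greedy heuristic
# is an illustrative stand-in for the paper's CSP formulation and does not
# guarantee an optimal packing.
def pack(vms, pm_cpu, pm_ram):
    """vms: list of (cpu, ram) demands r_l. Returns (placement, active_pms),
    where placement[l] is the index of the PM hosting vm_l."""
    order = sorted(range(len(vms)), key=lambda l: vms[l][0], reverse=True)
    free = []                                    # remaining (cpu, ram) per PM
    placement = [None] * len(vms)
    for l in order:
        cpu, ram = vms[l]
        for j, (fc, fr) in enumerate(free):
            if cpu <= fc and ram <= fr:          # capacity constraint (5)
                free[j] = (fc - cpu, fr - ram)
                placement[l] = j
                break
        else:
            free.append((pm_cpu - cpu, pm_ram - ram))   # activate a new PM
            placement[l] = len(free) - 1
    return placement, len(free)                  # len(free) is X in (6)

def migrations(old, new):
    """VMs whose host changed between two placements (to be minimized)."""
    return sum(1 for a, b in zip(old, new) if a != b)

placement, active = pack([(2000, 1024), (2000, 1024), (1000, 512), (1000, 512)],
                         pm_cpu=4000, pm_ram=4000)
```

Here the two large VMs fill the first PM and the two small ones share a second, so X = 2; the real CSP additionally minimizes the migration count against the previous iteration's placement.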
Our simulator relies on the Choco [7] constraint solver to implement the VM provisioning and packing phases as two separate constraint-solving problems. Our simulated environment consists of a cluster of 4 PMs of capacity (4000 MHz, 4000 MB) which can host the two following applications:

Table 1. Virtual machine classes (CPU capacity in MHz and RAM capacity in MB for each of the four classes s_1, s_2, s_3, s_4)

Application A is a multiplayer online game where the load of player connections is spread across a cluster of servers. The SLA goal of this application is the average response time of requests. This application has stringent real-time requirements, and the associated utility function is shown in Figure 4(b) with a target response time τ_SLA. Application B is a Web application implemented as a resizable cluster of Web servers. The SLA goal of this application is also the average response time of requests. The workload is measured as the number of requests per second. The utility function of this application is shown in Figure 4(c). VM configurations are chosen among 4 pre-defined classes as shown in Table 1. We examine how the system behaves by injecting a synthetic workload, measured in requests per second, which is distributed to the applications by a round-robin algorithm. An empirical performance model based on experimental data makes it possible to determine the average response time obtained for a given workload and a given CPU capacity, as illustrated in Figure 4(a). Considering that CPU power is more important than memory, we define the cost function as Cost(CPU) = CPU_demand / CPU_total, i.e. the ratio between the CPU capacity allocated to applications, CPU_demand, and the total physical CPU capacity, CPU_total. The simulation proceeds according to the following steps on a periodic basis:

1. Workload values (in requests/s) for each application are evaluated for the current time.
2. The VM Provisioning module is run.
For a tentative CPU allocation, this module upcalls a function that computes the global utility as follows:

(a) The performance model (Figure 4(a)) is used to determine the response time achieved with the CPU capacity. As the model is based on discrete experimental data, if the resource amount does not match any measured curve, the response time for a specific CPU allocation is estimated as the average of the values provided by the two nearest curves (e.g. for a given demand, if the resource amount is 3000 MHz, the response time is the average of the results obtained from the 2000 MHz and 4000 MHz curves).

(b) The utility function of each application (Figures 4(b) and 4(c)) provides the local utility for this response time.

(c) After all local utilities have been produced, the global utility is computed as a weighted sum of the local utilities and an operating cost function.

3. The VM Provisioning module iteratively explores the space of all possible CPU allocations and picks the solution having the best global utility.
4. The VM Packing module logically places the VMs on PMs for this solution, while minimizing the number of active PMs and the number of required migrations.
5. The simulated time is advanced to the next step.

Figure 4. Performance model (a) and utility functions of applications A (b) and B (c)

In the first experiment, we nearly ignore the operating cost by running the simulation with ε = 0.05 and the parameters shown in Table 2. All these parameters have been presented in Equations (1), (2), (3) and (4). The time series plots are shown in Figure 5. At times t_0 and t_1, as the workloads of A and B are both low, CPU demands are negligible, hence only one PM is needed. Between times t_2 and t_3, the workloads increase, so the system tends to attribute more resources (CPU) accordingly. However, the workloads are not too heavy: there are still enough resources to keep the response time under the SLA goal (τ = 100 ms) for both applications. In this interval, almost all PMs are mobilized. The impact of the application weights α_i is illustrated between times t_4 and t_5, where the demands of applications A and B are both high. As 0.8 = α_A > α_B = 0.2, A has the higher resource allocation priority. Since there is no CPU resource left to provision to application B, the response time of B exceeds the SLA goal.
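Step 2(a) above averages the two nearest measured curves when a tentative CPU allocation falls between measured values. A minimal sketch of that estimation follows; the curve data is purely illustrative, since the paper's empirical model is given only graphically.

```python
# Sketch of step 2(a): the performance model is a set of discrete measured
# curves (one per CPU allocation); for an allocation between two measured
# values, the response time is estimated as the average of the two nearest
# curves. The curve data below is an illustrative assumption.
CURVES = {                       # cpu MHz -> {workload req/s: response ms}
    2000: {100: 80, 200: 160},
    4000: {100: 40, 200: 80},
}

def response_time(cpu_mhz, workload):
    if cpu_mhz in CURVES:
        return CURVES[cpu_mhz][workload]
    below = max((c for c in CURVES if c < cpu_mhz), default=None)
    above = min((c for c in CURVES if c > cpu_mhz), default=None)
    if below is None or above is None:           # outside the measured range
        return CURVES[below or above][workload]
    return (CURVES[below][workload] + CURVES[above][workload]) / 2

print(response_time(3000, 200))   # -> 120.0, average of the 2000 and 4000 MHz curves
```

A finer model could interpolate linearly by CPU distance rather than taking a plain average; the paper's simulator uses the simple average described in step 2(a).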
Note that in these intervals, the global utility decreases slightly (to 0.85) because of the SLA violation of B.

Table 2. Experiment setting 1 (per-AE weight α, total bound T^max and per-class bounds n_1^max to n_4^max for applications A and B)

At times t_6, t_7 and t_8, a peak of demand for application A corresponds to a trough of demand for application B and vice versa. CPU capacity is switched between the two applications to maintain an optimal global utility. We now examine how the operating cost factor ε affects the global utility value and the allocation result. We keep all parameters at their values from the first test but now set ε = 0.3. The result is shown in Figure 6: as the resource demand increases, the global utility value decreases more than during the previous test. The worst value (0.55) is reached at t_4 and t_5, when both applications need a maximum amount of resources. We can see that in these intervals, and even at t_7 and t_8, B's allocation drops more quickly and then retains only a small amount compared with Figure 5, even when nearly all resources have been released by A. This behavior occurs because the weight α_B = 0.2 is negligible against α_A = 0.8, and the local utility of B is not enough to compensate for the operating cost, which is amplified by ε = 0.3. Consequently, there are some SLA violations for B, notably at time t_7. Keeping all other parameters of the first test, and in order to show how the weight factors α_i affect the resource allocation, we now set α_A = 0.3 and α_B = 0.7. As shown in Figure 7, at times t_4 and t_5 the situation is reversed: thanks to its advantageous weight, B now obtains a sufficient amount of CPU and is able to meet its SLA response time goal. The increase of the response time of A (to about 250 ms) during high-workload intervals is the result of its lower contribution to the global utility value. From a performance point of view, the average solving time for the provisioning phase is 5500 ms with T_A^max = T_B^max = 17. The average solving time for the packing
phase is 1000 ms in the worst case (the Choco constraint solver was running on a dual 2.5 GHz server with 4 GB of RAM). A key handle to limit the space of solutions that the constraint solver must explore during the VM provisioning phase is to restrict the allowed VM configurations through the T_i^max and n_i^max parameters, taking into account the specificities of each application. For example, a clusterized application might prefer to have its load spread over a low number of high-capacity servers rather than over a high number of low-capacity servers. As an illustration, if T_A^max and T_B^max are reduced to 4, the solving time drops to 800 ms. The constraint solver aims to explore all possible solutions for a set of input data (demands and constraints) and to return the best one. To avoid spending a large amount of time computing a stale solution which is no longer suited to the current workload, we limit the time allotted to the solver to obtain an acceptable solution (one which satisfies all constraints of our problem but does not necessarily maximize the global utility).

Figure 5. Time series plots of 1) demands D_A and D_B, 2) CPU allocations R_A and R_B, 3) number of active PMs, 4) global utility, 5) response times T_A and T_B

Figure 7. Time series plots of 1) demands D_A and D_B, 2) CPU allocations R_A and R_B, 3) response times T_A and T_B; α_A = 0.3, α_B = 0.7; ε = 0.05

4 Related Work

Figure 6. Time series plots with α_A = 0.8, α_B = 0.2; ε = 0.3

Existing work on autonomic management systems for virtualized server environments tackles the allocation and placement of virtual servers from different perspectives. Many papers differ from our work insofar as they either focus on one specific type of application, or do not consider the problem of dynamic provisioning or the possibility of resource contention between several applications with independent performance goals. For example, [3] proposes a virtual machine placement algorithm which resorts to forecasting techniques and a bin-packing heuristic to allocate
and place virtual machines while minimizing the number of PMs activated and providing probabilistic SLA guarantees. Sandpiper [4] proposes two approaches for dynamically mapping VMs to PMs: a black-box approach that relies on system-level metrics only, and a grey-box approach that takes into account application-level metrics along with a queueing model. VM packing is performed through a heuristic which iteratively places the highest-loaded VM on the least-loaded PM. Some of these mechanisms, for instance the prediction mechanisms, could be integrated into our architecture within application-specific local decision modules. Regarding the VM packing problem, we argue that a Constraint Programming approach has many advantages over placement heuristics. Such heuristics are brittle and must be retuned with care if new criteria for VM-to-PM assignment are introduced. Moreover, these heuristics cannot guarantee that an optimal solution is produced. A CSP approach to VM packing provides an elegant and flexible alternative which can easily be extended to take additional constraints into account. A constraint-solving approach is used by Entropy [2] for the dynamic placement of virtual machines on physical machines while minimizing the number of active servers and the number of migrations required to reach a new configuration. Our work extends this system with dynamic provisioning of VMs directed by high-level SLA goals. Utility functions act as the foundation of many autonomic resource management systems, as a means to quantify the satisfaction of an application with regard to its level of service beyond a binary satisfied/not-satisfied SLA metric. [13] lays the groundwork for using utility functions in autonomic systems in a generic fashion, with a two-level architecture which separates application-specific managers from a resource arbitration level.
We specialize this architecture to take server virtualization into account and to integrate resource management costs in the global utility computation. [8] proposes a utility-based autonomic controller for dynamically allocating CPU power to VMs. Like us, the authors attempt to maximize a global utility function, but they do not consider the possibility of provisioning additional VMs to an application and restrict themselves to homogeneous application workloads modeled with a queuing network. The CPU capacity of VMs is determined through a beam-search combinatorial search procedure. [11] uses a simulated-annealing approach to determine the configuration (VM quantity and placement) that maximizes a global utility. The authors simplify the VM packing process by assuming that all VMs hosted on a PM have an equal amount of capacity. It would be worth evaluating the respective performance of simulated annealing and constraint solving. Shirako [1] proposes autonomic VM orchestration for the mapping between physical resource providers and virtualized servers. Like our work, it advocates a separation between VM provisioning and VM placement in a federated multi-resource-provider environment.

5 Conclusion & Future Work

This paper addresses the problem of autonomic virtual resource management for hosting service platforms, with a two-level architecture which isolates application-specific functions from a generic decision-making layer. We take into account both the high-level performance goals of the hosted applications and objectives related to the placement of virtual machines on physical machines. Self-optimization is achieved through a combination of utility functions and a constraint programming approach. The VM provisioning and packing problems are expressed as two Constraint Satisfaction Problems.
Utility functions provide a high-level way to express and quantify application satisfaction with regard to SLAs and to trade off between multiple objectives which might conflict with one another (e.g. SLA fulfillment and energy consumption). Such an approach avoids the problems encountered by rule- and policy-based systems, where conflicting objectives must be handled in an ad-hoc manner by the administrator. Simulation experiments have been conducted to validate our architecture and algorithms. We are in the process of implementing a test-bed based on a cluster of servers fitted with the Xen hypervisor [18], along with a virtual server management service handling the storage and deployment of VM images and the aggregation of performance metrics both at system level (e.g. CPU load) and at application level (e.g. response time). The autonomic management system is being designed as a component-based framework with a clear separation between generic mechanisms and pluggable modules.

References

[1] L. Grit, D. Irwin, A. Yumerefendi and J. Chase. Virtual Machine Hosting for Networked Clusters. Proceedings of the 2nd International Workshop on Virtualization Technology in Distributed Computing.
[2] F. Hermenier, X. Lorca, J.-M. Menaud, G. Muller and J. Lawall. Entropy: a Consolidation Manager for Clusters. Proceedings of the 2009 International Conference on Virtual Execution Environments (VEE 09), 2009.
[3] N. Bobroff, A. Kochut and K. Beaty. Dynamic Placement of Virtual Machines for Managing SLA Violations. 10th IFIP/IEEE International Symposium on Integrated Network Management, May 2007.
[4] T. Wood, P. Shenoy, A. Venkataramani and M. Yousif. Black-box and Gray-box Strategies for Virtual Machine Migration. 4th USENIX Symposium on Networked Systems Design and Implementation (NSDI), 2007.
[5] K. Appleby, S. Fakhouri, L. Fong, G. Goldszmidt, M. Kalantar, S. Krishnakumar, D.P. Pazel, J. Pershing and B. Rochwerger. Océano — SLA Based Management of a Computing Utility. IEEE/IFIP International Symposium on Integrated Network Management.
[6] D. Irwin, J. Chase, L. Grit, A. Yumerefendi and D. Becker. Sharing Networked Resources with Brokered Leases. Proceedings of the USENIX 2006 Annual Technical Conference.
[7] N. Jussien, G. Rochart and X. Lorca. The CHOCO Constraint Programming Solver. CPAIOR 08 Workshop on Open-Source Software for Integer and Constraint Programming (OSSICP 08), 2008.
[8] D. A. Menascé and M.N. Bennani. Autonomic Virtualized Environments. Proceedings of the International Conference on Autonomic and Autonomous Systems.
[9] J. O. Kephart, H. Chan, R. Das, D. W. Levine, G. Tesauro, F. Rawson and C. Lefurgy. Coordinating Multiple Autonomic Managers to Achieve Specified Power-Performance Tradeoffs. Proceedings of the Fourth International Conference on Autonomic Computing.
[10] R. Das, G. Tesauro and W. E. Walsh. Model-Based and Model-Free Approaches to Autonomic Resource Allocation. IBM Research Division, Technical Report, November.
[11] X. Wang, D. Lan, G. Wang, X. Fang, M. Ye, Y. Chen and Q. Wang. Appliance-based Autonomic Provisioning Framework for Virtualized Outsourcing Data Center. International Conference on Autonomic Computing.
[12] S. Bouchenak, N.D. Palma and D. Hagimont. Autonomic Management of Clustered Applications. IEEE International Conference on Cluster Computing.
[13] W.E. Walsh, G. Tesauro, J.O. Kephart and R. Das. Utility Functions in Autonomic Systems. International Conference on Autonomic Computing.
[14] G. Khanna, K. Beaty, G. Kar and A. Kochut. Application Performance Management in Virtualized Server Environments. Network Operations and Management Symposium.
[15] T. Kelly. Utility-Directed Allocation. First Workshop on Algorithms and Architectures for Self-Managing Systems, June.
[16] R.P. Doyle, J.S. Chase, O.M. Asad, W. Jin and A.M. Vahdat. Model-Based Resource Provisioning in a Web Service Utility. Proceedings of the 4th USENIX Symposium on Internet Technologies and Systems.
[17] F. Benhamou, N. Jussien and B. O'Sullivan. Trends in Constraint Programming. ISTE, London, UK.
[18] P. Barham, B. Dragovic, K. Fraser, S. Hand, T. Harris, A. Ho, R. Neugebauer, I. Pratt and A. Warfield. Xen and the Art of Virtualization. Proceedings of the Nineteenth ACM Symposium on Operating Systems Principles.