High Availability-Aware Optimization Digest for Applications Deployment in Cloud
Manar Jammal, ECE Department, Western University, London, ON, Canada
Ali Kanso, Ericsson Research, Ericsson, Montreal, Canada
Abdallah Shami, ECE Department, Western University, London, ON, Canada

Abstract—Cloud computing is continuously growing as a business model for hosting information and communication technology applications. Although on-demand resource consumption and faster deployment time make this model appealing to the enterprise, other concerns arise regarding the quality of service offered by the cloud. One major concern is the high availability of applications hosted in the cloud. This paper demonstrates the tremendous effect that the placement strategy of the virtual machines hosting applications has on the high availability of the services provided by these applications. In addition, a novel scheduling technique is presented that takes into consideration the interdependencies between application components as well as other constraints such as communication delay tolerance and resource utilization. The problem is formulated as a linear programming multi-constraint optimization model. The evaluation results demonstrate that the proposed solution improves the availability of the scheduled components compared to the OpenStack Nova scheduler.

Index Terms—High availability, cloud applications, delay tolerance, scheduling algorithms, virtual machines, failure scope.

I. INTRODUCTION

Nowadays, the cloud is becoming the lifeblood of most telecommunication network services and information technology (IT) software applications [1] [2]. With the development of the cloud market, cloud computing can be seen as an opportunity for information and communications technology (ICT) companies to deliver communication and IT services over any fixed or mobile network with high performance and secure end-to-end quality of service (QoS) for end users [3]. Although cloud computing provides benefits to the different players in its ecosystem and makes services available anytime, anywhere and in any context, other concerns arise regarding the performance and quality of the services offered by the cloud. One major concern is the high availability (HA) of applications hosted in the cloud. Since these applications are hosted by virtual machines (VMs) residing on physical servers, their availability depends on that of their hosts [4] [5] [6]. When a hosting server fails, its VMs and their applications become inoperative. The absence of an application protection plan has a tremendous effect on business continuity and IT enterprises. According to Aberdeen Group, the cost of one hour of downtime is $74,000 for small organizations and $1.1 million for larger ones [7], excluding reputation damage that can be significantly greater in the longer term. In addition, a Ponemon Institute study shows that 91% of data centers experienced unplanned outages in 2011 and 2012 [8]. The solution to these failures is to develop a highly available system that protects services, avoids downtime and maintains business continuity. High availability is an interesting concept that has attracted several recent research studies. However, the way to attain a certain availability baseline when scheduling VMs or applications changes from one research study to another. In [9], the authors propose a placement approach to generate VM configurations while maintaining high availability for multi-tier applications and improving their performance.
They develop a replication strategy to maintain HA constraints based on the mean time between failures (MTBF) while satisfying latency demands to minimize performance degradation for each application. In [10], the authors address the resource allocation problem for the deployment of multi-tier applications. They aim to maximize the total service level agreement (SLA) profit while maintaining a certain level of availability. They develop a non-linear programming model to achieve their objective while guaranteeing a level of availability based on a load-sharing fault-tolerance arrangement. Another attempt that maximizes service availability by providing a failure-resiliency plan is proposed in [11]. In [12], the authors address the scheduling of application components on a cloud infrastructure. They propose a multi-objective scheduling approach that aims to maximize resource utilization, minimize the cost of application runtime and maximize application availability through a component replication approach. Although these solutions try to maximize cloud-service availability, each approach focuses only on a few aspects of availability. Some attempts tend to maximize availability without considering redundancy models, while others ignore the impact of mean time to failure (MTTF) or interdependency relations and their associated constraints. To address these inadequacies, a scheduling solution is required that considers all the factors affecting availability. This paper aims to demonstrate the effect of the application placement strategy on the HA of the services provided by the virtualized cloud to its end users. To attain this objective, the cloud and the application models have
been captured in a unified modelling language (UML) model. Also, we propose a novel scheduling technique that accounts for the interdependencies and redundancies between application components, their failure scopes, their communication delay tolerance and their resource utilization requirements. The technique does not examine only the MTTF to measure the component downtime and consequently its availability; the analysis is based on the mean time to repair (MTTR), recovery time and outage tolerance time as well. This scheduling is modeled as a mixed integer linear programming (MILP) problem. The main contributions of this work are to:
- Capture all the functionality and availability constraints that affect application placement.
- Reflect availability constraints not only through the failure rates of application components and scheduled servers, but also through functionality requirements, which generate anti-location and co-location constraints.
- Consider various interdependencies and redundancies among application components.
- Examine multiple failure scopes that might affect the component itself, its execution environment and its dependent components.
This paper is organized as follows. Section II describes the system UML model, including the cloud and application tree structures. Section III defines the research methodology for developing the MILP model. Section IV describes the simulation environment and the results of this work. Finally, future work and conclusions are presented in Section V.

II. SYSTEM MODELLING AND SCHEMATIZATION

Cloud schedulers that are agnostic of the intricacies of the tenant's application may result in suboptimal placements. In these placements, redundant components may be placed too close to each other, rendering their existence obsolete because a single failure can affect them all, or the delay constraints can be violated, hindering application functionality. HA-aware scheduling in the cloud must consider details of both the applications and the cloud infrastructure. We define a different approach where we start by modeling the applications with their functional and non-functional requirements. Then we consider the cloud infrastructure model to be a constrained solution space where we do the mapping between applications, VMs and servers to maximize availability. To this end, we modelled the cloud and the application using a UML class diagram, as shown in Fig. 1. With this model, we start from a specific cloud infrastructure and a set of requested applications and end up generating an interface between the provider and user sides using VM mappings.

A. Cloud Provider Model

The proposed cloud architecture is captured in the UML class diagram. At the root level, the cloud consists of data center networks distributed across various geographical areas. Each data center consists of multiple racks communicating through aggregation switches. Each rack has a set of shelves housing a large number of servers, which can have different capacities and failure rates. Servers residing on the same rack are connected with each other through the same network device (the top-of-rack switch). Finally, the VMs are hosted on the servers. This tree structure determines the network delay constraints and consequently the delay between the communicating applications. This architecture divides the cloud into five different latency zones, which are further discussed in Section III. In the proposed tree structure, each node i has its own failure rate (λ) and MTTR.
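To make the tree structure concrete, the following minimal sketch models the DC → rack → server hierarchy described above as plain data types; the class names, attributes and figures are illustrative stand-ins rather than the paper's exact UML classes.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Server:
    name: str
    cpu_cores: int
    memory_mb: int
    mttf_h: float          # mean time to failure, hours
    mttr_h: float          # mean time to repair, hours

@dataclass
class Rack:
    name: str
    mttf_h: float
    mttr_h: float
    servers: List[Server] = field(default_factory=list)

@dataclass
class DataCenter:
    name: str
    mttf_h: float
    mttr_h: float
    racks: List[Rack] = field(default_factory=list)

# A host in the model is identified by the tuple (DC, Rack, Server).
dc1 = DataCenter("DC1", 100000.0, 4.0,
                 racks=[Rack("R1", 50000.0, 2.0,
                             servers=[Server("S1", 16, 65536, 2000.0, 1.0)])])
```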
The MTTF and MTTR parameters divide the intra- and inter-data center networks into availability zones. The availability avail_h of a host h depends not only on its own λ and MTTR, but on those of the corresponding data center DC and rack R as well. Each host h can be seen as the tuple (DC, R, S). In each zone, the highly available host is selected as follows:

avail_h = MTTF_h / (MTTF_h + MTTR_h)   (1)

where MTTF_h = 1 / (λ_DC + λ_R + λ_S) and MTTR_h = MTTR_DC + MTTR_R + MTTR_S.
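A small sketch of how Eq. (1) can be evaluated; the failure rates (λ = 1/MTTF, per hour) and repair times (hours) in the example call are assumed values, not the paper's data.

```python
def host_availability(lam_dc, lam_rack, lam_srv, mttr_dc, mttr_rack, mttr_srv):
    """Eq. (1): availability of a host (DC, R, S) from its stacked failure data."""
    mttf_h = 1.0 / (lam_dc + lam_rack + lam_srv)   # combined MTTF of the host
    mttr_h = mttr_dc + mttr_rack + mttr_srv        # combined MTTR of the host
    return mttf_h / (mttf_h + mttr_h)

print(host_availability(1e-5, 2e-5, 5e-4, 4.0, 2.0, 1.0))   # ~0.9963
```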
B. Application Model

Applications are typically developed using a component-based architecture where each application is made up of one or more components. The application combines its components' functionalities to provide a higher level of service [13] [14]. To maintain availability requirements, each component can have one or more redundant components. The primary component and its redundant ones are grouped into a dynamic redundancy group. In this group, each component is assigned a specific number of active and standby redundant components. As shown in the UML model, each redundancy group is assigned to at most one application, and each application consists of at least one redundancy group. As for the component, it belongs to one component type. A component type represents an executable software deployment. From this perspective, the component represents a running instance of the component type. Components of the same type have the same attributes defined in the component type class, such as computational resource (CPU and memory) attributes. Each component can be configured to depend on other components. The dependency relation is captured at the type level and can be configured using the delay tolerance, outage tolerance and communication bandwidth attributes. The delay tolerance determines the maximum latency that the communication between sponsor and dependent components can tolerate. As for the outage tolerance, or tolerance time, it is the time that the dependent component can tolerate without its sponsor. The same association is used to describe the relation between redundant components that need to synchronize their states. Finally, each component type is associated with at least one failure type. The list of failure types determines the failure scope of each component type, its MTTF, its MTTR and its recommended recovery.

Fig. 1: Cloud-Application UML Class Diagram.

C. VM Mapping

Each component of the application model is scheduled on a server in the cloud provider model using VM mappings. Each VM can be hosted on one server and can have at least one component instance running in it. Sudden failure events, such as natural disasters, network failures or runtime failures, can affect cloud applications [15]. In order to deal with these events, the inoperative VMs are switched off, and a failover group takes over control. The failover group consists of at least one VM, which is a redundant VM of the inoperative one. As mentioned earlier, the proposed HA-aware scheduling technique searches for the optimum physical server to host the requested component. Whenever a server is scheduled, a VM is mapped to the corresponding component and to the chosen server. Therefore, a component can reside on that VM.

III. HIGH AVAILABILITY MODEL

The VM placement solution generates mappings between the cloud physical servers and the VMs on which the tenants' applications are hosted while satisfying various constraints. The HA-aware scheduling technique provides an efficient and highly available allocation by satisfying the following constraints:
1) Capacity Constraints: These are functional constraints, which are satisfied by searching for servers that meet the resource needs of each application. In the proposed model, the computational resources consist of CPU and memory.
2) Network Delay Constraints: Using these constraints, another list of servers is generated. These servers satisfy the latency requirements to avoid service degradation between communicating applications. It is assumed that the delay requirements are divided into five delay types (i.e., latency zones), as follows (a simple placement check for these zones is sketched after the list):
a) D0 Type: Requires that all communicating components be hosted on the same VM and consequently on the same server.
b) D1 Type: Requires that all communicating components be hosted on the same server.
c) D2 Type: Requires that all communicating components be hosted on the same rack.
d) D3 Type: Requires that all communicating components be hosted on the same DC.
e) D4 Type: Allows the communicating components to be hosted across different data centers, but they must remain within the same cloud.
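The following sketch shows one way such latency zones can be checked for a pair of communicating components, given their placements as (DC, rack, server, VM) tuples; it is an illustration of the zone definitions above, not code from the paper.

```python
def satisfied_delay_types(placement_a, placement_b):
    """Return the delay types (latency zones) that two placements satisfy."""
    dc_a, rack_a, srv_a, vm_a = placement_a
    dc_b, rack_b, srv_b, vm_b = placement_b
    zones = ["D4"]                      # same cloud is assumed for both placements
    if dc_a == dc_b:
        zones.append("D3")              # same data center
        if rack_a == rack_b:
            zones.append("D2")          # same rack
            if srv_a == srv_b:
                zones.append("D1")      # same server
                if vm_a == vm_b:
                    zones.append("D0")  # same VM
    return zones

print(satisfied_delay_types(("DC1", "R1", "S1", "VM1"),
                            ("DC1", "R1", "S2", "VM2")))   # ['D4', 'D3', 'D2']
```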
3) Availability Constraints: These constraints prune the candidate servers generated by the capacity and delay constraints to select the ones that maintain a high level of application availability. In order to maximize the HA of an application, three sub-constraints should be satisfied:
a) Failure Rate Constraint: It determines that the selected server should maximize the component's availability. In order to satisfy this constraint, the model searches for the server with the maximum MTTF and minimum MTTR. Whenever such a server is found, the MTTF_c^A of the component C after being hosted can be calculated as follows:

MTTF_c^A = 1 / (λ_c + λ_h)   (2)

b) Dependency Constraint: This constraint is divided into two sub-constraints:
Co-location Constraint: This is valid whenever the tolerance time of the dependent component is lower than the recovery time of its sponsor. When the dependent component cannot tolerate the absence of its sponsor, then the failure of its server, its sponsor or its sponsor's server affects it. In order to minimize its failure rate, both the dependent and the sponsor should share the same server.
Anti-location Constraint: It requires that the dependent component and its sponsor be placed on different servers. This is valid whenever the tolerance time of the dependent component is greater than the recovery time of its sponsor. By considering this case, the MTTF of the application will be maximized because of its inverse proportionality relation with λ.
c) Redundancy Constraint: It prevents the redundant components of a primary one from residing on the same server and requires that they be placed as far away from each other as the delay constraints allow.
With these constraints, we developed a MILP model that minimizes the components' downtime while finding the optimal physical servers to host them.

A. Mathematical Formulation

This section introduces a MILP model to solve the HA-aware placement problem. The proposed MILP model was solved using the IBM ILOG CPLEX optimization solver.
1) Notations: Various parameters were used to solve the placement problem and develop the MILP model.
a) Input Parameters: Let a virtual machine be denoted as V and a server as S. Each VM consists of an application {A}, which consists of a specific number of components {C}, which are of component types {CT}. Therefore, each application is a set of C and CT and can be denoted as A = {C, CT}. This notation ensures that whenever a set of components C of types CT is scheduled, its corresponding application is considered hosted. As for the computational resources, L_cr and L^T_sr denote the resources (memory or CPU) of a component and of a server respectively. Table I shows the various parameter notations used in the MILP model.

TABLE I: Variable Notations
Notation | Significance | Representation
R | Resource type: CPU or memory | number of cores / MB of RAM
RED_cc' | Redundancy matrix of C and C' | {0, 1}
DEP_cc' | Dependency matrix of C and C' | {0, 1}
DEL_ss' | Delay between S and S' | seconds
OT_c | Outage tolerance of C | hours
RT_c | Recovery time of C | hours
DT_c | Delay tolerance of C | hours

b) Decision Variables: The decision variables are defined as follows:

X_cs = 1 if server S hosts component C, and 0 otherwise   (3)
z_c = 1 if DEL_ss' <= DT_c, and 0 otherwise   (4)
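As a rough sketch of how the decision variables (3) and (4) can be declared, the snippet below uses the open-source PuLP modeler as a stand-in for the IBM ILOG CPLEX solver used in the paper; the component and server names are illustrative.

```python
from pulp import LpProblem, LpMinimize, LpVariable, LpBinary

components = ["http", "app", "app_r", "db"]   # app_r stands in for a redundant replica of app
servers = ["s1", "s2", "s3"]

model = LpProblem("ha_aware_placement", LpMinimize)

# X[c][s] = 1 if server s hosts component c, 0 otherwise   (Eq. 3)
X = LpVariable.dicts("X", (components, servers), cat=LpBinary)
# z[c] = 1 if the inter-server delay seen by component c stays within DT_c   (Eq. 4)
z = LpVariable.dicts("z", components, cat=LpBinary)
```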
2) MILP Model: The downtime represents a duration during which a system is unavailable or fails to function. In this paper, the system can be a component or a host. However, the downtime of C does not depend only on the component itself (Downtime_c), but on its hosting server (Downtime_s) as well. In order to minimize the overall downtime of C, the objective function of the formulated MILP model should minimize both Downtime_c and Downtime_s. The objective function and its constraints are formulated as follows:

Objective Function:
minimize  sum_c sum_s (Downtime_c + Downtime_s) * X_cs

Subject to:

Capacity Constraints:
sum_c (X_cs * L_cr) <= L^T_sr, for all s, r   (5)
sum_s X_cs = 1, for all c   (6)
X_cs in {0, 1}, for all c, s   (7)

Network Delay Constraints:
(X_c's' * DEL_ss' - DT_c) <= M * z_c, for all c, c', s, s'   (8)
X_cs <= 1 - M * (1 - z_c), for all c, c', s   (9)
z_c in {0, 1}, for all c   (10)

Redundancy Constraint:
X_cs + X_c's <= 1, for all c, c', s with RED_cc' = 1   (11)

Dependency Co-location Constraint:
X_cs + X_c's = 2, for all c, c', s with DEP_cc' = 1   (12)

Dependency Anti-location Constraint:
X_cs + X_c's <= 1, for all c, c', s with DEP_cc' = 1   (13)

Boundary Constraint:
Downtime_c, Downtime_s >= 0, for all c, s   (14)
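Continuing the PuLP sketch, the next snippet writes down the objective and the capacity constraints (5)-(7); all downtime and resource figures are assumed for illustration.

```python
from pulp import lpSum

downtime_c = {"http": 1.0, "app": 2.0, "app_r": 2.0, "db": 3.0}   # hours/year per component
downtime_s = {"s1": 0.5, "s2": 1.5, "s3": 0.8}                    # hours/year per server
cpu_req = {"http": 2, "app": 4, "app_r": 4, "db": 8}              # L_cr for r = CPU
cpu_cap = {"s1": 8, "s2": 16, "s3": 8}                            # L^T_sr for r = CPU

# Objective: minimize the downtime contributed by the chosen placements.
model += lpSum((downtime_c[c] + downtime_s[s]) * X[c][s]
               for c in components for s in servers)

for s in servers:        # (5): requested resources must fit the server (CPU only here)
    model += lpSum(X[c][s] * cpu_req[c] for c in components) <= cpu_cap[s]
for c in components:     # (6): every component is placed on exactly one server
    model += lpSum(X[c][s] for s in servers) == 1
# (7) is already enforced by declaring X as binary variables.
```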
As discussed earlier, the HA-aware placement of the application is affected by the capacity, delay and availability constraints. Regarding the capacity constraints, constraint (5) ensures that the requested components' resources do not exceed the available resources of the selected destination server. Constraint (6) determines that each component is placed on exactly one physical server. Constraint (7) ensures that the decision variable X_cs is a binary integer. The delay constraints (8), (9) and (10) ensure that communicating components are placed on servers that satisfy the required latency. These constraints are applied to the dependency and redundancy communication relations between the scheduled components. The availability constraint (11) reflects the anti-location constraint between a component and its redundant ones. Using constraint (12), dependent components must share the same server in case their outage tolerance is smaller than the recovery time of their sponsor component. The anti-location constraint between dependent and sponsor components is active in the contrary case, as shown in (13). The boundary constraint (14) specifies real positive values for the downtimes of C and S.
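The last part of the sketch adds the redundancy and dependency constraints (11)-(13) for an assumed set of component pairs and solves the toy instance; constraint (12) is expressed here as a per-server equality, which is one way to realize the co-location requirement.

```python
redundant_pairs = [("app", "app_r")]   # RED_cc' = 1: a primary and its replica
colocated_pairs = [("http", "app")]    # outage tolerance below the sponsor's recovery time
separated_pairs = [("app", "db")]      # outage tolerance above the sponsor's recovery time

for c, c2 in redundant_pairs:          # (11): replicas never share a server
    for s in servers:
        model += X[c][s] + X[c2][s] <= 1
for c, c2 in colocated_pairs:          # (12): dependent and sponsor share one server
    for s in servers:
        model += X[c][s] == X[c2][s]
for c, c2 in separated_pairs:          # (13): dependent and sponsor on different servers
    for s in servers:
        model += X[c][s] + X[c2][s] <= 1

model.solve()
for c in components:
    print(c, "->", [s for s in servers if X[c][s].value() == 1])
```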
IV. MILP EVALUATION

To assess the MILP model, evaluations were conducted using different data sets. The MTTF, MTTR and recovery time were used as measures of the downtime and availability of the various components. To clarify the importance of the proposed model, it was compared to the OpenStack Nova scheduler algorithm [16].

A. Availability Analysis

As mentioned earlier, the availability avail_c of a component C is inversely proportional to its downtime. Since the downtime is generated in terms of hours per year, the availability is calculated as follows:

avail_c = (8760 - downtime_c) / 8760   (15)

As for the downtime, it changes with the delay types and can be calculated in terms of the λ, MTTR and/or RT of a component, its corresponding data center DC, rack R and server S. Since our work considers redundancy and failover solutions, the downtime depends on λ and RT. For instance, the downtime in the D4 and D2 types is calculated as (16) and (17) respectively:

downtime_c^D4 = (λ_c + λ_s + λ_DC + λ_R) * RT_c   (16)
downtime_c^D2 = (λ_c + λ_s) * RT_c + λ_R * RT_R + λ_DC * RT_DC   (17)
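A short sketch of Eqs. (15)-(17); the failure rates (per year) and recovery times (hours) passed in the example are assumed values, and the comments paraphrase the zone-dependent reasoning above.

```python
HOURS_PER_YEAR = 8760.0

def availability(downtime_hours_per_year):            # Eq. (15)
    return (HOURS_PER_YEAR - downtime_hours_per_year) / HOURS_PER_YEAR

def downtime_d4(lam_c, lam_s, lam_dc, lam_r, rt_c):    # Eq. (16)
    # With the replica in another DC, any failure is masked by failing over,
    # so only the component's own recovery time RT_c is paid per failure.
    return (lam_c + lam_s + lam_dc + lam_r) * rt_c

def downtime_d2(lam_c, lam_s, lam_r, lam_dc, rt_c, rt_r, rt_dc):   # Eq. (17)
    # With the replica confined to the same rack, rack and DC failures also take
    # down the replica, so their repair/recovery times RT_R and RT_DC are paid.
    return (lam_c + lam_s) * rt_c + lam_r * rt_r + lam_dc * rt_dc

d4 = downtime_d4(8760 / 2000, 8760 / 2000, 0.1, 0.2, 0.05)   # ~0.45 hours/year
print(round(d4, 3), round(availability(d4), 6))
```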
B. Computational Complexity

Any scheduling problem can be defined as a triplet α | β | γ. Starting with α, it represents the problem environment. As for β and γ, they represent the problem constraints and the objective to be optimized respectively [17]. This triplet can take various fields depending on the scheduling type. Since the proposed scheduling work has n components to be assigned to m servers while minimizing downtime, it can be formulated as a special case of the transportation problem. It can be represented as Q_m | p_j | h_j(C_j), where Q_m indicates that the problem environment consists of m different parallel machines, p_j determines that a job j can be processed using one machine, and h_j(C_j) represents the cost function to be optimized. This special case is known as the bipartite matching or assignment problem. It is represented as a bipartite graph G = (n_1, n_2, a). This graph consists of two node sets n_1 and n_2 connected using arcs a. An arc a = {j, k} assigns node j of set n_1 to node k of set n_2 and is represented by the decision variable X_cs defined in Section III. This type of scheduling problem is formulated using linear programming models, but it is characterized by an NP-hard complexity hierarchy. Because of the NP-hardness of the proposed optimization model, it is only feasible for small DC networks [18]. In the evaluated small network, the number of variables generated in the optimization solver is approximately
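To illustrate the assignment-problem special case mentioned above: once the coupling constraints are dropped and each server takes a single component, placing n components on m servers to minimize downtime is a bipartite matching that standard polynomial-time routines solve. The cost matrix below is assumed.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

downtime_cost = np.array([   # rows = components, columns = servers (hours/year, assumed)
    [1.5, 2.0, 3.5],
    [2.5, 1.0, 2.0],
    [3.0, 2.5, 0.5],
])
rows, cols = linear_sum_assignment(downtime_cost)
print(list(zip(rows.tolist(), cols.tolist())), downtime_cost[rows, cols].sum())
# -> [(0, 0), (1, 1), (2, 2)] 3.0
```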
C. OpenStack Filter Scheduler

OpenStack is an open source cloud management system that satisfies the needs of private and public clouds using computing, networking and storage services [16]. Nova is one of the computing components of OpenStack; it schedules VMs based on a predefined instance type such as CPU or RAM. The Nova compute service provides different types of filters and weights for hosting an instance. During the filtering stage, the scheduler generates a list of servers that are capable of hosting the instance. Then weight and cost functions are applied to this list to determine the best compute node for it [19]. Nova supports other filters that can be used to support HA placement, such as the availability zone, affinity and anti-affinity filters; however, these existing filters are agnostic of the delay tolerance and inter-VM dependencies, which means they need to be extended. Also, users have to manually define the needed filter based on the instance properties [19]. Alternatively, our proposed work eliminates the need for user intervention. It provides an automated process for deploying VMs by considering functionality (resources and delay) and availability (different types of dependencies) requirements.

Fig. 2: MILP vs OpenStack Scheduler.

D. Results

Both the optimization model and the OpenStack scheduler were evaluated on a network consisting of 20 components, 2 DCs, 4 racks and 50 servers. The server and component MTTFs were generated using an exponential distribution with mean = 2000 hours for both [20]. As for the server and component MTTR, a normal distribution was used with mean = 3 and 0.05 hours and standard deviation = 1 and hours respectively [20] [21]. For the server and component recovery times, a normal distribution was used with mean = 0.05 and hours and standard deviation = and hours respectively. To evaluate the interdependencies and redundancies between components, the proposed MILP was evaluated on two real-time Web applications. The Web applications include two types of dependencies among the components: (1) the synchronization dependency between the active component and its replicas, and (2) the functional dependencies where the App server depends on the database server and sponsors the HTTP server.
1) MILP vs OpenStack Nova Scheduler: The MILP model was compared to the core and RAM filters in the Nova scheduler in order to show the impact of the availability constraints on the downtime of components per year. Using these filters, only servers with sufficient RAM and CPU cores are eligible for hosting VMs. Fig. 2 shows the downtime difference between the MILP model and the OpenStack scheduler for different delay zones. Since these filters do not consider delay requirements, they generate similar results for all delay types. As the delay zones widen the solution space for the components' placement, the difference between the OpenStack scheduler and the MILP increases gradually. Using the HA-aware MILP model, the downtime of a component is reduced by 35%, 39% and 52% for the D0/D1, D2 and D3 types respectively. As for the D4 type, it allows scheduling a component on any server in the cloud and allows placing the primary and redundant components in two different DCs. In this case, if the server, rack or DC of component C fails, C will be down until it fails over to its redundant component. Therefore, the downtime of C is calculated using (16). Sometimes a sponsor component can also affect the downtime of its dependents if the latter cannot tolerate its failure. In any case, the downtime is reduced significantly, by 97%, using the HA-aware technique in the D4 type.
2) Availability Improvement within the MILP Model: Although the MILP model maximizes the availability of components, its results change among the different delay zones. Since the D0 and D1 types require placement of the components on the same server, the solution space of the selection process is limited. Consequently, the availability is affected and depends on the recovery time of the hosting server, rack and DC. However, the availability is highly improved in D4 because it depends only on the recovery time of the hosted component. Additionally, this delay zone eliminates the restrictions on the locations of the components. Fig. 3 shows the difference in downtime among the 5 delay zones. Compared to the D4 results, the availability of components is reduced as more restrictions are added on the servers' location.

Fig. 3: Availability Improvement among Delay Zones.

V. CONCLUSION

Unexpected cloud-service outages can have a profound impact on business continuity and IT enterprises. The key to achieving availability requirements is to develop an approach that is immune to failures while considering the real-time interdependencies and redundancies between applications. This paper has addressed the problem environment from different vantage points to generate a highly available optimal placement for the requested applications. The proposed MILP model minimizes the downtime of applications, but its computational complexity limits the evaluation to small networks. Therefore, this model will be associated with a heuristic solution in future work. The heuristic will solve the scheduling problem in polynomial time while satisfying all QoS and SLA requirements and differentiating between mission-critical and standard applications. Also, the proposed approach will be extended to include multiple objectives, such as maximizing the HA of application components and maximizing the resource utilization of the infrastructure used.

ACKNOWLEDGMENT

This work is partially supported by the Natural Sciences and Engineering Research Council of Canada (NSERC-STPGP) and Ericsson Research. The authors also would like to thank Philip Kurowski for his contribution to the comparison with the OpenStack scheduler.

REFERENCES
[1] M.A. Sharkh, M. Jammal, A. Shami, and A. Ouda, "Resource allocation in a network-based cloud computing environment: design challenges," IEEE Communications Magazine, vol. 51, no. 11, Nov.
[2] Microsoft, "The Economics of the Cloud," en-us/news/presskits/cloud/docs/the-economics-of-the-cloud.pdf, Nov.
[3] ITU, "Cloud Computing Benefits from Telecommunication and ICT Perspectives," pub/itu-t/opb/fg/t-fg-cloud P7-PDF-E.pdf, Feb.
[4] L. Grit, D. Irwin, A. Yumerefendi, and A. Chase, "Virtual Machine Hosting for Networked Clusters: Building the Foundations for Autonomic Orchestration," in 2nd International Workshop on Virtualization Technology in Distributed Computing.
[5] D. Jayasinghe, C. Pu, et al., "Improving Performance and Availability of Services Hosted on IaaS Clouds with Structural Constraint-Aware Virtual Machine Placement," in IEEE International Conference on Services Computing (SCC), July 4-9, 2011.
[6] J. Dean and L. Barroso, "The Tail at Scale," Communications of the ACM, vol. 56, no. 2, Feb.
[7] Aberdeen Group, "Why Mid-Sized Enterprises Should Consider Using Disaster Recovery-as-a-Service," Library/7873/AI-disaster-recovery-downtime.aspx, April.
[8] Ponemon Institute, "2013 Study on Data Center Outages," liebert/documents/white%20papers/2013 emerson data center outages sl pdf, Sep.
[9] G. Jung, K.R. Joshi, M.A. Hiltunen, R.D. Schlichting, and C. Pu, "Performance and availability aware regeneration for cloud based multi-tier applications," in IEEE/IFIP Conference on Dependable Systems and Networks, June 28-July.
[10] B. Addis, D. Ardagna, B. Panicucci, and L. Zhang, "Autonomic Management of Cloud Service Centers with Availability Guarantees," in IEEE International Conference on Cloud Computing (CLOUD), July 5-10, 2010.
[11] A. Abouzamazem and P. Ezhilchelvan, "Efficient Inter-cloud Replication for High-Availability Services," in IEEE International Conference on Cloud Engineering (IC2E), March 25-27, 2013.
[12] M.E. Frincu and C. Craciun, "Multi-objective Meta-heuristics for Scheduling Applications with High Availability Requirements and Cost Constraints in Multi-Cloud Environments," in IEEE Conference on Utility and Cloud Computing, Dec. 5-8, 2011.
[13] SA Forum, "Service Availability Forum Application Interface Specification," ?view=1, June.
[14] P3 InfoTech Solutions, "Web Application Deployment in the Cloud Using Amazon Web Services From Infancy to Maturity," slideshare.net/p3infotech solutions/web-application-deploymentaws#, July.
[15] P. Bodik, F. Armando, M.J. Franklin, M.I. Jordan, and D.A. Patterson, "Characterizing, modeling, and generating workload spikes for stateful services," in ACM Symposium on Cloud Computing, June 10-11, 2010.
[16] OpenStack, "OpenStack Cloud Software."
[17] M. Pinedo, "Deterministic Models: Preliminaries," in Scheduling: Theory, Algorithms, and Systems, Springer, New York, pp. 13-33, 2008.
[18] O. Kone, C. Artigues, P. Lopez, and M. Mongeau, "Event-based MILP models for resource-constrained project scheduling problems," Computers and Operations Research, vol. 38, pp. 3-13, Jan.
[19] OpenStack, "Filter Scheduler," cinder/devref/filter scheduler.html#costs-and-weights, Feb.
[20] Reliability HotWire, "Availability and the Different Ways to Calculate It," Sep.
[21] EventHelix, "System Reliability and Availability," reliability availability.htm#.u-acupldxcf.
