Athanasia Evangelinou, Michele Ciavotta, George Kousiouris, Danilo Ardagna
Cloud Forward 2015, 6-8 October, Pisa
ARTIST
- Supports cloud migration. Main steps:
  - Pre-migration: is migration possible?
  - Migration: analyse and model the legacy software; transform the legacy models into modernized models
  - Post-migration: application provisioning

MODAClouds
- Provides methods, a decision support system (DSS), an IDE and a run-time environment for the design and management of applications on multi-clouds with QoS guarantees
- Cloud-to-Cloud portability
- Monitors run-time performance and exploits Cloud flexibility
ARTIST focuses on:
- the migration of a legacy application into the Cloud
- the modernization of non-cloud software code

MODAClouds focuses on:
- the migration of applications that are already able to run in a cloud, allowing migration among clouds
- it does not support migration of legacy application code (it relies on existing approaches such as ARTIST)

The cooperation focuses on sharing benchmarking results, which are needed as an input to the combined methodology.
Finding the most fitting deployment for the application requirements while providing the best QoS is complex:
- Wide range of Cloud services (differing in cost, performance, consistency guarantees, etc.)
- Cloud adoption requires deep changes to software design and implementation
- Complexity of software systems
- Varying performance of Cloud services and vendor lock-in

Goal: find a minimum-cost configuration that satisfies the application QoS requirements. Finding the most fitting deployment manually is infeasible due to the extremely large number of candidate solutions.
A joint benchmarking and optimization methodology to support the design and migration of non-cloud applications to the Cloud:
- Combines cloud benchmarking tools with an automated tool for exploring the space of design alternatives
- Identifies the minimum-cost deployment that provides QoS guarantees
Identify cloud service performance by examining a series of metrics. How?
- By using benchmarks and suitable tools for testing providers

Key aspects of the benchmarking process:
- Iterated over time (different hardware/management decisions are reflected in the refreshed metric values)
- Observe key characteristics (performance variation, standard deviation), as in the sketch after this list
- Cover a wide range of diverse application types
- Rank cloud services based on specific user interests with respect to cost, performance, deviation, etc.
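A minimal Python sketch of the "observe key characteristics" step above: summarizing repeated runs of one benchmark by mean, standard deviation and coefficient of variation. Function names and sample values are illustrative, not part of any of the tools mentioned here.

```python
import statistics

def summarize_runs(samples):
    """Summarize repeated benchmark runs of one cloud service.

    samples: measured values (e.g. response times in ms) collected
    over time, so hardware/management changes show up in the stats.
    """
    mean = statistics.mean(samples)
    stdev = statistics.stdev(samples)
    return {
        "mean": mean,
        "stdev": stdev,
        # Coefficient of variation: relative performance variation,
        # comparable across services with different absolute speeds.
        "cv": stdev / mean,
    }

# Example: weekly response-time measurements (ms) for one VM type
print(summarize_runs([120.0, 131.5, 118.2, 140.9, 125.3]))
```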
A framework able to automatically install, execute and store benchmark results. Cloud services are ranked based on specific user interests with respect to cost, performance, deviation, etc.:

$SE = \frac{\#\text{Clients}}{w_1 \cdot \text{delay} + w_2 \cdot \text{Cost}}$
Current benchmark tools and Service Efficiency metric description. Cloud services are ranked based on specific user interests with respect to cost, performance, deviation, etc.:
- Ability to assign weights to cost or performance
- Example: what is the best offering to run my streaming application when I want a cheap service for a low workload?

We have used these tools to simulate the most common workloads, covering as much of the application field as possible. The selection of benchmarks is based on the ability to characterize application-level workloads.

$SE = \frac{\#\text{Clients}}{w_1 \cdot \text{delay} + w_2 \cdot \text{Cost}}$
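To illustrate how the two-term metric supports weighted ranking, the sketch below scores a few offerings by SE and sorts them; the VM type names echo those used later in the experiment, but the delay and cost figures are invented for the example.

```python
def service_efficiency(clients, delay, cost, w_delay, w_cost):
    """Two-term Service Efficiency: workload served per weighted
    unit of delay and cost. Higher is better."""
    return clients / (w_delay * delay + w_cost * cost)

# Hypothetical offerings: (name, average delay in s, hourly cost in $)
offerings = [
    ("m1.medium", 0.35, 0.087),
    ("m1.large", 0.21, 0.175),
    ("A2", 0.24, 0.133),
]

# A cost-sensitive user with a low workload weights cost heavily
w_delay, w_cost, clients = 0.2, 0.8, 50

for name, delay, cost in sorted(
    offerings,
    key=lambda o: service_efficiency(clients, o[1], o[2], w_delay, w_cost),
    reverse=True,
):
    print(name, round(service_efficiency(clients, delay, cost, w_delay, w_cost), 1))
```

With these weights the cheapest offering ranks first, matching the "cheap service for a low workload" preference.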
Design-time optimization is NP-hard. SPACE4Cloud is a multi-platform, open-source tool for the specification, assessment and optimization of QoS characteristics of Cloud applications:
- Assesses the cost and the performance of a fully described solution
- Translates design models into Layered Queueing Networks (LQNs)
- Implements a local search approach (sketched below)
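A toy sketch of the local search idea, assuming a one-dimensional deployment space (how many identical VMs to rent) and a simple capacity check standing in for the LQN-based performance assessment; all prices and capacities are hypothetical.

```python
import random

PRICE_PER_VM = 0.175    # $/hour (hypothetical)
CAPACITY_PER_VM = 40.0  # requests/s one VM sustains within the QoS bound
PEAK_LOAD = 310.0       # requests/s the application must handle

def cost(n_vms):
    return n_vms * PRICE_PER_VM

def feasible(n_vms):
    # Stand-in for the LQN-based assessment: a deployment is feasible
    # if it sustains the peak load within the response-time bound.
    return n_vms >= 1 and n_vms * CAPACITY_PER_VM >= PEAK_LOAD

def local_search(initial, iters=1000):
    """Hill climbing over deployments: move to a neighbour only if
    it is feasible and strictly cheaper."""
    best = initial
    for _ in range(iters):
        candidate = best + random.choice((-1, 1))
        if feasible(candidate) and cost(candidate) < cost(best):
            best = candidate
    return best

print(local_search(initial=20))  # settles on 8 VMs: ceil(310 / 40)
```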
[Architecture diagram: Benchmarking, Assessment and Optimization. Application categories drive the selection of benchmarks; the Benchmarking Suite installs and executes them on Cloud services and stores the output in the Raw Data DB; results can be visualized through a Web GUI and are imported into the Resource DB, which SPACE4Cloud queries while describing the application model and invoking the LQN Solver.]

Benchmarking results are exploited during the SPACE4Cloud candidate solution performance assessment.
The case study considers MiC, a social network application that identifies the most similar users in the network based on the registered user's preferences by calculating the Pearson coefficient.

Objective: compare the results obtained from the cloud providers with those calculated using the benchmarking information.
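For reference, a self-contained Python version of the Pearson coefficient used for the user-similarity computation; the two rating vectors are made up for the example.

```python
import math

def pearson(x, y):
    """Pearson correlation between two users' preference vectors;
    values close to 1 indicate similar tastes."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical 1-5 ratings of the same five topics by two users
alice = [5, 3, 4, 4, 1]
bob = [4, 2, 5, 3, 1]
print(pearson(alice, bob))  # ~0.83: bob is a close match for alice
```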
Experiment in two phases:
- Phase A: analysis of the SPACE4Cloud outcomes for MiC on two cloud providers, Amazon and Microsoft
- Phase B: import into SPACE4Cloud of the performance results from the benchmarking activity (DaCapo for the web Frontend tier and Filebench for the Backend tier)

[Figure: workload adopted for the experiment]
- All traces follow the trend defined by the workload
- When benchmarking results are used, a lower number of machines is needed to fulfil the QoS requirements
- To get a reliable estimate of the performance of an application in the Cloud, it is necessary to resort to more accurate benchmark results

Tier               Amazon EC2   Microsoft Azure
Frontend           c1.medium    Preview Extra Small Instance
Frontend (bench)   m1.large     A2
Backend            c1.medium    Preview Extra Small Instance
Backend (bench)    m1.large     A2
- Profiling and classification step for the case in which the application owner is aware of the overall VM behavior, or when different types of components are grouped in the same VM
- Validation with run-time data on a real deployment
Objectives: relieve the user of the usual manual benchmarking workflow:
- Creation / destruction of the target environment
- Installation / execution of benchmarks
- Retrieval and storage of results
- Provisioning of performance data in order to find the most fitting solution during the migration of an application

A sketch of such an automated workflow is given below.
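This is a minimal sketch assuming a provider client with create/run/destroy primitives. FakeCloudClient and every command string here are hypothetical stand-ins, not the framework's actual API.

```python
import random

class FakeCloudClient:
    """Stand-in for a provider SDK that a real suite would wrap."""
    def create_vm(self, vm_type):
        return {"type": vm_type, "id": random.randint(1000, 9999)}
    def run_remote(self, vm, command):
        return f"ran '{command}' on {vm['type']}-{vm['id']}"
    def destroy_vm(self, vm):
        pass

def benchmark(client, vm_type, install_cmd, run_cmd, results_db):
    vm = client.create_vm(vm_type)            # creation of the target environment
    try:
        client.run_remote(vm, install_cmd)    # installation of the benchmark
        raw = client.run_remote(vm, run_cmd)  # execution
        results_db.append({"vm_type": vm_type, "output": raw})  # storage of results
    finally:
        client.destroy_vm(vm)                 # destruction, even on failure

db = []
benchmark(FakeCloudClient(), "m1.large", "./install_dacapo.sh", "java -jar dacapo.jar h2", db)
print(db)
```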
Ranking cloud services based on specific user interests with respect to cost, performance, deviation, etc. What is the best offering to run my streaming application when I want a cheap service for a low workload? The metric combines:
- Workload aspects of a specific test
- Cost aspects of the selected offering
- Performance aspects for a given workload

$SE = \frac{\sum_i s_i\, l_i}{\sum_j s_j\, w_j\, f_j}$

where:
- s: scaling factor for normalization
- l: workload metric
- f: KPI or cost metric
- w: weight factor
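To make the formula concrete, a small worked instantiation with one workload metric (clients served) and two KPI/cost metrics (delay and hourly cost); every number, including the unit scaling factors, is assumed for illustration:

$SE = \frac{s_1 l_1}{s'_1 w_1 f_1 + s'_2 w_2 f_2} = \frac{1.0 \cdot 50}{1.0 \cdot 0.2 \cdot 0.25 + 1.0 \cdot 0.8 \cdot 0.133} \approx 320$

Raising the cost weight $w_2$ penalizes expensive offerings, which is how a preference for "a cheap service for a low workload" enters the ranking.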
Compare bars of the same colour, which indicate similar workloads. Results are similar for Azure (A2 Standard), Amazon (m1.large) and Amazon (m1.medium), except for some cases where Amazon provides better results for the h2 workload while Azure is better for the avrora workload.