Hosted Science: Managing Computational Workflows in the Cloud. Ewa Deelman USC Information Sciences Institute
1 Hosted Science: Managing Computational Workflows in the Cloud Ewa Deelman USC Information Sciences Institute
2 The Problem
- Scientific data is being collected at an ever-increasing rate
  - The old days: big, focused experiments (LHC)
  - Today: cheap DNA sequencers, and an increasing number of them
- The complexity of the computational problems is ever increasing
- Local compute resources are often not enough (too small, limited availability)
- The computing infrastructure keeps changing: hardware, software, but also computational models
3 Computational workflows: managing application complexity
- Help express multi-step computations in a declarative way
- Can support automation, minimizing human involvement
- Make analyses easier to run
- Can be high-level and portable across execution platforms
- Keep track of provenance to support reproducibility
- Foster collaboration: code and data sharing
4 So far, applications have been running on local/campus clusters or grids
SCEC CyberShake (~850,000 tasks):
- Uses a physics-based approach
- 3-D ground motion simulation with anelastic wave propagation
- Considers ~415,000 earthquakes per site (<200 km from the site of interest, magnitude >6.5)
5 DNA sequencing: a new breed of data-intensive applications
- Data is collected at the sequencer
  - Needs to be filtered for noisy data
  - Needs to be aligned
  - Needs to be collected into a single map
- Vendors provide some basic tools, but you may want to try the latest alignment algorithm, or use a remote cluster
- Challenges: automation of analysis, reproducibility, portability, provenance, and USERS!
6 Outline
- Role of hosted environments
- Workflows on the cloud
  - Challenges in running workflows on the cloud
  - Data management aspects
- Hosted science: managing workflow ensembles on the cloud, within user-defined constraints
- Conclusions
Acknowledgements: Gideon Juve (USC), Maciej Malawski (AGH), Jarek Nabrzyski (ND). Applications: Bruce Berriman et al. (Caltech), Tom Jordan et al. (USC), Ben Berman, James Knowles et al. (USC Medical School). Pegasus projects: Miron Livny (UWM), Gideon Juve, Gaurang Mehta, Rajiv Mayani, Mats Rynge, Karan Vahi (USC/ISI)
7 New applications are looking towards clouds
- Originated in the business domain: outsourcing services to the cloud (successful for business)
- Pay for what you use; elasticity of resources
- Provided by data centers built on compute and storage virtualization technologies
- Scientific applications often have different requirements: MPI, shared file systems, support for many dependent jobs
[Image: Google's container-based data center in Belgium]
Ewa Deelman, [email protected]
8 Hosted Science
- Today, applications are using the cloud as a resource provider (storage, computing, social networking)
- In the future, more services will be migrating to the cloud (more integration)
  - Hosted end-to-end analysis
  - Data and method publication
[Diagram: instruments, manpower, data and publication sharing, social networking, databases, clusters, and application models layered as Science as a Service, Analysis as a Service, Workflow as a Service, and Infrastructure as a Service]
9 The Future is Now: Illumina's BaseSpace
10 Outline
- Role of hosted environments
- Workflows on the cloud
  - Challenges in running workflows on the cloud
  - Data management aspects
- Hosted science: managing workflow ensembles on the cloud, within user-defined constraints
- Conclusions
11 Issues
- It is difficult to manage cost
  - How much would it cost to analyze one sample?
  - How much would it cost to analyze a set of samples?
  - The analyses may be complex and multi-step (workflows)
- It is difficult to manage deadlines
  - "I would like all the results to be done in a week"
  - "I would like the most important analyses done in a week"
  - "I have a week to get the most important results and $500 to do it"
12 Scientific Environment: how to manage complex workloads?
[Diagram: a work definition mapped onto available resources — local resource, campus cluster, data storage, EGI, TeraGrid/XSEDE, Open Science Grid, Amazon cloud]
13 Workflows have different computational needs and need systems to manage their execution
- The SoCal map needs 239 of those: MPI codes ~12,000 CPU hours, post-processing 2,000 CPU hours
- Data footprint: ~800 GB
- Peak # of cores on OSG: 1,600
- Walltime on OSG: 20 hours; could be done in 4 hours on 800 cores
14 Workflow Management
- You may want to use different resources within a workflow, or over time
- Need a high-level workflow specification
- Need a planning capability to map from the high-level to an executable workflow
- Need to manage the task dependencies
- Need to manage the execution of tasks on the remote resources
- Need to provide scalability, performance, and reliability
15 Our Approach
- Analysis representation
  - Support a declarative representation for the workflow (dataflow)
  - Represent the workflow structure as a Directed Acyclic Graph (DAG)
  - Use recursion to achieve scalability
- System (plan for the resources, execute the plan, manage tasks)
  - Layered architecture; each layer is responsible for a particular function
  - Mask errors at different levels of the system
  - Modular, composed of well-defined components, where different components can be swapped in
  - Use and adapt existing graph and other relevant algorithms
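As a minimal sketch of the DAG representation above (not Pegasus code — the task names and the plain-dict encoding are illustrative), a workflow can be held as a mapping from each task to its dependents, and any topological order is a valid execution order:

```python
from collections import deque

def topological_order(dag):
    """Return the tasks of a DAG (dict: task -> list of dependents)
    in a valid execution order, or raise if the graph has a cycle."""
    indegree = {t: 0 for t in dag}
    for dependents in dag.values():
        for d in dependents:
            indegree[d] += 1
    ready = deque(t for t, n in indegree.items() if n == 0)
    order = []
    while ready:
        t = ready.popleft()
        order.append(t)
        for d in dag[t]:            # releasing t may make dependents ready
            indegree[d] -= 1
            if indegree[d] == 0:
                ready.append(d)
    if len(order) != len(dag):
        raise ValueError("cycle detected: not a DAG")
    return order

# A toy diamond-shaped workflow: a preprocessing step fans out to two
# alignment tasks whose outputs are merged.
workflow = {
    "preprocess": ["align_1", "align_2"],
    "align_1": ["merge"],
    "align_2": ["merge"],
    "merge": [],
}
print(topological_order(workflow))
```

The merge task runs only after both alignment tasks, which is exactly the dependency-management requirement from the previous slide.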
16 Use the given resources
[Diagram: the work definition, expressed as a workflow, is handed to a workflow management system on the local resource, which maps work and data onto the campus cluster, data storage, EGI, TeraGrid/XSEDE, Open Science Grid, and Amazon cloud]
17 Challenges of running workflows on the cloud
- Clouds provide resources, but the software is up to the user
  - Running on multiple nodes may require cluster services (e.g., a scheduler)
  - Dynamically configuring such systems is not easy
  - Manual setup is error-prone and not scalable
  - Scripts work to a point, but break down for complex deployments
  - Some tools are available
- Workflows need to communicate data, often through files, so they need file systems
- Data is an important aspect of running on the cloud
18 Outline
- Role of hosted environments
- Workflows on the cloud
  - Challenges in running workflows on the cloud
  - Data management aspects
- Hosted science: managing workflow ensembles on the cloud, within user-defined constraints
- Conclusions
19 Workflow Data in the Cloud
- Executables: transfer into the cloud, or store in the VM image
- Input data: transfer into the cloud, or store in the cloud
- Intermediate data: use local disk (single node only), or use a distributed storage system
- Output data: transfer out of the cloud, or store in the cloud
20 Amazon Web Services (AWS): an IaaS cloud
- Elastic Compute Cloud (EC2): provision virtual machine instances
- Simple Storage Service (S3): object-based storage system; put/get files from a global repository
- Elastic Block Store (EBS): block-based storage system; unshared, SAN-like volumes
- Others (queue, RDBMS, MapReduce, Mechanical Turk, etc.)
We want to explore data management issues for workflows on Amazon.
21 Applications
- Not CyberShake: the SoCal map (PP) could cost at least $60K for computing and $29K for data storage (for a month) on Amazon (one workflow ~$300)
- Montage (astronomy, provided by IPAC)
  - 10,429 tasks, 4.2 GB input, 7.9 GB output
  - I/O: high (95% of time waiting on I/O); memory: low; CPU: low
- Epigenome (bioinformatics, USC Genomics Center)
  - 81 tasks, 1.8 GB input, 300 MB output
  - I/O: low; memory: medium; CPU: high (99% of time)
- Broadband (earthquake science, SCEC)
  - 320 tasks, 6 GB input, 160 MB output
  - I/O: medium; memory: high (75% of task time requires > 1 GB); CPU: medium
22 Storage Systems
- Local disk: RAID0 across available partitions, with XFS
- NFS (network file system): 1 dedicated node (m1.xlarge)
- PVFS (parallel, striped cluster file system): workers host PVFS and run tasks
- GlusterFS (distributed file system): workers host GlusterFS and run tasks; NUFA and Distribute modes
- Amazon S3 (object-based storage system): the non-POSIX interface required changes to Pegasus; data is cached on workers
23 A cloud Condor/NFS configuration The submit host can be in or out of the cloud
24 Storage System Performance
- NFS uses an extra node; PVFS and GlusterFS use workers to store data; S3 does not
- PVFS and GlusterFS use 2 or more nodes
- We implemented whole-file caching for S3
25 [Performance charts: re-reading the same file; lots of small files]
26 Cost Components
- Resource cost: cost of VM instances; billed by the hour
- Transfer cost: cost to copy data to/from the cloud over the network; billed by the GB
- Storage cost: cost to store VM images and application data; billed by the GB and # of accesses
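A toy cost model illustrating the billing granularities above. The $0.68/hour VM price is an assumed example, not a figure from the slides; the $0.10/GB and $0.17/GB transfer rates are the ones quoted in the deck:

```python
import math

VM_PRICE_PER_HOUR = 0.68          # assumed example price, not from the slides
IN_RATE, OUT_RATE = 0.10, 0.17    # $/GB transfer rates quoted in the deck

def resource_cost(vm_seconds_each, n_vms):
    # EC2-style billing rounds each instance's runtime up to a full hour
    return n_vms * math.ceil(vm_seconds_each / 3600) * VM_PRICE_PER_HOUR

def network_cost(input_gb, output_gb):
    # inbound and outbound traffic are billed at different per-GB rates
    return input_gb * IN_RATE + output_gb * OUT_RATE
```

For example, 4 VMs running 5,000 s each are billed for 2 full hours apiece, so `resource_cost(5000, 4)` is $5.44; the per-hour rounding is why adding resources does not usually reduce cost.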
27 Resource Cost (by Storage System)
- Cost tracks performance
- The price is not unreasonable
- Adding resources does not usually reduce cost
28 Transfer Cost
Transfer sizes:
  Application  Input    Output   Logs
  Montage      4291 MB  7970 MB  40 MB
  Broadband    4109 MB  159 MB   5.5 MB
  Epigenome    1843 MB  299 MB   3.3 MB
Cost of transferring data to/from the cloud: input $0.10/GB, output $0.17/GB. Transfer costs are relatively large.
Transfer costs:
  Application  Input  Output  Logs     Total
  Montage      $0.42  $1.32   < $0.01  $1.75
  Broadband    $0.40  $0.03   < $0.01  $0.43
  Epigenome    $0.18  $0.05   < $0.01  $0.23
For Montage, transferring data costs more than computing it ($1.75 > $1.42). Costs can be reduced by storing input data in the cloud and reusing it for multiple workflows.
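The cost table above follows directly from the transfer sizes and the quoted rates; this sketch (billing logs at the output rate, since they leave the cloud) reproduces the slide's totals to the cent:

```python
# Transfer rates quoted on the slide ($/GB) and sizes from the table (MB).
IN_RATE, OUT_RATE = 0.10, 0.17
SIZES_MB = {
    "Montage":   (4291, 7970, 40),
    "Broadband": (4109, 159, 5.5),
    "Epigenome": (1843, 299, 3.3),
}

def transfer_cost(app):
    inp, out, logs = SIZES_MB[app]
    # logs are transferred out of the cloud, so they are billed at the output rate
    return inp / 1024 * IN_RATE + (out + logs) / 1024 * OUT_RATE

for app in SIZES_MB:
    print(f"{app}: ${transfer_cost(app):.2f}")
# prints Montage: $1.75, Broadband: $0.43, Epigenome: $0.23
```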
29 Outline
- Role of hosted environments
- Workflows on the cloud
  - Challenges in running workflows on the cloud
  - Data management aspects
- Hosted science: managing workflow ensembles on the cloud, within user-defined constraints
- Conclusions
30 Large-Scale, Data-Intensive Workflows
Montage Galactic Plane workflow (John Good, Caltech):
- 18 million input images (~2.5 TB)
- 900 output images (2.5 GB each, 2.4 TB total)
- 10.5 million tasks (34,000 CPU hours)
An analysis is composed of a number of related workflows: an ensemble.
31 Workflow Ensembles
- A set of workflows
- Workflows have different parameters, inputs, etc.
- Prioritized: priority represents the user's utility
[Images: 2009 CyberShake sites (SCEC), including USC and the San Onofre Nuclear Power Plant; the Montage 2MASS galactic plane (John Good, Caltech)]
32 Problem Description
How do you manage ensembles in hosted environments? A typical research question: how much computation can we complete given the limited time and budget of our research project?
- Constraints: budget and deadline
- Goal: given a budget and a deadline, maximize the number of completed prioritized workflows in an ensemble
[Diagram: # of VMs over time, bounded by the budget and the deadline]
33 Explore provisioning and task scheduling decisions
- Inputs: budget, deadline, prioritized ensemble, and task runtime estimates
- Outputs:
  - Provisioning: determines the # of VMs to use over time
  - Scheduling: maps tasks to VMs
- Algorithms: static provisioning with static scheduling; dynamic provisioning with dynamic scheduling; and a workflow-aware variant of the dynamic algorithm
34 Static algorithm: plans out all provisioning and scheduling decisions ahead of execution (offline)
- For each workflow, in priority order:
  - Assign sub-deadlines to each task
  - Find a minimum-cost schedule for the workflow such that each task finishes by its deadline
  - If the schedule cost <= the remaining budget: accept the workflow; otherwise: reject it
- The static plan may be disrupted at runtime
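The admission loop of the static algorithm can be sketched as follows. This is an illustration, not the deck's implementation: the per-workflow minimum-cost scheduling step is elided, and each workflow's schedule cost is assumed to be precomputed:

```python
def plan_ensemble(workflows, budget):
    """Offline admission sketch. `workflows` is a list of
    (name, schedule_cost) pairs in priority order, highest first;
    schedule_cost stands in for the minimum-cost schedule that
    meets the workflow's sub-deadlines."""
    accepted, remaining = [], budget
    for name, cost in workflows:
        if cost <= remaining:        # fits the remaining budget: accept
            accepted.append(name)
            remaining -= cost
        # otherwise: reject, but keep considering lower-priority workflows
    return accepted, remaining
```

For instance, with a budget of 10 and workflows costing 5, 8, and 2 in priority order, the middle workflow is rejected (8 > 5 remaining) while the cheaper low-priority one still fits.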
35 Dynamic algorithm: provisioning and scheduling decisions are made at runtime (online)
- Task priority = workflow priority
- Tasks are executed in priority order
- Tasks are mapped to available VMs arbitrarily
- Resource utilization determines provisioning
- May execute low-priority tasks even when the workflow they belong to will never finish
- We assume no pre-emption of tasks
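A sketch of the online dispatch step (the VM names and the ready-task encoding are illustrative): each task inherits its workflow's priority, the highest-priority ready task is popped first, and it goes to an arbitrary free VM:

```python
import heapq

def dispatch(ready_tasks, free_vms):
    """Online scheduling sketch. `ready_tasks` is a list of
    (task, priority) pairs, where lower priority values are more
    important (0 = highest); each popped task is assigned to an
    arbitrary free VM."""
    heap = [(priority, task) for task, priority in ready_tasks]
    heapq.heapify(heap)
    assignment = {}
    while heap and free_vms:
        _, task = heapq.heappop(heap)     # highest-priority ready task
        assignment[task] = free_vms.pop() # arbitrary free VM
    return assignment
```

With two free VMs and three ready tasks, only the two most important tasks are dispatched; the lowest-priority task waits for the next round.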
36 Workflow-aware algorithm: the dynamic algorithm with an additional workflow admission test
- Each time a workflow starts:
  - Add up the cost of all the tasks in the workflow
  - Determine the critical path of the workflow
  - If there is enough budget: accept the workflow; otherwise: reject it
- Other admission tests are possible, e.g., critical path <= time remaining
37 Dynamic vs. Static
[Charts: task execution over time — # of VMs vs. time under the dynamic and workflow-aware algorithms, and under the static algorithm]
38 Evaluation
- Simulation: enables us to explore a large parameter space; the simulator uses the CloudSim framework
- Ensembles: synthetic workflows generated using parameters from real applications; randomized using different distributions and priorities
- Experiments: determine relative performance; measure the effect of low-quality estimates and delays
39 Ensemble Types
- Ensemble size: number of workflows (50)
- Workflow size: {100, 200, 300, 400, 500, 600, 700, 800, 900, 1000}
  - Constant size, uniform distribution, or Pareto distribution
- Priorities
  - Sorted: priority assigned by size
  - Unsorted: priority not correlated with size
[Chart: frequency distribution of workflow sizes in a Pareto ensemble]
40 Performance Metric
To account for the user's priorities, we use an exponential scoring approach. Completed(e) is the set of workflows in ensemble e that was completed by the algorithm; Priority(w) is the priority of workflow w, such that the highest-priority workflow has Priority(w) = 0, the next highest has Priority(w) = 1, and so on. The score for an ensemble e is:

  Score(e) = Σ_{w ∈ Completed(e)} 2^(-Priority(w))

Key: this scoring function makes a high-priority workflow more valuable than all lower-priority workflows combined:

  2^(-p) > Σ_{i = p+1, ...} 2^(-i)  (for any finite ensemble)

This is consistent with our definition of the problem: completing one high-priority workflow is better than completing a large number of low-priority ones.
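The scoring function is easy to state in code, and the "worth more than all lower-priority workflows combined" property can be checked directly:

```python
def score(completed_priorities):
    """Exponential ensemble score: priority 0 is the highest, and
    completing a workflow of priority p contributes 2**-p."""
    return sum(2.0 ** -p for p in completed_priorities)
```

Completing only the top workflow (score 1.0) beats completing every one of priorities 1 through 4 (score 0.9375): `score([0]) > score([1, 2, 3, 4])`.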
41 Budget and Deadline Parameters
Goal: cover the space of interesting parameters.
- Budget range: [ min_{w∈e} Cost(w), Σ_{w∈e} Cost(w) ]
- Deadline range: [ min_{w∈e} CriticalPath(w), Σ_{w∈e} CriticalPath(w) ]
The maximum deadline is the amount of time required to execute all workflows, i.e., the sum of the critical paths of all the workflows. Each range (min, max) is then divided into equal intervals to determine the budgets and deadlines to use in each experiment. By computing the budget and deadline constraints in this way, we ensure that the experiments for each ensemble cover the most interesting area of the parameter space.
In all the experiments we assumed that VMs have a price of $1 per VM-hour. This price was chosen to simplify the interpretation of results and should not affect the relative performance of the different algorithms. In this study we do not take the heterogeneity of the infrastructure into account, since we assume it is always possible to select the VM type with the best price-to-performance ratio for a given application. All the experiments were run with the maximum autoscaling factor (v_max) set to 1.0 for the dynamic and workflow-aware algorithms.
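A sketch of how the constraint ranges might be generated (the `linspace` helper and the `n_points` parameter are illustrative; the slide says only that each range is split into equal intervals):

```python
def constraint_ranges(costs, critical_paths, n_points=10):
    """Budget range spans [cheapest single workflow, cost of all workflows];
    deadline range spans [shortest critical path, sum of all critical paths].
    Each range is split into n_points evenly spaced values (n_points >= 2)."""
    def linspace(lo, hi, n):
        step = (hi - lo) / (n - 1)
        return [lo + i * step for i in range(n)]
    budgets = linspace(min(costs), sum(costs), n_points)
    deadlines = linspace(min(critical_paths), sum(critical_paths), n_points)
    return budgets, deadlines
```

For workflows costing 2, 4, and 6 with critical paths of 1, 3, and 5 hours, three points give budgets [2, 7, 12] and deadlines [1, 5, 9].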
42 Relative Performance
How do the algorithms perform on different applications and ensemble types?
- Experiment: compare the relative performance of all 3 algorithms on 5 applications, 5 ensemble types, 10 random seeds, 10 budgets, and 10 deadlines
- Goal: compare the % of ensembles for which each algorithm gets the highest score
43 [Bar charts: % of ensembles for which each algorithm achieved the best score, for SIPHT, LIGO, Epigenomics, Montage, and CyberShake, across ensemble distributions]
C = constant, PS = Pareto sorted, PU = Pareto unsorted, US = uniform sorted, UU = uniform unsorted
44 Inaccurate Runtime Estimates
What happens if the runtime estimates are inaccurate?
- Experiment: introduce a uniform error of ±p% for p from 0 to 50; compare the ratios of actual cost to budget and actual makespan to deadline; all applications, all distributions, and 10 ensembles, budgets, and deadlines each
- Goal: see how often each algorithm exceeds the budget and deadline
45 Inaccurate Runtime Estimate Results
[Box plots: cost/budget and makespan/deadline for each algorithm at runtime estimate errors of 0%, 1%, 2%, 5%, 10%, 20%, and 50%]
46 Task Failures
Large workflows on distributed systems often have failures.
- Experiment: introduce a uniform task failure rate between 0% and 50%; all applications, all distributions, and 10 ensembles, budgets, and deadlines
- Goal: determine whether high failure rates lead to significant constraint overruns
47 Task Failure Results
[Box plots: cost/budget and makespan/deadline for each algorithm at failure rates of 0%, 1%, 2%, 5%, 10%, 20%, and 50%]
48 Summary I: observations
- Commercial clouds are usually a reasonable alternative to grids for a number of workflow applications
  - Performance is good
  - Costs are OK for small workflows
  - Data transfer can be costly, and storage costs can become high over time
- Clouds require additional configuration to get the desired performance; in our experiments, GlusterFS did well overall
- Need tools to help evaluate costs for entire computational problems (ensembles), not just one workflow
- Need tools to help manage the costs, the applications, and the resources
49 Summary II: looking into the future
- There is a move to hosting more services in the cloud
- Hosting science will require a number of integrated services
  - Seamless support for managing resource usage, and thus cost and performance
  - Ease of use: can you do science as an app?
References: paper on ensembles at SC'12 in Salt Lake City
[email protected]
Planning, Provisioning and Deploying Enterprise Clouds with Oracle Enterprise Manager 12c Kevin Patterson, Principal Sales Consultant, Enterprise
Planning, Provisioning and Deploying Enterprise Clouds with Oracle Enterprise Manager 12c Kevin Patterson, Principal Sales Consultant, Enterprise Manager Oracle NIST Definition of Cloud Computing Cloud
Bringing Big Data Modelling into the Hands of Domain Experts
Bringing Big Data Modelling into the Hands of Domain Experts David Willingham Senior Application Engineer MathWorks [email protected] 2015 The MathWorks, Inc. 1 Data is the sword of the
Scientific and Technical Applications as a Service in the Cloud
Scientific and Technical Applications as a Service in the Cloud University of Bern, 28.11.2011 adapted version Wibke Sudholt CloudBroker GmbH Technoparkstrasse 1, CH-8005 Zurich, Switzerland Phone: +41
Shared Parallel File System
Shared Parallel File System Fangbin Liu [email protected] System and Network Engineering University of Amsterdam Shared Parallel File System Introduction of the project The PVFS2 parallel file system
Group Based Load Balancing Algorithm in Cloud Computing Virtualization
Group Based Load Balancing Algorithm in Cloud Computing Virtualization Rishi Bhardwaj, 2 Sangeeta Mittal, Student, 2 Assistant Professor, Department of Computer Science, Jaypee Institute of Information
International Journal of Scientific & Engineering Research, Volume 6, Issue 4, April-2015 36 ISSN 2229-5518
International Journal of Scientific & Engineering Research, Volume 6, Issue 4, April-2015 36 An Efficient Approach for Load Balancing in Cloud Environment Balasundaram Ananthakrishnan Abstract Cloud computing
Shoal: IaaS Cloud Cache Publisher
University of Victoria Faculty of Engineering Winter 2013 Work Term Report Shoal: IaaS Cloud Cache Publisher Department of Physics University of Victoria Victoria, BC Mike Chester V00711672 Work Term 3
A Cost-Evaluation of MapReduce Applications in the Cloud
1/23 A Cost-Evaluation of MapReduce Applications in the Cloud Diana Moise, Alexandra Carpen-Amarie Gabriel Antoniu, Luc Bougé KerData team 2/23 1 MapReduce applications - case study 2 3 4 5 3/23 MapReduce
Excalibur: An Autonomic Cloud Architecture for Executing Parallel Applications
Excalibur: An Autonomic Cloud Architecture for Executing Parallel Applications Alessandro Ferreira Leite Université Paris-Sud/University of Brasilia [email protected] Claude Tadonki MINES
Big Data Mining Services and Knowledge Discovery Applications on Clouds
Big Data Mining Services and Knowledge Discovery Applications on Clouds Domenico Talia DIMES, Università della Calabria & DtoK Lab Italy [email protected] Data Availability or Data Deluge? Some decades
Introduction to Cloud Computing
Introduction to Cloud Computing Cloud Computing I (intro) 15 319, spring 2010 2 nd Lecture, Jan 14 th Majd F. Sakr Lecture Motivation General overview on cloud computing What is cloud computing Services
CloudAnalyst: A CloudSim-based Visual Modeller for Analysing Cloud Computing Environments and Applications
CloudAnalyst: A CloudSim-based Visual Modeller for Analysing Cloud Computing Environments and Applications Bhathiya Wickremasinghe 1, Rodrigo N. Calheiros 2, and Rajkumar Buyya 1 1 The Cloud Computing
Windows Server 2008 R2 Hyper-V Live Migration
Windows Server 2008 R2 Hyper-V Live Migration Table of Contents Overview of Windows Server 2008 R2 Hyper-V Features... 3 Dynamic VM storage... 3 Enhanced Processor Support... 3 Enhanced Networking Support...
ArcGIS for Server: In the Cloud
DevSummit DC February 11, 2015 Washington, DC ArcGIS for Server: In the Cloud Bonnie Stayer, Esri Session Outline Cloud Overview - Benefits - Types of clouds ArcGIS in AWS - Cloud Builder - Maintenance
MagFS: The Ideal File System for the Cloud
: The Ideal File System for the Cloud is the first true file system for the cloud. It provides lower cost, easier administration, and better scalability and performance than any alternative in-cloud file
Webpage: www.ijaret.org Volume 3, Issue XI, Nov. 2015 ISSN 2320-6802
An Effective VM scheduling using Hybrid Throttled algorithm for handling resource starvation in Heterogeneous Cloud Environment Er. Navdeep Kaur 1 Er. Pooja Nagpal 2 Dr.Vinay Guatum 3 1 M.Tech Student,
Managing and Conducting Biomedical Research on the Cloud Prasad Patil
Managing and Conducting Biomedical Research on the Cloud Prasad Patil Laboratory for Personalized Medicine Center for Biomedical Informatics Harvard Medical School SaaS & PaaS gmail google docs app engine
Towards an understanding of oversubscription in cloud
IBM Research Towards an understanding of oversubscription in cloud Salman A. Baset, Long Wang, Chunqiang Tang [email protected] IBM T. J. Watson Research Center Hawthorne, NY Outline Oversubscription
The Theory And Practice of Testing Software Applications For Cloud Computing. Mark Grechanik University of Illinois at Chicago
The Theory And Practice of Testing Software Applications For Cloud Computing Mark Grechanik University of Illinois at Chicago Cloud Computing Is Everywhere Global spending on public cloud services estimated
Cloud computing - Architecting in the cloud
Cloud computing - Architecting in the cloud [email protected] 1 Outline Cloud computing What is? Levels of cloud computing: IaaS, PaaS, SaaS Moving to the cloud? Architecting in the cloud Best practices
How To Create A Cloud Based System For Aaas (Networking)
1 3.1 IaaS Definition IaaS: Infrastructure as a Service Through the internet, provide IT server, storage, computing power and other infrastructure capacity to the end users and the service fee based on
