Hosted Science: Managing Computational Workflows in the Cloud. Ewa Deelman USC Information Sciences Institute




Hosted Science: Managing Computational Workflows in the Cloud Ewa Deelman USC Information Sciences Institute http://pegasus.isi.edu deelman@isi.edu

The Problem
- Scientific data is being collected at an ever-increasing rate
  - The old days: big, focused experiments (e.g., the LHC)
  - Today: cheap DNA sequencers, and an increasing number of them
- The complexity of the computational problems is ever increasing
- Local compute resources are often not enough (too small, limited availability)
- The computing infrastructure keeps changing: hardware, software, but also computational models

Computational workflows -- managing application complexity
- Help express multi-step computations in a declarative way
- Can support automation and minimize human involvement
- Make analyses easier to run
- Can be high-level and portable across execution platforms
- Keep track of provenance to support reproducibility
- Foster collaboration through code and data sharing

So far, applications have been running on local/campus clusters or grids

SCEC CyberShake (~850,000 tasks)
- Uses a physics-based approach
- 3-D ground motion simulation with anelastic wave propagation
- Considers ~415,000 earthquakes per site
  - <200 km from the site of interest
  - Magnitude >6.5

DNA sequencing: a new breed of data-intensive applications
- Data is collected at the sequencer
  - Needs to be filtered for noise
  - Needs to be aligned
  - Needs to be assembled into a single map
- Vendors provide some basic tools, but
  - you may want to try the latest alignment algorithm
  - you may want to use a remote cluster
- Challenges: automation of analysis, reproducibility, portability, provenance, and USERS!

Outline
- Role of hosted environments
- Workflows on the Cloud
  - Challenges in running workflows on the cloud
  - Data management aspects
- Hosted Science
  - Managing workflow ensembles on the cloud within user-defined constraints
- Conclusions

Acknowledgements: Gideon Juve (USC), Maciej Malawski (AGH), Jarek Nabrzyski (ND); Applications: Bruce Berriman et al. (Caltech), Tom Jordan et al. (USC), Ben Berman, James Knowles et al. (USC Medical School); Pegasus Project: Miron Livny (UWM), Gideon Juve, Gaurang Mehta, Rajiv Mayani, Mats Rynge, Karan Vahi (USC/ISI)

New applications are looking towards Clouds
- Originated in the business domain
  - Outsourcing services to the Cloud (successful for business)
  - Pay for what you use, elasticity of resources
  - Provided by data centers built on compute and storage virtualization technologies
- Scientific applications often have different requirements
  - MPI
  - Shared file system
  - Support for many dependent jobs

[Image: Google's container-based data center in Belgium, http://www.datacenterknowledge.com/]

Hosted Science
- Today: applications are using the cloud as a resource provider (storage, computing, social networking)
- In the future, more services will be migrating to the cloud (more integration): email, hosted end-to-end analysis, data and method publication

[Diagram: a layered "hosted science" stack -- Science as a Service, Analysis as a Service, Workflow as a Service, Infrastructure as a Service -- connecting instruments, manpower, data and publication sharing, social networking, databases, clusters, and application models]

The Future is Now: Illumina's BaseSpace

Outline
- Role of hosted environments
- Workflows on the Cloud
  - Challenges in running workflows on the cloud
  - Data management aspects
- Hosted Science
  - Managing workflow ensembles on the cloud within user-defined constraints
- Conclusions

Issues
- It is difficult to manage cost
  - How much would it cost to analyze one sample?
  - How much would it cost to analyze a set of samples?
  - The analyses may be complex and multi-step (workflows)
- It is difficult to manage deadlines
  - "I would like all the results to be done in a week"
  - "I would like the most important analyses done in a week"
  - "I have a week to get the most important results and $500 to do it"

Scientific Environment: how to manage complex workloads?

[Diagram: work definitions and data storage on a local resource, with execution spread across a campus cluster, EGI, TeraGrid/XSEDE, the Open Science Grid, and the Amazon cloud]

Workflows have different computational needs -- need systems to manage their execution
- The SoCal map needs 239 of those workflows
- MPI codes: ~12,000 CPU hours; post-processing: ~2,000 CPU hours
- Data footprint: ~800 GB
- Peak number of cores on OSG: 1,600
- Walltime on OSG: 20 hours; could be done in 4 hours on 800 cores

Workflow Management
- You may want to use different resources within a workflow or over time
- Need a high-level workflow specification
- Need a planning capability to map from the high-level description to an executable workflow
- Need to manage task dependencies
- Need to manage the execution of tasks on remote resources
- Need to provide scalability, performance, and reliability

Our Approach
- Analysis representation (a minimal DAG sketch follows below)
  - Support a declarative representation for the workflow (dataflow)
  - Represent the workflow structure as a Directed Acyclic Graph (DAG)
  - Use recursion to achieve scalability
- System (plan for the resources, execute the plan, manage tasks)
  - Layered architecture; each layer is responsible for a particular function
  - Mask errors at different levels of the system
  - Modular, composed of well-defined components, where different components can be swapped in
  - Use and adapt existing graph and other relevant algorithms
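
To make the DAG idea concrete, here is a minimal, hypothetical sketch (not Pegasus code) of a workflow expressed as a dependency graph and executed in topological order; the task names and the run() stub are illustrative only.

```python
# Hypothetical sketch: a workflow expressed as a DAG and executed in
# dependency order. Task names and the run() stub are made up.
from graphlib import TopologicalSorter

# Each task maps to the set of tasks it depends on (a dataflow DAG).
workflow = {
    "align_1": set(),
    "align_2": set(),
    "merge":   {"align_1", "align_2"},
    "filter":  {"merge"},
    "report":  {"filter"},
}

def run(task):
    # In a real system this would submit the task to a remote resource
    # (cluster, grid, or cloud VM) and wait for completion.
    print(f"running {task}")

# Execute tasks only after all of their parents have finished.
for task in TopologicalSorter(workflow).static_order():
    run(task)
```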

Use the given resources

[Diagram: the work definition, expressed as a workflow, is handed to a workflow management system on a local resource, which moves work and data to a campus cluster, EGI, TeraGrid/XSEDE, the Open Science Grid, or the Amazon cloud]

Challenges of running workflows on the cloud
- Clouds provide resources, but the software is up to the user
- Running on multiple nodes may require cluster services (e.g., a scheduler)
  - Dynamically configuring such systems is not easy
  - Manual setup is error-prone and not scalable
  - Scripts work to a point, but break down for complex deployments
  - Some tools are available
- Workflows need to communicate data, often through files, so they need file systems
- Data is an important aspect of running on the cloud

Outline
- Role of hosted environments
- Workflows on the Cloud
  - Challenges in running workflows on the cloud
  - Data management aspects
- Hosted Science
  - Managing workflow ensembles on the cloud within user-defined constraints
- Conclusions

Workflow Data in the Cloud
- Executables: transfer into the cloud, or store in the VM image
- Input data: transfer into the cloud, or store in the cloud
- Intermediate data: use local disk (single node only), or use a distributed storage system
- Output data: transfer out of the cloud, or store in the cloud

Amazon Web Services (AWS): IaaS cloud services (a small provisioning/staging sketch follows below)
- Elastic Compute Cloud (EC2): provision virtual machine instances
- Simple Storage Service (S3): object-based storage system; put/get files from a global repository
- Elastic Block Store (EBS): block-based storage system; unshared, SAN-like volumes
- Others: queue, RDBMS, MapReduce, Mechanical Turk, etc.
- We want to explore data management issues for workflows on Amazon
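
For illustration, a hedged sketch of how the EC2 and S3 pieces might be driven programmatically today; the original experiments predate boto3, and the AMI ID, instance type, bucket, and file names below are placeholders.

```python
# Hypothetical sketch using the boto3 SDK: provision one worker VM with EC2
# and stage workflow data through S3. All identifiers are placeholders.
import boto3

ec2 = boto3.resource("ec2", region_name="us-east-1")
s3 = boto3.client("s3", region_name="us-east-1")

# EC2: start one worker instance for the workflow.
instances = ec2.create_instances(
    ImageId="ami-00000000",   # placeholder worker image
    InstanceType="m5.xlarge",
    MinCount=1,
    MaxCount=1,
)
instances[0].wait_until_running()

# S3: stage workflow input into the cloud, and later pull outputs back.
s3.upload_file("inputs/2mass_tile.fits", "my-workflow-bucket", "inputs/2mass_tile.fits")
s3.download_file("my-workflow-bucket", "outputs/mosaic.fits", "outputs/mosaic.fits")
```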

Applications
- Not CyberShake: the SoCal map post-processing (PP) could cost at least $60K for computing and $29K for data storage (for a month) on Amazon (one workflow ~$300)
- Montage (astronomy, provided by IPAC)
  - 10,429 tasks, 4.2 GB input, 7.9 GB output
  - I/O: high (95% of time waiting on I/O); memory: low; CPU: low
- Epigenome (bioinformatics, USC Genomics Center)
  - 81 tasks, 1.8 GB input, 300 MB output
  - I/O: low; memory: medium; CPU: high (99% of time)
- Broadband (earthquake science, SCEC)
  - 320 tasks, 6 GB input, 160 MB output
  - I/O: medium; memory: high (75% of task time requires >1 GB of memory); CPU: medium

Storage Systems
- Local disk: RAID0 across available partitions with XFS
- NFS: network file system; one dedicated node (m1.xlarge)
- PVFS: parallel, striped cluster file system; workers host PVFS and run tasks
- GlusterFS: distributed file system; workers host GlusterFS and run tasks; NUFA and Distribute modes
- Amazon S3: object-based storage system; the non-POSIX interface required changes to Pegasus; data is cached on workers

A cloud Condor/NFS configuration: the submit host can be in or out of the cloud

Storage System Performance
- NFS uses an extra node
- PVFS and GlusterFS use the workers to store data; S3 does not
- PVFS and GlusterFS use 2 or more nodes
- We implemented whole-file caching for S3

[Charts: performance when re-reading the same file, and with lots of small files]

Cost Components
- Resource cost: cost of VM instances; billed by the hour
- Transfer cost: cost to copy data to/from the cloud over the network; billed by the GB
- Storage cost: cost to store VM images and application data; billed by the GB and the number of accesses

Resource Cost (by Storage System)
- Cost tracks performance
- Price is not unreasonable
- Adding resources does not usually reduce cost

Transfer Cost
- Cost of transferring data to/from the cloud: input $0.10/GB, output $0.17/GB
- Transfer costs are relatively large

Transfer sizes:
  Application   Input      Output     Logs
  Montage       4291 MB    7970 MB    40 MB
  Broadband     4109 MB    159 MB     5.5 MB
  Epigenome     1843 MB    299 MB     3.3 MB

Transfer costs:
  Application   Input    Output   Logs      Total
  Montage       $0.42    $1.32    < $0.01   $1.75
  Broadband     $0.40    $0.03    < $0.01   $0.43
  Epigenome     $0.18    $0.05    < $0.01   $0.23

- For Montage, transferring data costs more than computing it ($1.75 > $1.42)
- Costs can be reduced by storing input data in the cloud and reusing it for multiple workflows
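
As a sanity check on the table above, a short sketch that recomputes the transfer costs from the listed sizes and rates (assuming 1 GB = 1024 MB; log transfers are ignored).

```python
# Back-of-the-envelope check of the transfer-cost table above. Rates and
# sizes come from the slide; treating 1 GB as 1024 MB is an assumption.
IN_RATE, OUT_RATE = 0.10, 0.17   # $/GB into and out of the cloud

transfer_mb = {                  # (input MB, output MB) per application
    "Montage":   (4291, 7970),
    "Broadband": (4109, 159),
    "Epigenome": (1843, 299),
}

for app, (in_mb, out_mb) in transfer_mb.items():
    cost = (in_mb / 1024) * IN_RATE + (out_mb / 1024) * OUT_RATE
    print(f"{app:10s} ~${cost:.2f}")   # Montage ~$1.74, close to the $1.75 total
```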

Outline
- Role of hosted environments
- Workflows on the Cloud
  - Challenges in running workflows on the cloud
  - Data management aspects
- Hosted Science
  - Managing workflow ensembles on the cloud within user-defined constraints
- Conclusions

Large-Scale, Data-Intensive Workflows
- Montage Galactic Plane workflow (John Good, Caltech)
  - 18 million input images (~2.5 TB)
  - 900 output images (2.5 GB each, 2.4 TB total)
  - 10.5 million tasks (34,000 CPU hours)
- An analysis is composed of a number of related workflows: an ensemble

Workflow Ensembles
- A set of workflows
- Workflows have different parameters, inputs, etc.
- Prioritized: priority represents the user's utility
- Examples: the 2009 CyberShake sites (SCEC), from USC to the San Onofre Nuclear Power Plant; the Montage 2MASS galactic plane (John Good, Caltech)

Problem Description
- How do you manage ensembles in hosted environments?
- Typical research question: how much computation can we complete given the limited time and budget of our research project?
- Constraints: budget and deadline
- Goal: given a budget and deadline, maximize the number of prioritized workflows in an ensemble that complete

[Diagram: number of VMs over time, bounded by the budget and the deadline]

Explore provisioning and task scheduling decisions
- Inputs: budget, deadline, prioritized ensemble, and task runtime estimates
- Outputs:
  - Provisioning: determines the number of VMs to use over time
  - Scheduling: maps tasks to VMs
- Algorithms:
  - SPSS: Static Provisioning, Static Scheduling
  - DPDS: Dynamic Provisioning, Dynamic Scheduling
  - WA-DPDS: Workflow-Aware DPDS

SPSS: Static Provisioning, Static Scheduling (a sketch of the admission loop follows below)
- Plans out all provisioning and scheduling decisions ahead of execution (offline algorithm)
- Algorithm: for each workflow in priority order
  - Assign sub-deadlines to each task
  - Find a minimum-cost schedule for the workflow such that each task finishes by its sub-deadline
  - If the schedule cost <= the remaining budget: accept the workflow
  - Otherwise: reject the workflow
- The static plan may be disrupted at runtime
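
A minimal sketch of the static admission loop described above, under the assumption of a plan_min_cost_schedule helper (not shown) that assigns sub-deadlines and produces a minimum-cost schedule for a single workflow.

```python
# Hypothetical sketch of the static (SPSS-style) admission loop.
def static_plan(ensemble, budget, deadline):
    accepted, plan = [], []
    remaining = budget
    for wf in sorted(ensemble, key=lambda w: w.priority):  # priority 0 = highest
        schedule = plan_min_cost_schedule(wf, deadline)     # assumed helper
        if schedule.cost <= remaining:
            remaining -= schedule.cost
            accepted.append(wf)
            plan.append(schedule)
        # otherwise the workflow is rejected and never started
    return accepted, plan
```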

DPDS: Dynamic Provisioning, Dynamic Scheduling (a provisioning sketch follows below)
- Provisioning and scheduling decisions are made at runtime (online algorithm)
- Algorithm:
  - Task priority = workflow priority
  - Tasks are executed in priority order
  - Tasks are mapped to available VMs arbitrarily
  - Resource utilization determines provisioning
- May execute low-priority tasks even when the workflow they belong to will never finish
- We assume no pre-emption of tasks
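
A minimal sketch of utilization-driven provisioning in the spirit of the dynamic algorithm; the thresholds, the VM and queue objects, and the start_vm helper are assumptions for illustration, not the algorithm's actual parameters.

```python
# Hypothetical sketch: scale the VM pool up or down based on utilization.
def adjust_provisioning(vms, queue, budget_left, price_per_vm_hour,
                        low=0.5, high=0.8):
    busy = sum(1 for vm in vms if vm.is_busy())
    utilization = busy / len(vms) if vms else 0.0

    # Scale up while there is queued work, high utilization, and budget left.
    if queue and utilization > high and budget_left >= price_per_vm_hour:
        vms.append(start_vm())                 # assumed helper
    # Scale down by releasing one idle VM when utilization is low.
    elif utilization < low:
        for vm in vms:
            if not vm.is_busy():
                vm.terminate()
                vms.remove(vm)
                break
    return vms
```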

WA-DPDS: DPDS with an additional workflow admission test (a sketch follows below)
- Each time a workflow starts:
  - Add up the cost of all the tasks in the workflow
  - Determine the critical path of the workflow
  - If there is enough budget: accept the workflow
  - Otherwise: reject the workflow
- Other admission tests are possible, e.g., critical path <= time remaining
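
A minimal sketch of the admission test described above, assuming per-task runtime estimates in seconds, a single VM price, and a critical_path_length helper (not shown).

```python
# Hypothetical sketch of the workflow-aware admission test.
def admit_workflow(wf, budget_left, time_left, price_per_vm_hour):
    # Estimated cost: total task runtime converted to VM-hours times the price.
    est_cost = sum(t.runtime for t in wf.tasks) / 3600.0 * price_per_vm_hour
    if est_cost > budget_left:
        return False                               # not enough budget remaining
    if critical_path_length(wf) > time_left:       # optional extra test from the slide
        return False
    return True
```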

Dynamic vs. Static

[Diagram: task execution over time (VMs vs. time) for the dynamic algorithms (DPDS and WA-DPDS) and for the static algorithm (SPSS)]

Evaluation
- Simulation
  - Enables us to explore a large parameter space
  - The simulator uses the CloudSim framework
- Ensembles
  - Use synthetic workflows generated using parameters from real applications
  - Randomized using different distributions and priorities
- Experiments
  - Determine relative performance
  - Measure the effect of low-quality estimates and delays

Ensemble Types
- Ensemble size: number of workflows (50)
- Workflow size: {100, 200, 300, 400, 500, 600, 700, 800, 900, 1000} tasks
  - Constant size
  - Uniform distribution
  - Pareto distribution
- Priorities
  - Sorted: priority assigned by size
  - Unsorted: priority not correlated with size

[Histogram: distribution of workflow sizes in a Pareto ensemble (frequency vs. workflow size)]

Performance Metric
- To account for the user's priorities, we used an exponential scoring approach
- Completed(e) is the set of workflows in ensemble e that was completed by the algorithm
- Priority(w) is the priority of workflow w: the highest-priority workflow has Priority(w) = 0, the next highest has Priority(w) = 1, and so on
- Exponential score:

    Score(e) = \sum_{w \in Completed(e)} 2^{-Priority(w)}

- Key: the scoring function gives a high-priority workflow a score that is higher than all lower-priority workflows combined,

    2^{-p} > \sum_{i=p+1}^{n} 2^{-i}

  so an algorithm cannot achieve a high score simply by executing a large number of small, low-priority workflows
- This is consistent with our definition of the problem, which is to complete as many high-priority workflows as possible
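
A small sketch of the exponential score as reconstructed above, showing the key property that the single highest-priority workflow outweighs all lower-priority ones combined.

```python
# Hypothetical sketch of the exponential scoring metric; priorities start at 0
# for the most important workflow.
def score(completed_priorities):
    return sum(2.0 ** (-p) for p in completed_priorities)

# Completing only the highest-priority workflow (priority 0) outscores
# completing every lower-priority workflow in a 10-workflow ensemble:
print(score([0]))            # 1.0
print(score(range(1, 10)))   # ~0.998
```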

Budget and Deadline Parameters
- Goal: cover the space of interesting parameters
- The minimum budget is the cost of the cheapest workflow and the maximum budget is the cost of executing all workflows; similarly, the minimum deadline is the shortest critical path and the maximum deadline is the time required to execute all workflows, i.e., the sum of their critical paths

    Budget range:   [ \min_{w \in e} Cost(w), \sum_{w \in e} Cost(w) ]
    Deadline range: [ \min_{w \in e} CriticalPath(w), \sum_{w \in e} CriticalPath(w) ]

- Each (min, max) range is divided into equal intervals to determine the budgets and deadlines used in each experiment; computing the constraints this way ensures that the experiments for each ensemble cover the most interesting area of the parameter space
- In all experiments we assumed that VMs have a price of $1 per VM-hour; this price was chosen to simplify interpretation of the results and should not affect the relative performance of the algorithms
- We do not take the heterogeneity of the infrastructure into account, since we assume it is always possible to select a VM type with the best price-to-performance ratio for a given application
- All experiments were run with the maximum autoscaling factor (v_max) set to 1.0 for DPDS and WA-DPDS
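
A small sketch of how the budget and deadline sample points could be derived from an ensemble following the ranges above; the workflow attributes (.cost, .critical_path) are assumptions for illustration.

```python
# Hypothetical sketch: derive evenly spaced budgets and deadlines for one ensemble.
import numpy as np

def constraint_points(ensemble, n_points=10):
    costs = [w.cost for w in ensemble]
    paths = [w.critical_path for w in ensemble]
    budgets = np.linspace(min(costs), sum(costs), n_points)     # cheapest wf .. all wfs
    deadlines = np.linspace(min(paths), sum(paths), n_points)   # shortest CP .. sum of CPs
    return budgets, deadlines
```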

Relative Performance
- How do the algorithms perform on different applications and ensemble types?
- Experiment: compare the relative performance of all 3 algorithms on 5 applications
  - 5 applications, 5 ensemble types, 10 random seeds, 10 budgets, 10 deadlines
- Goal: compare the percentage of ensembles for which each algorithm gets the highest score

[Charts: best scores (%) achieved by each algorithm (DPDS, SPSS, WA-DPDS) for SIPHT, LIGO, Epigenomics, Montage, and CyberShake across ensemble distributions; C = constant, PS = Pareto sorted, PU = Pareto unsorted, US = uniform sorted, UU = uniform unsorted]

Inaccurate Runtime Estimates
- What happens if the runtime estimates are inaccurate?
- Experiment: introduce a uniform error of ±p% for p from 0 to 50 (a perturbation sketch follows below)
  - Compare the ratios of actual cost/budget and actual makespan/deadline
  - All applications, all distributions, 10 ensembles, budgets, and deadlines each
- Goal: see how often each algorithm exceeds the budget and deadline
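
A small sketch of the estimate-error perturbation used in this experiment: each true runtime is scaled by a uniform ±p% error before being handed to the planner. The runtimes shown are placeholders.

```python
# Hypothetical sketch: apply a uniform +/- p% error to task runtime estimates.
import random

def perturb_estimates(runtimes, p):
    """Return runtime estimates with a uniform error of +/- p percent."""
    return [r * (1 + random.uniform(-p, p) / 100.0) for r in runtimes]

estimates = perturb_estimates([120.0, 300.0, 45.0], p=20)  # placeholder runtimes
```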

Inaccurate Runtime Estimate Results

[Charts: cost/budget and makespan/deadline ratios for DPDS, SPSS, and WA-DPDS as the runtime estimate error grows from 0% to 50%]

Task Failures
- Large workflows on distributed systems often have failures
- Experiment: introduce a uniform task failure rate between 0% and 50%
  - All applications, all distributions, 10 ensembles, budgets, and deadlines
- Goal: determine whether high failure rates lead to significant constraint overruns

Task Failure Results

[Charts: cost/budget and makespan/deadline ratios for DPDS, SPSS, and WA-DPDS as the task failure rate grows from 0% to 50%]

Summary I -- observations
- Commercial clouds are usually a reasonable alternative to grids for a number of workflow applications
  - Performance is good
  - Costs are OK for small workflows
  - Data transfer can be costly
  - Storage costs can become high over time
- Clouds require additional configuration to get the desired performance
  - In our experiments GlusterFS did well overall
- Need tools to help evaluate costs for entire computational problems (ensembles), not just single workflows
- Need tools to help manage the costs, the applications, and the resources

Summary II -- looking into the future
- There is a move to hosting more services in the cloud
- Hosting science will require a number of integrated services
  - Seamless support for managing resource usage, and thus cost and performance
  - Ease of use: can you do science as an app?

References: http://pegasus.isi.edu; paper on ensembles at SC'12 in Salt Lake City; deelman@isi.edu