Efficient Fault-Tolerant Infrastructure for Cloud Computing

Efficient Fault-Tolerant Infrastructure for Cloud Computing Xueyuan Su Candidate for Ph.D. in Computer Science, Yale University December 2013 Committee Michael J. Fischer (advisor) Dana Angluin James Aspnes Avi Silberschatz Michael Mitzenmacher, Harvard (external reader) 1 Xueyuan Su Efficient Fault-Tolerant Infrastructure for Cloud Computing

Outline 1 Introduction 2 Assigning Tasks for Efficiency An Idealized Model of Hadoop Hadoop Online Scheduling 3 Checkpointing with Unreliable Checkpoints An Extended Model with Possible Checkpoint Failures Results on Expected Job Completion Time A Weighted Sum Theorem 4 2 Xueyuan Su Efficient Fault-Tolerant Infrastructure for Cloud Computing

Outline Introduction 1 Introduction 2 Assigning Tasks for Efficiency An Idealized Model of Hadoop Hadoop Online Scheduling 3 Checkpointing with Unreliable Checkpoints An Extended Model with Possible Checkpoint Failures Results on Expected Job Completion Time A Weighted Sum Theorem 4 3 Xueyuan Su Efficient Fault-Tolerant Infrastructure for Cloud Computing

Data-Intensive Computing E=mc 2... ^$#@%& Data-intensive computing, big data. Store and process large volumes of data (search/transactions/scientific computing/etc...). Scalability, parallelism, and efficiency. Solution: Cloud computing! 4 Xueyuan Su Efficient Fault-Tolerant Infrastructure for Cloud Computing

Cloud Computing Introduction E=mc 2... ^$#@%& Envisions shifting data storage and computing power from local servers to data centers. Includes both the applications delivered over the Internet and the supporting hardware & software infrastructure. 5 Xueyuan Su Efficient Fault-Tolerant Infrastructure for Cloud Computing

Cloud Computing Infrastructure (Figure: a master node in the data center directs slave task trackers, e.g., "Run task #45", "Store data block #23".) Google's MapReduce and Apache Hadoop. Built on large clusters of commodity machines. Storage: data block replication in distributed file systems. Execution: job partitioning into tasks for parallel execution. 6 Xueyuan Su Efficient Fault-Tolerant Infrastructure for Cloud Computing

Challenges for Infrastructure Design Progress: 0.01%. Estimated completion time: 5 years... &*^%$#@... :( Efficiency challenge: Load balancing for fast job completion. Data locality is critical: the time to load data dominates the actual computation time. Running a task on a server that does not hold its input data means loading the data remotely over the network, adding communication cost on top of disk access. The more data that must be loaded remotely, the more congested the network becomes, and the higher the remote access cost. 7 Xueyuan Su Efficient Fault-Tolerant Infrastructure for Cloud Computing

Challenges for Infrastructure Design (cont.) Progress: 99.99%. Ooops... system crashed. Restart from 0%. &*^%$#@... :( Fault tolerance challenge: Failures are inevitable at scale, and completions of long-running jobs are affected by them. The remedy is checkpointing & rollback. In the literature, checkpoints are assumed to be stored in totally reliable storage. In practice, checkpoints may fail and their failure probability can be significant. Faster job completion can sometimes be achieved by using less reliable but cheaper checkpoints. 8 Xueyuan Su Efficient Fault-Tolerant Infrastructure for Cloud Computing

Our Contributions Introduction The job just completed. Recovered from the failure. :) :) Efficiency: Assignment of tasks to servers for minimizing job completion time. Fault tolerance: Checkpoint placements that consider possible checkpoint failure. 9 Xueyuan Su Efficient Fault-Tolerant Infrastructure for Cloud Computing

Outline Introduction An Idealized Model of Hadoop Hadoop Online Scheduling 1 Introduction 2 Assigning Tasks for Efficiency An Idealized Model of Hadoop Hadoop Online Scheduling 3 Checkpointing with Unreliable Checkpoints An Extended Model with Possible Checkpoint Failures Results on Expected Job Completion Time A Weighted Sum Theorem 4 10 Xueyuan Su Efficient Fault-Tolerant Infrastructure for Cloud Computing

Outline Introduction An Idealized Model of Hadoop Hadoop Online Scheduling 1 Introduction 2 Assigning Tasks for Efficiency An Idealized Model of Hadoop Hadoop Online Scheduling 3 Checkpointing with Unreliable Checkpoints An Extended Model with Possible Checkpoint Failures Results on Expected Job Completion Time A Weighted Sum Theorem 4 11 Xueyuan Su Efficient Fault-Tolerant Infrastructure for Cloud Computing

An Idealized Model of Hadoop Hadoop Online Scheduling Quick Overview of Hadoop Infrastructure Storage: Hadoop distributed file system (HDFS). Files are split into large equal-sized blocks. Each data block is replicated on multiple servers. Job execution: Break jobs down into small independent tasks. Each task processes a single input data block. Tasks are assigned to and run on servers. Tasks assigned to the same server run sequentially. Tasks assigned to different servers run in parallel. Job completes when all tasks finish. Thus, the job completion time is the completion time of the task that finishes last. 12 Xueyuan Su Efficient Fault-Tolerant Infrastructure for Cloud Computing

Scheduling Tasks to Servers An Idealized Model of Hadoop Hadoop Online Scheduling Run task #45 Challenge: How to assign tasks to servers to simultaneously balance load and minimize cost of remote data access? 13 Xueyuan Su Efficient Fault-Tolerant Infrastructure for Cloud Computing

An Idealized Model of Hadoop Hadoop Online Scheduling Components of the Model A set of tasks and a set of data blocks. Each data block has a corresponding task to be performed on it. A set of servers that store data blocks and perform tasks. A data placement that maps a data block to the set of servers on which replicas of that block are stored. A task assignment that specifies the server on which a task will run. A cost function that determines the run time of a task on a server to which it is assigned. 14 Xueyuan Su Efficient Fault-Tolerant Infrastructure for Cloud Computing
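
To make these components concrete, here is a minimal Python sketch of the model. It is not from the talk; the class and field names and the particular cost values are illustrative assumptions, and the example placement simply matches the replica counts used in the talk's running example (a: 3, b: 1, c: 2, d: 2, e: 1).

```python
from dataclasses import dataclass

@dataclass
class HTAInstance:
    """Idealized Hadoop task assignment (HTA) instance: one task per data block."""
    servers: list          # server identifiers
    placement: dict        # block -> set of servers storing a replica of that block
    w_loc: float = 1.0     # cost of running a task locally

    def w_rem(self, num_remote_tasks: int) -> float:
        # Illustrative monotone increasing remote cost; the model only requires
        # that w_rem be nondecreasing in the number of remote tasks.
        return 3.0 + 0.5 * num_remote_tasks

    def is_local(self, block, server) -> bool:
        return server in self.placement[block]

# Example placement consistent with the replica counts in the talk's example.
instance = HTAInstance(
    servers=["s1", "s2", "s3"],
    placement={"a": {"s1", "s2", "s3"}, "b": {"s1"}, "c": {"s1", "s3"},
               "d": {"s2", "s3"}, "e": {"s1"}},
)
# A task assignment maps each block's task to the server it runs on.
assignment = {"a": "s2", "b": "s1", "c": "s1", "d": "s3", "e": "s1"}
```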

Data Placement on Servers An Idealized Model of Hadoop Hadoop Online Scheduling Every data block occurs on at least one server. (Figure: blocks a through e placed on Servers 1-3.) Replica counts: a: 3, b: 1, c: 2, d: 2, e: 1. 15 Xueyuan Su Efficient Fault-Tolerant Infrastructure for Cloud Computing

Data Placement Graph An Idealized Model of Hadoop Hadoop Online Scheduling Slide the data blocks up out of the server rectangles and connect them by lines. This leads to a bipartite graph where each replica has an edge to its corresponding server. (Figure: replica nodes a, b, c, e above Server 1; a, d above Server 2; a, c, d above Server 3.) 16 Xueyuan Su Efficient Fault-Tolerant Infrastructure for Cloud Computing

Data Placement Graph (cont.) An Idealized Model of Hadoop Hadoop Online Scheduling Merge the replica nodes with the same block label together, giving the bipartite graph called the data placement graph G_ρ. (Figure: block nodes a, b, c, d, e connected to Servers 1-3.) 17 Xueyuan Su Efficient Fault-Tolerant Infrastructure for Cloud Computing

An Idealized Model of Hadoop Hadoop Online Scheduling Assigning Tasks to Servers A task is local if it is on the same server as its corresponding data block. Its cost is w_loc. A task is remote if it is on a different server from its corresponding data block. Its cost is w_rem. We assume w_rem ≥ w_loc. (Figure: an example assignment over Servers 1-3 with # remote tasks = 3.) 18 Xueyuan Su Efficient Fault-Tolerant Infrastructure for Cloud Computing

An Idealized Model of Hadoop Hadoop Online Scheduling Remote Cost Network becomes more congested with more remote tasks. Network congestion increases the cost for remote data access. To reflect the increased communication cost, w rem is a monotone increasing function of the number of remote tasks. 19 Xueyuan Su Efficient Fault-Tolerant Infrastructure for Cloud Computing

Remote Cost (cont.) An Idealized Model of Hadoop Hadoop Online Scheduling Reassigning task c from server 3 to server 2 gives a worse result. The cost of task c becomes remote cost. Remote costs for all remote tasks increase due to the increased number of remote tasks. local task remote task c d a b e c d a b e Server 1 Server 2 Server 3 Server 1 Server 2 Server 3 # remote tasks = 3 # remote tasks = 4 20 Xueyuan Su Efficient Fault-Tolerant Infrastructure for Cloud Computing

Server Load and Maximum Load An Idealized Model of Hadoop Hadoop Online Scheduling The load L^A_s of a server s under assignment A is the sum of the costs of all tasks that A assigns to s. The maximum load L^A of assignment A is the maximum server load under A. In the figure, L^A = L^A_2, i.e., Server 2 carries the maximum load. (Figure: per-server loads L^A_1, L^A_2, L^A_3 over Servers 1-3.) 21 Xueyuan Su Efficient Fault-Tolerant Infrastructure for Cloud Computing
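
As a self-contained illustration of these definitions (the specific form of w_rem and the numeric values are assumptions, not the talk's), the loads and the maximum load can be computed as follows:

```python
def server_loads(placement, assignment, servers, w_loc=1.0):
    """Per-server load L^A_s and maximum load L^A under task assignment A.

    placement: block -> set of servers holding a replica
    assignment: block -> server its task runs on
    """
    num_remote = sum(1 for b, s in assignment.items() if s not in placement[b])
    w_rem = 3.0 + 0.5 * num_remote          # illustrative monotone remote cost
    load = {s: 0.0 for s in servers}
    for block, server in assignment.items():
        load[server] += w_loc if server in placement[block] else w_rem
    return load, max(load.values())

placement = {"a": {"s1", "s2", "s3"}, "b": {"s1"}, "c": {"s1", "s3"},
             "d": {"s2", "s3"}, "e": {"s1"}}
assignment = {"a": "s2", "b": "s1", "c": "s3", "d": "s2", "e": "s2"}   # e is remote
loads, max_load = server_loads(placement, assignment, ["s1", "s2", "s3"])
print(loads, max_load)
```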

Outline Introduction An Idealized Model of Hadoop Hadoop Online Scheduling 1 Introduction 2 Assigning Tasks for Efficiency An Idealized Model of Hadoop Hadoop Online Scheduling 3 Checkpointing with Unreliable Checkpoints An Extended Model with Possible Checkpoint Failures Results on Expected Job Completion Time A Weighted Sum Theorem 4 22 Xueyuan Su Efficient Fault-Tolerant Infrastructure for Cloud Computing

HTA Problems Introduction An Idealized Model of Hadoop Hadoop Online Scheduling Definition The HTA Optimization Problem is, given a Hadoop system, to find a task assignment that minimizes the maximum load. Definition The HTA Decision Problem is, given a Hadoop system and a server capacity k, to decide whether there exists a task assignment with maximum load at most k. 23 Xueyuan Su Efficient Fault-Tolerant Infrastructure for Cloud Computing
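
Verifying a candidate solution to the decision problem is easy; the hardness lies in finding one. A hedged sketch of the verifier (fixed illustrative costs; in the full model w_rem also depends on the number of remote tasks):

```python
def within_capacity(placement, assignment, servers, k, w_loc=1.0, w_rem=3.0):
    """Certificate check for the HTA decision problem: is the maximum load <= k?"""
    load = {s: 0.0 for s in servers}
    for block, server in assignment.items():
        load[server] += w_loc if server in placement[block] else w_rem
    return max(load.values()) <= k
```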

NP-Completeness Introduction An Idealized Model of Hadoop Hadoop Online Scheduling By reduction from 3SAT, we show that Theorem The HTA decision problem is NP-complete. Actually the HTA problem remains hard even if the server capacity is 3, the cost function only has 2 values {1, 3} in its range, and each data block has at most 2 replicas. 24 Xueyuan Su Efficient Fault-Tolerant Infrastructure for Cloud Computing

An Idealized Model of Hadoop Hadoop Online Scheduling Approximation Algorithms We present and analyze two approximation algorithms. The Round Robin algorithm computes assignments that deviate from the optimal by a factor of w_rem / w_loc. The flow-based algorithm Max-Cover-Bal-Assign computes assignments that are optimal to within an additive gap of w_rem. 25 Xueyuan Su Efficient Fault-Tolerant Infrastructure for Cloud Computing

An Idealized Model of Hadoop Hadoop Online Scheduling Outline of the Max-Cover-Bal-Assign Algorithm Given a threshold τ, the algorithm runs in two phases: Max-cover uses network flow to maximize the number of local tasks such that no single server is assigned more than τ local tasks. Bal-assign finishes the assignment by greedily assigning each unassigned task to a server with minimal load. 26 Xueyuan Su Efficient Fault-Tolerant Infrastructure for Cloud Computing
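
The sketch below illustrates the two-phase structure in Python. It is only a sketch: the real max-cover phase is a network-flow computation, replaced here by a simple greedy stand-in, and the cost values are placeholders. (As discussed a few slides later, the full algorithm tries every threshold τ and keeps the best resulting assignment.)

```python
def max_cover_bal_assign(placement, servers, tau, w_loc=1.0, w_rem=3.0):
    """Phase 1 (stubbed max-cover): place tasks locally, at most tau per server.
    Phase 2 (bal-assign): put each remaining task on a currently least-loaded server."""
    assignment = {}
    load = {s: 0.0 for s in servers}

    # Phase 1 stub: a greedy stand-in for the flow-based max-cover.
    for block, replicas in placement.items():
        for s in sorted(replicas, key=lambda r: load[r]):
            if load[s] / w_loc < tau:            # fewer than tau local tasks so far
                assignment[block] = s
                load[s] += w_loc
                break

    # Phase 2: greedily assign every still-unassigned task to a minimum-load server.
    for block in placement:
        if block not in assignment:
            s = min(servers, key=lambda r: load[r])
            assignment[block] = s
            load[s] += w_loc if s in placement[block] else w_rem
    return assignment, load
```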

Example Data Placement An Idealized Model of Hadoop Hadoop Online Scheduling a b c d e f g h i j k Server 1 Server 2 Server 3 27 Xueyuan Su Efficient Fault-Tolerant Infrastructure for Cloud Computing

Max-cover with Threshold τ = 4 An Idealized Model of Hadoop Hadoop Online Scheduling Max-cover assigns each server the maximum number of local tasks, subject to the threshold τ = 4. =4 d c b a i h k j Server 1 Server 2 Server 3 28 Xueyuan Su Efficient Fault-Tolerant Infrastructure for Cloud Computing

Bal-assign Introduction An Idealized Model of Hadoop Hadoop Online Scheduling Bal-assign finishes the task assignment by greedily assigning each unassigned task to a server with minimal load. =4 d g c b e i f k L A a h j Server 1 Server 2 Server 3 29 Xueyuan Su Efficient Fault-Tolerant Infrastructure for Cloud Computing

Optimal Assignment O An Idealized Model of Hadoop Hadoop Online Scheduling However, optimal assignment can do better... =4 d c b a k j i h g f e L O Server 1 Server 2 Server 3 30 Xueyuan Su Efficient Fault-Tolerant Infrastructure for Cloud Computing

OPT vs Flow-Based Introduction An Idealized Model of Hadoop Hadoop Online Scheduling But the difference is smaller than w rem. =4 d c b a g e i h f k j L A =4 d c b a k j i h g f e L O Server 1 Server 2 Server 3 Server 1 Server 2 Server 3 31 Xueyuan Su Efficient Fault-Tolerant Infrastructure for Cloud Computing

The Approximation Gap An Idealized Model of Hadoop Hadoop Online Scheduling By trying all possible thresholds τ and picking the best assignment, the flow-based algorithm performs surprisingly well. Theorem Let n be the number of servers. For n ≥ 2, the flow-based algorithm computes an assignment A such that L^A ≤ L^O + (1 − 1/(n − 1)) · w_rem^O. 32 Xueyuan Su Efficient Fault-Tolerant Infrastructure for Cloud Computing

Outline Introduction An Idealized Model of Hadoop Hadoop Online Scheduling 1 Introduction 2 Assigning Tasks for Efficiency An Idealized Model of Hadoop Hadoop Online Scheduling 3 Checkpointing with Unreliable Checkpoints An Extended Model with Possible Checkpoint Failures Results on Expected Job Completion Time A Weighted Sum Theorem 4 33 Xueyuan Su Efficient Fault-Tolerant Infrastructure for Cloud Computing

An Idealized Model of Hadoop Hadoop Online Scheduling Online Scheduling with Restricted Assignment My cost is 57. Servers 6 & 8 are feasible for me. Distinctions from the HTA problem: Tasks arrive sequentially in an online fashion. Tasks must be assigned to one of their feasible (local) servers. Tasks may have different costs, but a task's cost does not depend on which server it is assigned to. Goal: Assign each task to a feasible server upon its arrival so as to minimize the maximum load. 34 Xueyuan Su Efficient Fault-Tolerant Infrastructure for Cloud Computing

An Idealized Model of Hadoop Hadoop Online Scheduling The GREEDY Algorithm My cost is 57. Servers 2 & 8 are feasible for me. Server 2 Server 5 Server 8 The GREEDY algorithm: Chooses a feasible server with minimum load. Breaks ties arbitrarily. 35 Xueyuan Su Efficient Fault-Tolerant Infrastructure for Cloud Computing
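
A minimal sketch of GREEDY for this online setting (the representation of tasks as cost/feasible-server pairs is an assumption for illustration):

```python
def greedy_online(task_stream, num_servers):
    """Online GREEDY for restricted assignment: each arriving task goes to a
    feasible server of currently minimum load (ties broken by lowest index)."""
    load = [0.0] * num_servers
    schedule = []
    for cost, feasible_servers in task_stream:     # tasks revealed one at a time
        s = min(feasible_servers, key=lambda i: load[i])
        load[s] += cost
        schedule.append(s)
    return schedule, max(load)

# A task of cost 57 that may only run on servers 2 or 8 (0-indexed), followed by others.
print(greedy_online([(57, [2, 8]), (40, [8]), (30, [2, 5])], num_servers=9))
```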

An Idealized Model of Hadoop Hadoop Online Scheduling The GREEDY Algorithm (cont.) Server 2 has load 213 Server 8 has load 198 Server 2 Server 5 Server 8 The GREEDY algorithm: Chooses a feasible server with minimum load. Breaks ties arbitrarily. 36 Xueyuan Su Efficient Fault-Tolerant Infrastructure for Cloud Computing

An Idealized Model of Hadoop Hadoop Online Scheduling The GREEDY Algorithm (cont.) Go to server 8 Server 2 Server 5 Server 8 The GREEDY algorithm: Chooses a feasible server with minimum load. Breaks ties arbitrarily. 37 Xueyuan Su Efficient Fault-Tolerant Infrastructure for Cloud Computing

An Idealized Model of Hadoop Hadoop Online Scheduling Performance Measure and Our Approach Assume an optimal offline algorithm OPT knows the task sequence in advance and has unbounded computational power. The competitive ratio is the maximum load ratio between GREEDY and OPT under all possible inputs. Theorem (Upper bound on competitive ratio of GREEDY) For any problem instance σ, let n be the number of servers, and let A_σ and O_σ be the assignments computed by GREEDY and OPT, respectively. Then L^{A_σ} / L^{O_σ} ≤ log n + 1 − δ(σ), where 0 ≤ δ(σ) ≤ log n. Moreover, δ(σ) = 0 if and only if O_σ perfectly balances load over all servers and L^{A_σ} is a multiple of L^{O_σ}. 38 Xueyuan Su Efficient Fault-Tolerant Infrastructure for Cloud Computing

An Idealized Model of Hadoop Hadoop Online Scheduling Mathematical Structures for Competitive Analysis The proof of the competitive ratio upper bound makes use of two novel structures. Witness graphs. Token Production Systems (TPSs). 39 Xueyuan Su Efficient Fault-Tolerant Infrastructure for Cloud Computing

An Idealized Model of Hadoop Hadoop Online Scheduling Witness Graphs Witness graphs are ordered weighted multidigraphs. Each witness graph embeds two assignments and allows them to be compared in the same framework. (Figure: an example witness graph on servers s1, s2 and tasks t1-t4.) 40 Xueyuan Su Efficient Fault-Tolerant Infrastructure for Cloud Computing

An Idealized Model of Hadoop Hadoop Online Scheduling Witness Graph from Assignments Step 1: Add directions on edges. Step 2: Merge task vertices. t4 t4 t4 t3 s2 t3 s2 s2 t3 s2 t2 s1 t2 s1 s1 t2 s1 t1 t1 t1 41 Xueyuan Su Efficient Fault-Tolerant Infrastructure for Cloud Computing

An Idealized Model of Hadoop Hadoop Online Scheduling Witness Graph from Assignments (cont.) Step 3: Eliminate task vertices. Step 4: Merge server vertices. Step 5: Add edge weights and ordering. 2 4 s2 s2 s2 s2 2 2 1 3 1 1 s1 s1 s1 s1 42 Xueyuan Su Efficient Fault-Tolerant Infrastructure for Cloud Computing

An Idealized Model of Hadoop Hadoop Online Scheduling Witness Graph from Assignments (cont.) Each server is represented by a vertex, and each task is represented by an edge. Server load under Assignment 1 equals incoming edge weight. Server load under Assignment 2 equals outgoing edge weight. The total task load equals the total edge weight. t4 t4 2 4 t3 s2 t3 s2 s2 t2 s1 t2 s1 2 2 1 3 1 1 t1 t1 s1 43 Xueyuan Su Efficient Fault-Tolerant Infrastructure for Cloud Computing

An Idealized Model of Hadoop Hadoop Online Scheduling Token Production Systems A token production system consists of a list of buckets and some number of tokens. A set of vectors called actions governs the production and consumption of tokens in buckets. Each action has a cost. A sequence of actions forms an activity, and its cost is the sum of the action costs. (Figure: buckets of tokens and an example action (0, -2, -1, -3, 0, 0, 6, 0).) 44 Xueyuan Su Efficient Fault-Tolerant Infrastructure for Cloud Computing

An Idealized Model of Hadoop Hadoop Online Scheduling Token Production Systems (cont.) A witness graph can be embedded into a token production system. Each vertex is represented by an action. The maximum weight of incoming edges over all vertices equals the number of buckets. The total edge weight equals the cost of an activity. 45 Xueyuan Su Efficient Fault-Tolerant Infrastructure for Cloud Computing

Outline Introduction An Extended Model with Possible Checkpoint Failures Results on Expected Job Completion Time A Weighted Sum Theorem 1 Introduction 2 Assigning Tasks for Efficiency An Idealized Model of Hadoop Hadoop Online Scheduling 3 Checkpointing with Unreliable Checkpoints An Extended Model with Possible Checkpoint Failures Results on Expected Job Completion Time A Weighted Sum Theorem 4 46 Xueyuan Su Efficient Fault-Tolerant Infrastructure for Cloud Computing

Outline Introduction An Extended Model with Possible Checkpoint Failures Results on Expected Job Completion Time A Weighted Sum Theorem 1 Introduction 2 Assigning Tasks for Efficiency An Idealized Model of Hadoop Hadoop Online Scheduling 3 Checkpointing with Unreliable Checkpoints An Extended Model with Possible Checkpoint Failures Results on Expected Job Completion Time A Weighted Sum Theorem 4 47 Xueyuan Su Efficient Fault-Tolerant Infrastructure for Cloud Computing

Long-Running Jobs Affected By Failures An Extended Model with Possible Checkpoint Failures Results on Expected Job Completion Time A Weighted Sum Theorem 50% 0% :( Hardware failures are the norm, not the exception, in large systems. The chance of job completion without any failures becomes exponentially small with increasing job processing time. Assume a job execution fails. After faulty parts are fixed or replaced, jobs must be restarted and lost data must be recomputed, incurring substantial overhead. 48 Xueyuan Su Efficient Fault-Tolerant Infrastructure for Cloud Computing

An Extended Model with Possible Checkpoint Failures Results on Expected Job Completion Time A Weighted Sum Theorem Checkpoints Avoid Restart Overhead 50% :) 25% Checkpoint: Save the job state from time to time. Rollback: When a failure is detected, a rollback operation restores the job state to one previously saved in a checkpoint, and recomputation begins from there. This avoids always restarting the job from the beginning. 49 Xueyuan Su Efficient Fault-Tolerant Infrastructure for Cloud Computing

An Extended Model with Possible Checkpoint Failures Results on Expected Job Completion Time A Weighted Sum Theorem However, Checkpoints Might Also Fail 50% 0% :( Checkpoints may fail and their failure probability can be significant. Faster job completion can sometimes be achieved by using less reliable but cheaper checkpoints. 50 Xueyuan Su Efficient Fault-Tolerant Infrastructure for Cloud Computing

Impact of Checkpoint Failures An Extended Model with Possible Checkpoint Failures Results on Expected Job Completion Time A Weighted Sum Theorem Challenges: How to evaluate the impact of checkpoint failures on job completion time? How to place checkpoints to minimize the expected job completion time. 51 Xueyuan Su Efficient Fault-Tolerant Infrastructure for Cloud Computing

An Extended Model with Possible Checkpoint Failures Results on Expected Job Completion Time A Weighted Sum Theorem Job Execution with Unreliable Checkpoints A set of tasks t_1, t_2, ..., t_n to be executed sequentially. Each run of t_i succeeds with positive probability p_i. Upon success, the output of t_i is stored into checkpoint c_{i+1}. When a run of t_i fails, recovery from checkpoint c_i is attempted. With probability q_i, the recovery succeeds and the job is resumed from there. If the access to c_i fails, access to the previous checkpoint c_{i-1} is attempted, and so on. Restarting the job from the beginning is always possible. 52 Xueyuan Su Efficient Fault-Tolerant Infrastructure for Cloud Computing
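
To make the execution model concrete, here is a hedged Monte Carlo sketch of one job execution under the discrete model. It treats the job start as an always-readable checkpoint (q[0] = 1), which is one way to model "restarting from the beginning is always possible"; all parameter values are illustrative.

```python
import random

def run_job(p, q, rng=random):
    """Simulate one job execution; return the total number of task runs attempted.

    p[i]: success probability of task i (0-indexed).
    q[i]: probability that the checkpoint holding task i's input can be read;
          q[0] = 1.0 models the always-available job start.
    """
    n, runs, i = len(p), 0, 0            # i = index of the next task to run
    while i < n:
        runs += 1
        if rng.random() < p[i]:          # task i succeeds; its output is checkpointed
            i += 1
        else:                            # task i fails; walk back over the checkpoints
            j = i
            while j > 0 and rng.random() >= q[j]:
                j -= 1                   # this checkpoint failed, try the previous one
            i = j                        # resume from the most recent readable checkpoint
    return runs

random.seed(0)
p = [0.9] * 20
q = [1.0] + [0.8] * 19
samples = [run_job(p, q) for _ in range(10_000)]
print(sum(samples) / len(samples))       # empirical expected number of task runs
```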

An Extended Model with Possible Checkpoint Failures Results on Expected Job Completion Time A Weighted Sum Theorem Illustration of a Job Execution When a task fails, accesses to checkpoints are attempted. When both checkpoints fail, the job is restarted from the beginning. 53 Xueyuan Su Efficient Fault-Tolerant Infrastructure for Cloud Computing

An Extended Model with Possible Checkpoint Failures Results on Expected Job Completion Time A Weighted Sum Theorem Illustration of a Job Execution (cont.) When another task fails, accesses to checkpoints are attempted. In this example, the access to the most recent checkpoint fails, but the second most recent one succeeds. Recovery resumes from there. 54 Xueyuan Su Efficient Fault-Tolerant Infrastructure for Cloud Computing

Illustration of a Job Execution (cont.) An Extended Model with Possible Checkpoint Failures Results on Expected Job Completion Time A Weighted Sum Theorem Finally the job successfully completes. 55 Xueyuan Su Efficient Fault-Tolerant Infrastructure for Cloud Computing

Two Placement Models An Extended Model with Possible Checkpoint Failures Results on Expected Job Completion Time A Weighted Sum Theorem The discrete (modular program) model: Task lengths are pre-determined. Checkpoints can only be placed at task boundaries. Task success probability may be independent of task length. A task failure is detected at the end of the task run. The continuous model: Checkpoints can be placed at arbitrary points in the job. Task lengths are determined by distance between adjacent checkpoints. Task failures follow the Poisson distribution. Task success probability depends on task length and failure rate. A task failure is detected immediately after its occurrence. 56 Xueyuan Su Efficient Fault-Tolerant Infrastructure for Cloud Computing
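
For the continuous model, the dependence of task success probability on task length under Poisson failures is the standard one; as a hedged note (the talk states the dependence but not the formula), it reads:

```latex
% A segment of length w, with failures arriving as a Poisson process of rate \lambda,
% completes without any failure (i.e., the task succeeds) with probability
p(w) \;=\; \Pr[\text{no failure in an interval of length } w] \;=\; e^{-\lambda w}.
```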

Job Execution as a Markov Chain An Extended Model with Possible Checkpoint Failures Results on Expected Job Completion Time A Weighted Sum Theorem For each 1 ≤ i ≤ n, state t_i represents the start of task t_i, and state c_i represents the attempted access to checkpoint c_i. For convenience, state t_{n+1} represents job completion. A job execution is a random walk from state t_1 to state t_{n+1}. (Figure: from t_i the walk moves to t_{i+1} with probability p_i and to c_i with probability 1 − p_i; from c_i it returns to t_i with probability q_i and drops to c_{i-1} with probability 1 − q_i.) 57 Xueyuan Su Efficient Fault-Tolerant Infrastructure for Cloud Computing

Outline Introduction An Extended Model with Possible Checkpoint Failures Results on Expected Job Completion Time A Weighted Sum Theorem 1 Introduction 2 Assigning Tasks for Efficiency An Idealized Model of Hadoop Hadoop Online Scheduling 3 Checkpointing with Unreliable Checkpoints An Extended Model with Possible Checkpoint Failures Results on Expected Job Completion Time A Weighted Sum Theorem 4 58 Xueyuan Su Efficient Fault-Tolerant Infrastructure for Cloud Computing

Number of Successful Task Runs An Extended Model with Possible Checkpoint Failures Results on Expected Job Completion Time A Weighted Sum Theorem Let S_i be the number of successful runs of task t_i during a job execution. The sample space consists of all possible job executions. Then the expectation of S_i satisfies the recurrence E[S_n] = 1 and E[S_{i-1}] = ((1 − q_i) / p_i) E[S_i] + q_i for all i > 1. 59 Xueyuan Su Efficient Fault-Tolerant Infrastructure for Cloud Computing
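
The recurrence is easy to evaluate with a backward sweep. The sketch below (parameter values are illustrative) also sums E[S_i]/p_i, the expected total number of runs of task t_i, as a rough proxy for the total work.

```python
def expected_successful_runs(p, q):
    """E[S_i] for i = 1..n from E[S_n] = 1 and
    E[S_{i-1}] = ((1 - q_i) / p_i) * E[S_i] + q_i (arrays are 0-indexed: p[i-1] = p_i)."""
    n = len(p)
    S = [0.0] * (n + 1)          # S[i] holds E[S_i]; S[0] is unused
    S[n] = 1.0
    for i in range(n, 1, -1):
        S[i - 1] = (1 - q[i - 1]) / p[i - 1] * S[i] + q[i - 1]
    return S[1:]

p = [0.9] * 20
q = [0.8] * 20
S = expected_successful_runs(p, q)
expected_total_runs = sum(s / pi for s, pi in zip(S, p))   # each success takes 1/p_i runs on average
print(S[0], expected_total_runs)
```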

An Extended Model with Possible Checkpoint Failures Results on Expected Job Completion Time A Weighted Sum Theorem Number of Successful Task Runs (cont.) Extreme values of q in E[S_{i-1}] = ((1 − q_i) / p_i) E[S_i] + q_i: If q_i = 0, then E[S_{i-1}] = E[S_i] / p_i (exponential growth). This is the case without any checkpoints. If q_i = 1, then E[S_{i-1}] = 1 (constant). This is the case with totally reliable checkpoints. 60 Xueyuan Su Efficient Fault-Tolerant Infrastructure for Cloud Computing

An Extended Model with Possible Checkpoint Failures Results on Expected Job Completion Time A Weighted Sum Theorem Growth Rate of Expected Completion Time Consider the uniform case where p_i = p and q_i = q for all i. The quantity p + q is critical in determining the growth rate of the expected job completion time as a function of the job length n. Let r = (1 − q)/p. Then T = (q / (p(1 − r))) n + Θ(1) = Θ(n) if p + q > 1; T = (q / (2p)) n^2 + ((2 − q) / (2p)) n = Θ(n^2) if p + q = 1; and T = (1 / (p(r − 1))) r^n + (q / (p(1 − r))) n + Θ(1) = Θ(r^n) if p + q < 1. System designers should configure checkpoints such that p + q > 1. 61 Xueyuan Su Efficient Fault-Tolerant Infrastructure for Cloud Computing

An Extended Model with Possible Checkpoint Failures Results on Expected Job Completion Time A Weighted Sum Theorem Checkpoint Retention Policy Sometimes the storage for checkpoints is limited. Assume at most k checkpoints can be stored at the same time. Consider the uniform case where p i = p and q i = q for all i such that p + q > 1. A simple retention policy keeps only the most recent k checkpoints and discards older ones. When k = Ω(log n), the expected job completion time is T = Θ(n). The penalty is only a constant factor. 62 Xueyuan Su Efficient Fault-Tolerant Infrastructure for Cloud Computing

An Extended Model with Possible Checkpoint Failures Results on Expected Job Completion Time A Weighted Sum Theorem First Order Approximation to Optimum Interval In many systems, it is reasonable to make checkpoints at regular intervals, e.g., once an hour. One might want to determine the optimum checkpoint interval that minimizes expected job completion time. Let q be the checkpoint reliability, g the cost of generating a checkpoint, and M the mean time between failures. When g ≪ M, a first order approximation to the optimum checkpoint interval is w = sqrt(2gM · q / (2 − q)). This expression generalizes the result w = sqrt(2gM) in the literature for totally reliable checkpoints. 63 Xueyuan Su Efficient Fault-Tolerant Infrastructure for Cloud Computing
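
Written out as code (using the reconstruction of the interval formula above; the g, M, and q values below are arbitrary placeholders):

```python
from math import sqrt

def optimum_interval(g, M, q):
    """First-order approximation to the optimum checkpoint interval,
    w = sqrt(2*g*M * q / (2 - q)), which reduces to the classical sqrt(2*g*M) at q = 1."""
    return sqrt(2 * g * M * q / (2 - q))

print(optimum_interval(g=0.1, M=100.0, q=1.0))   # ~= sqrt(2*g*M), the reliable-checkpoint case
print(optimum_interval(g=0.1, M=100.0, q=0.8))   # a shorter interval when checkpoints are less reliable
```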

Two Symmetry Properties An Extended Model with Possible Checkpoint Failures Results on Expected Job Completion Time A Weighted Sum Theorem Faster expected job completion can be achieved with the freedom to place checkpoints at non-equidistant points. Represent the task lengths by a vector w = (w_1, ..., w_n) and let w^R = (w_n, ..., w_1) be its reversal. Two symmetry properties: T(w) = T(w^R), and T(w) ≥ T((w + w^R)/2). Both symmetry properties are proved by induction and multiple applications of a weighted sum theorem. 64 Xueyuan Su Efficient Fault-Tolerant Infrastructure for Cloud Computing

Outline Introduction An Extended Model with Possible Checkpoint Failures Results on Expected Job Completion Time A Weighted Sum Theorem 1 Introduction 2 Assigning Tasks for Efficiency An Idealized Model of Hadoop Hadoop Online Scheduling 3 Checkpointing with Unreliable Checkpoints An Extended Model with Possible Checkpoint Failures Results on Expected Job Completion Time A Weighted Sum Theorem 4 65 Xueyuan Su Efficient Fault-Tolerant Infrastructure for Cloud Computing

Concatenation of Jobs An Extended Model with Possible Checkpoint Failures Results on Expected Job Completion Time A Weighted Sum Theorem (Figure: two jobs, each with tasks t_1, ..., t_7 and checkpoints c_2, ..., c_7, joined by a connecting checkpoint.) Concatenating two jobs with a checkpoint results in a larger job. 66 Xueyuan Su Efficient Fault-Tolerant Infrastructure for Cloud Computing

An Extended Model with Possible Checkpoint Failures Results on Expected Job Completion Time A Weighted Sum Theorem Concatenation of Jobs (cont.) Theorem (Weighted Sum) Let T(q) be the expected completion time of the concatenation of two jobs with a checkpoint of reliability q. Then T(q) satisfies T(q) = q T(1) + (1 − q) T(0). In other words, T(q) is the weighted sum of the two extreme cases where checkpoints are either totally reliable or totally unreliable. 67 Xueyuan Su Efficient Fault-Tolerant Infrastructure for Cloud Computing

An Extended Model with Possible Checkpoint Failures Results on Expected Job Completion Time A Weighted Sum Theorem Proof of the Weighted Sum Theorem For job i, define W_i: expected completion time (left in, right out); W_i^R: expected reverse completion time (right in, right out); p_i: success probability. 68 Xueyuan Su Efficient Fault-Tolerant Infrastructure for Cloud Computing

An Extended Model with Possible Checkpoint Failures Results on Expected Job Completion Time A Weighted Sum Theorem Proof of the Weighted Sum Theorem (cont.) T(q) = W_1 + W_2 + (1 − q)(1/p_2 − 1) W_1^R. W_1 + W_2: both jobs must complete. (1 − q)(1/p_2 − 1): on average, job 2 fails (1/p_2 − 1) times; each time job 2 fails, access to the connecting checkpoint is attempted, and each access fails with probability (1 − q). W_1^R: each time the connecting checkpoint fails, job 1 must reverse complete. 69 Xueyuan Su Efficient Fault-Tolerant Infrastructure for Cloud Computing
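
The weighted sum identity can be sanity-checked numerically from the expression above (W_1, W_2, W_1^R, and p_2 below are arbitrary placeholder values):

```python
def T(q, W1, W2, W1R, p2):
    """Expected completion time of the concatenation, per the expression above."""
    return W1 + W2 + (1 - q) * (1 / p2 - 1) * W1R

W1, W2, W1R, p2 = 5.0, 7.0, 4.0, 0.6
for q in (0.0, 0.3, 0.7, 1.0):
    lhs = T(q, W1, W2, W1R, p2)
    rhs = q * T(1, W1, W2, W1R, p2) + (1 - q) * T(0, W1, W2, W1R, p2)
    assert abs(lhs - rhs) < 1e-12
print("T(q) = q*T(1) + (1-q)*T(0) holds for the sampled q values")
```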

An Extended Model with Possible Checkpoint Failures Results on Expected Job Completion Time A Weighted Sum Theorem Proof of the Weighted Sum Theorem (cont.) Recall T(q) = W_1 + W_2 + (1 − q)(1/p_2 − 1) W_1^R. Plugging q = 1 and q = 0 into this equation gives q T(1) + (1 − q) T(0) = q (W_1 + W_2) + (1 − q)(W_1 + W_2 + (1/p_2 − 1) W_1^R) = W_1 + W_2 + (1 − q)(1/p_2 − 1) W_1^R = T(q). 70 Xueyuan Su Efficient Fault-Tolerant Infrastructure for Cloud Computing

Outline Introduction 1 Introduction 2 Assigning Tasks for Efficiency An Idealized Model of Hadoop Hadoop Online Scheduling 3 Checkpointing with Unreliable Checkpoints An Extended Model with Possible Checkpoint Failures Results on Expected Job Completion Time A Weighted Sum Theorem 4 71 Xueyuan Su Efficient Fault-Tolerant Infrastructure for Cloud Computing

Efficiency The job just completed. :) A model for studying the problem of assigning tasks to servers so as to minimize job completion time. NP-completeness result for the HTA problem. Good approximation algorithms for task assignment. New competitive ratio bound for online GREEDY. 72 Xueyuan Su Efficient Fault-Tolerant Infrastructure for Cloud Computing

Fault Tolerance Introduction Recovered from the failure. :) A model that includes checkpoint reliability in evaluating job completion time. p + q determines the growth rate of expected completion time. Logarithmic checkpoint retention with constant factor penalty. First order approximation to optimum checkpoint interval. Two symmetry properties for non-equidistant placements. 73 Xueyuan Su Efficient Fault-Tolerant Infrastructure for Cloud Computing

Sidenotes on Non-Theoretical Work Google: Chunk Allocation Algorithms for Colossus (the 2nd gen. of Google File System). Do No Evil. No publication! ORACLE R : Oracle In-Database Hadoop: When MapReduce Meets RDBMS. In SIGMOD 2012 & a feature in Oracle Database 12c. 74 Xueyuan Su Efficient Fault-Tolerant Infrastructure for Cloud Computing

Before Finishing My Talk... Acknowledgements 75 Xueyuan Su Efficient Fault-Tolerant Infrastructure for Cloud Computing

Hopeless Little X. Introduction P vs. NP? Theory? Optimism + Persistence + Preciseness 76 Xueyuan Su Efficient Fault-Tolerant Infrastructure for Cloud Computing

The Wise Mike Introduction P vs. NP? Theory? Optimism + Persistence + Preciseness 77 Xueyuan Su Efficient Fault-Tolerant Infrastructure for Cloud Computing

Inspiring Committee & Friends P vs. NP? Theory? Optimism + Persistence + Preciseness 78 Xueyuan Su Efficient Fault-Tolerant Infrastructure for Cloud Computing

Supportive Family Introduction P vs. NP? Theory? Optimism + Persistence + Preciseness 79 Xueyuan Su Efficient Fault-Tolerant Infrastructure for Cloud Computing

The End Introduction P vs. NP? Theory? Optimism + Persistence + Preciseness Thank You! 80 Xueyuan Su Efficient Fault-Tolerant Infrastructure for Cloud Computing

Proof Structure for HTA NP-Completeness A Weaker Approximation Bound Recurrence Relation for Successful Task Runs Proof of the First Symmetry Proof Sketch of the Second Symmetry Backup Slides Backup Slides 5 Proof Structure for HTA NP-Completeness 6 A Weaker Approximation Bound 7 Recurrence Relation for Successful Task Runs 8 Proof of the First Symmetry 9 Proof Sketch of the Second Symmetry 81 Xueyuan Su Efficient Fault-Tolerant Infrastructure for Cloud Computing

Outline Proof Structure for HTA NP-Completeness A Weaker Approximation Bound Recurrence Relation for Successful Task Runs Proof of the First Symmetry Proof Sketch of the Second Symmetry 5 Proof Structure for HTA NP-Completeness 6 A Weaker Approximation Bound 7 Recurrence Relation for Successful Task Runs 8 Proof of the First Symmetry 9 Proof Sketch of the Second Symmetry 82 Xueyuan Su Efficient Fault-Tolerant Infrastructure for Cloud Computing

Proof Structure for HTA NP-Completeness A Weaker Approximation Bound Recurrence Relation for Successful Task Runs Proof of the First Symmetry Proof Sketch of the Second Symmetry Proof Method Given a 3CNF formula, we construct an instance of the HTA decision problem such that the local cost is 1, the remote cost is 3, the server capacity is 3. The resulting HTA instance has a task assignment of maximum load at most 3 if and only if the 3CNF formula is satisfiable. Note that a server can be assigned at most 3 local tasks or 1 remote task. 83 Xueyuan Su Efficient Fault-Tolerant Infrastructure for Cloud Computing

Proof Structure for HTA NP-Completeness A Weaker Approximation Bound Recurrence Relation for Successful Task Runs Proof of the First Symmetry Proof Sketch of the Second Symmetry A Piece of the Structure Let C_u = (l_u1 ∨ l_u2 ∨ l_u3) be a clause in the 3CNF formula, where each l_uv is a literal. Construct a corresponding clause gadget containing 1 clause server C_u, 3 literal tasks l_u1, l_u2, l_u3, and 1 auxiliary task a_u. The server can only accept 3 local tasks, and thus 1 of the tasks needs to be assigned elsewhere. 84 Xueyuan Su Efficient Fault-Tolerant Infrastructure for Cloud Computing

Proof Structure for HTA NP-Completeness A Weaker Approximation Bound Recurrence Relation for Successful Task Runs Proof of the First Symmetry Proof Sketch of the Second Symmetry Intuition Behind the Structure There exists a feasible task assignment such that: The auxiliary task is assigned locally to its corresponding clause server. All false literal tasks are assigned locally to their corresponding clause servers. Therefore, some literal task must be assigned elsewhere. This will imply that the corresponding literal of the 3CNF formula is true. Please refer to my thesis for more details... 85 Xueyuan Su Efficient Fault-Tolerant Infrastructure for Cloud Computing

Outline Proof Structure for HTA NP-Completeness A Weaker Approximation Bound Recurrence Relation for Successful Task Runs Proof of the First Symmetry Proof Sketch of the Second Symmetry 5 Proof Structure for HTA NP-Completeness 6 A Weaker Approximation Bound 7 Recurrence Relation for Successful Task Runs 8 Proof of the First Symmetry 9 Proof Sketch of the Second Symmetry 86 Xueyuan Su Efficient Fault-Tolerant Infrastructure for Cloud Computing

Proof Structure for HTA NP-Completeness A Weaker Approximation Bound Recurrence Relation for Successful Task Runs Proof of the First Symmetry Proof Sketch of the Second Symmetry A Weaker Approximation Bound Lemma Let n be the number of servers. Then L^A ≤ L^O + (1 − 1/n) w_rem^O. Proof intuition: Max-cover guarantees no fewer local tasks than O. Bal-assign balances the remaining remote tasks. 87 Xueyuan Su Efficient Fault-Tolerant Infrastructure for Cloud Computing

Proof Structure for HTA NP-Completeness A Weaker Approximation Bound Recurrence Relation for Successful Task Runs Proof of the First Symmetry Proof Sketch of the Second Symmetry Proof of the Approximation Bound Let K^A and K^O be the total number of local tasks under assignments A and O, respectively. Max-cover uses network flow to maximize the number of local tasks with respect to the threshold τ. When τ_A = τ_O, we have K^A ≥ K^O. (Figure: the assignments A and O from the earlier example, both with τ = 4.) 88 Xueyuan Su Efficient Fault-Tolerant Infrastructure for Cloud Computing

Proof Structure for HTA NP-Completeness A Weaker Approximation Bound Recurrence Relation for Successful Task Runs Proof of the First Symmetry Proof Sketch of the Second Symmetry Proof of the Approximation Bound (cont.) The inequality K^A ≥ K^O implies that A assigns no more remote tasks than O. By the monotonicity of w_rem, we have w_rem^A ≤ w_rem^O. Let H^A and H^O be the total load under assignments A and O, respectively. It is straightforward that H^A ≤ H^O. 89 Xueyuan Su Efficient Fault-Tolerant Infrastructure for Cloud Computing

Proof Structure for HTA NP-Completeness A Weaker Approximation Bound Recurrence Relation for Successful Task Runs Proof of the First Symmetry Proof Sketch of the Second Symmetry Proof of the Approximation Bound (cont.) The maximum load cannot be smaller than the average. Thus, L^O ≥ H^O / n, which implies H^O ≤ n · L^O. 90 Xueyuan Su Efficient Fault-Tolerant Infrastructure for Cloud Computing

Proof Structure for HTA NP-Completeness A Weaker Approximation Bound Recurrence Relation for Successful Task Runs Proof of the First Symmetry Proof Sketch of the Second Symmetry Proof of the Approximation Bound (cont.) Consider the server with maximum load (server 2) and the last task assigned to it (task g). If g is assigned by max-cover, then L^A = L^O = τ · w_loc. If g is assigned by the greedy bal-assign, then just before g is assigned every server s satisfies L^A_s ≥ L^A − w_rem^A, so H^A ≥ (n − 1)(L^A − w_rem^A) + L^A. 91 Xueyuan Su Efficient Fault-Tolerant Infrastructure for Cloud Computing

Proof Structure for HTA NP-Completeness A Weaker Approximation Bound Recurrence Relation for Successful Task Runs Proof of the First Symmetry Proof Sketch of the Second Symmetry Proof of the Approximation Bound (cont.) Combining the four inequalities completes the proof: w_rem^A ≤ w_rem^O, H^A ≤ H^O, H^O ≤ n · L^O, and H^A ≥ (n − 1)(L^A − w_rem^A) + L^A together give L^A ≤ L^O + (1 − 1/n) w_rem^O. 92 Xueyuan Su Efficient Fault-Tolerant Infrastructure for Cloud Computing

Outline Proof Structure for HTA NP-Completeness A Weaker Approximation Bound Recurrence Relation for Successful Task Runs Proof of the First Symmetry Proof Sketch of the Second Symmetry 5 Proof Structure for HTA NP-Completeness 6 A Weaker Approximation Bound 7 Recurrence Relation for Successful Task Runs 8 Proof of the First Symmetry 9 Proof Sketch of the Second Symmetry 93 Xueyuan Su Efficient Fault-Tolerant Infrastructure for Cloud Computing

Proof Structure for HTA NP-Completeness A Weaker Approximation Bound Recurrence Relation for Successful Task Runs Proof of the First Symmetry Proof Sketch of the Second Symmetry Successful Task Runs vs. Checkpoint Failures We prove this theorem by counting arguments. E[S_n] = 1, as the job completes once task t_n succeeds. For 2 ≤ i ≤ n + 1, consider task t_{i-1}: t_{i-1} succeeds at least once, and each time checkpoint c_i fails, task t_{i-1} needs to be successfully re-executed to restore its output. Let D_i be the number of times that checkpoint c_i fails. Then E[S_{i-1}] = 1 + E[D_i]. 94 Xueyuan Su Efficient Fault-Tolerant Infrastructure for Cloud Computing

Proof Structure for HTA NP-Completeness A Weaker Approximation Bound Recurrence Relation for Successful Task Runs Proof of the First Symmetry Proof Sketch of the Second Symmetry Checkpoint Failures vs. Failed Task Runs Consider checkpoint c_i. Each time task t_i fails or checkpoint c_{i+1} fails, an access to checkpoint c_i is attempted, and each attempted access fails with probability (1 − q_i). Let F_i be the number of times task t_i fails. Then E[D_i] = (1 − q_i)(E[F_i] + E[D_{i+1}]). 95 Xueyuan Su Efficient Fault-Tolerant Infrastructure for Cloud Computing

Proof Structure for HTA NP-Completeness A Weaker Approximation Bound Recurrence Relation for Successful Task Runs Proof of the First Symmetry Proof Sketch of the Second Symmetry Failed Task Runs vs. Successful Task Runs Task runs are repeated Bernoulli trials, and thus E[F_i] = (1/p_i − 1) E[S_i]. 96 Xueyuan Su Efficient Fault-Tolerant Infrastructure for Cloud Computing

Proof Structure for HTA NP-Completeness A Weaker Approximation Bound Recurrence Relation for Successful Task Runs Proof of the First Symmetry Proof Sketch of the Second Symmetry Recurrence of Successful Task Runs Combining the three equalities gives E[S_{i-1}] = 1 + E[D_i] = 1 + (1 − q_i)(E[F_i] + E[D_{i+1}]) = 1 + (1 − q_i)((1/p_i − 1) E[S_i] + E[S_i] − 1) = ((1 − q_i)/p_i) E[S_i] + q_i. 97 Xueyuan Su Efficient Fault-Tolerant Infrastructure for Cloud Computing

Outline Proof Structure for HTA NP-Completeness A Weaker Approximation Bound Recurrence Relation for Successful Task Runs Proof of the First Symmetry Proof Sketch of the Second Symmetry 5 Proof Structure for HTA NP-Completeness 6 A Weaker Approximation Bound 7 Recurrence Relation for Successful Task Runs 8 Proof of the First Symmetry 9 Proof Sketch of the Second Symmetry 98 Xueyuan Su Efficient Fault-Tolerant Infrastructure for Cloud Computing

Proof Structure for HTA NP-Completeness A Weaker Approximation Bound Recurrence Relation for Successful Task Runs Proof of the First Symmetry Proof Sketch of the Second Symmetry Proof of the First Symmetry Inductive proof: The base case with l = 1 task is trivially true. Assume the claim holds for l = n − 1 tasks. Now consider the inductive step with l = n tasks. 99 Xueyuan Su Efficient Fault-Tolerant Infrastructure for Cloud Computing

Proof Structure for HTA NP-Completeness A Weaker Approximation Bound Recurrence Relation for Successful Task Runs Proof of the First Symmetry Proof Sketch of the Second Symmetry Proof of the First Symmetry (cont.) Apply the weighted sum theorem to the checkpoint after the first task in (w_1, ..., w_n): T_n(w_1, ..., w_n) = q [T_1(w_1) + T_{n-1}(w_2, ..., w_n)] + (1 − q) T_{n-1}(w_1 + w_2, w_3, ..., w_n). 100 Xueyuan Su Efficient Fault-Tolerant Infrastructure for Cloud Computing

Proof Structure for HTA NP-Completeness A Weaker Approximation Bound Recurrence Relation for Successful Task Runs Proof of the First Symmetry Proof Sketch of the Second Symmetry Proof of the First Symmetry (cont.) Apply the weighted sum theorem to the checkpoint before the last task in (w_n, ..., w_1): T_n(w_n, ..., w_1) = q [T_{n-1}(w_n, ..., w_2) + T_1(w_1)] + (1 − q) T_{n-1}(w_n, ..., w_3, w_2 + w_1). 101 Xueyuan Su Efficient Fault-Tolerant Infrastructure for Cloud Computing

Proof Structure for HTA NP-Completeness A Weaker Approximation Bound Recurrence Relation for Successful Task Runs Proof of the First Symmetry Proof Sketch of the Second Symmetry Proof of the First Symmetry (cont.) By the induction hypothesis, T_{n-1}(w_2, ..., w_n) = T_{n-1}(w_n, ..., w_2) and T_{n-1}(w_1 + w_2, w_3, ..., w_n) = T_{n-1}(w_n, ..., w_3, w_2 + w_1). 102 Xueyuan Su Efficient Fault-Tolerant Infrastructure for Cloud Computing

Proof Structure for HTA NP-Completeness A Weaker Approximation Bound Recurrence Relation for Successful Task Runs Proof of the First Symmetry Proof Sketch of the Second Symmetry Proof of the First Symmetry (cont.) Combining the four equalities completes the proof: T_n(w_1, ..., w_n) = q [T_1(w_1) + T_{n-1}(w_2, ..., w_n)] + (1 − q) T_{n-1}(w_1 + w_2, w_3, ..., w_n) = q [T_1(w_1) + T_{n-1}(w_n, ..., w_2)] + (1 − q) T_{n-1}(w_n, ..., w_3, w_2 + w_1) = T_n(w_n, ..., w_1). 103 Xueyuan Su Efficient Fault-Tolerant Infrastructure for Cloud Computing

Outline Proof Structure for HTA NP-Completeness A Weaker Approximation Bound Recurrence Relation for Successful Task Runs Proof of the First Symmetry Proof Sketch of the Second Symmetry 5 Proof Structure for HTA NP-Completeness 6 A Weaker Approximation Bound 7 Recurrence Relation for Successful Task Runs 8 Proof of the First Symmetry 9 Proof Sketch of the Second Symmetry 104 Xueyuan Su Efficient Fault-Tolerant Infrastructure for Cloud Computing

Proof Structure for HTA NP-Completeness A Weaker Approximation Bound Recurrence Relation for Successful Task Runs Proof of the First Symmetry Proof Sketch of the Second Symmetry Construction of Helper Functions Consider a length vector w = (w_1, ..., w_n), where n is the vector length. For any x, y ≥ 0, define a family of functions f_n by f_0(w, x, y) = 0 and, for n ≥ 1, f_n(w, x, y) = T_n(x + w_1, w_2, ..., w_n) + T_n(w_1, ..., w_{n-1}, w_n + y). 105 Xueyuan Su Efficient Fault-Tolerant Infrastructure for Cloud Computing

Proof Structure for HTA NP-Completeness A Weaker Approximation Bound Recurrence Relation for Successful Task Runs Proof of the First Symmetry Proof Sketch of the Second Symmetry Property of Helper Functions Lemma Let w = (w_1, ..., w_n) be a length vector and w^R = (w_n, ..., w_1) its reversal. For any n ≥ 0 and any x, y ≥ 0, we have f_n(w, x, y) ≥ f_n((w + w^R)/2, (x + y)/2, (x + y)/2). This lemma is proved by induction on n and multiple applications of the weighted sum theorem. It then follows immediately that T(w) = (1/2) f_n(w, 0, 0) ≥ (1/2) f_n((w + w^R)/2, 0, 0) = T((w + w^R)/2). 106 Xueyuan Su Efficient Fault-Tolerant Infrastructure for Cloud Computing