Parametric Analysis of Mobile Cloud Computing using Simulation Modeling
Arani Bhattacharya, Pradipta De
Mobile System and Solutions Lab (MoSyS), The State University of New York, Korea (SUNY Korea), Stony Brook University
Ansuman Banerjee, Indian Statistical Institute
Mobile Cloud Computing
Mobile Cloud Computing (MCC) is a framework that augments a resource-constrained mobile device by using cloud-based server resources to execute resource-intensive applications.
Pros:
- Saves battery power
- Makes execution faster
Cons:
- The program state (data) must be sent to the cloud server, which consumes battery power
- Network latency can lead to execution delay
Source: Shiraz, Muhammad, et al. "A review on distributed application processing frameworks in smart mobile devices for mobile cloud computing." IEEE Communications Surveys & Tutorials 15.3 (2013): 1294-1313.
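The battery tradeoff above can be made concrete with a simple break-even check. This is an illustrative sketch, not taken from the slides: the power figures, CPU speeds, cloud speedup, and data size below are all assumed values.

```python
# Break-even check for offloading a single task (illustrative only:
# every numeric parameter below is an assumption, not measured data).

def local_energy(cycles, mobile_speed, p_compute):
    """Energy (J) to run the task on the mobile CPU."""
    return p_compute * cycles / mobile_speed

def offload_energy(cycles, mobile_speed, f, data_bits, bandwidth, p_tx, p_idle):
    """Energy (J) to ship program state and wait while a cloud
    server (f times faster than the mobile CPU) runs the task."""
    transfer = p_tx * data_bits / bandwidth          # radio energy for the migration
    waiting = p_idle * cycles / (f * mobile_speed)   # idle energy while the cloud computes
    return transfer + waiting

# Assumed: 1 GHz mobile CPU, cloud 10x faster, 5 Mbps link, 4 Mb of state.
e_local = local_energy(5e9, 1e9, p_compute=0.9)
e_cloud = offload_energy(5e9, 1e9, 10, 4e6, 5e6, p_tx=1.3, p_idle=0.3)
print(e_local, e_cloud, e_cloud < e_local)
```

With these assumed numbers, offloading wins; shrinking the bandwidth or growing the state size flips the outcome, which is exactly the "cons" tradeoff listed above.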
Prototype MCC Systems
- MAUI (2010): showed up to 80% energy savings on computationally intensive applications; required source code annotations
- CloneCloud (2011): showed up to 20x energy savings on computationally intensive applications while working on unmodified application binaries
- COMET (2012): showed up to 15x speedup on unmodified application binaries
- ThinkAir (2012): showed that execution time and energy consumption decrease by two orders of magnitude for an N-queens puzzle application, and by one order of magnitude for a face detection and a virus scan application
Are MCC Systems Ready for Use?
- Systems: MAUI, CloneCloud, ThinkAir, COMET
- Operating conditions they must cope with: bandwidth, parallelism, mobile processors, cloud speed
- A practical MCC system must adapt to all possible variations in the operating environment
Sources of Variation
- Application level: degree of concurrency, workload, real-time constraints
- Network: bandwidth variation, latency variation, multiple interfaces, mobility
- Execution platform: mobile platform (number of processors, GPU), cloud architecture, energy/time profile diversity due to platform
Practical MCC Design Requirement
We need a controlled MCC experiment environment.
- Prototype system with fine control over the variable parameters: implementation complexity is very high
- Simulation environment: models that can represent the environmental parameters
Presentation Outline Motivation Simulation Model Results Open Questions
Choice of Simulation Models
- Finite state automaton: a combination of the variable parameters represents a state, with a state transition on a change of any variable; state-space explosion makes the model difficult to analyze
- Integer Linear Program (ILP): the optimization objective is to minimize the energy usage or the time to execute an application, with the other parameters represented as constraints
Our objective:
- Analyze the impact of the various parameters on the energy savings and/or time to completion
- Understand the interplay of the parameters and their importance
Application Model
- Directed Acyclic Graph (DAG): represents the application call graph
- Task type: native and remoteable tasks
- Concurrency: the call graph represents multi-threaded apps
- Time/energy profile: each task incurs a fixed time and energy, represented as task attributes
- Network overhead: each edge has an attribute representing the amount of data to be transferred
- Start time constraint: ensures that a successor method cannot begin before all its predecessors have completed execution
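The DAG model and its start-time constraint can be sketched in a few lines. This is a minimal illustration with made-up task names and times, not the paper's actual model: each task carries an execution time, and a task's earliest start is the latest finish among its predecessors.

```python
# Sketch of the application model: a call-graph DAG with per-task
# execution times, plus a pass that computes earliest finish times
# under the precedence (start time) constraint.
# Task names and durations are illustrative, not from the paper.

tasks = {                     # task -> execution time on the mobile (ms)
    "main": 100, "parse": 200, "render": 300, "save": 150,
}
edges = {                     # caller -> callees (call-graph edges)
    "main": ["parse", "render"], "parse": ["save"], "render": ["save"],
}

def finish_times(tasks, edges):
    """Earliest finish time of each task: a successor starts only
    after all of its predecessors have completed."""
    preds = {t: [] for t in tasks}
    for u, vs in edges.items():
        for v in vs:
            preds[v].append(u)
    done = {}
    while len(done) < len(tasks):          # repeated sweeps; fine for a DAG
        for t in tasks:
            if t not in done and all(p in done for p in preds[t]):
                start = max((done[p] for p in preds[t]), default=0)
                done[t] = start + tasks[t]
    return done

print(finish_times(tasks, edges))
```

In this toy graph "save" cannot start until both "parse" and "render" finish, so its finish time is driven by the slower of the two branches.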
Execution Platform
- Mobile system: limited number of processors, enforced using a concurrency constraint
- Cloud system: unlimited number of processors, faster than a mobile processor by a factor F
- Each task can be executed on either the mobile or the cloud system; the decision engine treats the execution location as a binary decision variable (x_i)
Application Execution Model
- Precedence constraint: a task can start execution only when all preceding tasks have completed
- Execution time constraint: the total time is the sum of migration and execution times
- Deadline constraint: the application must complete within a time limit
- Energy budget constraint: the total energy consumed must stay within a budget
Decision Engine
- System model: tunable to optimize for energy savings or for time to execute an application; a scale factor (λ) trades off energy saved against time to complete execution
- Network parameters: depend on the choice of network interface (3G, LTE, WiFi)
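A toy version of the decision engine can be written as a brute-force search over the binary decision vector x. This sketch uses assumed parameter values and a deliberately simplified serial time model (it ignores the precedence and concurrency constraints, which the real ILP formulation includes); it only illustrates the λ-weighted objective and the rule that native tasks are pinned to the mobile device.

```python
# Toy decision engine: try every offload vector x (x_i = 1 means task i
# runs on the cloud; native tasks are forced to x_i = 0) and minimize
# lam * time + (1 - lam) * energy, mirroring the scale-factor objective.
# All numbers below are assumed for illustration; time is a plain serial
# sum, a simplification of the actual constraint model.

from itertools import product

F = 10            # cloud speedup over the mobile CPU (assumed)
BW = 5e6          # bandwidth in bits/s (assumed)
P_TX = 1.3        # radio transmit power in W (assumed)

# (mobile time s, mobile energy J, migration data bits, native?)
tasks = [(0.4, 2.0, 1e6, True), (0.5, 3.0, 4e6, False), (0.3, 1.5, 2e6, False)]

def cost(x, lam):
    t = e = 0.0
    for (tm, em, data, _), xi in zip(tasks, x):
        if xi:                       # offloaded: migrate state, cloud runs it
            t += data / BW + tm / F
            e += P_TX * data / BW    # mobile only pays for the transfer
        else:                        # executed locally on the mobile
            t += tm
            e += em
    return lam * t + (1 - lam) * e

def decide(lam):
    feasible = [x for x in product((0, 1), repeat=len(tasks))
                if not any(xi and nat
                           for (_, _, _, nat), xi in zip(tasks, x))]
    return min(feasible, key=lambda x: cost(x, lam))

print(decide(0.0))   # lam = 0: minimize energy
print(decide(1.0))   # lam = 1: minimize time
```

With these assumed numbers the two extremes of λ pick different schedules (offload everything remoteable to save energy, keep everything local to save time), which is the energy/time conflict the results below explore.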
Summary of the Model
Variable Parameter                    | Constraint
Execution time of each method         | Execution time, precedence
Energy consumed by each method        | Energy budget
Data associated with each migration   | Execution time
Bandwidth (restricted variation)      | Execution time
Latency (restricted variation)        | Execution time
Number of mobile processors           | Concurrency
Speed of cloud compared to mobile     | Execution time
Limitations
- Monetary cost of executing on the server is not modeled
- No network transmission errors
- Network properties do not change during execution
- Mobile processors are assumed to be homogeneous
Simulation Results
Simulation Settings
Parameter                              | Values
Execution time of each method          | 100-500 ms
Energy of each method                  | 1-20 J
Data associated with each migration    | 50-500 KB
Bandwidth                              | 1-10 Mbps
Latency                                | 2-70 ms
Native methods in call graph           | 30%
Number of mobile processors            | 1-8
Number of threads spawned at each task | 1-3
Speed of cloud compared to mobile      | 2-50
Scaling Factor vs Execution Time
- The scaling factor determines the tradeoff between energy saved and time to completion in the objective function
- Scaling factor of 0: minimize energy; scaling factor of 1: minimize time
- Gain (execution time): comparison of using the cloud system versus only the mobile system
- Execution time increases when only energy is minimized
Scaling Factor vs Energy
- Scaling factor of 0: minimize energy; scaling factor of 1: minimize time
- Gain (energy): comparison of using the cloud system versus only the mobile system
- Energy consumption increases when only time is minimized
- Energy and time objectives are often conflicting
Native Methods vs Performance
- Native methods must be executed on the mobile device
- A high proportion of native tasks lowers the advantage of using the cloud system
- There is no improvement when half the tasks are native
Amount of Parallelism vs Performance
- Max degree of parallelism: the maximum number of threads spawned at one task
- Observation: high parallelism reduces the gain from offloading
- Conclusion: time to completion is not affected by parallelism, but the migration cost increases with more threads
Speed of Cloud vs Time
- Cloud speed is measured relative to the speed of the mobile device
- Observation: at high bandwidth, a faster server reduces the execution time; at low bandwidth, cloud speed is not significant
Cloud Response Time vs Time
- Cloud response time: the propagation delay between the mobile device and the cloud
- Observation: at high bandwidth, the effect of propagation delay is greater, since transfers finish quickly and the fixed latency dominates the migration overhead
Mobile Processors vs Time
- The offloading decision is not affected by increasing the number of processors in the mobile device
- The cloud system has unlimited processors, so additional mobile processors do not help
Bandwidth Variation vs Time
- High bandwidth variation increases the execution time
- If the bandwidth variation pattern is known in advance, this effect can be reduced: the offloading framework can schedule migrations at moments of high bandwidth
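The point about scheduling migrations at moments of high bandwidth can be shown with a tiny example. The bandwidth trace and data size below are assumed values, and the "known trace" is an idealization the slide itself acknowledges.

```python
# If the bandwidth trace were known in advance, the framework could
# migrate during the highest-bandwidth slot. Trace values are assumed.
trace = [2e6, 8e6, 1e6, 10e6]     # predicted bandwidth per time slot (bits/s)
data = 4e6                        # bits of program state to migrate

best = min(range(len(trace)), key=lambda i: data / trace[i])
transfer_time = data / trace[best]
print(best, transfer_time)
```

Migrating in the worst slot here would take ten times longer than in the best one, which is why unpredictable bandwidth variation inflates the execution time.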
Inferences
- Energy and execution time are conflicting objectives, so the decision should be more context-sensitive: if the mobile workload increases, the time objective can be relaxed; if the application has hard real-time constraints, the energy objective may need to be relaxed
- The speed of the cloud plays a significant role only when the network bandwidth is high; when bandwidth is low, the energy and time needed to transfer program state override the benefits of offloading
- Increasing the number of mobile processors does not significantly impact MCC performance
Open Questions
- Modeling every parameter requires a model of how that parameter varies; this may require a stochastic model and stochastic optimization
- Bandwidth variations are hard to model: several heuristics have been proposed that adapt to bandwidth variations, but a closed-form model of bandwidth variation would be ideal
- How to validate the model under realistic settings?
http://www.mosys.cs.sunykorea.ac.kr Email us @ arani@sunykorea.ac.kr pradipta.de@sunykorea.ac.kr ansuman@isical.ac.in