Towards Resource-Elastic Machine Learning
Shravan Narayanamurthy, Markus Weimer, Dhruv Mahajan, Tyson Condie, Sundararajan Sellamanickam, Keerthi Selvaraj
Microsoft

1 Introduction

The availability of powerful distributed data platforms and the widespread success of Machine Learning (ML) has led to a virtuous cycle wherein organizations are investing in gathering a wider range of (even bigger!) datasets and addressing an even broader range of tasks. The Hadoop Distributed File System (HDFS) is being provisioned to capture and durably store these datasets. Alongside HDFS, resource managers like Mesos [10], Corona [8] and YARN [16] enable the allocation of compute resources near the data, where frameworks like REEF [3] can cache it and support fast iterative computations. Unfortunately, most ML algorithms are not tuned to operate on these new cloud platforms, where two new challenges arise: 1) scale-up: the need to acquire more resources dedicated to a particular algorithm, and 2) scale-down: the need to react to resource preemption. This paper focuses on the scale-down challenge, since it poses the most stringent requirement for executing on cloud platforms like YARN, which reserves the right to preempt compute resources dedicated to a job (tenant) [16].

YARN exposes compute resources (called containers) through a central Resource Manager (RM). The RM mediates container requests from multiple tenants, each of which wants to execute some form of job, e.g., MapReduce, Pregel, SQL, or Machine Learning. YARN assigns containers according to some policy that is typically based on fairness, priority or monetary compensation. These local policies are used to derive a global scheduling decision among multiple tenants that ensures each tenant is given a satisfactory container allocation, and that (ideally) cluster utilization is kept high. The challenge is satisfying these global properties as new tenants enter and leave the system. The key mechanism for achieving this goal is preemption [7, 16], which allows the RM to recall (preempt) previously assigned containers so that they may be given to another tenant in a way that improves Pareto optimality. Jobs need to react to these preemption requests. Trivially, preemption can be viewed as a (task/container) failure, which systems such as Hadoop MapReduce [1] can accommodate well through aggressive task check-pointing and restart. However, such levels of check-pointing have frequently been found to be detrimental to the performance of jobs that are (somewhat) immune to such fine-grained failures, e.g., small/short jobs as well as Machine Learning systems that blatantly sacrifice fault tolerance for the sake of performance [4, 17, 14, 5]. These systems rely on restarting the whole computation on a failure, requiring the user to execute several attempts of an ML job until one of them succeeds. Unfortunately, this strategy is bound to fail in the presence of preemption, which is likely to be more common than faults, especially in a system under heavy load, making it unlikely that a job employing such a restart strategy will ever complete.

In this abstract, we present our initial findings in integrating resource elasticity as a first-class citizen into ML algorithms. In Section 2, we present a resource-elastic linear learning algorithm as a stand-in for statistical query model (SQM) [11] algorithms. It assumes random partitioning of the data onto containers; giving up a container is therefore equivalent to drawing a random sample of the whole dataset. We assume that we can choose when to give up a container within some bounded time, as is the case for YARN [2]. In Section 3, we describe initial results from our implementation on REEF [3]. Section 4 outlines our plans for future work.
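To make the data-layout assumption concrete, the following is a minimal Python sketch (ours, not the paper's code; the example counts merely mirror the setup in Section 3) of random partitioning onto containers, under which losing any one container amounts to dropping a uniform random subsample of the dataset.

```python
import numpy as np

def random_partition(n_examples, n_containers, seed=0):
    """Randomly assign example indices to containers (illustrative sketch)."""
    rng = np.random.default_rng(seed)
    perm = rng.permutation(n_examples)          # shuffle all example indices
    return np.array_split(perm, n_containers)   # near-equal random partitions

# Example: spread 4M examples over P = 14 containers, as in the experiments of Section 3.
partitions = random_partition(n_examples=4_000_000, n_containers=14)

# Preempting container 7 removes roughly a 1/14 random sample of the data; the
# surviving partitions together remain a uniform random subsample of the whole set.
surviving = [part for i, part in enumerate(partitions) if i != 7]
print(sum(len(part) for part in surviving))     # examples still available
```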
2 Resource Elastic Linear Learning

We consider the following convex objective function,

  f(w; X, Y) = Σ_p l_p(w^T X_p, Y_p) + (λ/2) w^T w,   (1)

where l_p and {X_p, Y_p} are the part of the loss function and the data, respectively, in partition p. For ease of exposition, we use the distributed Batch Gradient Descent Algorithm 1 here to optimize Equation 1, in lieu of other well-known methods like SGD [6], LBFGS [13] and Trust Region Newton [12].

Algorithm 1: Distributed Batch Gradient Descent (Distr-BGD)
Master: Choose w^0;
for r = 0, 1, ... do
 1. Master: Broadcast w^r to all the slaves;
 2. Partition p: Receive w^r and compute the partial gradient g^r_p = ∇l_p at w^r;
 3. Partition p: Perform the Reduce operation on g^r_p;
 4. Master: Receive the output g^r = Σ_p g^r_p of the Reduce operation;
 5. Master: Update the weight vector, w^{r+1} = w^r - η_r (g^r + λ w^r);

The master first passes the model w^r to all the slave nodes using a Broadcast operation. The slave partitions then compute the partial gradients g^r_p and perform a Reduce operation to aggregate them, with the master as the root node. The master then receives the overall gradient and updates w^r. The step size η_r is either a constant or decays with time; alternatively, one can also do a line search. This scheme relies on two main communication operators: Broadcast and Reduce. The reader is referred to [15] for the details of these operators. In this paper, we use a simple binary tree as in [4] to implement them.

Elasticity Model: We make the following assumptions: 1) container removal can occur at any step of the algorithm, 2) it occurs due to preemption or failure of the nodes themselves rather than issues with the algorithm implementation or the data, 3) at some future time we are guaranteed to get the containers back, and 4) only leaf nodes in the binary tree vanish¹.

Our Approach: Suppose that during iteration r the partitions with indices in the set Q disappear. We react to this on two levels: 1) we make Broadcast and Reduce elastic, and 2) we approximate the loss function on the missing partitions and use this information in the overall optimization.

Elastic Broadcast and Reduce: Elastic Broadcast means that all the active slave containers (those not in Q) will still receive the broadcast from the master. Similarly, the output of Reduce returns the aggregated result of the active containers only.

Approximation for vanished partitions: We approximate the sum of loss functions l_Q = Σ_{p∈Q} l_p by its first-order Taylor expansion, l̂_Q = ĝ_Q^T (w - w^{r-1}), where ĝ_Q = Σ_{q∈Q} g^{r-1}_q. The master uses this approximation in the r-th and subsequent iterations until the partitions come up again. This has two advantages: 1) the quality of our solution is better because we do not ignore the vanished nodes completely, and 2) we do not have to wait for the partitions to return.

Computing ĝ_Q: Once the master observes that partitions have vanished, it broadcasts w^{r-1} to the surviving slave partitions and obtains g^{r-1}_Q, the Reduce of their partial gradients at w^{r-1}. It then calculates ĝ_Q = g^{r-1} - g^{r-1}_Q. Note that the master only needs to store the previous gradient g^{r-1}. If additional partitions vanish during this Reduce operation, the master computes the approximation for all vanished partitions together. Algorithm 2 summarizes this approach. FN denotes the set of partitions that have vanished so far in the algorithm. The master maintains a FIFO queue that stores Q as well as the approximation ĝ_Q of the partitions that vanished in a given iteration. The variable ĝ_FN maintains the aggregated approximate gradient, i.e., ĝ_FN = Σ_{q∈FN} ĝ_q.
¹ This last assumption is made for the sake of ease of exposition only.
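As a concrete illustration of this approximation, here is a toy NumPy sketch (ours, not the paper's implementation; the quadratic per-partition losses and all helper names are assumptions made for the example). It forms ĝ_Q for a set of vanished partitions from the stored gradient g^{r-1} and one extra Reduce over the survivors at w^{r-1}, and then uses ĝ_Q in place of the missing partial gradients in the update.

```python
import numpy as np

rng = np.random.default_rng(1)
d, P = 4, 6
# Toy per-partition losses l_p(w) = 0.5 * ||A_p w - b_p||^2 (stand-ins for the real l_p).
A = [rng.normal(size=(20, d)) for _ in range(P)]
b = [rng.normal(size=20) for _ in range(P)]

def grad(p, w):
    return A[p].T @ (A[p] @ w - b[p])             # exact partial gradient g_p at w

w_prev = rng.normal(size=d)                       # w^{r-1}
g_prev = sum(grad(p, w_prev) for p in range(P))   # stored previous gradient g^{r-1}

Q = {4, 5}                                        # partitions preempted at iteration r
survivors = [p for p in range(P) if p not in Q]

# One extra Broadcast/Reduce of w^{r-1} over the survivors gives their gradient sum,
g_surv_prev = sum(grad(p, w_prev) for p in survivors)
# and the vanished partitions' gradient at w^{r-1} follows by subtraction:
g_Q_hat = g_prev - g_surv_prev

# The first-order model of the missing loss, l_Q(w) ~ l_Q(w^{r-1}) + g_Q_hat . (w - w^{r-1}),
# has the constant gradient g_Q_hat, so the master's step at iteration r becomes:
lam, eta = 1e-3, 1e-2
w = w_prev.copy()                                 # current iterate w^r (toy value)
g_active = sum(grad(p, w) for p in survivors)
w_next = w - eta * (g_active + g_Q_hat + lam * w)

# Sanity check: at w = w^{r-1} the approximation is exact.
print(np.allclose(g_Q_hat, sum(grad(p, w_prev) for p in Q)))   # -> True
```

At w = w^{r-1} the approximation is exact; as w moves away it becomes stale, which is why Algorithm 2 removes ĝ_Q from ĝ_FN as soon as the corresponding partitions are brought back up.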
The master reschedules the partitions in the FIFO queue as new containers become available and updates ĝ_FN and the queue accordingly. Although theoretically possible, failures or preemptions do not occur in every iteration of the algorithm in practice. A proper analysis and modeling of the interval between consecutive resource-allocation changes, and a formal proof of convergence of the algorithm², are left for future work.

Algorithm 2: Elastic Distr-BGD
begin
 1. Master: FNQueue ← ∅;  // initialize the queue of vanished partitions to empty
 2. Master: FN ← ∅;  // initialize the set of vanished partitions to empty
 3. Master: ĝ_FN ← 0;
 4. Master: Choose w^0;
 for r = 0, 1, ... do
  5. Master: Broadcast w^r to all the slaves;
  6. Partition p: Receive w^r and compute the partial gradient g^r_p = ∇l_p at w^r;
  7. Partition p: Perform the Reduce operation on g^r_p;
  8. Master: Receive g^r = Σ_p g^r_p and Q from the Reduce operation;  // Q is the set of vanished partitions
  Master: if Q ≠ ∅ then  // some partition has vanished
   9. Master: Broadcast w^{r-1} to all the slaves;
   10. Partition p: Receive w^{r-1} and compute the partial gradient g^{r-1}_p = ∇l_p at w^{r-1};
   11. Partition p: Perform the Reduce operation on g^{r-1}_p;
   12. Master: Receive g^{r-1}_Q = Σ_{p∉Q} g^{r-1}_p and Q from the Reduce operation;  // Q also includes partitions that vanished during the previous Reduce
   13. Master: ĝ_Q ← g^{r-1} - g^{r-1}_Q;
   14. Master: ĝ_FN ← ĝ_FN + ĝ_Q, FN ← FN ∪ Q;
   15. Master: FNQueue.Add(Q, ĝ_Q);  // add the approximated gradient and the set of faulty nodes to the queue
   16. Master: w^{r+1} ← w^r, w^r ← w^{r-1}
  else
   17. Master: w^{r+1} ← w^r - η_r (g^r + ĝ_FN + λ w^r);
  18. Master: (Q, ĝ_Q) ← FNQueue.top;
  Master: if number of free nodes available ≥ |Q| then
   19. Master: FNQueue.pop;
   20. Master: Request to bring the partitions with indices in Q up;
   21. Master: ĝ_FN ← ĝ_FN - ĝ_Q, FN ← FN \ Q;

² A convergence proof will require modifying the algorithm to do a proper line search (with AGW conditions) when the nodes come up again (Steps 17-19).
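To show how these pieces fit together, the following is a self-contained, single-process Python simulation of the master's control flow in Algorithm 2. It is a sketch only: it assumes a toy least-squares objective, a fixed step size in place of the paper's line search, a hard-coded preemption window instead of resource-manager events, and simplified bookkeeping for back-to-back vanish events; all helper names (elastic_reduce, preempted, ...) are ours.

```python
import numpy as np
from collections import deque

# Toy single-process simulation of Elastic Distr-BGD (Algorithm 2); not the REEF implementation.
# Per-partition loss: l_p(w) = 0.5 * ||X_p w - y_p||^2, with an overall (lam/2) ||w||^2 term.
rng = np.random.default_rng(0)
P, d, n_per = 8, 5, 200
w_true = rng.normal(size=d)
parts = []
for _ in range(P):
    X = rng.normal(size=(n_per, d))
    parts.append((X, X @ w_true + 0.01 * rng.normal(size=n_per)))

def partial_grad(p, w):
    X, y = parts[p]
    return X.T @ (X @ w - y)                      # g_p at w

def elastic_reduce(active, w):
    """Stand-in for the elastic Reduce: aggregate only over surviving partitions."""
    return sum(partial_grad(p, w) for p in active)

def preempted(r):
    """Hypothetical preemption schedule: partitions {5, 6, 7} are gone for r in [30, 60)."""
    return {5, 6, 7} if 30 <= r < 60 else set()

lam, eta = 1e-3, 1e-4
w_prev, w = np.zeros(d), np.zeros(d)              # w^{r-1} and w^r
g_prev = None                                     # previous Reduce output, kept by the master
g_FN_hat = np.zeros(d)                            # aggregated approximation over all vanished partitions
FN, fn_queue = set(), deque()                     # vanished set and FIFO of (Q, g_Q_hat) pairs
active = set(range(P))

for r in range(120):
    Q = preempted(r) & active                     # partitions newly vanishing this iteration
    active -= Q
    g = elastic_reduce(active, w)                 # Steps 5-8: broadcast w^r, reduce partial gradients
    if Q and g_prev is not None:                  # Steps 9-16: build the approximation at w^{r-1}
        g_surv_prev = elastic_reduce(active, w_prev)
        g_Q_hat = g_prev - g_surv_prev            # vanished partitions' gradient at w^{r-1}
        g_FN_hat += g_Q_hat
        FN |= Q
        fn_queue.append((Q, g_Q_hat))
        # Step 16: the model is not advanced in this iteration.
        # (Vanish events in consecutive iterations are handled only approximately in this sketch.)
    else:
        # Step 17: descend using the survivors' gradient plus the stored approximation.
        w_prev, w = w, w - eta * (g + g_FN_hat + lam * w)
    g_prev = g                                    # the master stores only the previous Reduce output
    # Steps 18-21: bring back the oldest vanished group once its containers are free again.
    if fn_queue and not (preempted(r + 1) & fn_queue[0][0]):
        Q_back, g_Q_back = fn_queue.popleft()
        active |= Q_back
        FN -= Q_back
        g_FN_hat -= g_Q_back

print("||w - w_true|| =", np.linalg.norm(w - w_true))
```

In the REEF implementation described next, Broadcast and Reduce are real communication operators over a binary tree, and the vanish and return events come from the resource manager rather than from a fixed schedule.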
3 Experimental Results

Implementation: We implemented Algorithm 2 on REEF [3], which offers event-driven abstractions on top of resource managers. Crucially, it provides events for container (de-)allocation, which trigger reconfiguration of our elastic Broadcast and Reduce operators. The latter is implemented as a binary tree whose inner nodes keep an approximation of their inputs available, which facilitates seamless container deallocations. Our Reduce function also reports the set of active partitions to the master, which enables the elasticity treatment (the derivation of Q) in Algorithm 2. Moreover, we perform a line search in Algorithms 1 and 2 to find the step size η_r.

Dataset: We use a subset of 4 million examples of the splice dataset described in [4]. The raw data consists of strings of length 141 over the 4-letter alphabet (A, T, C, G). The train and test sizes are 3.6M and 400K examples, respectively. We derive the binary presence or absence of n-grams at specific locations of the string, with n = 1, ..., 4, as features. The dimensionality of the feature space is 47,028 and the overall data size is around 16GB.

Figure 1: Plots showing the preemption scenario: (a) objective function value and (b) AUPRC versus iterations for No Failure, Our Approach, Ignore, and Stall. 7 nodes are taken away at the 100th iteration due to preemption and are given back again at iteration 200.

Experimental setup: We run our experiments on a 12-core machine with hyper-threading and 96GB of RAM. We use P = 14 partitions in all our experiments. We preempt the algorithm by taking away 7 (50%) of the nodes at iteration 100 and giving them back at iteration 200.

Baselines and evaluation criteria: We compare our results with two baselines: a) Stall: the Distr-BGD algorithm waits for all the nodes to come up again, and b) Ignore: Distr-BGD continues with only the remaining containers until the others become available again. We use the training objective function value and the Area Under the Precision-Recall Curve (AUPRC) on the test set as evaluation metrics.

Observations: Figure 1a shows the objective function value as a function of the number of iterations. Note that the number of iterations and the overall time taken are directly proportional to each other. Our method shows considerable improvement over the baselines. Surprisingly, we even do better than the no-failure case. This can be explained by the fact that the use of the past gradient has similarities to adding a momentum term [9] to the gradient descent algorithm, which is well known to have a beneficial effect. Similar observations can be made for the AUPRC metric on test data in Figure 1b. The experiments clearly show the utility of designing fault-aware ML algorithms.

4 Conclusions and Future work

In this abstract, we explored the idea of treating resource elasticity as a first-class tenant in machine learning algorithms. To the best of our knowledge, this is the first time this connection has been made. Our initial results confirm that doing so can yield substantial improvements over the state of the art: forcing the runtime to absorb the resource elasticity yields high overheads (e.g., in MapReduce), and treating any container preemption as a job failure wastes compute cycles. This encouraging result motivates our current and future work in this area: the findings can and need to be substantiated on larger datasets. We also need to compare against stronger baselines, such as continuing the optimization while ignoring the vanished nodes completely. The insight will be generalized to other SQM algorithms and to graphical models. All of this will move us closer to predictable and reliable performance of machine learning on the cloud.
Acknowledgments

Our implementation makes heavy use of REEF [3], a new distributed computing framework that the authors of this paper are involved in. We would like to thank our collaborators on that project, without whom the present work would not have been possible.
References

[1] Hadoop MapReduce.
[2] Preemption and restart of MapReduce tasks.
[3] REEF: The Retainable Evaluator Execution Framework.
[4] A. Agarwal, O. Chapelle, M. Dudík, and J. Langford. A reliable effective terascale linear learning system, 2011.
[5] A. Ahmed, M. Aly, J. Gonzalez, S. Narayanamurthy, and A. J. Smola. Scalable inference in latent variable models. In WSDM '12: Proceedings of the Fifth ACM International Conference on Web Search and Data Mining, New York, NY, USA, 2012. ACM.
[6] L. Bottou. Large-scale machine learning with stochastic gradient descent. In Y. Lechevallier and G. Saporta, editors, Proceedings of the 19th International Conference on Computational Statistics (COMPSTAT 2010), Paris, France, August 2010. Springer.
[7] B. Cho, M. Rahman, T. Chajed, I. Gupta, C. Abad, N. Roberts, and P. Lin. Natjam: Design and evaluation of eviction policies for supporting priorities and deadlines in MapReduce clusters. October 2013.
[8] Facebook Engineering. Under the Hood: Scheduling MapReduce jobs more efficiently with Corona, November 2012.
[9] S. Haykin. Neural Networks and Learning Machines. Prentice Hall, 3rd edition, November 2008.
[10] B. Hindman, A. Konwinski, M. Zaharia, A. Ghodsi, A. D. Joseph, R. Katz, S. Shenker, and I. Stoica. Mesos: A platform for fine-grained resource sharing in the data center. In Proceedings of the 8th USENIX Conference on Networked Systems Design and Implementation. USENIX Association, 2011.
[11] M. Kearns. Efficient noise-tolerant learning from statistical queries. Journal of the ACM, 45(6), November 1998.
[12] C.-J. Lin, R. C. Weng, and S. S. Keerthi. Trust region Newton method for logistic regression. Journal of Machine Learning Research, 9, June 2008.
[13] D. C. Liu and J. Nocedal. On the limited memory BFGS method for large scale optimization. Mathematical Programming, 45(3, Ser. B), 1989.
[14] Y. Low, J. Gonzalez, A. Kyrola, D. Bickson, C. Guestrin, and J. M. Hellerstein. GraphLab: A new parallel framework for machine learning. In Conference on Uncertainty in Artificial Intelligence (UAI), Catalina Island, California, July 2010.
[15] M. Snir, S. Otto, S. Huss-Lederman, D. Walker, and J. Dongarra. MPI-The Complete Reference, Volume 1: The MPI Core. MIT Press, Cambridge, MA, USA, 2nd (revised) edition, 1998.
[16] V. Vavilapalli et al. Apache Hadoop YARN: Yet Another Resource Negotiator. In ACM Symposium on Cloud Computing (SoCC '13), October 2013.
[17] M. Weimer, S. Rao, and M. Zinkevich. A convenient framework for efficient parallel multipass algorithms. In LCCC: NIPS 2010 Workshop on Learning on Cores, Clusters and Clouds, December 2010.
