Scalable Strategies for Computing with Massive Data: The Bigmemory Project
1 Scalable Strategies for Computing with Massive Data: The Bigmemory Project
John W. Emerson and Michael J. Kane
Associate Professor of Statistics, Yale University (Professor Emerson prefers to be called "Jay")
Mike submitted his dissertation on August 31, 2010, and will be Research Faculty, Yale Center for Analytical Science
Please feel free to ask questions along the way!
2 Outline
Motivation and Overview
3 A new era
The analysis of very large data sets has recently become an active area of research in statistics and machine learning. Many new computational challenges arise when managing, exploring, and analyzing these data sets, challenges that effectively put the data beyond the reach of researchers who lack specialized software development skills or expensive hardware.
"We have entered an era of massive scientific data collection, with a demand for answers to large-scale inference problems that lie beyond the scope of classical statistics." Efron (2005)
"'Classical statistics' should include mainstream computational statistics." Kane, Emerson, and Weston (in preparation, in reference to Efron's quote)
4 Example data sets
Airline on-time data: 2009 JSM Data Expo (thanks, Hadley!). About 120 million commercial US airline flights over 20 years; 29 variables, integer-valued or categorical (recoded as integer); about 12 gigabytes (GB).
Netflix data: about 100 million ratings from 500,000 customers for 17,000 movies; about 2 GB stored as integers. No statisticians on the winning team; hard to find statisticians on the leaderboard. Top teams: access to expensive hardware; professional computer science and programming expertise.
5 Why R?
R is the lingua franca of statistics: the syntax is simple and well-suited for data exploration and analysis; it has excellent graphical capabilities; it is extensible, with over 2500 packages available on CRAN alone; and it is open source and freely available for Windows/MacOS/Linux platforms.
Currently, the Bigmemory Project is designed to extend the R programming environment through a set of packages (bigmemory, bigtabulate, biganalytics, synchronicity, and bigalgebra), but it could also be used as a standalone C++ library or with other languages and programming environments.
6 The Bigmemory Project
[Diagram: the package ecosystem of the Bigmemory Project. bigmemory is the core, providing big.matrix creation and manipulation; bigtabulate offers fast tabulation and summaries; biganalytics provides statistical analyses with big.matrices; bigalgebra provides linear algebra for (big.)matrices; synchronicity provides mutual exclusions. Related packages include biglm (regressions for data too big to fit in memory), NMF (non-negative matrix factorization), DNAcopy (DNA copy number data analysis), irlba (truncated SVDs on (big.)matrices), and foreach (concurrent-enabled loops) with its parallel backends doMC (SMP unix), doNWS (NetworkSpaces), doSNOW (snow), doSMP (SMP machines), and doRedis (redis), along with R base packages such as methods, stats, and utils. Arrows indicate which packages depend on, import, or suggest which others.]
7 In a nutshell...
The approaches adopted by statisticians in analyzing small data sets don't scale to massive ones. Statisticians who want to explore massive data must be aware of the various pitfalls and adopt new approaches to avoid them. We will illustrate common challenges of dealing with massive data, and provide general solutions for avoiding the pitfalls.
8 Importing and managing massive data
The Netflix data (about 2 GB): read.table() uses 3.7 GB total in the process of importing the data (taking 140 seconds):
> net <- read.table("netflix.txt", header=FALSE,
+                   sep="\t", colClasses = "integer")
> object.size(net)
 bytes
With bigmemory, only the 2 GB is needed (no memory overhead, taking 130 seconds):
> net <- read.big.matrix("netflix.txt", header=FALSE,
+                        sep="\t", type = "integer")
> net[1:3,]
     [,1] [,2] [,3] [,4] [,5]
[1,]
[2,]
[3,]
9 Importing and managing massive data
read.table(): memory overhead as much as 100% of the size of the data; data.frame and matrix objects not available in shared memory; limited in size by available RAM, with a recommended maximum of 10%-20% of RAM.
read.big.matrix(): matrix-like data (not data frames); no memory overhead; faster than read.table(); supports shared memory for efficient parallel programming; supports file-backed objects for larger-than-RAM data (see the sketch below).
Databases and other alternatives: slower performance; no formal shared memory; not compatible with linear algebra libraries; require customized coding (chunking algorithms, generally).
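A minimal sketch of the file-backing and shared-memory mechanism just described (the file names here are illustrative, not from the talk):

library(bigmemory)

# Create a file-backed big.matrix: the data live on disk, not in RAM.
y <- filebacked.big.matrix(nrow = 1e6, ncol = 3, type = "double",
                           backingfile = "example.bin",
                           descriptorfile = "example.desc")
y[1:3, 1] <- rnorm(3)

# Any other R process (or parallel worker) can attach the same data
# via its descriptor, without making a copy:
ydesc <- describe(y)
y2 <- attach.big.matrix(ydesc)
y2[1:3, 1]   # same values; y and y2 share one copy of the data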
10 Importing and managing massive data
With the full Airline data (about 12 GB), bigmemory's file-backing allows you to work with the data even with only 4 GB of RAM, for example:
> x <- read.big.matrix("airline.csv", header=TRUE,
+                      backingfile="airline.bin",
+                      descriptorfile="airline.desc",
+                      type="integer")
> x
An object of class "big.matrix"
Slot "address":
<pointer: 0x3031fc0>
> rm(x)
> x <- attach.big.matrix("airline.desc")
> dim(x)
[1]
11 Exploring massive data
> summary(x[, "DepDelay"])
   Min. 1st Qu.  Median    Mean 3rd Qu.    Max.    NA's

> quantile(x[, "DepDelay"],
+          probs=c(0.5, 0.9, 0.99), na.rm=TRUE)
50% 90% 99%
12 Example: caching in action
In a fresh R session on this laptop, newly rebooted:
> library(bigmemory)
> library(biganalytics)
> setwd("/home/jay/desktop/bmptalks")
> xdesc <- dget("airline.desc")
> x <- attach.big.matrix(xdesc)
> system.time( numplanes <- colmax(x, "TailNum",
+                                  na.rm=TRUE) )
   user  system elapsed
> system.time( numplanes <- colmax(x, "TailNum",
+                                  na.rm=TRUE) )
   user  system elapsed
The second call runs much faster than the first: the operating system has cached the file-backed data in RAM.
13 Split-apply-combine
Many computational problems in statistics are solved by performing the same calculation repeatedly on independent sets of data. These problems can be solved by partitioning the data (the split), performing a single calculation on each partition (the apply), and returning the results in a specified format (the combine).
Recent attention: it can be particularly efficient and easily lends itself to parallel computing. The term "split-apply-combine" was coined by Hadley Wickham, but the approach has been supported in a number of different environments for some time under different names: SAS: BY processing; Google: MapReduce; Apache: Hadoop. A minimal base-R sketch of the whole cycle follows.
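To make the pattern concrete before the slides that follow, here is the full split-apply-combine cycle in base R, using the built-in airquality data (an illustration, not from the talk):

# split: row indices grouped by month
groups <- split(seq_len(nrow(airquality)), airquality$Month)
# apply: per-group quantiles of temperature
stats <- lapply(groups, function(i)
  quantile(airquality$Temp[i], probs = c(0.5, 0.9)))
# combine: stack the per-group results into a matrix
do.call(rbind, stats)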
14 Aside: split()
> x <- matrix(c(rnorm(5), sample(c(1, 2), 5,
+               replace = T)), 5, 2)
> x
     [,1] [,2]
[1,]
[2,]
[3,]
[4,]
[5,]
> split(x[, 1], x[, 2])
$`1`
[1]
$`2`
[1]
15 Exploring massive data
> GetDepQuantiles <- function(rows, data) {
+   return(quantile(data[rows, "DepDelay"],
+                   probs=c(0.5, 0.9, 0.99), na.rm=TRUE))
+ }
>
> groups <- split(1:nrow(x), x[,"DayOfWeek"])
>
> qs <- sapply(groups, GetDepQuantiles, data=x)
>
> colnames(qs) <- c("Mon", "Tue", "Wed", "Thu",
+                   "Fri", "Sat", "Sun")
> qs
    Mon Tue Wed Thu Fri Sat Sun
50%
90%
99%
16 Aside: foreach
The user may register any one of several parallel backends (like doMC), or none at all. The code will either run sequentially or will make use of the parallel backend, without modification.
> library(foreach)
>
> library(doMC)
> registerDoMC(2)
>
> ans <- foreach(i = 1:10, .combine = c) %dopar%
+ {
+   i^2
+ }
>
> ans
 [1]   1   4   9  16  25  36  49  64  81 100
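A note beyond the slide: with no backend registered, %dopar% still works, falling back to sequential execution (with a one-time warning); foreach's registerDoSEQ() makes the sequential choice explicit:

library(foreach)

# Declare the sequential "backend" explicitly; the identical loop
# parallelizes later simply by registering a parallel backend
# (e.g., registerDoMC(2)) with no change to the loop code itself.
registerDoSEQ()
ans <- foreach(i = 1:10, .combine = c) %dopar% { i^2 }
ans   # same result, with or without a parallel backend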
17 Concurrent programming with foreach (1995 only)
Such split-apply-combine problems can be done in parallel. Here, we start with one year of data only. Why only 1995? Memory implications. The message: shared memory is essential.
> library(foreach)
> library(doSNOW)
> cl <- makeSOCKcluster(4)
> registerDoSNOW(cl)
>
> x <- read.csv("1995.csv")
> dim(x)
[1]
18 Concurrent programming with foreach (1995 only)
4 cores: 37 seconds; 3 cores: 23 seconds; 2 cores: 15 seconds; 1 core: 8 seconds. Memory overhead of 4 cores: more than 2.5 GB! (A shared-memory alternative is sketched after the next two slides.)
> groups <- split(1:nrow(x), x[,"DayOfWeek"])
>
> qs <- foreach(g=groups, .combine=rbind) %dopar% {
+   GetDepQuantiles(g, data=x)
+ }
>
> daysofweek <- c("Mon", "Tues", "Wed", "Thu",
+                 "Fri", "Sat", "Sun")
> rownames(qs) <- daysofweek
> t(qs)
    Mon Tues Wed Thu Fri Sat Sun
50%
90%
99%
19 1995 delays in parallel: shared memory essential [figure]
20 1995 delays in parallel: shared memory essential [figure]
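Slides 19 and 20 make the point graphically. For concreteness, a hedged sketch of the shared-memory alternative they argue for, assuming the 1995 data have been loaded as a file-backed big.matrix named x95 (a name introduced here for illustration):

library(foreach)
library(doSNOW)
library(bigmemory)

cl <- makeSOCKcluster(4)
registerDoSNOW(cl)

# Workers attach the file-backed matrix by its descriptor;
# no worker makes its own copy of the data.
xdesc <- describe(x95)
groups <- split(1:nrow(x95), x95[, "DayOfWeek"])
qs <- foreach(g = groups, .combine = rbind) %dopar% {
  require(bigmemory)
  x <- attach.big.matrix(xdesc)
  GetDepQuantiles(g, data = x)
}
stopCluster(cl)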
21 Aside: mwhich()
mwhich() works with either regular R matrices or with big.matrix objects (a neat trick for developers of new functions):
> x <- matrix(c(rnorm(5), sample(c(1, 2), 5,
+               replace = T)), 5, 2)
> x
     [,1] [,2]
[1,]
[2,]
[3,]
[4,]
[5,]
> mwhich(x, c(1, 2), list(0, 1),
+        list("le", "eq"), "AND")
[1] 2 3 5
22 Challenge: find plane birthdays
10 planes only (there are > 13,000 total planes). With 1 core: 74 seconds. With 2 cores: 38 seconds. All planes, 4 cores: 2 hours.
> library(foreach)
> library(doMC)
> registerDoMC(cores=2)
> planestart <- foreach(i=1:10, .combine=c) %dopar% {
+
+   x <- attach.big.matrix(xdesc)
+   yearinds <- mwhich(x, "TailNum", i, comps="eq")
+   y <- x[yearinds, c("Year", "Month")]
+   minyear <- min(y[,"Year"], na.rm=TRUE)
+   these <- which(y[,"Year"]==minyear)
+   minmonth <- min(y[these,"Month"], na.rm=TRUE)
+   return(12*minyear + minmonth)
+
+ }
23 A better solution: split-apply-combine!
> birthmonth <- function(y) {
+   minyear <- min(y[,'Year'], na.rm=TRUE)
+   these <- which(y[,'Year']==minyear)
+   minmonth <- min(y[these,'Month'], na.rm=TRUE)
+   return(12*minyear + minmonth)
+ }
>
> time.0 <- system.time( {
+   planemap <- split(1:nrow(x), x[,"TailNum"])
+   planestart <- sapply( planemap,
+     function(i) birthmonth(x[i, c('Year','Month'),
+                              drop=FALSE]) )
+ } )
>
> time.0
   user  system elapsed
24 Parallel split-apply-combine
Using 4 cores, we can reduce the time to 20 seconds (not including the read.big.matrix(), repeated here for a special reason):
x <- read.big.matrix("airline.csv", header = TRUE,
                     backingfile = "airline.bin",
                     descriptorfile = "airline.desc",
                     type = "integer", extraCols = "Age")
planeindices <- split(1:nrow(x), x[, "TailNum"])
planestart <- foreach(i = planeindices, .combine = c) %dopar% {
  birthmonth(x[i, c("Year", "Month"), drop = FALSE])
}
x[, "Age"] <- x[, "Year"] * as.integer(12) + x[, "Month"] -
  as.integer(planestart[x[, "TailNum"]])
25 Massive linear models via biglm
> library(biganalytics)
> x <- attach.big.matrix("airline.desc")
> blm <- biglm.big.matrix(DepDelay ~ Age, data = x)
> summary(blm)
Large data regression model: biglm(formula = formula, data = data, ...)
Sample size =
            Coef (95% CI) SE p
(Intercept)
Age
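Under the hood, biglm maintains compact sufficient statistics and folds in one chunk of rows at a time, which is roughly how a regression can be fit to data that never sit in memory at once. A small sketch of that mechanism using R's built-in mtcars data (the chunking here is illustrative, not the package's internal code):

library(biglm)

# Fit on the first chunk, then update() with each additional chunk;
# only the model's sufficient statistics are kept between chunks.
chunks <- split(mtcars, rep(1:4, length.out = nrow(mtcars)))
fit <- biglm(mpg ~ wt + hp, data = chunks[[1]])
for (ch in chunks[-1]) fit <- update(fit, ch)
summary(fit)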
26 Netflix: a truncated singular value decomposition
[Figure: movies plotted in the space of a truncated SVD of the ratings matrix. Film series cluster together (the ST, AP, SW, LOTR, and HP films, including SW: I, SW: II, and LOTR(ex)), alongside titles including The Horror Within, Horror Hospital, The West Wing: Season 1, Waterworld, The Shawshank Redemption, Rain Man, Braveheart, Mission: Impossible, Men in Black, Men in Black II, Titanic, The Rock, The Green Mile, and Independence Day.]
27 Concluding example: satellite images
read.landsat <- function(file, descfile, dims=NULL, type="char") {
  if (is.null(dims)) stop("dimensions must be known")
  if (length(grep(file, dir()))==0) stop("file does not exist")
  x <- big.matrix(1, 1, type=type, backingfile="x.bin",
                  descriptorfile="x.desc")
  xdesc <- dget("x.desc")
  xdesc@description$filename <- file
  xdesc@description$totalRows <- prod(dims)
  xdesc@description$nrow <- prod(dims)
  dput(xdesc, descfile)
  x <- as(attach.big.matrix(descfile), "big.3d.array")
  x@dims <- dims
  return(x)
}
28 Concluding example: satellite images
x <- read.landsat(file = "Landsat_17Apr2005.dat",
                  descfile = "Landsat_17Apr2005.desc",
                  dims = c(800,800,6))
plot(rep(1:800, 800), rep(800:1, each=800),
     col=gray(x[,,1]/256), xlab="longitude", ylab="latitude",
     main="New Haven, CT")
29 Sample image (apologies for the resolution)
30 Summary
The Bigmemory Project proposes three new ways to work with very large sets of data:
- memory- and file-mapped data structures, which provide access to arbitrarily large sets of data while retaining a look and feel that is familiar to statisticians;
- data structures that are shared across processor cores on a single computer, in order to support efficient parallel computing techniques when multiple processors are used; and
- file-mapped data structures that allow concurrent access by the different nodes in a cluster of computers.
Even though these three techniques are currently implemented only for R, they are intended to provide a flexible framework for future developments in the field of statistical computing.
31 Thanks!
Dirk Eddelbuettel, Bryan Lewis, Steve Weston, and Martin Schultz, for their feedback and advice over the last three years
Bell Laboratories (Rick Becker, John Chambers, and Allan Wilks), for development of the S language
Ross Ihaka and Robert Gentleman, for their work and unselfish vision for R
The R Core team
David Pollard, for pushing us to better communicate the contributions of the project to statisticians
John Hartigan, for years of teaching and mentoring
John Emerson (my father, Middlebury College), for getting me started in statistics
Many of my students, for their willingness to argue with me
32 OTHER SLIDES and ...yale.edu/~jay/brazil/sp/bigmemory/
33 Overview: the Bigmemory Project
Problems and challenges:
- R frequently makes copies of objects, which can be costly (copying is illustrated below).
- Guideline: R's performance begins to degrade with objects larger than about 10% of the address space, or when all objects in use consume more than about 1/3 of RAM.
- Swapping: not sufficient. Chunking: inefficient, customized coding. Parallel programming: memory problems, non-portable solutions. Shared memory: essentially inaccessible to non-experts.
Key parts of the solutions:
- operating system caching
- shared memory
- file-backing for larger-than-RAM data
- a framework for platform-independent parallel programming (credit Steve Weston, independently of the BMP)
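The copying problem is easy to see for yourself. A small illustration (not from the talk) using base R's tracemem(), which reports duplications when R is built with memory profiling enabled, as the standard CRAN binaries are:

a <- matrix(0, 1000, 1000)   # about 8 MB
tracemem(a)
b <- a            # no copy yet: a and b share the same memory
b[1, 1] <- 1      # modifying b triggers the copy; tracemem reports it
untracemem(a)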
34 Extending capabilities versus extending the language
Extending R's capabilities: providing a new algorithm, statistical analysis, or data structure. Examples: lm(), or package bcp; most of the packages on CRAN and elsewhere.
Extending the language: for example, grDevices, a low-level interface to create and manipulate graphics devices, or grid graphics.
The packages of the Bigmemory Project provide: a developer's interface to underlying operating system functionality which could not have been written in R itself; and a higher-level interface designed to mimic R's matrix objects, so that statisticians can use their current knowledge of computing with data sets as a starting point for dealing with massive ones.