Scalable Strategies for Computing with Massive Data: The Bigmemory Project
1 Scalable Strategies for Computing with Massive Data: The Bigmemory Project Jay Emerson 1 and Mike Kane 2 (1) Associate Professor of Statistics, Yale University (2) Research Faculty, Yale Center for Analytical Sciences This talk also acknowledges work by Dan Adler, Christian Gläser, Oleg Nenadic, Jens Oehlschlägel, and Walter Zucchini (authors of package ff); Roger Peng (author of package filehash); Steve Weston and Revolution Analytics (authors of package foreach and most of the associated parallel backends); Thomas Lumley (author of biglm); Baglama and Reichel (authors of a 2005 paper that I'll reference); Bryan Lewis (author of doRedis and irlba); and many of the participants of VLDS 2011.
2 A new era "We have entered an era of massive scientific data collection, with a demand for answers to large-scale inference problems that lie beyond the scope of classical statistics." (Efron, 2005) "Classical statistics" should include mainstream computational statistics. (Kane, Emerson, and Weston, in submission)
3 Alex Szalay Scalability Cost of computing is an important consideration Cost of coding needs to be considered Let's not keep reinventing the wheel if we can avoid it: some algorithms could work on both large and small problems. Others can't. Whether the methods should be used is a different question (and a good one).
4 Robert Gentleman 1-2 TB of RAM? Must be nice! In the afternoon discussion, however, Robert noted that resource constraints are an important consideration for many researchers. Avoid many languages. This message was brought to you by the letter R. Open development Quality assessment Data processing and data reduction (and don't forget to think)... Processed data may often be stored as a matrix
5 Chris Volinsky Commented on streaming data and the importance of preprocessing. Reduced data often have a nice structure (e.g., like a matrix, say). Importance of Twitter tags... mine not as cool
6 @johmemerson I've only used Twitter to announce analyses of figure skating results!
7 Ciprian Crainiceanu Important big picture issues: How do we do this? Why do we do this?
8 Terry Speed and Rafa Irizarry Not plastics, not statistics... controls! The hope that what we do has broader impact than our particular problem Rafa via Terry: It's not how big your data set is, it's what you do with it that counts.
9 Nicole Lazar and Thomas Louis and Discussion How big is big? Moving target Relative, not simply absolute Gentleman's open development: extend it to open research? The Netflix competition
10 Steve Marron Why do you want to do that? Statistics Mathematics Manifolds grep Euclidean MarronSlideText.txt | more Encouraged by Steve's talk, I added some material to my own talk earlier today: restarted Lanczos bidiagonalization methods, augmentation of Krylov subspaces, and certain Ritz or harmonic Ritz vectors.
11 A tip of the hat: ff and filehash
filehash: a simple key-value database in pure R
ff has some very cool features (differentiating itself from bigmemory):
- support for arrays and data.frames, not just matrices
- additional types (quad, nibble, ...) and extended types (factors, POSIXct, and user-defined)
- cool indexing for improved random access performance
- fast filtering of data frames via package bit
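A minimal sketch (not from the slides) of two of these features, assuming only ff's basic ff() constructor:

library(ff)
# File-backed double matrix: data live on disk in a flat file and are
# paged into RAM on demand.
m <- ff(0, dim = c(1000, 3), vmode = "double")
m[1, ] <- rnorm(3)
# A packed 2-bit "quad" vector (values 0..3), one of the additional types.
q <- ff(vmode = "quad", length = 1e6)
q[1:4] <- c(0, 1, 2, 3)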
12 A tip of the hat: foreach Along with optional parallel backends doMC, doSNOW, doMPI, doRedis, and doSMP, package foreach provides an elegant framework for portable parallel programming.
> library(foreach)
> library(doMC)
> registerDoMC(2)
> ans <- foreach(i = 1:10, .combine = c) %dopar%
+ {
+   i^2
+ }
> ans
 [1]   1   4   9  16  25  36  49  64  81 100
13 Example data sets
Airline on-time data, 2009 JSM Data Expo (thanks, Hadley!)
- About 120 million commercial US airline flights over 20 years
- 29 variables, integer-valued or categorical (recoded as integer)
- About 12 gigabytes (GB)
Netflix data
- About 100 million ratings from 500,000 customers for 17,000 movies
- About 2 GB stored as integers
- No statisticians on the winning team; hard to find statisticians on the leaderboard
- Top teams: access to expensive hardware; professional computer science and programming expertise
14 Why R? R is the lingua franca of statistics (thanks to John Chambers for getting the ball rolling with the S language; Robert Gentleman, Ross Ihaka, the R Core team and the multitude of contributors for the R language and community). The syntax is simple and well-suited for data exploration and analysis. It has excellent graphical capabilities. It is extensible, with over 2500 packages available on CRAN alone. It is open source and freely available for Windows/MacOS/Linux platforms.
15 In a nutshell... The approaches adopted by statisticians in analyzing small data sets don't (often) scale to massive ones. Statisticians who want to explore massive data must be aware of the various pitfalls and adopt new approaches to avoid them.
16 Importing and managing massive data
read.table() and R objects:
- memory overhead as much as 100% of the size of the data
- data.frame and matrix objects not available in shared memory
- limited in size by available RAM; recommended maximum 10%-20% of RAM
read.big.matrix() and big.matrix objects:
- matrix-like data (not data frames), compatible with linear algebra libraries
- no memory overhead
- supports shared memory for efficient parallel programming
- supports file-backed objects for data larger than RAM
- leverages the Boost C++ libraries
Databases and other alternatives:
- slower performance (often), no formal shared memory
- not compatible with linear algebra libraries
- require customized coding (chunking algorithms, generally)
17 Importing and managing massive data With the full Airline data (~12 GB), bigmemory's file-backing allows you to work with the data even with only 4 GB of RAM, for example:
> x <- read.big.matrix("airline.csv", header=TRUE,
+                      backingfile="airline.bin",
+                      descriptorfile="airline.desc",
+                      type="integer")
> x
An object of class "big.matrix"
Slot "address":
<pointer: 0x3031fc0>
> rm(x)
> x <- attach.big.matrix("airline.desc")
> dim(x)
[1] 123534969        29
18 Exploring massive data
> summary(x[, "DepDelay"])
   Min. 1st Qu.  Median    Mean 3rd Qu.    Max.    NA's
> quantile(x[, "DepDelay"],
+          probs=c(0.5, 0.9, 0.99), na.rm=TRUE)
50% 90% 99%
19 Example: caching in action In a fresh R session on this laptop, newly rebooted:
> library(bigmemory)
> library(biganalytics)
> xdesc <- dget("airline.desc")
> x <- attach.big.matrix(xdesc)
> system.time( numplanes <- colmax(x, "TailNum",
+                                  na.rm=TRUE) )
   user  system elapsed
> system.time( numplanes <- colmax(x, "TailNum",
+                                  na.rm=TRUE) )
   user  system elapsed
20 Split-apply-combine Many computational problems in statistics are solved by performing the same calculation repeatedly on independent sets of data. These problems can be solved by:
- partitioning the data (the split)
- performing a single calculation on each partition (the apply)
- returning the results in a specified format (the combine)
Recent attention: it can be particularly efficient and easily lends itself to parallel computing. The name split-apply-combine was coined by Hadley Wickham, but the approach has been supported in a number of different environments for some time under different names:
- SAS: BY processing
- Google: MapReduce
- Apache: Hadoop
21 Exploring massive data
> GetDepQuantiles <- function(rows, data) {
+   return(quantile(data[rows, "DepDelay"],
+                   probs=c(0.5, 0.9, 0.99), na.rm=TRUE))
+ }
> groups <- split(1:nrow(x), x[, "DayOfWeek"])
> qs <- sapply(groups, GetDepQuantiles, data=x)
> colnames(qs) <- c("Mon", "Tue", "Wed", "Thu",
+                   "Fri", "Sat", "Sun")
> qs
    Mon Tue Wed Thu Fri Sat Sun
50%
90%
99%
22 Aside: foreach The user may register any one of several parallel backends (like doMC), or none at all. The code will either run sequentially or will make use of the parallel backend, without modification.
> library(foreach)
> library(doMC)
> registerDoMC(2)
> ans <- foreach(i = 1:10, .combine = c) %dopar%
+ {
+   i^2
+ }
> ans
 [1]   1   4   9  16  25  36  49  64  81 100
23 Concurrent programming with foreach (1995 only) Such split-apply-combine problems can be done in parallel. Here, we start with one year of data only. Why only 1995? Memory implications. The message: shared memory is essential.
> library(foreach)
> library(doSNOW)
> cl <- makeSOCKcluster(4)
> registerDoSNOW(cl)
> x <- read.csv("1995.csv")
> dim(x)
[1]
24 Concurrent programming with foreach (1995 only)
- 4 cores: 37 seconds (memory overhead > 2.5 GB)
- 3 cores: 23 seconds
- 2 cores: 15 seconds
- 1 core: 8 seconds
> groups <- split(1:nrow(x), x[, "DayOfWeek"])
> qs <- foreach(g=groups, .combine=rbind) %dopar% {
+   GetDepQuantiles(g, data=x)
+ }
> # Same result as before
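A hedged sketch (not on the slide) of the shared-memory alternative the message points to: each worker attaches the same file-backed big.matrix via its small descriptor object rather than receiving a private copy of x. It assumes the airline.desc descriptor file and the GetDepQuantiles() function from slide 21.

library(bigmemory)
library(foreach)
library(doSNOW)
cl <- makeSOCKcluster(4)
registerDoSNOW(cl)
x <- attach.big.matrix("airline.desc")   # file-backed, never copied
xdesc <- describe(x)                     # tiny descriptor; cheap to ship
groups <- split(1:nrow(x), x[, "DayOfWeek"])
qs <- foreach(g = groups, .combine = rbind,
              .packages = "bigmemory") %dopar% {
  w <- attach.big.matrix(xdesc)          # each worker attaches, not copies
  GetDepQuantiles(g, data = w)           # GetDepQuantiles() from slide 21
}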
25 Challenge: find plane birthdays
> birthmonth <- function(y) {
+   minYear <- min(y[, 'Year'], na.rm=TRUE)
+   these <- which(y[, 'Year'] == minYear)
+   minMonth <- min(y[these, 'Month'], na.rm=TRUE)
+   return(12*minYear + minMonth)
+ }
> time.0 <- system.time( {
+   planemap <- split(1:nrow(x), x[, "TailNum"])
+   planestart <- sapply( planemap,
+     function(i) birthmonth(x[i, c('Year','Month'),
+                              drop=FALSE]) )
+ } )
> time.0
   user  system elapsed
26 Parallel split-apply-combine Using 4 cores, we can reduce the time to 20 seconds (not including the read.big.matrix(), repeated here for a special reason):
x <- read.big.matrix("airline.csv", header = TRUE,
                     backingfile = "airline.bin",
                     descriptorfile = "airline.desc",
                     type = "integer", extraCols = "Age")
planeindices <- split(1:nrow(x), x[, "TailNum"])
planestart <- foreach(i = planeindices, .combine = c) %dopar% {
  birthmonth(x[i, c("Year", "Month"), drop = FALSE])
}
x[, "Age"] <- x[, "Year"] * as.integer(12) + x[, "Month"] -
  as.integer(planestart[x[, "TailNum"]])
27 Massive linear models via biglm
> library(biganalytics)
> x <- attach.big.matrix("airline.desc")
> blm <- biglm.big.matrix(DepDelay ~ Age, data = x)
> summary(blm)
Large data regression model: biglm(formula = formula, data = data, ...)
Sample size =
            Coef (95% CI) SE p
(Intercept)
Age
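biglm fits regressions incrementally, so it never needs all rows at once; biglm.big.matrix automates this chunking over a big.matrix. A minimal sketch of the underlying idea, calling biglm and update() directly (the chunk boundaries here are illustrative assumptions):

library(biglm)
# First chunk initializes the fit; later chunks fold in via update().
# Only one chunk of rows is ever materialized as a data.frame in RAM.
rows <- 1:1000000
fit <- biglm(DepDelay ~ Age,
             data = as.data.frame(x[rows, c("DepDelay", "Age")]))
rows <- 1000001:2000000
fit <- update(fit, as.data.frame(x[rows, c("DepDelay", "Age")]))
summary(fit)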
28 irlba: Implicitly restarted Lanczos bidiagonalization algorithm Implemented by Bryan Lewis. The implicitly restarted Lanczos bidiagonalization (IRLBA) algorithm finds a few approximate singular values and corresponding singular vectors of a matrix using the augmented implicitly restarted Lanczos bidiagonalization method of Baglama and Reichel. It is a fast and memory-efficient way to compute a partial SVD. Augmented Implicitly Restarted Lanczos Bidiagonalization Methods, J. Baglama and L. Reichel, SIAM J. Sci. Comput., 2005.
29 irlba: Code snippet for making an important point about bigmemory
`irlba` <- function(A, nu=5, nv=5, adjust=3, aug="ritz",
                    sigma="ls", maxit=1000, m_b=20,
                    reorth=1, tol=1e-6, V=NULL)
{
  ...
  for (i in 1:nrow(A)) {
    W[i, j] <- sum(A[i, ] * V[, j, drop=FALSE])
  }
  ...
}
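The important point: the loop touches the data matrix only through ordinary row indexing, so a file-backed big.matrix can stand in for a dense R matrix. A hedged sketch of that idea, assuming an irlba that (like the snippet above) accesses its argument only through such indexing; the matrix sizes and file names are illustrative:

library(bigmemory)
library(irlba)
# A file-backed matrix; rows are read from disk on demand.
A <- filebacked.big.matrix(5000, 100, type = "double",
                           backingfile = "A.bin",
                           descriptorfile = "A.desc")
A[,] <- rnorm(5000 * 100)      # small fill, for illustration only
s <- irlba(A, nu = 2, nv = 2)  # just the leading singular triplets
s$d                            # approximate singular values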
30 Netflix: a PCA (not necessarily a great one, but...)
[Scatterplot of movies on the first two principal components; labeled points include the ST, AP, SW I/II, LOTR, and HP series, The Horror Within, Horror Hospital, The West Wing seasons, Waterworld, The Shawshank Redemption, Rain Man, Braveheart, Mission: Impossible, Men in Black, Men in Black II, Titanic, The Rock, The Green Mile, and Independence Day]
31 Summary The Bigmemory Project proposes three new ways to work with very large sets of data:
- memory- and file-mapped data structures, which provide access to arbitrarily large sets of data while retaining a look and feel that is familiar to statisticians;
- data structures that are shared across processor cores on a single computer, in order to support efficient parallel computing techniques when multiple processors are used; and
- file-mapped data structures that allow concurrent access by the different nodes in a cluster of computers.
Even though these three techniques are currently implemented only for R, they are intended to provide a flexible framework for future developments in the field of statistical computing.
32 A plug for JSM (making the most of Miami in August??!!) Session 101: The Future of Statistical Computing Environments
- Luke Tierney: Some Developments for the R Engine
- Simon Urbanek: Taking Statistical Computing Beyond S and R
- Andrew Runnalls: CXXR: An Ideas Hatchery for Future R Development
- Mike Kane: The NQR Project: Explorations Using the Parrot Virtual Machine
33 Thanks!
- Dirk Eddelbuettel, Bryan Lewis, Steve Weston, and Martin Schultz, for their feedback and advice over the last three years
- Bell Laboratories (Rick Becker, John Chambers and Allan Wilks), for development of the S language
- Ross Ihaka and Robert Gentleman, for their work and unselfish vision for R
- The R Core team
- David Pollard, for pushing us to better communicate the contributions of the project to statisticians
- John Hartigan, for years of teaching and mentoring
- John Emerson (my father, Middlebury College), for getting me started in statistics and pushing me to learn to write code
- Many of my students, for their willingness to argue with me
34 Extra: Extending capabilities versus extending the language
Extending R's capabilities:
- providing a new algorithm, statistical analysis, or data structure
- examples: lm(), or package bcp
- most of the packages on CRAN and elsewhere
Extending the language:
- example: grDevices, a low-level interface to create and manipulate graphics devices
- example: grid graphics
The packages of the Bigmemory Project:
- a developer's interface to underlying operating system functionality which could not have been written in R itself
- a higher-level interface designed to mimic R's matrix objects, so that statisticians can use their current knowledge of computing with data sets as a starting point for dealing with massive ones.