1 An NSA Big Graph experiment. U.S. National Security Agency Research Directorate - R6, Technical Report NSA-RD v1, May 20, 2013
2 Graphs are everywhere! A graph is a collection of binary relationships, i.e. a network of pairwise interactions: social networks, digital networks, road networks, utility grids, the internet, protein interactomes. Examples pictured: the brain network of C. elegans (Watts and Strogatz, Nature 393(6684), 1998) and the M. tuberculosis interactome (Vashisht et al., PLoS ONE 7(7), 2012).
3 Scale of the first graph. Nearly 300 years ago the first graph problem, the Seven Bridges of Königsberg, consisted of just 4 vertices and 7 edges: is it possible to walk a route that crosses each of the seven bridges over the River Pregel exactly once? Not too hard to fit in memory.
4 Scale of real-world graphs. Graph scale in the current computer science literature is on the order of billions of edges and tens of gigabytes. Popular graph datasets in the literature (n = vertices, m = edges; sizes range from MB to GB):
- AS by Skitter (AS-Skitter): internet topology in 2005 (n = routers, m = traceroute links), MB
- LiveJournal (LJ): social network (n = members, m = friendships), MB
- U.S. Road Network (USRD): road network (n = intersections, m = roads), MB
- Billion Triple Challenge (BTC): RDF dataset, 2009 (n = objects, m = relationships), GB
- WWW of UK (WebUK): WWW pages in 2002 (n = pages, m = hyperlinks), GB
- Twitter graph (Twitter): Twitter network (n = users, m = tweets), GB
- Yahoo! Web Graph (YahooWeb): Yahoo web spam dataset (n = pages, m = hyperlinks), GB
5 Big Data begets Big Graphs. The increasing volume, velocity, and variety of Big Data are significant challenges to scalable algorithms. How will graph applications adapt to Big Data at petabyte scale? The ability to store and process Big Graphs impacts the choice of data structure. Orders of magnitude: kilobyte (KB) = 2^10, megabyte (MB) = 2^20, gigabyte (GB) = 2^30, terabyte (TB) = 2^40, petabyte (PB) = 2^50, exabyte (EB) = 2^60. Undirected graph data structure space complexity: Θ(n^2) adjacency matrix, Θ(n + 4m) adjacency list, Θ(4m) edge list.
6 Social scale: 1 billion vertices, 100 billion edges. 111 PB adjacency matrix; 2.92 TB adjacency list; 2.92 TB edge list. (Twitter graph from the Gephi dataset, http://www.gephi.org)
7 Web scale: 50 billion vertices, 1 trillion edges. 271 EB adjacency matrix; 29.5 TB adjacency list; 29.1 TB edge list. (Web graph from the SNAP database, http://snap.stanford.edu/data; internet graph from the Opte Project, http://www.opte.org/maps)
8 Brain scale: 100 billion vertices, 100 trillion edges. 2.08 mNA bytes (millimoles of bytes!) adjacency matrix; 2.84 PB adjacency list; 2.84 PB edge list. (Human connectome: Gerhard et al., Frontiers in Neuroinformatics 5(3), 2011. NA = 6.022 x 10^23 mol^-1.)
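The storage figures on these three scale slides can be reproduced from the space complexities stated earlier. A back-of-envelope sketch, assuming one bit per adjacency-matrix cell, 8-byte vertex IDs, and the 50-billion-vertex web graph implied by the 271 EB figure:

```python
NA = 6.022e23                          # Avogadro's number

def adjacency_matrix_bytes(n):
    return n * n / 8                   # one bit per vertex pair

def adjacency_list_bytes(n, m):
    return 8 * (n + 4 * m)             # (n + 4m) 8-byte words

def edge_list_bytes(m):
    return 8 * 4 * m                   # 4m 8-byte words

scales = {"social": (10**9, 10**11),        # 1 B vertices, 100 B edges
          "web":    (5 * 10**10, 10**12),   # 50 B vertices, 1 T edges
          "brain":  (10**11, 10**14)}       # 100 B vertices, 100 T edges

for name, (n, m) in scales.items():
    print(name,
          adjacency_matrix_bytes(n) / 2**50,    # PB
          adjacency_list_bytes(n, m) / 2**40,   # TB
          edge_list_bytes(m) / 2**40)           # TB
```

The social-scale matrix comes out at 111 PB and the brain-scale matrix at 2.08 millimoles of bytes, matching the slides.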
9 Benchmarking scalability on Big Graphs. Big Graphs challenge our conventional thinking on both algorithms and computer architecture. The new Graph500.org benchmark provides a foundation for conducting experiments on graph datasets, with problem classes from 17 GB to 1 PB, many times larger than common datasets in the literature.
10 Graph algorithms are challenging. Difficult to parallelize: irregular data access increases latency, and skewed data distributions (giant components, high-degree vertices) create bottlenecks. Increased size imposes greater latency and resource contention (i.e. hot-spotting). Algorithm complexity really matters: a run-time of O(n^2) on a trillion-node graph is not practical!
11 Problem: how do we store and process Big Graphs? The conventional approach is to store and compute in memory. SHARED MEMORY (Parallel Random Access Machine, PRAM): data in globally shared memory; implicit communication by updating memory; fast random access. DISTRIBUTED MEMORY (Bulk Synchronous Parallel, BSP): data distributed to local, private memory; explicit communication by sending messages; easier to scale by adding more machines.
12 Memory is fast but... Algorithms must exploit the computer memory hierarchy, which is designed for spatial and temporal locality (registers; L1, L2, L3 cache; TLB; pages; disk) and is great for unit-stride access common in many scientific codes, e.g. linear algebra. But common graph algorithm implementations make lots of random accesses to memory, causing many cache and TLB misses.
14 Poor locality increases latency. QUESTION: what is the memory throughput with a 90% TLB hit rate and a 0.01% page fault rate on a TLB miss? Example: effective memory access time T_n = p_n*l_n + (1 - p_n)*T_(n-1), with TLB = 20 ns, RAM = 100 ns, DISK = 10 ms (10,000,000 ns). T_2 = p_2*l_2 + (1 - p_2)*(p_1*l_1 + (1 - p_1)*T_0) = .9(TLB + RAM) + .1(.9999(TLB + 2*RAM) + .0001(DISK)) = .9(120 ns) + .1(.9999(220 ns) + .0001(10 ms)) = 230 ns. ANSWER: 33 MB/s.
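The arithmetic can be checked directly. A minimal sketch, assuming one 8-byte word is delivered per memory access when converting latency to throughput:

```python
TLB, RAM, DISK = 20, 100, 10_000_000   # latencies in nanoseconds

p_tlb_hit, p_page_fault = 0.90, 0.0001

# TLB hit: address translation plus one memory access.
# TLB miss: walk the page table (two RAM accesses), and with a small
# probability the page is not resident and must come from disk.
t = p_tlb_hit * (TLB + RAM) + (1 - p_tlb_hit) * (
        (1 - p_page_fault) * (TLB + 2 * RAM) + p_page_fault * DISK)

mb_per_s = 8 / (t * 1e-9) / 2**20      # one 8-byte word per access

print(round(t), round(mb_per_s))       # → 230 33
```

A 0.01% page fault rate sounds tiny, yet the 10 ms disk term still contributes almost half of the 230 ns effective access time.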
15 If it fits... Graph problems that fit in memory can leverage excellent advances in architecture and libraries: the Cray XMT2, designed for latency hiding; the SGI UV2, designed for large, cache-coherent shared memory; and a body of literature and libraries, including the Parallel Boost Graph Library (PBGL) from Indiana University, the Multithreaded Graph Library (MTGL) from Sandia National Labs, GraphCT/STINGER from Georgia Tech, GraphLab from Carnegie Mellon University, and Giraph from the Apache Software Foundation. But some graphs do not fit in memory...
16 We can add more memory but... Memory capacity is limited by the number of CPU pins, memory controller channels, and DIMMs per channel, and by the memory bus width (parallel traces of the same length, one line per bit). Globally shared memory is limited by CPU address space and cache coherency.
17 Larger systems, greater latency. Increasing memory can increase latency: there are more memory addresses to traverse, and a larger system puts greater physical distance between machines. Light is only so fast (but still faster than neutrinos!): it takes approximately 1 nanosecond for light to travel 0.30 meters. Latency causes significant inefficiency in new CPU architectures; a single 1.6 GHz IBM PowerPC A2 can execute several operations in every nanosecond it spends waiting on memory.
18 Easier to increase capacity using disks. Current Intel Xeon E5 architectures: 384 GB max. per CPU (4 channels x 3 DIMMs x 32 GB); 64 TB max. globally shared memory (46-bit address space); 3881 dual Xeon E5 motherboards (98 racks) to store the Brain Graph. Disk capacity is not unlimited, but it is higher than memory: with the largest-capacity HDD on the market, 4 TB, we need 728 drives to store the Brain Graph, which fit in 5 racks (4 drives per chassis). But disk alone is not enough; applications will still require memory for processing.
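The board and drive counts follow from simple division. A sketch, assuming binary units and the 2.84 PB adjacency-list size from the brain-scale slide:

```python
brain_bytes = 8 * (10**11 + 4 * 10**14)   # adjacency list, about 2.84 PB

per_board = 2 * 384 * 2**30               # dual-socket Xeon E5, 384 GB per CPU
boards = round(brain_bytes / per_board)   # motherboards needed

per_drive = 4 * 2**40                     # largest HDD on the market, 4 TB
drives = round(brain_bytes / per_drive)   # drives needed

print(boards, drives)                     # → 3881 728
```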
19 Top supercomputer installations. The largest supercomputer installations do not have enough memory to process the Brain Graph (3 PB)! Titan, a Cray XK7 at ORNL: #1 on the Top500 in 2012; 0.56 million cores; 710 TB memory; 8.2 megawatts; a footprint on the order of an NBA basketball court (4,700 sq.ft.). Sequoia, an IBM Blue Gene/Q at LLNL: #1 on the Graph500 in 2012; 1.57 million cores; 1 PB memory; 7.9 megawatts. Electrical power cost: at 10 cents per kilowatt-hour, about $7 million per year to keep the lights on!
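The electricity bill is easy to verify; a quick check, assuming Titan's 8.2 MW draw runs continuously through an 8,760-hour year:

```python
megawatts = 8.2                # Titan's power draw
dollars_per_kwh = 0.10
hours_per_year = 24 * 365

annual_dollars = megawatts * 1000 * hours_per_year * dollars_per_kwh
print(round(annual_dollars / 1e6, 1))   # → 7.2 (million dollars per year)
```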
20 Cloud architectures can cope with graphs at Big Data scales. Cloud technologies are designed for Big Data problems: scalable to massive sizes; fault-tolerant (restarting algorithms on Big Graphs is expensive); simpler programming model (MapReduce). And Big Graphs from real data are not static...
21 Distributed key/value repositories. Much better for storing a distributed, dynamic graph than a distributed filesystem like HDFS: create an index on edges for fast random access, and sort edges for efficient block-sequential access. Updates can be fast, but they are not transactional; consistency is eventual.
22 NSA BigTable: Apache Accumulo. The Google BigTable paper inspired a small group of NSA researchers to develop an implementation with cell-level security: Apache Accumulo, open-sourced in 2011 under the Apache Software Foundation.
23 Using Apache Accumulo for Big Graphs. Accumulo records are flexible and can be used to store graphs with typed edges and weights: store the graph as a distributed edge list in an Edge Table; edges are natural key/value pairs, i.e. vertex pairs; updates to edges happen in memory and are immediately available to queries; and updates are written to write-ahead logs for fault tolerance. An Apache Accumulo key/value record consists of a key (ROW ID; COLUMN: FAMILY, QUALIFIER, VISIBILITY; TIMESTAMP) mapping to a VALUE.
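One way to picture an Edge Table record in code. The schema here, with the source vertex as row ID, the edge type as column family, the destination vertex as column qualifier, and the weight as value, is an illustrative assumption, not the report's actual table design:

```python
from collections import namedtuple

# The Accumulo key structure from the slide: row ID, column (family,
# qualifier, visibility), timestamp; each key maps to a value.
Key = namedtuple("Key", "row family qualifier visibility timestamp")

table = {}   # stand-in for a sorted Accumulo tablet

def put_edge(src, dst, edge_type, weight, visibility="public", ts=0):
    table[Key(src, edge_type, dst, visibility, ts)] = weight

put_edge("alice", "bob", "follows", 1.0)
put_edge("alice", "carol", "mentions", 3.0)

# Because records sort by key, all out-edges of a vertex form one
# contiguous range; this filter stands in for a range scan.
adjacencies = [k.qualifier for k in sorted(table) if k.row == "alice"]
print(adjacencies)   # → ['bob', 'carol']
```

The point of the layout is that fetching a vertex's adjacencies becomes a single sorted range scan rather than many random lookups.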
24 But for bulk processing, use MapReduce. Big Graph processing is suitable for MapReduce: large, bulk writes to HDFS are faster (no need to sort); there is temporary scratch space (local writes); and there is greater control over the parallelization of algorithms.
25 Quick overview of MapReduce. MapReduce processes data as key/value pairs in three steps. 1. MAP: map tasks independently process blocks of data in parallel and output key/value pairs; map tasks execute a user-defined map function. 2. SHUFFLE: sorts key/value pairs and distributes them to reduce tasks, ensuring the value set for a key is processed by one reduce task; the framework executes a user-defined partitioner function. 3. REDUCE: reduce tasks process their assigned (key, value set) in parallel; reduce tasks execute a user-defined reduce function.
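The three steps can be sketched in plain Python. Here a hypothetical map/reduce pair computes vertex degrees from an edge list; the sort-and-group dictionary plays the role of the shuffle:

```python
from itertools import groupby
from operator import itemgetter

edges = [("a", "b"), ("a", "c"), ("b", "c")]

# MAP: each edge emits a (vertex, 1) pair for both endpoints.
mapped = [(v, 1) for edge in edges for v in edge]

# SHUFFLE: sort by key and group, so one reduce task sees the whole
# value set for a given key.
mapped.sort(key=itemgetter(0))
grouped = {k: [v for _, v in kvs]
           for k, kvs in groupby(mapped, key=itemgetter(0))}

# REDUCE: sum the value set for each key.
degree = {k: sum(vs) for k, vs in grouped.items()}
print(degree)   # → {'a': 2, 'b': 2, 'c': 2}
```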
26 MapReduce is not great for iterative algorithms. Iterative algorithms are implemented as a sequence of MapReduce jobs, i.e. rounds. But iteration in MapReduce is expensive: temporary data is written to disk; the sort/shuffle may be unnecessary for each round; and the framework adds overhead for scheduling tasks.
27 Good principles for MapReduce algorithms. Be prepared to think in MapReduce: avoid iteration and minimize the number of rounds; limit child JVM memory (the number of concurrent tasks is limited by per-machine RAM); set IO buffers carefully to avoid spills (they require memory!); pick a good partitioner; write raw comparators; and leverage compound keys, which minimize hot-spots by distributing on key and make a secondary sort almost free. Round-memory tradeoff: aim for constant, O(1), memory and rounds.
28 Best of both worlds: MapReduce and Accumulo. Store the Big Graph in an Accumulo Edge Table, which is great for updating a big, typed multi-graph and gives more timely (lower-latency) access to the graph. Then extract a subgraph from the Edge Table and use MapReduce for graph analysis.
29 How well does this really work? Prove it on the industry benchmark, Graph500.org: Breadth-First Search (BFS) on an undirected R-MAT graph, counting Traversed Edges Per Second (TEPS), with n = 2^scale vertices and m = 16n edges. Graph500 problem sizes: Toy (scale 26, 17 GB); Mini (scale 29, 140 GB); Small (scale 32, 1 TB); Medium (scale 36, 17 TB); Large (scale 39, 140 TB); Huge (scale 42, 1.1 PB).
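The problem classes follow one rule: n = 2^scale vertices and m = 16n edges, which at two 8-byte endpoints per edge works out to 2^(scale + 8) bytes on disk. A quick check of the sizes:

```python
def graph500_size(scale):
    n = 2**scale           # vertices
    m = 16 * n             # edges
    nbytes = 16 * m        # two 8-byte vertex IDs per edge
    return n, m, nbytes

for name, scale in [("Toy", 26), ("Mini", 29), ("Small", 32),
                    ("Medium", 36), ("Large", 39), ("Huge", 42)]:
    n, m, nbytes = graph500_size(scale)
    print(name, scale, nbytes / 1e12)   # size in decimal terabytes
```

The Huge class comes out at exactly 2^50 bytes, about 1,126 decimal terabytes, which matches the experiment slides later in the deck.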
30 Walking a ridiculously Big Graph. Everything is harder at scale! Performance is dominated by the ability to acquire adjacencies, which requires a specialized MapReduce partitioner to load-balance queries from MapReduce to the Accumulo Edge Table. Space-efficient Breadth-First Search is critical: create (vertex, distance) records; slide a window over the distance records to minimize input at each iteration k; and have reduce tasks fetch adjacencies from the Edge Table, but only if all distance values for a vertex v equal k.
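A small in-memory sketch of the level-synchronous BFS the slide describes. Each pass of the while loop corresponds to one round k; the dictionary lookup stands in for fetching adjacencies from the Edge Table, and only vertices whose distance equals k are expanded:

```python
from collections import defaultdict

def bfs_rounds(edge_table, source):
    dist = {source: 0}       # the (vertex, distance) records
    frontier = [source]      # vertices whose distance equals k
    k = 0
    while frontier:
        nxt = []
        for v in frontier:                 # acquire adjacencies for round k
            for w in edge_table[v]:
                if w not in dist:          # first discovery: distance k + 1
                    dist[w] = k + 1
                    nxt.append(w)
        frontier, k = nxt, k + 1
    return dist

g = defaultdict(list,
                {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2]})
print(bfs_rounds(g, 0))   # → {0: 0, 1: 1, 2: 1, 3: 2}
```

At Graph500 scale the frontier and distance records live in files between rounds rather than in a dict, but the round structure is the same.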
33 Graph500 experiment. Graph500 Huge problem class, scale 42: 2^42 (4.40 trillion) vertices, 2^46 (70.4 trillion) edges, 1 petabyte of graph data. Cluster specs: 1200 nodes; 2 Intel quad-core CPUs per node; 48 GB RAM per node; 7200 RPM SATA drives. The Huge problem (1,126 terabytes) is 19.5x larger than the cluster memory (57.6 TB), yet the benchmark scaled linearly from 1 trillion to 70 trillion traversed edges, despite multiple hardware failures!
35 Future work. What are the tradeoffs between disk- and memory-based Big Graph solutions in power, space, cooling, efficiency, and software development and maintenance? Can a hybrid approach be viable, with memory-based processing for subgraphs or incremental updates and disk-based processing for global analysis? Advances in memory and disk technologies will blur the lines. Will solid-state disks become another level of memory?
Big Graph Processing: Some Background. Bo Wu, Colorado School of Mines, CSCI-580. Slides adapted in part from Paul Burkhardt (National Security Agency) and Carlos Guestrin (University of Washington).
Source deck: Paul Burkhardt and Chris Waring, An NSA Big Graph experiment, U.S. National Security Agency Research Directorate - R6.