Cosmos: Big Data and Big Challenges. Pat Helland, July 2011
1 Cosmos: Big Data and Big Challenges. Pat Helland, July 2011
2 Outline
- Introduction
- Cosmos Overview
- The Structured Streams Project
- Some Other Exciting Projects
- Conclusion
3 What Is COSMOS?
- Petabyte store and computation system
  - About 62 physical petabytes stored (~275 logical petabytes stored)
  - Tens of thousands of computers across many datacenters
- Massively parallel processing based on Dryad
  - Similar to MapReduce but can represent arbitrary DAGs of computation
  - Automatic placement of computation with its data
- SCOPE (Structured Computation Optimized for Parallel Execution)
  - SQL-like language with set-oriented record and column manipulation
  - Automatically compiled and optimized for execution over Dryad
- Management of hundreds of Virtual Clusters for computation allocation
  - Buy your machines and give them to COSMOS
  - You are guaranteed that many compute resources, and may use more when they are not in use
- Ubiquitous access to OSD's data
  - Combining knowledge from different datasets is today's secret sauce
4 Cosmos and OSD Computation
- OSD applications fall into two broad categories:
  - Back-end: massive batch processing creates new datasets
  - Front-end: online request processing serves up and captures information
- Cosmos provides storage and computation for back-end batch data analysis
- It does not support the storage and computation needs of the front-end
[Diagram: the Internet feeds crawling and other data into OSD computing/storage; back-end batch data analysis (Cosmos) produces large read-only results for front-end on-line web serving, which works over on-line datasets and user & system data.]
5 COSMOS: The Service
- Data drives search and advertising
  - Web pages: links, text, titles, etc.
  - Search logs: what people searched for, what they clicked, etc.
  - IE logs: what sites people visit, the browsing order, etc.
  - Advertising logs: what ads people click on, what was shown, etc.
- We generate about 2 PB every day
  - SearchUX is hundreds of TB; Toolbar is many 10s of TB
  - Search is hundreds of TB; web snapshots are many 10s of TB
  - MSN, Hotmail, IE, web, etc.
- COSMOS is the backbone for Bing analysis and relevance
  - Click-stream information is imported from many sources and cooked
  - Queries analyzing user context, click commands, and success are processed
- COSMOS is a service
  - We run the code ourselves (on many tens of thousands of servers)
  - Users simply feed in data, submit jobs, and extract the results
6 Outline
- Introduction
- Cosmos Overview
- The Structured Streams Project
- Some Other Exciting Projects
- Conclusion
7 Cosmos Architecture from 100,000 Feet
- SCOPE Layer:
  - Code (e.g., Data = SELECT * FROM S WHERE Col-1 > 10) is submitted to the Compiler
  - The Optimizer makes decisions about the execution plan and parallelism
  - Algebra describing the job is built to run on the Runtime
- Execution Layer:
  - Jobs queue up per Virtual Cluster
  - When a job starts, it gets a Job Manager to deploy work in parallel close to its data
  - Many Processing Nodes (PNs) host execution vertices running the code
- Store Layer:
  - Many Extent Nodes (ENs) store and compress replicated extents on disk
  - Extents are combined to make unstructured streams
  - The CSM (COSMOS Store Manager) handles names, streams, and replication
8 Extent Nodes: The Store Layer
- Extent Nodes implement a file system holding extents
  - Each extent is up to 2 GB
  - Compression and fault detection are important parts of the EN
- CSM: COSMOS Store Manager
  - Instructs replication of each extent across 3 different ENs
  - Manages the composition of streams out of extents
  - Manages the namespace of streams
9 Dryad: The Execution Engine
- Takes the plan for the parallel execution of a job and finds computers to perform the work
- Responsible for placing the computation close to the data it reads
- Ensures all the inputs for a computation are available before firing it up
- Responsible for failures and restarts
- Dryad is similar to MapReduce
10 The SCOPE Language (Structured Computation Optimized for Parallel Execution)
- Heavily influenced by SQL and relational algebra
- Changed to deal with input and output streams
- Input arrives as sets of records, computation occurs over sets of records, and output is written as sets of records
- SCOPE is a high-level declarative language for data manipulation
- It translates very naturally into parallel computation
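As a concrete example, here is a small script in the style of the published SCOPE examples (the stream name and LogExtractor are illustrative, not actual OSD names): it reads a search log and finds queries that occur more than 1000 times. Each step is a set-of-records to set-of-records transformation, which is what lets the compiler parallelize it over Dryad.

    // Extract records from an unstructured log (LogExtractor is an
    // illustrative user-supplied extractor).
    e  = EXTRACT query
         FROM "search.log"
         USING LogExtractor;

    // Set-oriented computation: count occurrences per query.
    s1 = SELECT query, COUNT(*) AS count
         FROM e
         GROUP BY query;

    // Keep only the frequent queries and write the result set out.
    s2 = SELECT query, count
         FROM s1
         WHERE count > 1000;

    OUTPUT s2 TO "qcount.result";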
11 The SCOPE Compiler and Optimizer
- The compiler and optimizer take SCOPE programs and create:
  - The algebra describing the computation
  - The breakdown of the work into processing units
  - The description of the inputs and outputs of the processing units
- Many decisions about compiling and optimizing are driven by data size and by minimizing data movement
12 Virtual Cluster: A Management Tool
- The Virtual Cluster (VC) allocates resources across groups within OSD
  - The cost model is captured in a queue of work (with priority) within the VC
- Each Virtual Cluster has a guaranteed capacity
  - We will bump other users of the VC's capacity if necessary
  - The VC can use other idle capacity
[Diagram: five example VCs, each with its own work queue and guaranteed capacity: VC-A 100 hi-pri PNs, VC-B 500, VC-C 20, VC-D 1000, VC-E 350.]
13 Outline
- Introduction
- Cosmos Overview
- The Structured Streams Project
- Some Other Exciting Projects
- Conclusion
14 Introducing Structured Streams
- Cosmos currently supports streams
  - An unstructured byte stream of data
  - Created by append-only writing to the end of the stream
- Structured streams are streams with metadata
  - Metadata defines column structure and affinity/clustering information
- Structured streams simplify extractors and outputters
  - A structured stream may be imported into SCOPE without an extractor
- Structured streams offer performance improvements
  - Column features allow for processing optimizations
  - Affinity management can dramatically improve performance
  - Key-oriented features offer (sometimes very significant) access performance improvements
[Diagram: today's streams (unstructured) are just a sequence of bytes; new structured streams add metadata and record-oriented access; see the sketch below.]
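To make this concrete, here is a minimal sketch of writing and then reading a structured stream, using the SSTREAM syntax from later public SCOPE papers; the paths, columns, and the DefaultTextExtractor name are illustrative assumptions:

    // Job 1: read an unstructured stream with an extractor, then write the
    // rowset as a structured stream whose metadata travels with the data.
    pages = EXTRACT url, title, score
            FROM "/data/pages.txt"
            USING DefaultTextExtractor;

    OUTPUT pages
    TO SSTREAM "/data/pages.ss"
    CLUSTERED BY url
    SORTED BY url;

    // Job 2: read the structured stream directly; no extractor is needed
    // because the column metadata is stored inside the stream itself.
    pages2 = SSTREAM "/data/pages.ss";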
15 Today's Use of Extractors and Outputters
- Extractors: programs that input data and supply metadata
- Outputters: take SCOPE data and create a byte stream for storage
  - This discards metadata known to the system

A typical job reads an unstructured stream A through an extractor, processes it in SCOPE (with metadata, structure, and relational operators), and writes an unstructured stream D through an outputter:

    source = EXTRACT col1, col2 FROM A;
    Data = SELECT * FROM source;
    <process Data>
    OUTPUT Data TO D;
16 Metadata, Streams, Extractors, and Outputters
- SCOPE has metadata for the data it is processing
  - Extractors provide metadata as they suck up unstructured streams
  - Processing the SCOPE queries ensures metadata is preserved
  - The new results may have different metadata than the old; SCOPE knows the new metadata
- SCOPE writes structured streams
  - The internal information used by SCOPE is written out as metadata
- SCOPE reads structured streams
  - Reading a structured stream allows later jobs to see the metadata
- The representation of a structured stream on disk is only visible to SCOPE!
[Diagram: unstructured stream A enters through an extractor; structured streams B and C flow directly in and out of SCOPE with their metadata; unstructured output D goes through an outputter. Note: Cosmos has no notion of metadata for D; only the outputter knows.]
17 Streams, Metadata, and Increased Performance
- By adding metadata (describing the stream) into the stream, we can provide performance improvements:
  - Cluster-key access: random reads of records identified by key
  - Partitioning and affinity: data to be processed together (sometimes across multiple streams) can be placed together for faster processing
- Metadata for a structured stream is kept inside the stream
  - The stream is a self-contained unit
  - A structured stream is still an unstructured stream (plus some stuff)
18 Cluster-Key Lookup
- Cluster-key indices make a huge performance improvement
  - Today: if you want a few records, you must process the whole stream
  - Structured streams: directly access the records by cluster-key index
- How it works:
  - Cluster-key lookup is implemented by indexing information contained in the metadata inside the stream
  - The records must be stored in cluster-key order to use cluster-key lookup
  - Cosmos manages the index, which is generated at structured-stream creation
[Diagram: a lookup for key D consults the in-stream index (entries A, E, J, N, Q, W) to locate the portion of the sorted stream holding records A through D.]
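Expressed in SCOPE terms, a lookup of a few records might look like the sketch below; because the stream is clustered and sorted by the key, the index in the stream's metadata lets the job read only the extents that can contain the key instead of scanning the whole stream (the path and key value are illustrative assumptions):

    // Targeted read: the in-stream index turns this into a seek,
    // not a full scan of the stream.
    pages = SSTREAM "/data/pages.ss";    // clustered by url, sorted by url
    one   = SELECT url, title
            FROM pages
            WHERE url == "http://example.com/";
    OUTPUT one TO "/my/lookup.result";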
19 Implementing Partitioning and Affinity
- Joins across streams can be very expensive
  - Network traffic is a major expense when joining large datasets together
  - Placing related data together can dramatically reduce processing cost
- We affinitize data when we believe it is likely to be processed together
  - Affinitization places the data close together
- If we want affinity, we create partitions as we create a structured stream
  - A partition is a subset of the stream intended to be affinitized together
[Diagram: affinitized partitions from multiple streams are stored close together; see the sketch below.]
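A sketch of how this might be set up (stream names and columns are illustrative assumptions): two structured streams are partitioned on the same key when they are created, so corresponding partitions can be affinitized and a later join on that key can run partition by partition without reshuffling data across the network.

    // Assume webTable and clickStream are rowsets produced earlier in
    // the script (e.g., via EXTRACT from unstructured inputs).
    // Create two structured streams with matching partitioning.
    OUTPUT webTable
    TO SSTREAM "/data/web.ss"
    CLUSTERED BY url SORTED BY url;

    OUTPUT clickStream
    TO SSTREAM "/data/clicks.ss"
    CLUSTERED BY url SORTED BY url;

    // A later job joins the two on url; with affinitized partitions the
    // join is local to each partition pair.
    w = SSTREAM "/data/web.ss";
    c = SSTREAM "/data/clicks.ss";
    joined = SELECT w.url, COUNT(*) AS clicks
             FROM w INNER JOIN c ON w.url == c.url
             GROUP BY w.url;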
20 Case Study 1: Aggregation

    SELECT GetDomain(URL) AS Domain,
           SUM(MyNewScoreFunction(A, B, ...)) AS TotalScore
    FROM Web-Table
    GROUP BY Domain;
    SELECT TOP 100 Domain ORDER BY TotalScore;

- Over unstructured datasets this is super expensive
- Over structured datasets (sstreams partitioned by URL, sorted by URL) it is much more efficient, with no shuffling of data across the network
21 Case Study 2: Selection

    SELECT URL, feature1, feature2
    FROM Web-Table
    WHERE URL == ...;

- Over unstructured datasets this requires massive data reads
- Over structured datasets (sstreams partitioned by URL, sorted by URL), the partition-range metadata lets the job judiciously choose partitions and push the predicate close to the data
22 Case Study 3: Join Multiple Datasets

    SELECT URL, COUNT(*) AS TotalClicks
    FROM Web-Table AS W, Click-Stream AS C
    WHERE GetDomain(W.URL) == ...
      AND W.URL == C.URL
      AND W.Outlinks > 10
    GROUP BY URL;
    SELECT TOP 100 URL ORDER BY TotalClicks;

- Over unstructured datasets this is super expensive and requires massive data reads
- Over structured datasets (sstreams partitioned by URL, sorted by URL), targeted partitions and affinitized location make it much more efficient, with no shuffling of data across the network
23 Outline
- Introduction
- Cosmos Overview
- The Structured Streams Project
- Some Other Exciting Projects
- Conclusion
24 Reliable Pub-Sub Event Processing
- Cosmos will add high-performance pub-sub event processing
- Publications receive append-only events
- Subscriptions define the binding of publications to event-processing app code
- Publications and subscriptions are designed to handle many tens of thousands of events per second
  - Massively partitioned publications
  - Cosmos-managed pools of event processors with automatic load balancing
- Events may be appended to publications by other event processors or by external applications feeding work into Cosmos
[Diagram: a publication feeds a subscription, which drives event-processor app code.]
25 High-Performance Event Processing
- Event processors (user application code) may:
  - Read and update records within tables
  - Append to publications
- Each event will be consumed in a transaction that is atomic with its table and publication changes
  - Transactions may touch any record(s) in the tables
  - These may be running on thousands of computers
[Diagram: an event processor consumes Event-X from Pub-A via a subscription, reads Rec-B1 and Rec-C1 and updates Rec-B2 and Rec-C2 in Table-B and Table-C, and appends Event-X and Event-Y to Pub-D, all within one atomic transaction boundary.]
26 Combining Random and Sequential Processing
- Random processing:
  - Event-processor applications may be randomly reading and updating very large tables with extremely large throughput
  - Applications external to Cosmos may access the tables for random reads and updates
  - Transactions control atomic updates by event processors
  - Changes are accumulated as deltas, visible to other event processors as soon as the transaction commits
- Sequential processing:
  - Massively parallel jobs may read consistent snapshots of the same tables being updated by event processors
- There are very interesting optimization tradeoffs in the storage, placement, and representation of data for sequential versus random access
  - The use of SSDs offers very interesting opportunities
  - Of course, there's not much SSD compared to the size of the data we manage
27 Outline
- Introduction
- Cosmos Overview
- The Structured Streams Project
- Some Other Exciting Projects
- Conclusion