The Democratization of Big Data

Sean Fahey*

In recent years, it has become common for discussions about managing and analyzing information to reference data scientists using the cloud to analyze big data. Indeed, these terms have become so ubiquitous in discussions of data processing that they are covered in popular comic strips like Dilbert and tracked on Gartner's Hype Cycle. 1 The Harvard Business Review even labeled data scientist the "sexiest job of the 21st century." 2 The goal of this paper is to demystify these terms and, in doing so, provide a sound technical basis for exploring the policy challenges of analyzing large stores of information for national security purposes.

It is worth beginning by proposing working definitions for these terms before exploring them in more detail. One can spend much time and effort developing firm definitions (it took the National Institute of Standards and Technology several years and sixteen versions to build consensus around the definition of cloud computing in its Special Publication on the subject 3); the purpose here is to provide definitions that will be useful in furthering discussions of policy implications. Rather than defining big data in absolute terms (a task made nearly impossible by the rapid pace of advancements in computing technologies), one can define big data as a collection of data so large that it exceeds one's capacity to process it in an acceptable amount of time with available tools. This difficulty in processing can be a result of the data's volume (e.g., its size as measured in petabytes 4), its velocity (e.g., the number of new data elements added each second), or its variety (e.g., the mix of different types of data, including structured and unstructured text, images, videos, etc.). 5 Examples abound in the commercial and scientific arenas of systems managing massive quantities of data.
YouTube users upload over one hundred hours of video every minute, 6 Wal-Mart processes more than one million transactions each hour, and Facebook stores, accesses, and analyzes more than thirty petabytes

* DHS Programs Manager, Applied Physics Lab, and Vice Provost for Institutional Research, The Johns Hopkins University. © 2014, Sean Fahey.
1. Scott Adams, Dilbert, DILBERT (July 29, 2012); Gartner's 2013 Hype Cycle for Emerging Technologies Maps Out Evolving Relationship Between Humans and Machines, GARTNER (Aug. 19, 2013).
2. Thomas H. Davenport & D. J. Patil, Data Scientist: The Sexiest Job of the 21st Century, HARV. BUS. REV., Oct. 2012.
3. Nat'l Inst. of Standards & Tech., Final Version of NIST Cloud Computing Definition Published, NIST (Oct. 25, 2011).
4. One petabyte is equal to one million gigabytes.
5. Edd Dumbill, What is Big Data?, O'REILLY (Jan. 11, 2012).
6. Statistics, YOUTUBE.
326 JOURNAL OF NATIONAL SECURITY LAW & POLICY [Vol. 7:325

of user-generated data. 7 In scientific applications, the Large Hadron Collider generates more than fifteen petabytes of data annually, which are analyzed in the search for new subatomic particles. 8 Looking out into space rather than inward into particles, the Sloan Digital Sky Survey mapped more than a quarter of the sky, gathering measurements for more than 500 million stars and galaxies. 9 In the national security environment, increasingly high-quality video and photo sensors on unmanned aerial vehicles (UAVs) are generating massive quantities of imagery for analysts to sift through to find and analyze targets. For homeland security, the recent Boston Marathon bombing investigation demonstrated both the challenge and the potential utility of being able to quickly sift through large volumes of video data to find a suspect of interest.

While the scale of the data being collected and analyzed might be new, the challenge of finding ways to analyze large datasets has been with us for at least a century. The modern era of data processing could be considered to start with the 1890 census, when the advent of punch card technology allowed the decennial census to be completed in one year rather than eight. 10 World War II spurred the development of code-breaking and target-tracking computers, which further advanced the state of practice in rapidly analyzing and acting upon large volumes of data. 11 The Cold War, along with commercial interests, further fueled demand for increasingly high-performance computers that could solve problems ranging from fluid dynamics and weather to space science, stock trading, and cryptography. For decades the United States government has funded research to accelerate the development of high performance computing systems that could address these challenges.
During the 1970s and 1980s this investment yielded the development and maturation of supercomputers built around specialized hardware and software (e.g., the Cray-1). In the 1990s, a new generation of high performance computers emerged, based not on specialized hardware but on clusters of mass-market commodity PCs (e.g., Beowulf clusters). 12 This cluster computing approach sought to achieve high performance without the need for, and cost of, specialized hardware. The development and growth of the internet in the 1990s and 2000s led to a new wave of companies, including Google, Amazon, Yahoo and Facebook, that captured and needed to analyze data on a scale that had previously been infeasible. These companies sought to understand the relationships among pieces of data, such as the links between webpages or the purchasing

7. A Comprehensive List of Big Data Statistics, WIKIBON BLOG (Aug. 1, 2012).
8. Computing, CERN.
9. Sloan Digital Sky Survey, SDSS.
10. Uri Friedman, Anthropology of an Idea: Big Data, FOREIGN POLICY, Nov. 2012, at 30.
11. Id.
12. See Thomas Sterling & Daniel Savarese, A Coming of Age for Beowulf-Class Computing, in 1685 LECTURE NOTES IN COMPUTER SCIENCE: EURO-PAR '99 PARALLEL PROCESSING PROCEEDINGS 78 (1999).
patterns of individuals, and to use that knowledge to drive their businesses. Google's quest for a better search engine led its index of web pages to grow from one billion pages in June 2000 to three billion in December 2001 to eight billion in November 2004. 13 This demand for massive-scale data processing led to a revolution in data processing and the birth of the modern big data era. These companies developed scalable approaches to managing data that built upon five trends in computing and business. The result of their efforts, in addition to the development of several highly profitable internet companies, was the democratization of big data analysis: a shift which made big data analysis available not only to those who had the money to afford a supercomputer, or the technical skill to develop, program, and maintain a Beowulf cluster, but to a much wider audience. The democratization of big data analysis was possible because of multiple independent, ongoing advances in computing, including the evolution of computer hardware, new computer architectures, new operating software and programming languages, the open source community, and new business models.

The growth in computing capability begins with advances at the chip and storage device level. For nearly fifty years, the semiconductor industry has found ways to consistently deliver on Gordon Moore's 1965 prediction that the number of components on an integrated circuit would double every year or two. This has led to the development of increasingly powerful computer chips and, by extension, computers. At the same time, manufacturers of storage devices like hard drives have also achieved exponential increases in the density of storage along with exponential decreases in its cost. 14
"Since the introduction of the disk drive in 1956, the density of information it can record has swelled from a paltry 2,000 bits to 100 billion bits (gigabits), all crowded in the small space of a square inch. That represents a 50-million-fold increase." 15

This increase in computing power and storage capacity has outstripped the needs of most individuals and programs and led to a second key innovation underlying modern big data: virtualization. Virtualization is the process of using one physical computer to host multiple virtual computers. Each virtual computer operates as though it were its own physical computer even though it is actually running on shared or simulated hardware. For example, rather than having five web servers each operating at 10% capacity, one could run all five as virtual machines on one physical server operating at 50% capacity. This shift from physical to virtual machines has created the ability to easily add new storage or processing capabilities on demand and to modify the arrangement of those computers virtually, rather than having to physically run new network cabling. This has led

13. Our History in Depth, GOOGLE.
14. Chip Walters, Kryder's Law, SCI. AM., Aug. 2005, at 32; see also Matt Komorowski, A History of Storage Cost, MKOMO BLOG (graphing the decrease in hard drive cost per gigabyte over time).
15. Walters, supra note 14, at 32.
to the growth of cloud computing, which NIST defines as "a model for enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction." 16 Providers such as Amazon have created services (e.g., Amazon Web Services) that allow individuals and companies to benefit from cloud computing by purchasing storage and processing as required. Rather than investing up front in a datacenter and computing hardware, companies can now purchase computing resources as a utility when needed for their business.

While increased computing power, virtualization, and the development of cloud computing business models were fundamental to the advent of the current big data era, they were not sufficient. As late as 2002, analysis of large quantities of data still required specialized supercomputing or expensive enterprise database hardware. Advances in cluster computing showed promise but had not yet been brought to full commercial potential. Google changed this between 2003 and 2006 with the publication of three seminal papers that together laid the foundation for the current era of big data.

Google was conceived from its founding to be a massively scalable web search company. 17 It needed to be able to index billions of webpages and analyze the connections between those pages. That required finding new web pages, copying their contents onto Google servers, identifying the content of the pages, and divining the degree of authority of a page. The PageRank algorithm developed by Larry Page laid out a mathematical approach to indexing the web but required a robust information backbone to allow scaling to internet size.
Google's first step in addressing this scalability challenge was, around 2000, to commit to using commodity computer hardware rather than specialized computer hardware. In doing so, the company assumed that failures of computers and disk drives would be the norm, and so it had to design a file system with constant monitoring, error detection, fault tolerance, and automatic recovery. Google developed a distributed file system (called the Google File System) that accomplished this by replicating data across multiple computers so that the data would not be lost if any one computer or hard drive were to fail. 18 The Google File System managed the process of copying files and maintaining awareness of the copies in the system so that programmers didn't have to.

16. NAT'L INST. OF STANDARDS & TECH., THE NIST DEFINITION OF CLOUD COMPUTING, SPECIAL PUBLICATION, at 2 (2011).
17. Sergey Brin & Lawrence Page, The Anatomy of a Large-Scale Hypertextual Web Search Engine, COMP. NETWORKS & ISDN SYS., Apr. 1998, at 107.
18. SANJAY GHEMAWAT, HOWARD GOBIOFF & SHUN-TAK LEUNG, THE GOOGLE FILE SYSTEM (2003).
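The replication idea at the heart of the Google File System can be illustrated with a minimal sketch. The class and method names below are invented for illustration; a real GFS deployment involves a master server, chunkservers, leases, and re-replication logic far beyond this toy, but the core fault-tolerance principle (keep several copies, read from any survivor) is the same:

```python
import random

class ReplicatedStore:
    """Toy illustration of GFS-style replication: each chunk of data is
    written to several independent nodes so one failure loses nothing."""

    def __init__(self, num_nodes=5, replicas=3):
        self.nodes = [dict() for _ in range(num_nodes)]  # each node's local chunk store
        self.replicas = replicas
        self.locations = {}  # chunk id -> node indices holding a copy

    def put(self, chunk_id, data):
        # Choose distinct nodes for the replicas and record where they live.
        chosen = random.sample(range(len(self.nodes)), self.replicas)
        for n in chosen:
            self.nodes[n][chunk_id] = data
        self.locations[chunk_id] = chosen

    def fail_node(self, n):
        # Simulate a crashed machine: its local store is wiped.
        self.nodes[n] = {}

    def get(self, chunk_id):
        # Read from any surviving replica, as clients directed by the
        # master would do in the real system.
        for n in self.locations[chunk_id]:
            if chunk_id in self.nodes[n]:
                return self.nodes[n][chunk_id]
        raise IOError("all replicas lost")

store = ReplicatedStore()
store.put("chunk-0", b"web page contents")
store.fail_node(store.locations["chunk-0"][0])  # kill one replica holder
print(store.get("chunk-0"))  # data survives the failure
```

Because the store tracks replica locations itself, the calling program never needs to know that a machine died, which is exactly the burden the Google File System lifted from its programmers.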
Google's second innovation to address scalability was to develop a new programming model to allow processing of the data in the distributed file system. Traditionally, the scalability of high performance computing systems was limited by the ability to move large quantities of data from their storage location to the computer's processor. Recognizing this limitation, Google adopted a parallel computing paradigm that pushed the (much smaller) program out to where the data was stored rather than moving the data to a processing node. Google developed a programming model called Map/Reduce that allowed for the development of programs that could run across the distributed data in the Google File System. 19 Finally, Google developed a large-scale distributed database called BigTable that could store and analyze data at scales exceeding the capability of conventional relational databases. 20 This final piece, in conjunction with the Google File System and Map/Reduce, provided Google with the infrastructure not only to provide search of indexed webpages but also to provide other offerings at massive scale, such as Google Earth and Google Analytics.

While these innovations were profound and highly profitable for Google, it was their publication as a series of papers from 2003 to 2006 that led directly to the democratization of big data analysis. The papers were instrumental in guiding the thinking and development of other technology innovators and companies such as Yahoo! and Facebook. One innovator, Doug Cutting, adapted the concepts in the Google papers into an open source search engine. Later, with the support of Yahoo!, this project became the open source Hadoop project at the Apache Software Foundation. 21 Since its launch, Hadoop has become a common tool for the analysis of big data and is used by many web companies (e.g., Facebook, eBay, Twitter) to support their data analysis.
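The Map/Reduce model described above (run a small "map" function where each piece of data lives, then "reduce" the intermediate results into an answer) can be sketched in a few lines of Python. The canonical word-count example is shown; a real Hadoop or Google cluster runs the same two phases across thousands of machines, and the function names here are illustrative rather than any framework's API:

```python
from collections import defaultdict
from itertools import chain

def map_phase(document):
    # The "map" step runs where each document is stored and emits
    # intermediate (key, value) pairs, here (word, 1).
    return [(word, 1) for word in document.split()]

def reduce_phase(pairs):
    # The "reduce" step combines all values that share a key.
    totals = defaultdict(int)
    for word, count in pairs:
        totals[word] += count
    return dict(totals)

# Documents spread across (simulated) storage nodes.
documents = ["big data big analysis", "data to knowledge"]
intermediate = chain.from_iterable(map_phase(d) for d in documents)
print(reduce_phase(intermediate))
# {'big': 2, 'data': 2, 'analysis': 1, 'to': 1, 'knowledge': 1}
```

The design choice that makes this scale is that each map call touches only its own document, so the expensive step parallelizes with no coordination; only the much smaller intermediate pairs move across the network to the reducers.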
In 2011, Yahoo's Hadoop infrastructure consisted of 42,000 nodes and hundreds of petabytes of storage. 22 Even with the advent of an open source package built on Google's innovative ideas and on decades of advances in hardware development and high performance computing, one critical step remained in the process of democratizing big data analysis: making these advancements accessible. Multiple companies sprang up around the open source Hadoop ecosystem to provide the technical support needed to apply these approaches to the wide range of business challenges in the marketplace. Companies like Cloudera, Hortonworks, and MapR have helped make these powerful technologies available to companies that had data but not the technological workforce to analyze it. Collectively, these advances have moved the analysis of large datasets from a

19. Jeff Dean & Sanjay Ghemawat, MapReduce: Simplified Data Processing on Large Clusters, COMM. OF THE ACM, Jan. 2008.
20. FAY CHANG ET AL., BIGTABLE: A DISTRIBUTED STORAGE SYSTEM FOR STRUCTURED DATA (2006).
21. HADOOP.
22. Derrick Harris, The History of Hadoop: From 4 Nodes to the Future of Data, GIGAOM (Mar. 4, 2013).
high-cost, specialized industry fifteen years ago to one that is accessible and affordable to any company. The impact on the business community has been profound, with the big data marketplace for hardware, software, and services topping $11 billion in 2012 and projected to grow at a rate greater than 30% over the coming four years. 23

Stepping back from the hype, several fundamental changes have resulted from the democratization of big data. First, the proliferation of sensors and the Internet has led to many new streams of data. There is simply more data available to be collected and analyzed now than in years past. Second, the cost of storage is now often lower than the cost of deciding what to throw away. This fact, combined with the belief that some data, while seemingly unimportant when collected, can yield great value when connected to other data in the future, will lead companies increasingly to default to storing data rather than discarding it. Finally, cloud-based big data providers offer ways to begin analyzing big data with no up-front capital investment, providing accessible, scalable methods for storage and analysis.

Though less apparent in revenue figures, the impact on the national security community has been profound as well, since these new big data analysis technologies offer new ways to address the challenges of dealing with large datasets in the defense, intelligence, and homeland security realms. The national security community faces data analysis challenges in many domains that require sifting through large volumes of data to find a signal of interest. Whether it is finding evidence of money laundering in financial transaction data, suspect shipping containers in transportation manifest data, or malicious cyber activity in internet log data, the government faces challenges that require storing and analyzing massive quantities of data.
The democratization of big data puts these new analytic tools within reach of government agencies as well as companies and offers the possibility of improved mission performance. In other words, one area where democratization may affect national security is cost. Given the current budget-constrained environment, anything that is becoming more cost effective will be welcomed with open arms. As noted above, the major news regarding the democratization of big data is that it is now much cheaper to store and analyze data. Small businesses can now afford analytical tools, services, and experts where before such storage and analysis was prohibitively expensive. One of the benefits, then, may be that the U.S. Government, like the small business owner, can store and analyze more data at lower cost. This is magnificent news to those who are wrestling with budgets. Both the National Security Agency and the Central Intelligence Agency have publicly cited the advent of big data analytic techniques as core to their technology roadmaps going forward. "Revolutionize Big Data Exploitation" is

23. Jeff Kelly et al., Big Data Vendor Revenue and Market Forecast, WIKIBON (Oct. 19, 2013).
one of the "Four Big Bets" that the CIA is currently making in its technology strategy. 24 Similarly, NSA's Chief Information Officer, Lonny Anderson, points out that the NSA is at the forefront of the intelligence community's efforts to implement big data analytics and an underlying cloud computing infrastructure. 25

One of the purposes of intelligence, particularly defense intelligence, is to inform commanders' decisions by providing insightful intelligence. DoD Joint Publication 2-0 on Joint Intelligence lays out the intelligence analysis process, by which raw data is collected from the operational environment by sensors. 26 This data is, in turn, distilled through processing and exploitation into information; finally, the information is refined through analysis and production into intelligence. 27 That analytical process is not new. The military was using data analytics within the context of the defense intelligence cycle long before big data and cloud computing services became democratized. Now, thanks to these technological developments, the defense intelligence cycle can deliver better intelligence to warfighters and commanders at lower cost. For instance, ground troops can plan mounted and dismounted patrol routes based on queries of attack types in an area of operations. Anti-piracy task forces in the Indian Ocean could, using similar analytics, plan more efficient coverage of that vast space with queries and analysis of the types and times of pirate activities. Remotely piloted vehicle (RPV) coverage of border areas or drug routes could be improved both domestically and abroad. In sum, big data democratization affects the battlefield, not just the budget, because technological tools are more accessible now than they were a few years ago.
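A patrol-planning query of the kind described above can be as simple as an aggregation over incident records. The sketch below uses invented field names and fabricated sample data purely for illustration; an operational system would run such queries over millions of records on a Hadoop-style cluster rather than an in-memory list:

```python
from collections import Counter

# Hypothetical incident records: (attack_type, hour_of_day, district).
incidents = [
    ("IED", 7, "north"), ("small_arms", 21, "north"),
    ("IED", 8, "north"), ("IED", 7, "south"),
]

# Count attack types observed in the "north" district to inform
# route selection for mounted and dismounted patrols there.
by_type = Counter(kind for kind, hour, district in incidents
                  if district == "north")
print(by_type.most_common())  # [('IED', 2), ('small_arms', 1)]
```

The same pattern, filter on an area of interest and tally by category or time window, covers the pirate-activity and border-coverage examples as well; only the fields being grouped change.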
Taken together, advances in big data offer the national security community, and the Defense Department in particular, the capability to both distill and relate information more efficiently than could be done with conventional methods. Whether it is identifying objects of interest in RPV video streams or connecting disparate pieces of information to allow the detection of potential terrorists, the national security community has high hopes for the operational benefits of big data.

24. Matt Sledge, CIA's Gus Hunt on Big Data: "We Try To Collect Everything and Hang On to It Forever," HUFFINGTON POST (Mar. 20, 2013).
25. George I. Seffers, Big Data in Demand for Intelligence Community, AFCEA (Jan. 4, 2013).
26. JOINT CHIEFS OF STAFF, JOINT PUB. 2-0, JOINT INTELLIGENCE (2007).
27. "Raw data by itself has relatively limited utility. However, when data is collected from a sensor and processed into an intelligible form, it becomes information and gains greater utility. Information on its own is a fact or a series of facts that may be of utility to the commander, but when related to other information already known about the operational environment and considered in the light of past experience regarding an adversary, it gives rise to a new set of facts, which may be termed intelligence. The relating of one set of information to another, or the comparing of information against a database of knowledge already held, and the drawing of conclusions by an intelligence analyst, is the foundation of the process by which intelligence is produced." Id. at I-2.
Analytics in the Cloud Peter Sirota, GM Elastic MapReduce Data-Driven Decision Making Data is the new raw material for any business on par with capital, people, and labor. What is Big Data? Terabytes of
Big Data and Analytics: Getting Started with ArcGIS Mike Park Erik Hoel Agenda Overview of big data Distributed computation User experience Data management Big data What is it? Big Data is a loosely defined
WHITE PAPER CDH AND BUSINESS CONTINUITY: An overview of the availability, data protection and disaster recovery features in Hadoop Abstract Using the sophisticated built-in capabilities of CDH for tunable
CONTEXTUAL ADVERTISEMENT MINING BASED ON BIG DATA ANALYTICS A.Divya *1, A.M.Saravanan *2, I. Anette Regina *3 MPhil, Research Scholar, Muthurangam Govt. Arts College, Vellore, Tamilnadu, India Assistant
white paper Big Data for Small Business Why small to medium enterprises need to know about Big Data and how to manage it Sponsored by: Big Data is the ability to collect information from diverse sources
Large scale processing using Hadoop Ján Vaňo What is Hadoop? Software platform that lets one easily write and run applications that process vast amounts of data Includes: MapReduce offline computing engine
CTOlabs.com White Paper: Datameer s User-Focused Big Data Solutions May 2012 A White Paper providing context and guidance you can use Inside: Overview of the Big Data Framework Datameer s Approach Consideration
Big Data Analytics Lucas Rego Drumond Information Systems and Machine Learning Lab (ISMLL) Institute of Computer Science University of Hildesheim, Germany Big Data Analytics Big Data Analytics 1 / 36 Outline
Big Data: Opportunities for the Dental Benefits Industry Joel Reichert - VP, Data Strategy Herschel Reich - VP, Payer Consulting September 16, 2014 Big Data: Opportunities for the Dental Benefits Industry
INTERNATIONAL JOURNAL OF PURE AND APPLIED RESEARCH IN ENGINEERING AND TECHNOLOGY A PATH FOR HORIZING YOUR INNOVATIVE WORK A REVIEW ON BIG DATA MANAGEMENT AND ITS SECURITY PRUTHVIKA S. KADU 1, DR. H. R.
A REVIEW PAPER ON THE HADOOP DISTRIBUTED FILE SYSTEM Sneha D.Borkar 1, Prof.Chaitali S.Surtakar 2 Student of B.E., Information Technology, J.D.I.E.T, firstname.lastname@example.org Assistant Professor, Information
Community Driven Apache Hadoop Apache Hadoop Patterns of Use April 2013 2013 Hortonworks Inc. http://www.hortonworks.com Big Data: Apache Hadoop Use Distilled There certainly is no shortage of hype when
June, 2012 Table of Contents EXECUTIVE SUMMARY... 3 INTRODUCTION... 3 VIRTUALIZING APACHE HADOOP... 4 INTRODUCTION TO VSPHERE TM... 4 USE CASES AND ADVANTAGES OF VIRTUALIZING HADOOP... 4 MYTHS ABOUT RUNNING
Government Technology Trends to Watch in 2014: Big Data OVERVIEW The federal government manages a wide variety of civilian, defense and intelligence programs and services, which both produce and require
Trends and Research Opportunities in Spatial Big Data Analytics and Cloud Computing NCSU GeoSpatial Forum Siva Ravada Senior Director of Development Oracle Spatial and MapViewer 2 Evolving Technology Platforms
Laurence Liew General Manager, APAC Economics Is Driving Big Data Analytics to the Cloud Big Data 101 The Analytics Stack Economics of Big Data Convergence of the 3 forces Big Data Analytics in the Cloud
Accelerating and Simplifying Apache Hadoop with Panasas ActiveStor White paper NOvember 2012 1.888.PANASAS www.panasas.com Executive Overview The technology requirements for big data vary significantly
The API Revolution: What it means for Data Center Infrastructure Management The API Revolution Most, likely you are familiar with application programming interfaces (APIs) and don t even realize it - in
Microsoft Big Data Solution Brief Contents Introduction... 2 The Microsoft Big Data Solution... 3 Key Benefits... 3 Immersive Insight, Wherever You Are... 3 Connecting with the World s Data... 3 Any Data,
Here comes the flood Tools for Big Data analytics Guy Chesnot -June, 2012 Agenda Data flood Implementations Hadoop Not Hadoop 2 Agenda Data flood Implementations Hadoop Not Hadoop 3 Forecast Data Growth
CSE 590: Special Topics Course ( Supercomputing ) Lecture 10 ( MapReduce& Hadoop) Rezaul A. Chowdhury Department of Computer Science SUNY Stony Brook Spring 2016 MapReduce MapReduce is a programming model
FALL 2012 VOL.54 NO.1 Thomas H. Davenport, Paul Barth and Randy Bean How Big Data is Different Brought to you by Please note that gray areas reflect artwork that has been intentionally removed. The substantive
Blazent IT Data Intelligence Technology: From Disparate Data Sources to Tangible Business Value White Paper The phrase garbage in, garbage out (GIGO) has been used by computer scientists since the earliest
With Saurabh Singh email@example.com The Ohio State University February 11, 2016 Overview 1 2 3 Requirements Ecosystem Resilient Distributed Datasets (RDDs) Example Code vs Mapreduce 4 5 Source: [Tutorials
Knowledgent White Paper Series Big Data and Healthcare Payers WHITE PAPER Summary With the implementation of the Affordable Care Act, the transition to a more member-centric relationship model, and other
Volume 4, Issue 1, January 2014 ISSN: 2277 128X International Journal of Advanced Research in Computer Science and Software Engineering Research Paper Available online at: www.ijarcsse.com Transitioning