1 Data Intensive Science Education
Thomas J. Hacker, Associate Professor, Computer & Information Technology, Purdue University, West Lafayette, Indiana, USA
Gjesteprofessor (Visiting Professor), Department of Electrical Engineering and Computer Science, University of Stavanger, Norway
EU-China-North America Workshop on HPC Cloud and Big Data, June 20, 2013, University of Stavanger, Norway
2 Introduction and Motivation
Theory and experiment (1800s)
Computational simulation: the third leg of science, for the past 50 years or so (since the 1950s)
Data: 21st-century science, the fourth leg of science
Researchers are flooded with data: tremendous quantities at multiple scales, difficult to collect, store, and manage
How can we distill meaningful knowledge from data?
3 Data Is the 4th Paradigm
Science is producing an avalanche of high-resolution digital data
All (or most) of the data needs to be accessible over a long period of time
Much of the data is not reproducible
Example: in the NEES project, the structure or sample is destroyed through testing and is very expensive to rebuild for more tests
4 Data, Data Everywhere
We are surrounded by data that we want, but it is difficult to find the information that we need
"Water, water, every where, / Nor any drop to drink." (Samuel Taylor Coleridge, The Rime of the Ancient Mariner)
Private, shared, and public data repositories: files on your computer, group documents and files, experimental results, published papers
Data are scattered across many systems and devices: a personal computer, old diskettes in a box, several current systems, old computer systems
Image: The Rime of the Ancient Mariner, Plate 32: The Pilot, by Gustave Doré
5 Need for Data Education
Data is the 4th paradigm of science and engineering
We are losing valuable data every day
The techniques we were taught for maintaining a lab notebook have not been effectively transferred to computer-based data collection and registration systems
So much data is available and collected today that it is no longer possible to keep it all on paper
6 Two Examples of Data Intensive Science
Two large-scale science and engineering projects illustrate the problems related to data intensive science:
The National Science Foundation George E. Brown, Jr. Network for Earthquake Engineering Simulation (NEES); Purdue operates the headquarters for NEEScomm, the community of NEES research facilities
The Compact Muon Solenoid (CMS) project; Purdue operates a Tier-2 CMS center
7 NSF Network for Earthquake Engineering Simulation (NEES)
Safer buildings and civil infrastructure are needed to reduce damage and loss from earthquakes and tsunamis
To facilitate research to improve the seismic design of buildings and civil infrastructure, the National Science Foundation established NEES
NEES objectives:
Develop a national, multi-user research infrastructure to support research and innovation in earthquake and tsunami loss reduction
Create an educated workforce in hazard mitigation
Conduct broader outreach and lifelong learning activities
8 Vision for NEES
Facilitate access to the world's best integrated network of state-of-the-art physical simulation facilities
Build a cyber-enabled community that shares ideas, data, and computational tools and models
Promote education and training for the next generation of researchers and practitioners
Cultivate partnerships with other organizations to disseminate research results, leverage cyberinfrastructure, and reduce risk by transferring results into practice
9 NEES Research Facilities
NEES has a broad set of experimental facilities, located at 14 sites across the United States; each type of equipment produces unique data
Shake tables
Tsunami wave basin
Large-scale testing facilities
Centrifuges
Field and mobile facilities
Large-displacement facility
Cyberinfrastructure
10 NEES sites (https://www.nees.org): Oregon State University; University of Minnesota; University of Illinois Urbana-Champaign; University of California, Berkeley; University of California, Davis; University at Buffalo; University of California, Santa Barbara; Cornell University; University of California, Los Angeles; Rensselaer Polytechnic Institute; University of California, San Diego; University of Nevada, Reno; University of Texas at Austin; Lehigh University
11 Large-Scale Testing Facilities
Lehigh University: reaction wall, strong floor, dynamic actuators
UC Berkeley: reconfigurable reaction wall
University of Illinois Urbana-Champaign: Multi-Axial Full-Scale Sub-Structured Testing & Simulation (MUST-SIM)
University of Minnesota: reaction walls, Multi-Axial Subassemblage Testing (MAST)
Images: University of Minnesota
12 NEEShub at Nees.org
13 Compact Muon Solenoid Project
Another example of a big data project
Two primary computational goals:
Move detector data from the Large Hadron Collider at CERN to remote sites for processing
Examine detector data for evidence of the Higgs boson
~15 PB/yr of data
The applications used by CMS are not inherently parallel; data is split up and distributed across nodes (embarrassingly parallel)
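The "split and distribute" pattern described above can be sketched in a few lines of Python. This is a toy illustration only: the chunk size, worker count, and per-chunk function are invented and have nothing to do with the real CMS software stack; the point is that each chunk is processed independently, with no communication between chunks.

```python
from multiprocessing.dummy import Pool  # thread pool, for illustration only

def split(data, chunk_size):
    """Split a dataset into independent chunks."""
    return [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]

def process(chunk):
    """Stand-in for per-chunk analysis; chunks never talk to each other."""
    return sum(chunk)

def run(data, chunk_size=4, workers=4):
    """Process every chunk in parallel, then combine the partial results."""
    with Pool(workers) as pool:
        return sum(pool.map(process, split(data, chunk_size)))
```

Because the chunks are independent, adding more workers scales the computation with essentially no coordination cost, which is what makes this class of workload "embarrassingly parallel."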
14 CMS Project Overview
CERN Large Hadron Collider (LHC): the largest particle accelerator and collider in the world, with a 17-mile-circumference tunnel
Providing evidence to support the existence of the Higgs boson
Six detector experiments at the LHC: ATLAS, CMS, LHCb, ALICE, TOTEM, LHCf
Compact Muon Solenoid (CMS): a very large solenoid with a 4 Tesla magnetic field (Earth's magnetic field is about 60 x 10^-6 Tesla)
15 CMS Detector
17 Purdue CMS Tier-2 Center Computing Infrastructure
~10,000 computing cores within the Purdue University Community Cluster program
Purdue recently (June 18) announced the Conte supercomputer, the fastest university-owned supercomputer in the United States
3 PB of disk storage running Hadoop
Sharing a 100 Gb/sec network uplink to Indianapolis and Chicago, ultimately connecting to Fermi National Accelerator Laboratory near Chicago
Provided 14% of all Tier-2 computing globally in 2012
18 Purdue CMS Tier-2 Center
Physicists from around the world submit computational jobs to Purdue
Data is copied from the Tier-1 center to Purdue storage on user request
Simulation codes also run at Purdue, with results pushed up to the Tier-1 center or to other Tier-2 centers
International data sharing: data interoperability was designed into the project from the beginning
There is one instrument (the CMS detector), which greatly simplifies the sharing and reuse of data compared with a project like NEES
19 Challenges Involved in Big Data
Performance at scale: How can we effectively match data performance with HPC capabilities? How can we ensure good reliability of these systems?
Data curation challenges: What should we preserve, how should we preserve it, and how can we ensure the long-term viability of the data?
Disciplinary sociology and cyberinfrastructure: How can we effectively promote and support the adoption and use of new technologies? How can we foster the development of new disciplinary practices focused on the long-term accessibility of data?
20 Performance at Scale
Petaflop-scale systems are now available for use by researchers
Example: the Purdue Conte system announced this week (Rmax 943 TF; Rpeak in the petaflop range)
Conte was built with 580 HP ProLiant SL250 Generation 8 (Gen8) servers, each incorporating two Intel Xeon processors and two Intel Xeon Phi coprocessors, integrated with Mellanox 56 Gb/s FDR InfiniBand
Conte has 580 servers (570 at the time of testing) with 9,120 standard cores and 68,400 Phi cores, for a total of 77,520 cores
Big data analytics coupled with petascale systems requires high-bandwidth storage systems to avoid wasteful and expensive CPU stalls
Scaling up occurs along two axes: a large volume of data (example: the CMS project) and a large variety and number of files (example: the NEES project)
21 Curation Challenges
The data production rate is tremendous: the volume of data grows over time as sensor sampling rates increase and high-definition video becomes common
Managing data transfer: the time required to upload and download data is growing, and network bottlenecks can make transfers very slow
Ensuring data integrity: filtering, cleaning, and calibration are often needed before uploading and curating data
The community also needs to retain the raw data, in case an error is made or a researcher can later distill further insights from the data
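One simple integrity safeguard implied by this slide is a fixity check: record a cryptographic digest when raw data is ingested, and recompute it later to detect silent corruption. A minimal sketch in Python (the data and function names below are invented for illustration, not part of any NEES tooling):

```python
import hashlib

def fixity(data: bytes) -> str:
    """SHA-256 digest recorded at ingest time."""
    return hashlib.sha256(data).hexdigest()

def verify(data: bytes, expected: str) -> bool:
    """Re-compute the digest later; any changed byte changes the digest."""
    return fixity(data) == expected

raw = b"sensor readings: 0.12, 0.15, 0.11"  # made-up raw instrument data
recorded = fixity(raw)                       # stored alongside the file
```

In practice the recorded digest would live in the archive's metadata so that periodic audits can confirm the stored bytes still match what was originally deposited.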
22 Curation Challenges
File type management: data is stored in files through the intermediary of an application, which means the information in the data is encoded in some kind of format
It is difficult (if not impossible) to restrict the file formats used by the research community
As these applications change (or disappear) over time, the information encoded in the data may become stranded
Risk of stranded data: when the file format cannot be precisely identified, we do not know which application can serve as the intermediary for reading the information encoded in the data; this leads to stranded data that is useless
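Format identification, the first defense against stranded data, often relies on "magic" bytes at the start of a file rather than its extension, the same idea behind tools such as file(1). A hedged sketch (the format table is deliberately tiny; real identification tools maintain registries with thousands of signatures):

```python
# Map leading "magic" bytes to a human-readable format name.
MAGIC = {
    b"\x89PNG\r\n\x1a\n": "PNG image",
    b"%PDF-": "PDF document",
    b"PK\x03\x04": "ZIP container (also DOCX/XLSX)",
}

def identify(header: bytes) -> str:
    """Identify a file format from its first bytes, ignoring the extension."""
    for magic, name in MAGIC.items():
        if header.startswith(magic):
            return name
    return "unknown (risk of stranded data)"
```

A file whose header matches no known signature is exactly the "stranded data" case the slide warns about: without a precise format identification, no application can be chosen to decode it.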
23 Curation Challenges
Linking computation with data and archived data
We will need the ability to quickly search archived data, in much more detail than what Google can deliver
How can we quickly discover, convert, and transfer archived data so that it is close to the user and to computation (especially HPC)?
We need to match data I/O capabilities with growth in the number of CPU cores and in core speed
24 Long-Term Accessibility
We have data in the NEEShub from the 1970s
Science: "Rescue of Old Data Offers Lesson for Particle Physicists" by Andrew Curry (February 2011) describes the effort to find old, almost lost data from a 1980s physics experiment
The data will need to remain viable and accessible for years into the future
25 Discipline Sociology
Sociological factors in data curation
Disciplinary differences in how data are archived, how archived data are valued, and what is worth retaining: Who determines what is worth keeping? What is the practice in the specific discipline?
International standards and practices in metadata tagging, representing numbers, and even character sets
NEES is working with partners in Japan and China; we need to determine how to represent their data in a common standard framework
Conventions for numbers differ (decimal comma vs. decimal point; lakh vs. 100,000)
Changing the behavior of scientists to value curation and long-term accessibility
26 Managing Curation at Scale
How can we efficiently use data curators' time?
NEES now has 1.8M files; what will happen in 3 more years? How can we manage 10M files with a limited curation staff?
For NEES, we are using the OAIS model as a guideline for designing a pipeline for curating NEES data
The OAIS model is proving to be a useful model for thinking about how to undertake data curation
We are developing a curation pipeline to help automate curation for the many files in the NEES Project Warehouse
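As a toy illustration of such a pipeline, and emphatically not the actual NEES implementation, curation can be modeled as a sequence of stages applied to each file record. The stage names and record fields below are invented; they loosely echo OAIS ingest steps (identify, validate, package):

```python
def identify_stage(record):
    """Guess a format label from the file name (a real pipeline would use magic bytes)."""
    record["format"] = record["name"].rsplit(".", 1)[-1].lower()
    return record

def validate_stage(record):
    """A trivial sanity check standing in for real validation."""
    record["valid"] = record["size"] > 0
    return record

def package_stage(record):
    """Route each record to the archive or to a curator's review queue."""
    record["status"] = "archived" if record["valid"] else "quarantined"
    return record

PIPELINE = [identify_stage, validate_stage, package_stage]

def curate(record):
    """Push one file record through every stage in order."""
    for stage in PIPELINE:
        record = stage(record)
    return record
```

The appeal of the pipeline shape is that automatable stages run unattended over millions of files, while only the records that fail a stage consume scarce curator time.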
27 Data Analytics
Technologies available today can be used to provide solutions to these problems: high performance computing, parallel file systems, MapReduce/Hadoop
A sustainable solution requires more than a set of technologies
An effective data cyberinfrastructure involves both sociological and technological components
What is needed to educate and train researchers to effectively learn to use new technologies?
28 Our Approach
Developing a joint research and education program in big data analytics between the University of Stavanger, Purdue University, and AMD Research
Chunming Rong, Tomasz Wlodarczyk (Stavanger); Thomas Hacker, Ray Hansen, Natasha Nikolaidis (Purdue); Greg Rodgers (AMD Research)
Funded by SIU: Strategic Collaboration on Advanced Data Analysis and Communication between Purdue University and the University of Stavanger
Developing a semester-long joint course in HPC and big data analytics, and a short summer course (to be delivered next week)
29 Planned Course Objectives
Students will learn to put modern tools to use for data analysis of large and complex data sets. Students will be able to:
Design, construct, test, and benchmark a small data processing cluster (based on Hadoop)
Demonstrate knowledge of MapReduce functionality through the development of a MapReduce program
Understand the Hadoop job tracker, task tracker, scheduling issues, communications, and resource management
Construct programs based on the MapReduce paradigm for typical algorithmic problems
Use functional programming concepts to describe data dependencies and analyze the complexity of MapReduce programs
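The MapReduce paradigm named in these objectives can be illustrated without a cluster. This single-process Python sketch mimics the three phases of the classic word-count example: map emits (key, value) pairs, shuffle groups values by key, and reduce aggregates each group. It models the paradigm only; it is not Hadoop code.

```python
from collections import defaultdict
from itertools import chain

def map_words(line):
    """Map phase: emit (word, 1) for every word in one input line."""
    return [(word.lower(), 1) for word in line.split()]

def shuffle(pairs):
    """Shuffle phase: group all emitted values by key."""
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_counts(key, values):
    """Reduce phase: aggregate one key's values into a final count."""
    return key, sum(values)

def word_count(lines):
    pairs = chain.from_iterable(map_words(line) for line in lines)
    return dict(reduce_counts(k, v) for k, v in shuffle(pairs).items())
```

In real Hadoop the shuffle is done by the framework across the network, and the map and reduce functions run on different nodes; the data flow, however, is exactly this.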
30 Planned Course Objectives
Algorithms:
Understand the algorithmic complexity of worst-case, expected-case, and best-case running time (big-O notation) and the orders of complexity (e.g., N, N^2, log N, NP-hard)
Examine a basic algorithm and identify its order of algorithmic complexity
File systems:
Describe the concepts of a distributed file system, how it differs from a local file system, and the performance of distributed file systems
Describe a parallel file system, the performance advantages possible through its use, and the inherent reliability and fault tolerance mechanisms needed for parallel file systems; examples include OrangeFS and Lustre
Understand peak and sustained bandwidth rates
Understand the differences between an RDBMS, a data warehouse, unstructured big data, and keyed files
31 Short Course Format
Lecture in the morning followed by lab in the afternoon
Labs are built on a set of desktop PCs running Hadoop in an RHEL 6 virtual machine on top of VMware
Using pfSense (an open source firewall) to create a secure network connection from the instruction site to the computers running Hadoop
We are working to refine the network and lab equipment setup based on our experiences delivering the short course next week
32 Short Course Day 1 Topics
Lecture:
Introduction and motivation for the course
History of HPC, big data, and Moore's Law; science domain areas, and problems in each of those areas that motivate the need for this work
Where are we today, and what is the projected need later? How are things driven by increases in computing power?
Definition of big data and big compute, and why we need both combined
A mixture of trends, principles, and implementations in historic context that students should understand
Parallel application types
Introduction to MapReduce; dataflow within MapReduce with a plug-in
Labs:
The hadoop command, HDFS, and Linux basics
Basic Hadoop examples from the lectures
33 Short Course Day 2 Topics
Lectures:
Introduction to MapReduce, continued; combiners; a more complex MapReduce example (search assist)
Hadoop architecture: motivation for Hadoop; Hadoop building blocks (name node, data node, etc.); fault tolerance and failures, replication, and data-aware scheduling; main components (HDFS, MapReduce, and the local, distributed, and pseudo-distributed modes); the HDFS GUI
Labs:
We will use combiners and multiple reducers to improve performance, looking at network traffic and data counters to evaluate the results
Students will evaluate the performance improvement for each optimization of a MapReduce program; advanced students will gather network and data statistics to explain why each phase improved
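The combiner idea in this day's lab can be previewed with a toy model: a combiner pre-aggregates one mapper's output locally, so far fewer (key, value) pairs have to cross the network during the shuffle. The log-like keys below are invented for illustration:

```python
from collections import defaultdict

def combine(mapper_output):
    """Locally sum values per key before the shuffle, like a Hadoop combiner."""
    local = defaultdict(int)
    for key, value in mapper_output:
        local[key] += value
    return list(local.items())

# One mapper's raw output: 1,500 pairs but only two distinct keys.
raw_pairs = [("error", 1)] * 1000 + [("ok", 1)] * 500
combined = combine(raw_pairs)
```

Here 1,500 emitted pairs shrink to 2 before the shuffle, with identical final results, since summation is associative and commutative; that reduction in shuffled data is exactly what the network-traffic counters in the lab should show.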
34 Short Course Day 3 Topics
Lectures:
Hadoop architecture, continued: comparison of HDFS with other parallel file system architectures (GoogleFS, Lustre, OrangeFS), and how Hadoop differs from these systems
Chaining MapReduce jobs
MapReduce algorithms: K-means or other algorithms
Schemas for unstructured data using Hive
Introduction to data organization: Why are we concerned about data organization? What are the impacts of poor organization on performance and correctness?
Levels of data organization: data structure, file level, cluster level, data parallelization, and organization level
How do we deal with large sequential files from a performance perspective, and how are they represented in a parallel file system (e.g., HDFS)?
Lab: Hive
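K-means is a natural example of chaining MapReduce jobs: each iteration's map step assigns points to the nearest centroid, its reduce step averages each group, and the output centroids feed the next iteration (the next "job" in the chain). A one-dimensional sketch with made-up points, modeling the data flow rather than any Hadoop API:

```python
def assign(point, centroids):
    """Map step: index of the nearest centroid for one point."""
    return min(range(len(centroids)), key=lambda i: abs(point - centroids[i]))

def kmeans_step(points, centroids):
    """Shuffle + reduce step: group points by centroid, then average each group."""
    groups = {i: [] for i in range(len(centroids))}
    for p in points:
        groups[assign(p, centroids)].append(p)
    return [sum(g) / len(g) if g else centroids[i] for i, g in groups.items()]

points = [1.0, 1.2, 0.8, 9.0, 9.5, 8.5]   # two obvious clusters near 1 and 9
centroids = [0.0, 10.0]                    # initial guesses
for _ in range(5):                         # chained "jobs": output feeds input
    centroids = kmeans_step(points, centroids)
```

Because each iteration is a full MapReduce pass whose output becomes the next pass's input, the chaining mechanics (job submission, intermediate storage) matter as much as the algorithm itself on a real cluster.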
35 Expected Outcomes
Provide education and training that allows researchers to think effectively about big data and to use the technologies effectively in their research and daily work
Improved data collection and management practices by researchers
Development of new techniques for collaborating on a joint course across the Atlantic with a shared lab infrastructure for lab assignments
36 Conclusions
There is a need for data intensive training and education for scientists and engineers: to effectively use existing technologies, to develop new disciplinary practices for annotating and preserving valuable data, and to understand the critical need for data curation for the viability and long-term accessibility of data
We are developing an education and research program focused on these issues: a short course, and a semester-length joint course at the University of Stavanger and Purdue University
We are holding a symposium at the CloudCom conference in December: DataCom, the Symposium on High Performance and Data Intensive Computing (Thomas Hacker, Purdue University, USA; Tomasz Wiktor Wlodarczyk, University of Stavanger, Norway)
DataCom is organized under CloudCom as two tracks: Big Data, and HPC on Cloud