Scientific Bulletin Economic Sciences, Volume 14/ Issue 1

BIG DATA IN BUSINESS ENVIRONMENT

Logica BANICA 1, Alina HAGIU 2

1 Faculty of Economics, University of Pitesti, Romania [email protected]
2 Faculty of Economics, University of Pitesti, Romania [email protected]

Abstract: In recent years, dealing with large volumes of data originating from social media sites and mobile communications, alongside data from business environments and institutions, has led to the definition of a new concept, known as Big Data. The economic impact of the sheer amount of data produced in the last two years has increased rapidly. It is necessary to aggregate all types of data (structured and unstructured) in order to improve current transactions, to develop new business models, to provide a real image of supply and demand and thereby to generate market advantages. Companies that turn to Big Data thus gain a competitive advantage over other firms. From the perspective of IT organizations, they must accommodate the storage and processing of Big Data, and provide analysis tools that are easily integrated into business processes. This paper discusses the Big Data concept and the principles used to build, organize and analyse huge datasets in the business environment, offering a three-layer architecture based on current software solutions. The article also covers graphical tools for exploring and representing unstructured data, Gephi and NodeXL.

Key words: Big Data, business environment, analysis software

JEL Classification Codes: C82, M21, C

1. INTRODUCTION

Increasingly, companies nowadays realize the importance of using Big Data in the business environment, because of their confrontation with huge amounts of data that originate in social networks and mobile communications, in addition to traditional databases. The economic impact of the sheer amount of data produced in the last two years has increased rapidly.
It is necessary to aggregate all types of data (structured and unstructured) in order to improve current transactions, to develop new business models, to provide a real image of supply and demand and thereby to generate market advantages. Big Data has already penetrated business processes and in a few years will impact all economic and social sectors, even though there are several technical challenges. The influence of Big Data analysis on management and decision making is too valuable to be ignored. A major issue for companies leveraging Big Data is the processing time for data analysis. After gathering and storing data, massive parallel processing and the use of analytics to better understand the dynamics of the business are essential. From the perspective of IT organizations, they must accommodate the storage and processing of Big Data, and provide analysis tools that are easily integrated into business processes. Business analysts are also faced with challenges regarding data-flow filtering and the implementation of Business Intelligence or forecasting strategies for their activity (Banica et al., 2014).
Starting from current studies and implementations concerning the Big Data concept and its integration in actual software systems, we have concentrated our efforts on the principles used to build, organize and analyse huge datasets in the business environment, offering a three-layer architecture. This paper is organized as follows: Section 2 presents the concept of Big Data, summarizing the state of the art, and proposes several software solutions to store, process and analyze massive volumes of data. Section 3 develops a system able to accommodate Big Data in the business environment, by presenting the software platforms for each of the three levels of the designed architectural model. The conclusions state our confidence that companies using Big Data can obtain successful results, and suggest ways of improving our research in this domain, firstly by building and testing such an experimental system.

2. STATE OF THE ART

2.1 The Big Data concept

Big data is a popular term used to describe the exponential growth and availability of data, both structured and unstructured. And big data may be as important to business and society as the Internet has become. Why? More data may lead to more accurate analyses (Davenport & Dyché, 2013). An important volume of semi-structured and unstructured data is generated by transaction systems, social networks, server logs, sensors and mobile networks. Even though the ratio of data coming from social media sites and mobile networks versus relational databases is clearly in favor of the first group, the ratio of useful information collected is reversed. The main characteristics of Big Data are described by experts through the five V's, as follows (Bowden, 2014) (Dijcks, 2013):

Volume refers to scalability as the most important aspect for every domain of application. The data volume build-up may come from unstructured sources like social media as well as from traditional databases.
The relevance of the volume of data collected may be obtained by filtering, using analytic tools, in order to identify important patterns and metrics found in the business field.

Velocity - the increasing flows of data need hardware and software solutions to process streaming data at a pace as fast as possible; the actors of the market need answers to their questions in real time.

Variety concerns the combination of all types of formats and the differing meanings attached to the same forms; Big Data must cover every opportunity of connecting the business with the customers in a virtual marketplace.

Veracity refers to the trustworthiness of the information; the data may not be significant, and there could also be discrepancies in the sample of data collected, filtered and processed.

Value is the most important V of Big Data, because it turns the increased amount of data into commercial or scientific value. The final target of processing Big Data is to develop the business, to obtain a stronger competitive position and a high level of knowledge, and to find new solutions in all areas (economic, social, health and education).

2.2 Introducing Big Data in the business environment

The business environment is interested in collecting information from unconventional data sources, in order to analyse and extract meaningful insight from this maze of data, be it security related or simply behavioural patterns of consumers.
We will specify several ways by means of which companies using Big Data could improve their business (Rosenbush & Totty, 2013):

1. Major software companies, like Google, Facebook and Amazon, showed their interest in processing Big Data in the Cloud environment many years ago. They collect huge amounts of information and analyze traditional measures like sales using comments on social-media sites and location information from mobile devices. This information is useful to figure out how to improve their products, cut costs and keep customers coming back.

2. Product development for online companies - Big Data could help to capture customer preferences and use the information in designing new products. For example, Zynga Inc., the game maker, uses the collected data for customer service, quality assurance and designing the features for the next generation of games. Also, Ford Motor Co. designed a common set of components to be used on Ford cars and trucks by using algorithms that summarize more than 10,000 relevant comments.

3. Human resources - some companies are using Big Data to better handle the health care of their employees. For example, Caesars Entertainment Corp. analyzes health-insurance data for its 65,000 employees and their family members, finding information about how employees use medical services, the number of emergency-room visits and whether they choose a generic or brand-name drug.

4. Marketing is the enterprise department meant to understand the customers and their choices. Using Big Data analytics, the information is better filtered and the forecasts are more accurate. One important company, InterContinental Hotels Group PLC, has gathered details about the 71 million members of its Priority Club program, such as income levels and whether they prefer family-style or business-traveler accommodations.

5. Manufacturing companies, as well as retailers, have begun to monitor Facebook and Twitter and analyse those data from different angles, e.g.
customer satisfaction. Retailers are also collecting large amounts of data by storing log files, and combine that growing amount of information with other data sources, such as sales data, in order to forecast customer behaviour.

3. METHODOLOGY

In order to build a Big Data infrastructure for an enterprise, major hardware resources and specific software applications are required. The Cloud Computing environment is a solution that provides the resources required to store and access important data volumes. In this chapter we will briefly present a new type of database - NoSQL - used for storing Big Data, and a series of successful implementations, such as the software that allows for massively parallel computing, Apache Hadoop, and several applications for collecting structured and unstructured data, such as Apache Flume and Apache Sqoop. Also, since we consider Gephi and NodeXL two representative applications for SNA software, we give a short presentation of their facilities.

NoSQL databases are a new type of database that manages a wide variety of unstructured data described by several models: key-value stores, graph stores, and document stores (Mohamed et al., 2014). NoSQL databases are implemented in a distributed architecture, based on multiple nodes, as in Hadoop software.

Hadoop is an open source software platform that enables the processing of large data sets in a distributed computing environment. Its core concept is distributing the data in the system to the individual nodes, which can work on the data locally, without transferring it over the network for processing.
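The NoSQL data models listed above (key-value, document and graph stores) can be illustrated with plain Python structures. This is a language-agnostic sketch of the three models, not tied to any particular NoSQL product; all names and data are invented for illustration:

```python
# Key-value store: opaque values addressed by a unique key.
kv_store = {"user:1001": '{"name": "Ana", "city": "Pitesti"}'}

# Document store: the value itself is a structured, schema-free document.
doc_store = {
    "user:1001": {"name": "Ana", "city": "Pitesti", "orders": [17, 42]},
}

# Graph store: nodes plus typed edges, well suited to social-network data.
graph = {
    "nodes": {"u1": {"name": "Ana"}, "u2": {"name": "Dan"}},
    "edges": [("u1", "follows", "u2")],
}

def followers(g, node):
    """Return the ids of nodes that follow `node` in the graph store."""
    return [src for (src, rel, dst) in g["edges"]
            if rel == "follows" and dst == node]

print(followers(graph, "u2"))  # ['u1']
```

The key-value model scales most easily, while the document and graph models keep structure that analytics layers can query directly.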
Hadoop is compatible with all the major operating system families (Linux, Unix, Windows, Mac OS) and can be run on a single-node or on a multi-node cluster. The most widespread solution is the open source Apache Hadoop distribution, including several components (Frank, 2014):

Hadoop Distributed File System (HDFS) - the storage component
MapReduce engine - the processing component
Hadoop Common - a module for libraries and utilities
Hadoop YARN - the component responsible for managing and scheduling cluster resources

A Hadoop cluster is a set of computers running HDFS and MapReduce, one of them being the master node and the others slave nodes (Figure 1).

Figure 1. The structure of a Hadoop cluster (Source: Gupta, 2014)

HDFS splits data files into blocks and distributes them across the cluster, each block being replicated three times (by default) to ensure system reliability and security (Yahoo Developer Network, 2007). The master node, called the NameNode, monitors the distributed blocks, which are stored on slave nodes known as DataNodes. MapReduce is the component that Hadoop uses to distribute work around a cluster, so that operations can be run in parallel on different nodes (Frank, 2014). MapReduce is controlled by the JobTracker software, which resides on the master node and monitors the applications, running tasks at each node. The MapReduce system is based on two phases (Brust, 2012):

Map reads data in the form of key/value pairs and processes tasks on the nodes, avoiding network traffic;

Reduce waits until the Map phase is completed and combines all the intermediate values into a list, providing final key/value pairs as outputs that are written into HDFS.

Hadoop 2.0 is a new version, providing more capacity to be integrated with the stack technology, adding support for running non-batch applications through the introduction of YARN (Yet
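The Map and Reduce phases described above can be sketched in pure Python. This is a single-process simulation of what Hadoop distributes across cluster nodes (the shuffle/grouping step is performed by the framework in a real cluster); the word-count job and all names are illustrative:

```python
from collections import defaultdict

def map_phase(record):
    # Map: read an input record and emit intermediate key/value pairs.
    for word in record.split():
        yield (word.lower(), 1)

def reduce_phase(key, values):
    # Reduce: combine all intermediate values collected for one key.
    return (key, sum(values))

def run_job(records):
    # Shuffle: group intermediate pairs by key (done by Hadoop itself
    # between the Map and Reduce phases on a real cluster).
    groups = defaultdict(list)
    for record in records:
        for key, value in map_phase(record):
            groups[key].append(value)
    return dict(reduce_phase(k, v) for k, v in groups.items())

print(run_job(["big data in business", "big data tools"]))
# {'big': 2, 'data': 2, 'in': 1, 'business': 1, 'tools': 1}
```

Because each Map call touches only its own record and each Reduce call only one key's values, both phases parallelize naturally across nodes, which is the core of Hadoop's scalability.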
Another Resource Negotiator), a separate layer that includes resource management and job scheduling functions, according to Gartner researchers (Laskowski, 2014). Hadoop 2.0 added important new features: support for Microsoft Windows and improvements in system availability and scalability. Figure 2 presents the architecture diagram of Hadoop 1.0 in comparison with Hadoop 2.0/YARN.

Figure 2. The architecture diagram of Hadoop 1.0 (MapReduce handling both cluster resource management and batch processing over HDFS) and Hadoop 2.0/YARN (batch, interactive and real-time processing over YARN and HDFS)

Nowadays, another open source technology has emerged, called Spark, that can be applied to handling massive amounts of data and can also be deployed as a component running on the Hadoop 2.0 platform (Rouse, 2014). Spark's in-memory processing allows user programs to load data into a cluster's memory and query it repeatedly, providing performance up to 100 times faster for certain applications (Vaughan, 2014).

Implementing Big Data in a company requires extending the existing systems, usually based on an RDBMS (Relational Database Management System). For this purpose, the information system of the company should accept the transfer of its structured data into a consolidated Data Warehouse, by using a software solution such as (Mohamed et al., 2014):

the ETL process (Extract, Transform, Load) - structured data are moved into the Data Warehouse;
Apache Sqoop - a tool for moving data from relational databases into Hadoop and back.

Finally, the methodology includes an Analytics layer, having different goals: finding correlations across multiple data sources, forecasting indicators or analysing social networks (Krebs, 2013). In this study, we suggest that the system can be further explored using social network analysis (SNA) software, such as Gephi and NodeXL.
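The ETL process mentioned above can be sketched with in-memory SQLite databases standing in for both the operational RDBMS and the consolidated Data Warehouse. The schemas and data are hypothetical, chosen only to make the Extract, Transform and Load steps concrete:

```python
import sqlite3

# Hypothetical operational database with raw transactional rows.
src = sqlite3.connect(":memory:")
src.execute("CREATE TABLE sales (product TEXT, amount REAL)")
src.executemany("INSERT INTO sales VALUES (?, ?)",
                [("A", 10.0), ("A", 5.5), ("B", 7.0)])

# Hypothetical Data Warehouse holding consolidated summaries.
dwh = sqlite3.connect(":memory:")
dwh.execute("CREATE TABLE sales_summary (product TEXT, total REAL)")

# Extract: read structured rows from the operational database.
rows = src.execute("SELECT product, amount FROM sales").fetchall()

# Transform: aggregate the raw rows per product before loading.
totals = {}
for product, amount in rows:
    totals[product] = totals.get(product, 0.0) + amount

# Load: write the consolidated result into the Data Warehouse.
dwh.executemany("INSERT INTO sales_summary VALUES (?, ?)", totals.items())

print(dwh.execute(
    "SELECT * FROM sales_summary ORDER BY product").fetchall())
# [('A', 15.5), ('B', 7.0)]
```

Apache Sqoop automates the same movement at scale, transferring tables between an RDBMS and HDFS instead of between two local databases.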
Gephi is a free, open source interactive visualization and exploration platform for all kinds of networks and complex systems, capable of accommodating networks of up to 50,000 nodes and 1,000,000 edges (Bastian et al., 2009). It also generates metrics, identifies subgroups in a network, clusters of actors or individuals, and emphasizes isolated nodes of the network. The application is successfully used to analyze Facebook pages and groups, Twitter networks and others. Gephi works with files imported in .csv, .gml, .gdf and .gexf formats, which can be obtained with software converters (e.g. Facebook or Twitter to .gdf files) from unstructured data, by applying an algorithm that transforms key-value words into nodes and their connections into edges.
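As a sketch of the kind of converter mentioned above, the following Python fragment turns key-value relationship data into a minimal CSV edge list of the sort Gephi can import (the follower pairs and column names are invented for illustration; Gephi's CSV importer accepts Source/Target edge tables):

```python
import csv
import io

# Hypothetical "follows" relationships extracted from a social network.
follows = [("alice", "bob"), ("carol", "bob"), ("bob", "alice")]

# Build a CSV edge list in memory: one header row, one row per edge.
buf = io.StringIO()
writer = csv.writer(buf)
writer.writerow(["Source", "Target"])
writer.writerows(follows)

# Writing buf.getvalue() to a .csv file yields a Gephi-importable edge list.
print(buf.getvalue().splitlines()[0])  # Source,Target
```

Each tuple becomes a directed edge, so Gephi can then compute degree, detect clusters, or highlight isolated nodes on the resulting graph.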
NodeXL is a free, open-source template for Microsoft Excel 2007, 2010 and 2013 that makes it easy to explore network graphs. Concerning metric calculations, it can calculate degree, centrality, PageRank, clustering coefficient and graph density (Messarra, 2014). NodeXL can connect directly to social networks such as Twitter, YouTube and Flickr; it requests the user's permission before collecting personal data and focuses on the collection of publicly available data, such as Twitter statuses and follows relationships for users who have made their accounts public (Kennedy et al., 2013). After using both SNA applications and comparing their features, we believe that the Gephi tool is more powerful than NodeXL.

4. A MODEL FOR INTEGRATING HADOOP IN AN ENTERPRISE INFORMATION SYSTEM

This chapter describes a three-layer Big Data architecture for the business environment, based on a Hadoop cluster. The main task is not Hadoop itself, but integrating it with the existing ERP (Enterprise Resource Planning) system that companies usually already own. Thus, our model refers to a unified architecture, using Hadoop as a data integration platform (Figure 3).

Figure 3. An architecture for Big Data in the business environment

The three-layer architecture includes:

1) Gathering structured and unstructured data of interest for the enterprise - The first layer is designated for collecting any type of data and is based on the software tools Apache Flume (unstructured data), Apache Sqoop (structured data) and the ETL process (structured data).
2) Parallel processing of Big Data using Hadoop - The second layer implements Apache Hadoop, which aggregates and processes the data;

3) Analyzing Big Data - The third layer provides analytics tools, including business and modeling tools and exploration and visualization applications, such as Gephi and NodeXL.

For many enterprises, identifying their Web presence through the hyperlink network and the components of social media such as Twitter, Facebook and YouTube is increasingly important for their business. Therefore, they are interested in investing in the development of IT tools capable of mapping and deciphering their organisational presence on the Web.

5. CONCLUSIONS AND FUTURE WORK

Big Data will radically change the current business models, bringing important benefits on the marketplace. The evolution of open-source software for NoSQL databases will allow small and medium enterprises to benefit from this new trend that empowers today's business. But there is another problem that needs to be solved: the integration between Hadoop and the existing ERP systems of organizations. Consequently, a possible scenario refers to a unified architecture that integrates Big Data technologies into actual systems. In our future research we intend to implement a single-node Hadoop structure and evaluate its performance working with unstructured data from social media. Also, considering Analytics a fundamental part of Big Data value, we will make a comparative analysis of several tools, testing them against workloads with queries, graphic interpretation and a dynamic approach.

REFERENCES

1. Banica, L., Paun, V., Stefan, C., 2014, Big Data leverages Cloud Computing opportunities, International Journal of Computers & Technology, Volume 13, No. 12, available at
2. Davenport, T. H., Dyché, J., 2013, Big Data in Big Companies, available at:
3.
Bowden, J., 2014, The 4 V's in Big Data for Digital Marketing, available at digital-marketing/4-vs-big-data-digital-marketing
4. Dijcks, J., 2013, Big Data for Enterprise,
5. Rosenbush, S., Totty, M., 2013, How Big Data Is Changing the Whole Equation for Business, available at
6. Mohamed, A. M., Altrafi, O. G., Ismail, M. O., 2014, Relational vs. NoSQL Databases: A Survey, International Journal of Computer and Information Technology, Vol. 3, Issue 3, pp.
7. Frank Lo, 2014, Big Data Technology, available at
8. Gupta, R., 2014, Big Data With SQL, available at big-data-with-sql
9. Yahoo Developer Network, 2007, Hadoop Tutorial, available at
10. Brust, A., 2012, CEP and MapReduce: Connected in complex ways,
11. Laskowski, L., 2014, Hadoop 2.0's Deep Impact on Big Data and Big Data Technologies, available at
12. Vaughan, J., 2014, Hadoop and Spark are Coming of Age Together,
13. Rouse, M., 2014, HADOOP 2, available at definition/hadoop
14. Mohamed, A. M., Altrafi, O. G., Ismail, M. O., 2014, Relational vs. NoSQL Databases: A Survey, International Journal of Computer and Information Technology, Vol. 3, Issue 3, pp.
15. Krebs, V., 2013, Social Network Analysis, A Brief Introduction, available at
16. Bastian, M., Heymann, S., Jacomy, M., 2009, Gephi: an open source software for exploring and manipulating networks, International AAAI Conference on Weblogs and Social Media, San Jose, USA, available at
17. Messarra, N., 2014, Introduction to Social Graph and NodeXL, available at
18. Kennedy, H., Moss, G., Birchall, C., Moshonas, S., 2013, Digital Data Analysis: Guide to tools for social media & web analytics and insights, available at http://www.communitiesandculture.org/files/2013/02/Digital-data-analysis-guide-to-tools.pdf
QLIKVIEW DEPLOYMENT FOR BIG DATA ANALYTICS AT KING.COM
QLIKVIEW DEPLOYMENT FOR BIG DATA ANALYTICS AT KING.COM QlikView Technical Case Study Series Big Data June 2012 qlikview.com Introduction This QlikView technical case study focuses on the QlikView deployment
TAMING THE BIG CHALLENGE OF BIG DATA MICROSOFT HADOOP
Pythian White Paper TAMING THE BIG CHALLENGE OF BIG DATA MICROSOFT HADOOP ABSTRACT As companies increasingly rely on big data to steer decisions, they also find themselves looking for ways to simplify
Hadoop IST 734 SS CHUNG
Hadoop IST 734 SS CHUNG Introduction What is Big Data?? Bulk Amount Unstructured Lots of Applications which need to handle huge amount of data (in terms of 500+ TB per day) If a regular machine need to
Role of Cloud Computing in Big Data Analytics Using MapReduce Component of Hadoop
Role of Cloud Computing in Big Data Analytics Using MapReduce Component of Hadoop Kanchan A. Khedikar Department of Computer Science & Engineering Walchand Institute of Technoloy, Solapur, Maharashtra,
Workshop on Hadoop with Big Data
Workshop on Hadoop with Big Data Hadoop? Apache Hadoop is an open source framework for distributed storage and processing of large sets of data on commodity hardware. Hadoop enables businesses to quickly
INTRODUCTION TO CASSANDRA
INTRODUCTION TO CASSANDRA This ebook provides a high level overview of Cassandra and describes some of its key strengths and applications. WHAT IS CASSANDRA? Apache Cassandra is a high performance, open
Native Connectivity to Big Data Sources in MSTR 10
Native Connectivity to Big Data Sources in MSTR 10 Bring All Relevant Data to Decision Makers Support for More Big Data Sources Optimized Access to Your Entire Big Data Ecosystem as If It Were a Single
Forecast of Big Data Trends. Assoc. Prof. Dr. Thanachart Numnonda Executive Director IMC Institute 3 September 2014
Forecast of Big Data Trends Assoc. Prof. Dr. Thanachart Numnonda Executive Director IMC Institute 3 September 2014 Big Data transforms Business 2 Data created every minute Source http://mashable.com/2012/06/22/data-created-every-minute/
Intro to Big Data and Business Intelligence
Intro to Big Data and Business Intelligence Anjana Susarla Eli Broad College of Business What is Business Intelligence A Simple Definition: The applications and technologies transforming Business Data
OPEN MODERN DATA ARCHITECTURE FOR FINANCIAL SERVICES RISK MANAGEMENT
WHITEPAPER OPEN MODERN DATA ARCHITECTURE FOR FINANCIAL SERVICES RISK MANAGEMENT A top-tier global bank s end-of-day risk analysis jobs didn t complete in time for the next start of trading day. To solve
Datenverwaltung im Wandel - Building an Enterprise Data Hub with
Datenverwaltung im Wandel - Building an Enterprise Data Hub with Cloudera Bernard Doering Regional Director, Central EMEA, Cloudera Cloudera Your Hadoop Experts Founded 2008, by former employees of Employees
ISSN: 2320-1363 CONTEXTUAL ADVERTISEMENT MINING BASED ON BIG DATA ANALYTICS
CONTEXTUAL ADVERTISEMENT MINING BASED ON BIG DATA ANALYTICS A.Divya *1, A.M.Saravanan *2, I. Anette Regina *3 MPhil, Research Scholar, Muthurangam Govt. Arts College, Vellore, Tamilnadu, India Assistant
Hadoop Submitted in partial fulfillment of the requirement for the award of degree of Bachelor of Technology in Computer Science
A Seminar report On Hadoop Submitted in partial fulfillment of the requirement for the award of degree of Bachelor of Technology in Computer Science SUBMITTED TO: www.studymafia.org SUBMITTED BY: www.studymafia.org
How To Scale Out Of A Nosql Database
Firebird meets NoSQL (Apache HBase) Case Study Firebird Conference 2011 Luxembourg 25.11.2011 26.11.2011 Thomas Steinmaurer DI +43 7236 3343 896 [email protected] www.scch.at Michael Zwick DI
Introduction to Hadoop. New York Oracle User Group Vikas Sawhney
Introduction to Hadoop New York Oracle User Group Vikas Sawhney GENERAL AGENDA Driving Factors behind BIG-DATA NOSQL Database 2014 Database Landscape Hadoop Architecture Map/Reduce Hadoop Eco-system Hadoop
GigaSpaces Real-Time Analytics for Big Data
GigaSpaces Real-Time Analytics for Big Data GigaSpaces makes it easy to build and deploy large-scale real-time analytics systems Rapidly increasing use of large-scale and location-aware social media and
Internals of Hadoop Application Framework and Distributed File System
International Journal of Scientific and Research Publications, Volume 5, Issue 7, July 2015 1 Internals of Hadoop Application Framework and Distributed File System Saminath.V, Sangeetha.M.S Abstract- Hadoop
An Oracle White Paper October 2011. Oracle: Big Data for the Enterprise
An Oracle White Paper October 2011 Oracle: Big Data for the Enterprise Executive Summary... 2 Introduction... 3 Defining Big Data... 3 The Importance of Big Data... 4 Building a Big Data Platform... 5
NoSQL for SQL Professionals William McKnight
NoSQL for SQL Professionals William McKnight Session Code BD03 About your Speaker, William McKnight President, McKnight Consulting Group Frequent keynote speaker and trainer internationally Consulted to
An Industrial Perspective on the Hadoop Ecosystem. Eldar Khalilov Pavel Valov
An Industrial Perspective on the Hadoop Ecosystem Eldar Khalilov Pavel Valov agenda 03.12.2015 2 agenda Introduction 03.12.2015 2 agenda Introduction Research goals 03.12.2015 2 agenda Introduction Research
The 3 questions to ask yourself about BIG DATA
The 3 questions to ask yourself about BIG DATA Do you have a big data problem? Companies looking to tackle big data problems are embarking on a journey that is full of hype, buzz, confusion, and misinformation.
Where is... How do I get to...
Big Data, Fast Data, Spatial Data Making Sense of Location Data in a Smart City Hans Viehmann Product Manager EMEA ORACLE Corporation August 19, 2015 Copyright 2014, Oracle and/or its affiliates. All rights
Aligning Your Strategic Initiatives with a Realistic Big Data Analytics Roadmap
Aligning Your Strategic Initiatives with a Realistic Big Data Analytics Roadmap 3 key strategic advantages, and a realistic roadmap for what you really need, and when 2012, Cognizant Topics to be discussed
Big Data Technology ดร.ช ชาต หฤไชยะศ กด. Choochart Haruechaiyasak, Ph.D.
Big Data Technology ดร.ช ชาต หฤไชยะศ กด Choochart Haruechaiyasak, Ph.D. Speech and Audio Technology Laboratory (SPT) National Electronics and Computer Technology Center (NECTEC) National Science and Technology
Converged, Real-time Analytics Enabling Faster Decision Making and New Business Opportunities
Technology Insight Paper Converged, Real-time Analytics Enabling Faster Decision Making and New Business Opportunities By John Webster February 2015 Enabling you to make the best technology decisions Enabling
How Transactional Analytics is Changing the Future of Business A look at the options, use cases, and anti-patterns
How Transactional Analytics is Changing the Future of Business A look at the options, use cases, and anti-patterns Table of Contents Abstract... 3 Introduction... 3 Definition... 3 The Expanding Digitization
ANALYTICS BUILT FOR INTERNET OF THINGS
ANALYTICS BUILT FOR INTERNET OF THINGS Big Data Reporting is Out, Actionable Insights are In In recent years, it has become clear that data in itself has little relevance, it is the analysis of it that
Testing 3Vs (Volume, Variety and Velocity) of Big Data
Testing 3Vs (Volume, Variety and Velocity) of Big Data 1 A lot happens in the Digital World in 60 seconds 2 What is Big Data Big Data refers to data sets whose size is beyond the ability of commonly used
Data Warehousing in the Age of Big Data
Data Warehousing in the Age of Big Data Krish Krishnan AMSTERDAM BOSTON HEIDELBERG LONDON NEW YORK OXFORD * PARIS SAN DIEGO SAN FRANCISCO SINGAPORE SYDNEY TOKYO Morgan Kaufmann is an imprint of Elsevier
Oracle s Big Data solutions. Roger Wullschleger. <Insert Picture Here>
s Big Data solutions Roger Wullschleger DBTA Workshop on Big Data, Cloud Data Management and NoSQL 10. October 2012, Stade de Suisse, Berne 1 The following is intended to outline
Big Data, Why All the Buzz? (Abridged) Anita Luthra, February 20, 2014
Big Data, Why All the Buzz? (Abridged) Anita Luthra, February 20, 2014 Defining Big Not Just Massive Data Big data refers to data sets whose size is beyond the ability of typical database software tools
Manifest for Big Data Pig, Hive & Jaql
Manifest for Big Data Pig, Hive & Jaql Ajay Chotrani, Priyanka Punjabi, Prachi Ratnani, Rupali Hande Final Year Student, Dept. of Computer Engineering, V.E.S.I.T, Mumbai, India Faculty, Computer Engineering,
TRAINING PROGRAM ON BIGDATA/HADOOP
Course: Training on Bigdata/Hadoop with Hands-on Course Duration / Dates / Time: 4 Days / 24th - 27th June 2015 / 9:30-17:30 Hrs Venue: Eagle Photonics Pvt Ltd First Floor, Plot No 31, Sector 19C, Vashi,
Microsoft Big Data Solutions. Anar Taghiyev P-TSP E-mail: [email protected];
Microsoft Big Data Solutions Anar Taghiyev P-TSP E-mail: [email protected]; Why/What is Big Data and Why Microsoft? Options of storage and big data processing in Microsoft Azure. Real Impact of Big
Hadoop. http://hadoop.apache.org/ Sunday, November 25, 12
Hadoop http://hadoop.apache.org/ What Is Apache Hadoop? The Apache Hadoop software library is a framework that allows for the distributed processing of large data sets across clusters of computers using
An Oracle White Paper November 2010. Leveraging Massively Parallel Processing in an Oracle Environment for Big Data Analytics
An Oracle White Paper November 2010 Leveraging Massively Parallel Processing in an Oracle Environment for Big Data Analytics 1 Introduction New applications such as web searches, recommendation engines,
Client Overview. Engagement Situation. Key Requirements
Client Overview Our client is one of the leading providers of business intelligence systems for customers especially in BFSI space that needs intensive data analysis of huge amounts of data for their decision
Large scale processing using Hadoop. Ján Vaňo
Large scale processing using Hadoop Ján Vaňo What is Hadoop? Software platform that lets one easily write and run applications that process vast amounts of data Includes: MapReduce offline computing engine
Using Tableau Software with Hortonworks Data Platform
Using Tableau Software with Hortonworks Data Platform September 2013 2013 Hortonworks Inc. http:// Modern businesses need to manage vast amounts of data, and in many cases they have accumulated this data
www.pwc.com/oracle Next presentation starting soon Business Analytics using Big Data to gain competitive advantage
www.pwc.com/oracle Next presentation starting soon Business Analytics using Big Data to gain competitive advantage If every image made and every word written from the earliest stirring of civilization
#mstrworld. Tapping into Hadoop and NoSQL Data Sources in MicroStrategy. Presented by: Trishla Maru. #mstrworld
Tapping into Hadoop and NoSQL Data Sources in MicroStrategy Presented by: Trishla Maru Agenda Big Data Overview All About Hadoop What is Hadoop? How does MicroStrategy connects to Hadoop? Customer Case
Data Mining in the Swamp
WHITE PAPER Page 1 of 8 Data Mining in the Swamp Taming Unruly Data with Cloud Computing By John Brothers Business Intelligence is all about making better decisions from the data you have. However, all
Surfing the Data Tsunami: A New Paradigm for Big Data Processing and Analytics
Surfing the Data Tsunami: A New Paradigm for Big Data Processing and Analytics Dr. Liangxiu Han Future Networks and Distributed Systems Group (FUNDS) School of Computing, Mathematics and Digital Technology,
Big Data and New Paradigms in Information Management. Vladimir Videnovic Institute for Information Management
Big Data and New Paradigms in Information Management Vladimir Videnovic Institute for Information Management 2 "I am certainly not an advocate for frequent and untried changes laws and institutions must
A Brief Outline on Bigdata Hadoop
A Brief Outline on Bigdata Hadoop Twinkle Gupta 1, Shruti Dixit 2 RGPV, Department of Computer Science and Engineering, Acropolis Institute of Technology and Research, Indore, India Abstract- Bigdata is
BIG DATA TOOLS. Top 10 open source technologies for Big Data
BIG DATA TOOLS Top 10 open source technologies for Big Data We are in an ever expanding marketplace!!! With shorter product lifecycles, evolving customer behavior and an economy that travels at the speed
