Survey on Big Data Analysis and Mining
Anita Chaudhari, Neeta Patil, Pratap Sakhare
University of Mumbai, SJCET, Palghar, India

Abstract - A data stream is an ordered sequence of instances that can be read only once, or a small number of times, using limited computing and storage capabilities. Many applications work on stream data, such as web search, network traffic monitoring, and telephone call records; in these applications the data changes continuously over time. Stream data is also referred to as real-time data: it is generated over the Internet at a rate of millions of records per second, so how to manage and analyze it is a central question. In this paper we discuss future trends in data mining used for the analysis and prediction of Big Data, along with the challenges faced while mining it.

Keywords - Network Traffic, Computation, Stream Data

I. INTRODUCTION

Nowadays the quantity of data created per second is very large. Real-time analysis of data streams is required to manage this data; through proper analysis we can extract crucial information and predict, for example, network traffic, intrusion-related activity, or weather. Typical stream sources include log records and click-streams from web exploration, manufacturing processes, call detail records, e-mail, blogs, Twitter posts, and others [1]. Data captured from a stream is only a snapshot of the stream, taken over some time interval, and stream mining algorithms must process it under time and space constraints: the main concern of these algorithms is their usage of resources (memory and time). To perform mining over a stream database we must consider accuracy, the amount of space used, and the time required to learn from training examples before making a prediction. The data is large and growing, and there are important patterns and trends in it, but we do not fully know where to look or how to find them. Big Data analysis is most important because the data changes continuously over intervals of time; to store Big Data, most companies use a cloud setup.
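To make the single-pass, limited-memory constraint concrete, the following minimal sketch shows classic reservoir sampling (Algorithm R), which maintains a uniform random sample of a stream in fixed memory while reading each instance exactly once. The stream source and sample size are illustrative assumptions, not part of the surveyed work.

```python
import random

def reservoir_sample(stream, k):
    """Keep a uniform sample of k items from a stream of unknown
    length, reading each item exactly once (Algorithm R)."""
    reservoir = []
    for i, item in enumerate(stream):
        if i < k:
            reservoir.append(item)      # fill the reservoir first
        else:
            j = random.randint(0, i)    # random slot in [0, i]
            if j < k:
                reservoir[j] = item     # keep item with probability k/(i+1)
    return reservoir

# Example: sample 5 records from a simulated stream of a million events.
print(reservoir_sample(range(1_000_000), 5))
```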
II. CHARACTERISTICS OF BIG DATA

The size at which data can be considered big is a constantly varying factor, and newer tools are continuously being developed to handle it. Big Data can be described by the following characteristics.

Volume: The quantity of data generated is very important in this context. It is the size of the data that determines its value and potential, and whether it can actually be considered Big Data at all.

Variety: The category to which the data belongs is also an essential fact that the data analyst needs to know. This information helps the people who analyze the data understand its importance.

Velocity: Velocity refers to the speed at which data is generated and how fast it must be processed to meet the demands of the business.

Variability: This refers to the inconsistency that can be present in the data at a given time, which makes analysis very difficult; the analysis process must manage and handle Big Data effectively in spite of it.

Complexity: Data collected from different sources needs to be linked and correlated so that we are able to derive the information the data is meant to represent [1].

The major sources of big data are the following.

A. Archives
Archives are mainly maintained by organizations to record the functions of a particular person or of the organization itself. Accumulated archives sometimes do not fit into traditional storage systems and need systems with high processing capabilities; these voluminous archives contribute to big data.

B. Media
Images, videos, audio, live streams, podcasts, and other content that users generate contribute to big data.

C. Business applications
Huge volumes of data are generated by business applications as part of project management, marketing automation, productivity tools, customer relationship management (CRM), enterprise resource planning (ERP), content management systems, procurement, human resources (HR), storage, talent management, Google Docs, intranets, portals, and so on. These data contribute to big data.

D. Public web
Many organizations in the government sector, as well as weather, competitive intelligence, traffic, regulatory compliance, health care, economic, census, public finance, and stock services, open source intelligence (OSINT), the World Bank, the Electronic Data Gathering, Analysis, and Retrieval system (EDGAR), Wikipedia, and so on use web services for communication. These data contribute to big data.

E. Social media
Nowadays users rely on social media sites such as Twitter, LinkedIn, Facebook, Tumblr, blogs, SlideShare, YouTube, Google+, Instagram, Flickr, Pinterest, Vimeo, WordPress, and so on for the creation and exchange of user-generated content. These social networking sites contribute to big data.

F. Data storage
Data stored in SQL and NoSQL databases, Hadoop, document repositories, file systems, and so on also contributes to big data.

G. Sensor data
Large quantitative data sets accumulated from distributed sensors are now becoming widely available online: medical devices, smart electricity meters, car sensors, road cameras, satellites, traffic recording devices, processors within vehicles, video games, cable boxes, household appliances, assembly lines, cell towers, jet engines, air conditioning units, refrigerators, trucks, farm machinery, and so on. These contribute to big data [2].

III. EXAMPLES OF BIG DATA

One of the fundamental characteristics of Big Data is a huge volume of data represented by heterogeneous and diverse dimensionality. This is because different information collectors prefer their own schemata or protocols for data recording, and the nature of different applications also results in diverse data representations. For example, a single human being in a biomedical setting can be represented by simple demographic information such as gender, age, family disease history, blood group, and other medical check-up reports. For an X-ray examination or CT scan, images or videos are used to represent the results, because they provide visual information for doctors to carry out detailed examinations. For a DNA or genomic-related test, microarray expression images and sequences are used to represent the genetic code, because this is the way current techniques acquire the data. Under such circumstances, the heterogeneous features refer to the different types of representation for the same individual, and the diverse features refer to the variety of features involved in representing each single observation [7], [8]. If different organizations of doctors each have their own schema for representing a patient, data heterogeneity and diverse dimensionality become major challenges when we try to aggregate data from all sources.

Other examples of big data are calls, texts, tweets, and web browsing, with messages exchanged through several channels each day. Social media usage, through which people exchange information in several forms, also forms a part of big data, as do the card transactions made for various payments in large numbers every second across the world. All of these applications face problems at analysis time [2].

When data is collected from the web it comes from many sources. Most web sites maintain their data on a single server, but sometimes it is not possible to keep all the data on one server.
If data is kept on a single server it does not depend on other servers, but the main issues are security and availability: an enormous amount of data is vulnerable to attack and malfunction. Copies of the server should therefore be maintained elsewhere, with replicas ready to serve requests if the main server goes down; Facebook, Google, and Twitter, for example, provide non-stop service to their customers in this way [7].

Facebook and Twitter are social media through which people communicate, but when users register for these sites they have to enter details such as gender, address, education, mobile number, hobbies, and e-mail address. These data fields are used to categorize each individual and represent each one as a separate entity. In daily life, friend circles are generally formed around similar hobbies or biological relations, and the same concept of forming friendships is applied in the cyber-world: Facebook and Twitter are mainly characterized by social functions such as friend connections and followers [2].

IV. BIG DATA (STREAM DATA) OPEN SOURCE SOFTWARE

Big Data is a general term used to refer to data of very large size. The main issue is that our normal data mining algorithms are not able to manage such data.

Instead of defining Big Data as datasets of a concrete large size, for example on the order of magnitude of petabytes, the definition is tied to the fact that the dataset is too big to be managed without new algorithms or technologies. There are two main strategies for dealing with big data: sampling and distributed systems.

In sampling, we obtain a smaller data set with the same structure as the original. Sampling rests on the fact that if the dataset is too large to use all the examples, we can still obtain an approximate solution from a subset of them. A good sampling method tries to select the best instances, so as to achieve good performance using a small amount of memory and time [3]. Different sampling techniques are available for sampling stream data. An alternative to sampling is the use of probabilistic techniques. Facebook, for example, uses friendship links to compute the distance between two nodes: millions of users access Facebook, yet the average distance between two users was computed using only one machine. When operating on big data, the memory required to store the data is the main constraint, but storing big data does not require a big machine; it requires big intelligence.

A MapReduce program is composed of a Map() procedure that performs filtering and sorting (such as sorting students by first name into queues, one queue for each name) and a Reduce() procedure that performs a summary operation (such as counting the number of students in each queue, yielding name frequencies); a minimal sketch is given below. MapReduce is a programming model, with an associated implementation, for processing and generating large data sets with a parallel, distributed algorithm on a cluster. The MapReduce methodology started at Google; Hadoop is an open-source implementation of MapReduce started at Yahoo! and then adopted by Twitter, eBay, and others.

The Big Data phenomenon is intrinsically related to open source software: large companies such as Facebook, Yahoo!, Twitter, and LinkedIn benefit from and contribute to open source projects. Big Data infrastructure centers on Hadoop, an open-source software framework for storing and processing big data in a distributed fashion on large clusters of commodity hardware. Essentially, it accomplishes two tasks: massive data storage and faster processing.

Open-source software: Open source software differs from commercial software through the broad and open network of developers who create and manage the programs. Traditionally it is free to download, use, and contribute to, though more and more commercial versions of Hadoop are becoming available.

Framework: In this case, everything needed to develop and run software applications is provided: programs, tool sets, connections, and so on.

Distributed: Data is divided and stored across multiple computers, and computations can be run in parallel across multiple connected machines.

Massive storage: The Hadoop framework stores huge amounts of data by breaking the data into blocks and storing them on clusters of lower-cost commodity hardware.

Faster processing: Hadoop processes large amounts of data in parallel across clusters of tightly connected low-cost computers, giving quick results.
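The Map()/Reduce() pair described above can be imitated in a few lines of plain Python. This is only an illustrative single-machine sketch of the programming model, using the student-name example from the text; the helper names and input records are assumptions, and a real Hadoop job would distribute these phases across a cluster.

```python
from collections import defaultdict

def map_phase(records):
    """Map(): emit an intermediate (first_name, 1) pair per student."""
    for student in records:
        yield student.split()[0], 1

def shuffle(pairs):
    """Group intermediate pairs by key; Hadoop does this between phases."""
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups.items()

def reduce_phase(key, values):
    """Reduce(): summarize one queue, yielding a name frequency."""
    return key, sum(values)

students = ["Alice Smith", "Bob Jones", "Alice Brown", "Carol White"]
for name, count in (reduce_phase(k, v) for k, v in shuffle(map_phase(students))):
    print(name, count)   # Alice 2, Bob 1, Carol 1
```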
1) Big Data Analysis Platforms and Tools

There are many open source tools for big data analysis:

Apache Hadoop: an open source framework for distributed storage and processing of large data sets on commodity hardware. Hadoop enables businesses to quickly gain insight from massive amounts of structured and unstructured data.

Apache Pig: a platform for processing and analyzing large data sets. Pig consists of a high-level language (Pig Latin) for expressing data analysis programs, paired with the MapReduce framework for executing these programs.

Apache HBase: a column-oriented NoSQL data storage system that provides random, real-time read/write access to big data for user applications. It is a non-relational, columnar, distributed database designed to run on top of the Hadoop Distributed File System (HDFS), written in Java and modeled after Google's BigTable.

Apache S4: a platform for processing continuous data streams. S4 is designed specifically for managing data streams; S4 applications are built by combining streams and processing elements in real time.

Apache Storm: a distributed real-time computation system for processing fast, large streams of data, adding reliable real-time processing capabilities to Apache Hadoop 2.x. It was developed by Nathan Marz at Twitter [2], [3].

Big data mining open source software:

R: a language and environment for statistical computing and graphics.

Vowpal Wabbit: an open source project started at Yahoo! Research and continuing at Microsoft Research to design a fast, scalable, useful learning algorithm.

VW is able to learn from terafeature datasets. It can exceed the throughput of any single machine's network interface when doing linear learning, via parallel learning, and it also uses different optimization techniques.

MOA: MOA (Massive On-line Analysis) is a framework for data stream mining. It includes tools for evaluation and a collection of machine learning algorithms, with implementations of classification, regression, clustering, frequent pattern mining, and frequent graph mining. Related to the WEKA project, it is likewise implemented in Java. It includes a collection of offline and online algorithms as well as tools for evaluating classification and clustering, and it makes it easy to design, extend, and run experiments. The goal of the MOA framework is to support experiments in the data stream mining context by providing storable settings for data streams for repeatable experiments, a set of existing algorithms and measures from the literature for comparison, and an easily extendable framework for new streams, algorithms, and evaluation methods.

Apache Mahout: scalable machine learning and data mining open source software based mainly on Hadoop. It has implementations of a wide range of machine learning and data mining algorithms: clustering, classification, collaborative filtering, and frequent pattern mining.

More specifically for big graph mining, we find the following open source tools:

Pegasus: a big graph mining system built on top of MapReduce. It allows finding patterns and anomalies in massive real-world graphs.

GraphLab: a high-level graph-parallel system built without using MapReduce. GraphLab computes over dependent records stored as vertices in a large distributed data-graph. Algorithms in GraphLab are expressed as vertex-programs, which are executed in parallel on each vertex and can interact with neighboring vertices [3].

V. BIG DATA MINING ALGORITHMS

A. Decision tree induction classification algorithms
In the initial stage, various forms of decision tree learning were used to analyze big data. In decision tree induction algorithms, a tree structure is widely used to represent classification models, and most of these algorithms follow a greedy top-down recursive partitioning strategy for growing the tree. Decision tree classifiers break a complex decision into a collection of simpler decisions. Hall et al. [8] proposed learning rules for a large set of training data, generating a single decision system from large, independent subsets of the data. An efficient decision tree algorithm based on the Rainforest framework was developed for classifying large data sets [6].

B. Evolutionary based classification algorithms
Evolutionary algorithms use domain-independent techniques to explore large search spaces, finding consistently good optimization solutions. There are different types of evolutionary algorithms, such as genetic algorithms, genetic programming, evolution strategies, and evolutionary programming. Among these, genetic algorithms have mostly been used for mining classification rules in large data sets [3].

C. Partitioning based clustering algorithms
In partitioning based algorithms, a large data set is divided into a number of partitions, where each partition represents a cluster. K-means is one such partitioning method, dividing a large data set into a number of clusters. Fuzzy C-Means is a partition based clustering algorithm based on K-means that divides big data into several clusters [1].
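As a concrete illustration of partitioning based clustering, the sketch below runs K-means on a small synthetic data set using scikit-learn. The data, parameter values, and library choice are illustrative assumptions rather than part of the surveyed algorithms.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Synthetic data: two well-separated groups of 2-D points.
X = np.vstack([rng.normal(0.0, 0.5, (100, 2)),
               rng.normal(5.0, 0.5, (100, 2))])

# Partition the data set into k = 2 clusters.
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print(km.cluster_centers_)   # approximate group centers
print(km.labels_[:5])        # cluster assignments of the first points
```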
D. Hierarchical based clustering algorithms
In hierarchical algorithms, large data sets are organized in a hierarchical manner based on a measure of proximity. The initial, or root, cluster gradually divides into several clusters, following a top-down or bottom-up strategy. BIRCH is one such hierarchical clustering algorithm. To handle streaming data in real time, a novel algorithm for extracting semantic content was defined in work on hierarchical clustering for concept mining; it was designed to be implemented in hardware, so as to handle data at very high rates. Subsequently, the techniques of self-organizing feature map (SOM) networks and learning vector quantization (LVQ) networks were discussed in work on hierarchical artificial neural networks for recognizing highly similar large data sets. SOM consumes input in an unsupervised manner whereas LVQ is supervised; the approach subdivides large data sets into smaller ones, improving the overall computation time needed to process the large data set.
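The sketch below illustrates hierarchical clustering in the BIRCH style using scikit-learn's Birch implementation, feeding the data in chunks the way a stream would arrive. The chunked input, threshold values, and test points are illustrative assumptions.

```python
import numpy as np
from sklearn.cluster import Birch

rng = np.random.default_rng(1)
# BIRCH builds a compact CF-tree, so it can absorb data incrementally.
birch = Birch(threshold=0.8, branching_factor=50, n_clusters=3)

# Feed the data in chunks, as a stream would arrive.
for _ in range(10):
    chunk = np.vstack([rng.normal(c, 0.3, (20, 2)) for c in (0.0, 3.0, 6.0)])
    birch.partial_fit(chunk)

# Assign new points to the three clusters found in the hierarchy.
print(birch.predict(np.array([[0.1, 0.0], [5.9, 6.1]])))
```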

E. Density based clustering algorithms
In density based algorithms, clusters are formed from regions of data objects according to density, connectivity, and boundary. A cluster grows in whatever direction density growth leads. DENCLUE is one such density based clustering algorithm.

F. Grid based clustering algorithms
In grid based algorithms, the space of data objects is divided into a number of grid cells for fast processing. OptiGrid is one such algorithm, based on optimal grid partitioning.

G. Model based clustering algorithms
In model based clustering algorithms, clustering is mainly performed through probability distributions. Expectation-Maximization (EM) is one such model based algorithm, used to estimate the maximum-likelihood parameters of statistical models. In 2013, a new algorithm called scalable Visual Assessment of Tendency (sVAT) was developed to provide highly scalable clustering of big data sets. Afterwards, a distributed ensemble classifier based on the popular Random Forests was developed for big data; it makes use of MapReduce to improve efficiency and stochastic-aware random forests to reduce randomness. Later, a mixed visual and numerical clustering algorithm for big data called clusiVAT was developed to provide fast clustering [3], [5].
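As an illustration of the model based approach, the sketch below fits a two-component Gaussian mixture with the EM algorithm as implemented in scikit-learn; the synthetic data and parameter choices are assumptions for demonstration only.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(2)
# Two overlapping 1-D populations with different means.
X = np.concatenate([rng.normal(0.0, 1.0, 300),
                    rng.normal(4.0, 1.0, 300)]).reshape(-1, 1)

# EM estimates the maximum-likelihood parameters of the mixture.
gm = GaussianMixture(n_components=2, random_state=0).fit(X)
print(gm.means_.ravel())   # recovered component means (about 0 and 4)
print(gm.weights_)         # mixing proportions (about 0.5 each)
```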
VI. CHALLENGES WITH BIG DATA

We are awash in a flood of data today. In a broad range of application areas, data is being collected at unprecedented scale, and decisions that were previously based on guesswork, or on painstakingly constructed models of reality, can now be made from the data itself. Such Big Data analysis now drives nearly every aspect of modern society, including mobile services, retail, manufacturing, financial services, life sciences, and physical sciences.

Because Big Data applications feature autonomous sources and decentralized control, aggregating distributed data sources into a centralized site for mining is systematically prohibitive due to the potential transmission cost and privacy concerns. On the other hand, although we can always carry out mining activities at each distributed site, the biased view of the data collected at each site often leads to biased decisions or models, just as in the parable of the blind men and the elephant. Under such circumstances, a Big Data mining system has to enable an information exchange and fusion mechanism to ensure that all distributed sites (or information sources) can work together to achieve a global optimization goal. Model mining and correlation are the key steps to ensure that models or patterns discovered from multiple information sources can be consolidated to meet the global mining objective.

More specifically, global mining can be structured as a two-step process (local mining followed by global correlation) operating at the data, model, and knowledge levels. At the data level, each local site can calculate data statistics from its local sources and exchange these statistics between sites to achieve a global view of the data distribution. At the model or pattern level, each site can carry out local mining activities on its own data to discover local patterns; by exchanging patterns between multiple sources, new global patterns can be synthesized by aggregating patterns across all sites. At the knowledge level, model correlation analysis investigates the relevance between models generated from different data sources, to determine how the data sources are correlated with each other and how to form accurate decisions based on models built from autonomous sources [4].
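As a toy illustration of the data-level step just described, the sketch below has each site summarize its raw data as simple local statistics, which are then exchanged and merged into a global mean and variance without moving the raw data. The site data and function names are assumptions for demonstration.

```python
import numpy as np

def local_stats(x):
    """Each site summarizes its data as (count, sum, sum of squares)."""
    x = np.asarray(x, dtype=float)
    return len(x), x.sum(), (x ** 2).sum()

def global_stats(stats):
    """Merge exchanged statistics into a global mean and variance."""
    n = sum(s[0] for s in stats)
    total = sum(s[1] for s in stats)
    squares = sum(s[2] for s in stats)
    mean = total / n
    variance = squares / n - mean ** 2
    return mean, variance

# Three autonomous sites, each holding its own local data source.
rng = np.random.default_rng(3)
sites = [rng.normal(10.0, 2.0, 1000) for _ in range(3)]
print(global_stats([local_stats(s) for s in sites]))
```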

The rise of Big Data is driven by the rapid increase of complex data and its changes in volume and in nature [6]. Documents posted on WWW servers, Internet backbones, social networks, communication networks, transportation networks, and so on are all featured with complex data. While the complex dependency structures underneath the data raise the difficulty for our learning systems, they also offer exciting opportunities that simple data representations are incapable of achieving. For example, researchers have successfully used Twitter, a well-known social networking site, to detect events such as earthquakes and major social activities with nearly real-time speed and very high accuracy. In addition, by summarizing the queries users submit to search engines from all over the world, it is now possible to build an early warning system for detecting fast-spreading flu outbreaks. Making use of complex data is a major challenge for Big Data applications, because any two parties in a complex network are potentially connected to each other by a social link. The number of such connections is quadratic in the number of nodes, so a million-node network may be subject to on the order of one trillion connections. For a large social network site such as Facebook, the number of active users has already reached 1 billion, and analyzing such an enormous network is a big challenge for Big Data mining; if daily user actions and interactions are taken into consideration, the scale of the difficulty is even more astonishing.

Complex heterogeneous data types. In Big Data, data types include structured, unstructured, and semi-structured data: tabular data (relational databases), text, hyper-text, image, audio, and video data, and so on. The existing data models include key-value stores, BigTable clones, document databases, and graph databases, listed in ascending order of model complexity. Traditional data models are incapable of handling complex data in the context of Big Data, and currently there is no acknowledged effective and efficient data model for it.

Complex intrinsic semantic associations in data. News on the web, comments on Twitter, pictures on Flickr, and video clips on YouTube may all discuss the same academic award-winning event; there is no doubt that strong semantic associations exist in these data. Mining complex semantic associations from text-image-video data would significantly help improve the performance of application systems such as search engines and recommendation systems. However, in the context of Big Data it is a great challenge to efficiently describe semantic features and to build semantic association models that bridge the semantic gap between heterogeneous data sources.

Complex relationship networks in data. In the context of Big Data, relationships exist between individuals. On the Internet, individuals are webpages, and pages linking to each other via hyperlinks form a complex network. Social relationships between individuals also form complex social networks, such as the big relationship data from Facebook, Twitter, LinkedIn, and other social media, including call detail records (CDR), device and sensor information, GPS and geocoded map data, massive image files transferred by managed file transfer protocols, web text and click-stream data, scientific information, e-mail, and so on. To deal with complex relationship networks, emerging research efforts have begun to address the issues of structure-and-evolution, crowds-and-interaction, and information-and-communication [4], [6].

VII. CONCLUSION

Analysis of large volumes of data is required to create business intelligence and to enhance growth in business and scientific research. As this paper has discussed, such analysis faces many technical challenges: the obvious issues of scale, but also heterogeneity, lack of structure, error handling, privacy, timeliness, provenance, and visualization, at all stages of the analysis pipeline from data acquisition to result interpretation. These technical challenges are common across a large variety of application domains, and are therefore not cost-effective to address in the context of one domain alone. Different solutions are required to analyze big data.

REFERENCES

[1] Xindong Wu, Xingquan Zhu, Wei Ding, "Data Mining with Big Data," IEEE Transactions on Knowledge and Data Engineering, Vol. 26, No. 1, January 2014.
[2] Albert Bifet, "Mining Big Data in Real Time," Informatica 37 (2013), pp. 15-20.
[3] Nawsher Khan, Ibrar Yaqoob, Ibrahim Abaker Targio Hashem, Zakira Inayat, et al., "Big Data: Survey, Technologies, Opportunities, and Challenges," The Scientific World Journal, Volume 2014, Article ID 712826, 18 pages.
[4] "Challenges and Opportunities with Big Data," a community white paper developed by leading researchers across the United States.
[5] Sherin A, Dr. S Uma, Sarnya K, Sarnya Vani M, "Survey on Big Data Mining Platforms, Algorithms and Challenges," International Journal of Computer Science & Engineering Technology (IJCSET).
[6] Bharti Thakur, Manish Mann, "Data Mining for Big Data: A Review," International Journal of Advanced Research in Computer Science and Software Engineering, Volume 4, Issue 5, May 2014, ISSN 2277-128X.
[7] B. Pang and L. Lee, "Opinion Mining and Sentiment Analysis," Foundations and Trends in Information Retrieval, 2(1-2):1-135, 2008.
[8] M. Helft, "Google Uses Searches to Track Flu's Spread," The New York Times, http://www.nytimes.com/2008/11/12/technology/internet/12flu.html, 2008.