Real Time Opinion Mining of Twitter Data




Narahari P Rao #1, S Nitin Srinivas #2 and Prashanth C M *3
# B.E., Dept. of CS&E, SCE, Bangalore, India
* Prof. & HOD, Dept. of CS&E, SCE, Bangalore, India

Abstract: Social networking sites provide tremendous impetus for Big Data in mining people's opinions. Public APIs catered by sites such as Twitter supply useful data for perusing a writer's attitude towards a particular topic, product and so on. To discern people's opinion, tweets are tagged as positive, negative or neutral indicators. This paper provides an effective mechanism to perform opinion mining by designing an end-to-end pipeline with the help of Apache Flume, Apache Hadoop, Apache Oozie and Apache Hive. To make this process near real time, we study the workaround of ignoring Flume tmp files and removing the default wait condition from the Oozie job configuration. The underlying architecture employed here is not restricted to opinion mining but has a gamut of applications, and this paper explores a few of the use cases that can be developed into actual working models.

Keywords: Opinion Mining, Big Data, Real Time Tweet Analysis, Oozie Workaround

I. INTRODUCTION

Opinions are subjective expressions that outline people's appraisals, feelings or sentiments toward entities, events and their properties. Recently there has been a massive escalation in the use of social networking sites such as Twitter to express people's opinions. Impelled by this growth, companies, media houses and review groups are progressively seeking ways to mine information about what people think and feel about a particular product or service. Twitter data is a valuable source of information for marketing intelligence and trend analysis in all industries. Twitter generates gigantic volumes of data that cannot be handled manually, hence the requirement for automatic categorization. Tweets are unambiguous short text messages of up to a maximum of 140 characters, and these texts are polarized based on the nature of the comment. The focus of this paper is to provide an automated mechanism for collecting, aggregating, streaming and analyzing tweets in a near real time environment, together with a glimpse of two of its use case scenarios.

II. RELATED WORKS

Opinion mining is one of the most popular trends in today's world, and a considerable amount of research and literature survey is being carried out in this sector. Bo Pang and Lillian Lee are pioneers in this field. Current works that take a mathematical approach to opinion polarity are based on a classifier trained using a collection of annotated text data. Before training, the data is preprocessed so as to extract only the main content. Some of the classification methods that have been proposed are Naïve Bayes, Support Vector Machines and K-Nearest Neighbors, and continuous research is being done to determine the most efficient method for opinion mining.

III. OUR APPROACH

A. Data From Twitter

Twitter provides a Streaming API which is employed to obtain a constant stream of tweets, enabling us to collect and analyze user opinion. The Streaming API works by making a request for a specific type of data, filtered by keyword, user, geographic area and so on. Once a connection to the API is established, data collection takes place. The tweets collected are encoded in JavaScript Object Notation (JSON), which provides a convenient way to encode this data. The whole tweet is regarded as a dictionary consisting of various fields, such as contributors (the users who have authored the tweet), coordinates (the geographic location of the tweet as reported by the client application), favorite_count (the number of times the tweet has been favorited), text (the actual text of the tweet) and several others.
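For illustration, a single tweet delivered by the Streaming API arrives as a nested JSON object along the following lines. The sample is heavily truncated and the values are invented; only a handful of the many fields are shown.

{
  "created_at": "Mon Apr 20 07:31:45 +0000 2015",
  "id_str": "590080576167649280",
  "text": "Learning opinion mining with #bigdata and Hive",
  "favorite_count": 2,
  "contributors": null,
  "coordinates": null,
  "entities": { "hashtags": [ { "text": "bigdata" } ] },
  "user": { "screen_name": "example_user", "followers_count": 120 }
}

It is this nesting (entities, user and similar fields) that later motivates the use of a JSON SerDe and HiveQL's complex types rather than a flat relational schema.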
Fig. 1 Workflow of the system: Twitter Streaming API, Flume, HDFS, MapReduce/Hive, Oozie

Gathering Data with Apache Flume

To automate the movement of tweets from the Streaming API into HDFS without manual intervention, Flume is used. Apache Flume is a reliable and distributed system for effectively gathering and moving large amounts of data from various sources to a common storage area. The major components of Flume are the source, the memory channel and the sink.

Fig. 2 Components of Flume: Twitter source, memory channel, HDFS sink

The Twitter source is an event-driven source that uses the Twitter4J library to access the Streaming API. Tweets are collected and aggregated into fundamental units of data called events; an event incorporates a byte payload and an optional header. The coordination of event flows from the Streaming API to HDFS is undertaken by the Flume agent. The acquired tweets are stored in one or more memory channels. A memory channel is a temporary store that uses an in-memory queue to retain events until they are ingested by the sink, so tweets are processed in batches that can be configured to hold a constant number of tweets. To procure tweets for a given keyword, a filter query is used. The sink writes events to a preconfigured location; this system makes use of the HDFS sink, which deposits the tweets into HDFS.
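A minimal Flume agent definition for this pipeline might look as follows. This is a sketch rather than the authors' configuration: the agent and component names, the OAuth credentials, the keyword list and the HDFS path are placeholders, and the source class shown is the JSON-emitting TwitterSource distributed with the well-known Cloudera Twitter example (any equivalent Twitter source can be substituted).

# Hypothetical agent "TwitterAgent" with one source, one memory channel and one HDFS sink.
TwitterAgent.sources = Twitter
TwitterAgent.channels = MemChannel
TwitterAgent.sinks = HDFS

TwitterAgent.sources.Twitter.type = com.cloudera.flume.source.TwitterSource
TwitterAgent.sources.Twitter.channels = MemChannel
TwitterAgent.sources.Twitter.consumerKey = <consumer key>
TwitterAgent.sources.Twitter.consumerSecret = <consumer secret>
TwitterAgent.sources.Twitter.accessToken = <access token>
TwitterAgent.sources.Twitter.accessTokenSecret = <access token secret>
TwitterAgent.sources.Twitter.keywords = bigdata, hadoop, hive

# In-memory queue holding events until the sink drains them in batches.
TwitterAgent.channels.MemChannel.type = memory
TwitterAgent.channels.MemChannel.capacity = 10000
TwitterAgent.channels.MemChannel.transactionCapacity = 1000

# One directory per hour, which the hourly Hive partitioning described later relies on.
TwitterAgent.sinks.HDFS.type = hdfs
TwitterAgent.sinks.HDFS.channel = MemChannel
TwitterAgent.sinks.HDFS.hdfs.path = hdfs://namenode:8020/user/flume/tweets/%Y/%m/%d/%H/
TwitterAgent.sinks.HDFS.hdfs.fileType = DataStream
TwitterAgent.sinks.HDFS.hdfs.writeFormat = Text
TwitterAgent.sinks.HDFS.hdfs.batchSize = 1000
TwitterAgent.sinks.HDFS.hdfs.rollSize = 0
TwitterAgent.sinks.HDFS.hdfs.rollCount = 10000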

B. Hadoop

Hadoop is an open source framework for processing and storing large datasets over a cluster. It is used in handling large and complex data, whether structured, unstructured or semi-structured, that does not fit into tables. Twitter data falls into the category of semi-structured data, which can best be stored and analyzed using Hadoop and its underlying file system.

C. Hadoop Distributed File System

The Hadoop Distributed File System (HDFS) is a distributed file system, written in Java, that rests on top of the native file system. It is highly fault tolerant and is designed for commodity hardware. HDFS offers high-throughput access to application data and is suitable for applications with large amounts of data. The master-server architecture of HDFS, with a single name node, helps in regulating file system access.

Fig. 3 Hadoop architecture: name node, secondary name node and data nodes

Requests from file system clients are handled by the data nodes. Data is stored as input splits (blocks) on the underlying file system, and the replication factor is set to 3 by default in order to maintain redundancy of data. In this case, huge numbers of tweets are collected, stored and analyzed in this way.

D. MapReduce

MapReduce is one of the two major components of Hadoop. It is a programming paradigm designed to assist computation that involves data-intensive operations, and it comprises the two distinct jobs that Hadoop programs perform. MapReduce jobs are controlled by a software daemon known as the JobTracker, which resides on a master node; the JobTracker initiates the map() and reduce() tasks on the data nodes where the TaskTracker daemon resides. MapReduce requires a Java programmer, and anything beyond very basic applications requires multiple stages, leading to cumbersome code. Its users have to reinvent common functionalities such as joining and querying, which Hive provides as built-in functions. In this system the tweets are therefore queried using Hive, whose execution engine in turn generates map and reduce jobs to extract the parts of the tweets that the user is interested in.
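To make the contrast with Hive concrete, the following sketch shows what even a trivial analysis, counting hashtag occurrences in the raw JSON tweets, looks like when written directly against the MapReduce API in Java. It is illustrative only and not part of the system: the input and output paths are assumptions, and hashtags are pulled out of each raw JSON line with a simple regular expression rather than a proper JSON parser.

import java.io.IOException;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class HashtagCount {

    // Mapper: each input line is one tweet stored as a JSON blob; emit (hashtag, 1) pairs.
    public static class TagMapper extends Mapper<LongWritable, Text, Text, IntWritable> {
        private static final Pattern TAG = Pattern.compile("#(\\w+)");
        private static final IntWritable ONE = new IntWritable(1);
        private final Text tag = new Text();

        @Override
        protected void map(LongWritable key, Text value, Context context)
                throws IOException, InterruptedException {
            Matcher m = TAG.matcher(value.toString());
            while (m.find()) {
                tag.set(m.group(1).toLowerCase());
                context.write(tag, ONE);
            }
        }
    }

    // Reducer (also used as combiner): sum the counts for each hashtag.
    public static class SumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
        @Override
        protected void reduce(Text key, Iterable<IntWritable> values, Context context)
                throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable v : values) {
                sum += v.get();
            }
            context.write(key, new IntWritable(sum));
        }
    }

    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "hashtag-count");
        job.setJarByClass(HashtagCount.class);
        job.setMapperClass(TagMapper.class);
        job.setCombinerClass(SumReducer.class);
        job.setReducerClass(SumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path("/user/flume/tweets"));       // assumed input
        FileOutputFormat.setOutputPath(job, new Path("/user/results/hashtags")); // assumed output
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}

An equivalent HiveQL aggregation is only a few lines long, which is why the system delegates querying to Hive and reserves hand-written MapReduce for cases Hive cannot express.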
E. Hive

After congregating the tweets into HDFS, they are analyzed by queries using Hive. The Apache Hive data warehouse software facilitates querying and managing large datasets residing in distributed storage. Hive provides a mechanism to project structure onto this data and to query it using a SQL-like language called HiveQL. In the opinion mining system, Hive is used to query out the interesting parts of the tweets, which may be an opinion, comments related to a specific topic or a trending hash tag. The Streaming API loads HDFS with tweets represented as JSON blobs, and processing such Twitter data in a relational database would require significant transformations due to the nested data structures. Hive instead provides easy access to the tweets through HiveQL, which supports nested data structures, and the Hive compiler converts HiveQL queries into MapReduce jobs. The partition feature in Hive allows the tweet tables to be split into different directories; by constructing queries that include partitions, Hive can determine which partitions comprise the result. The locations of the Twitter tables are explicitly specified in a partitioned Hive external table (an illustrative definition is sketched at the end of this section). Hive uses the SerDe (Serializer-Deserializer) interface to determine how records are processed: the Deserializer takes a string of tweets and translates it into a Java object that Hive can manipulate, while the Serializer takes a Java object that Hive has worked on and converts it into the data to be written back to HDFS.

F. Oozie

Once the tweets are loaded into HDFS, they are staged for querying by creating a partitioned external table in Hive. The continuous stream of tweets from the Streaming API creates new files that need to be partitioned regularly after a specific time interval. To automate this periodic process of adding partitions to the table as new data comes in, Apache Oozie is used. Oozie runs workflow jobs whose actions in turn run Hadoop jobs. A workflow comprises actions arranged in a control dependency Directed Acyclic Graph (DAG); a control dependency from one action to another denotes that the second action can run only after the completion of the first. The coordinator application job of Oozie schedules the execution of the recurring Hive queries, adding new Twitter data to the Hive table every hour.
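As an illustration of this staging step, the partitioned external table and the recurring partition-add statement might be written roughly as follows in HiveQL. The column subset, the SerDe class (here the JSON SerDe shipped with the Cloudera Twitter example), the partition naming and the HDFS locations are assumptions made for the sketch, not the authors' exact DDL.

-- Partitioned external table over the directories written by Flume.
CREATE EXTERNAL TABLE IF NOT EXISTS tweets (
  id BIGINT,
  created_at STRING,
  text STRING,
  favorite_count INT,
  entities STRUCT<hashtags: ARRAY<STRUCT<text: STRING>>>
)
PARTITIONED BY (datehour INT)
ROW FORMAT SERDE 'com.cloudera.hive.serde.JSONSerDe'
LOCATION '/user/flume/tweets';

-- The recurring statement that Oozie schedules every hour: register the directory
-- Flume has just finished writing as a new partition of the external table.
ALTER TABLE tweets ADD IF NOT EXISTS PARTITION (datehour = 2015042007)
  LOCATION '/user/flume/tweets/2015/04/20/07';

Because the table is external, dropping it never deletes the tweets Flume has written; Hive only records where each hour's directory lives.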

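The hourly scheduling itself can be expressed as an Oozie coordinator application. The sketch below is likewise illustrative, with placeholder names, paths and dates: it triggers a workflow once per hour and passes it the hour to register, and the referenced workflow is assumed to contain a Hive action that runs an ALTER TABLE ... ADD PARTITION statement like the one above.

<!-- Hypothetical hourly coordinator; app-path, dates and the datehour property are placeholders. -->
<coordinator-app name="add-tweet-partition-coord"
                 frequency="${coord:hours(1)}"
                 start="2015-04-20T00:00Z" end="2015-12-31T00:00Z" timezone="UTC"
                 xmlns="uri:oozie:coordinator:0.2">
  <action>
    <workflow>
      <app-path>hdfs://namenode:8020/user/oozie/apps/add-partition</app-path>
      <configuration>
        <property>
          <!-- Hour being materialized, e.g. 2015042007, consumed by the Hive action. -->
          <name>datehour</name>
          <value>${coord:formatTime(coord:nominalTime(), 'yyyyMMddHH')}</value>
        </property>
      </configuration>
    </workflow>
  </action>
</coordinator-app>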
IV. OOZIE HURDLE

The configuration of the Oozie workflow only adds a new partition to the Hive table when a new hour rolls forward, so there is a wait event built into the Oozie workflow. The wait condition occurs because of a locked tmp file generated by Flume: while the tmp file is being written, Hive is restricted from querying the external table. This wait event prevents the process from being real time.

V. ENHANCEMENT

To eliminate the wait condition and make the process work in real time, the following steps can be employed:

a) Create a custom class so that Hive ignores Flume temporary files:

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.PathFilter;

public class FileFilterExcludeTmpFiles implements PathFilter {
    public boolean accept(Path p) {
        String name = p.getName();
        // Ignore the files that Flume is still writing, which end with ".tmp".
        return !name.endsWith(".tmp");
    }
}

b) Remove the Oozie wait condition: by default the Oozie workflow defines a ready indicator that acts as a wait event, directing the workflow to create a new partition only after a period of one hour. By removing the ready indicator, the wait condition is bypassed.

c) Modify the Hive configuration file to indicate the location of the newly created Java class.

VI. RESULTS

After streaming the tweets into HDFS in real time, Hive is used to analyze them. Tweets are treated as documents whose categories are the hash tags defined in the Flume configuration file. The tweets are then grouped as positive, negative or neutral based on a subjectivity corpus forming a dictionary of words and their polarity (an illustrative HiveQL query of this kind is sketched after Section VII). The sample result in Table I was obtained by mining tweets with the hash tag #bigdata.

TABLE I (Fig. 4 Sample of tabular result)

  Opinion    Count
  Positive   729
  Negative   150
  Neutral    665

Data Visualization

The sample output can be visualized by bubble plot analysis using a data visualization tool such as KNIME.

Fig. 5 Bubble plot of Authority against Good-Bad rating (N = neutral, P = positive, R = negative)

The Good-Bad rating attribute indicates the overall polarity of the tweets; the higher the Authority attribute, the greater the prominence of the rating.

VII. TIME EFFICIENCY

Time efficiency is an important feature of this project. A lower response time is achieved by the use of the custom Java class created above, which reduces the overhead time of one hour for each data set, and the use of Hadoop ensures distributed processing and further lowers the access time. Latency in Fig. 6 denotes the overhead time for accessing the Hive external tables for querying and analysis; the overhead of one hour is reduced significantly to a few minutes. Hence the overall time efficiency increases owing to the above mentioned factors.

Fig. 6 Latency graph: overhead time in minutes before and after the enhancement
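The classification behind Table I can be approximated with a dictionary join of the kind sketched below. It is illustrative only: the dictionary table (holding word and polarity columns, with polarity +1 for positive and -1 for negative terms), the whitespace tokenization and the column names reuse the assumed tweets schema from the earlier sketch rather than the authors' actual queries.

-- Score each tweet by summing the polarities of its words, then bucket the tweets.
SELECT CASE WHEN score > 0 THEN 'POSITIVE'
            WHEN score < 0 THEN 'NEGATIVE'
            ELSE 'NEUTRAL' END AS opinion,
       COUNT(*) AS tweet_count
FROM (
  SELECT words.id, SUM(COALESCE(d.polarity, 0)) AS score
  FROM (
    SELECT id, word
    FROM tweets
    LATERAL VIEW explode(split(lower(text), '\\s+')) w AS word
  ) words
  LEFT OUTER JOIN dictionary d ON (words.word = d.word)
  GROUP BY words.id
) scored
GROUP BY CASE WHEN score > 0 THEN 'POSITIVE'
              WHEN score < 0 THEN 'NEGATIVE'
              ELSE 'NEUTRAL' END;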

X. APPLICATIONS

ASSISTING COLLABORATIVE DECISION MAKING: Collaborative decision making is the situation faced when individuals collectively make a choice from the alternatives before them. The real time opinion mining system can be used as a platform for making group decisions by taking into account the opinions of individuals in real time.

Fig. 7 Use case workflow: tweet collection, verification and group decision phases

The individual opinions are aggregated in an opinion repository and streamed into HDFS by Flume. Using HiveQL, the opinions are classified into positive and negative responses, which assists group decision making.

CORRUPTION VIGILANCE SYSTEM: Corruption is a serious issue adversely affecting the Indian economy; in 2014 India was ranked 85th out of 175 countries in the Corruption Perception Index. The biggest problem with corruption is that it is not exposed as and when it happens, as a result of which offenders evade punishment with ease. We propose a system which uses tweets from users to expose corruption in real time. The underlying architecture of this application can be visualized as shown in Fig. 8. The workflow depicts a scenario where a person witnessing an act of corruption tweets with a unique hash tag comprising the details of the offender. The tweet is procured into HDFS through Flume and the Streaming API in real time, queried for details such as the name of the offender and the location, and stored in HDFS; this completes the tweet collection phase. The offender's identity is then validated for authenticity to detect pseudonymous complaints, concluding the verification phase of the workflow. In the final phase, a table holding the details of the incident as tweeted by the user is stored and disclosed to the public. Once developed into a working model, this system could inspire a social movement involving the public to provide a helping hand to the government in curtailing acts of corruption.

Fig. 8 Use case workflow: tweet collection, validation of the offender's identity against a database of suspected offenders, and accumulation and disclosure

XI. SCOPE FOR FUTURE WORK

A. Deployment in the Cloud: It is an arduous task for an average system to process, analyze and store huge data sets, hence the requirement for powerful machines, which are made available through cloud platforms (IaaS or VPC). Deploying the whole system in the cloud provides hassle-free access to it.

B. Secure e-Voting System: A secure mechanism is required for expressing people's consensus on policy initiatives and electoral procedures in countries that follow direct democracy. Direct democracy involves people's opinion in government decision making through regular referendums in which people cast their votes in polling booths. These referendums require the physical presence of voters at polling stations at frequent intervals, disrupting their daily routines, whereas a secure model with a similar architecture could provide a mechanism for people to cast their votes from home.

XII. CONCLUSION

Opinion mining is a very wide branch of research, and we have covered some of its important aspects. The same architecture could be used for a variety of applications designed to look at Twitter data, such as identifying spam accounts or identifying clusters of keywords. Taking the system even further, the general architecture can also be extended to other sources such as Facebook, movie reviews and personal blogs. Evidently, taking into account all the constraints, this method is one of the most efficient ways to perform opinion mining in real time.