A Look Into the World of Reddit with Neural Networks

Jason Ting
Institute for Computational and Mathematical Engineering
Stanford University, Stanford, CA 94305
jmting@stanford.edu

Abstract

Creating, placing, and characterizing social media comments and reactions is a challenging problem. This is particularly true for reddit.com, a highly trafficked social media website with thousands of posts per day. Each post has an associated comment thread, and users of Reddit can vote the comments up or down, generating a net score, or Karma, for each comment. Users aspire to collect this Karma, and these comments build the community on Reddit. When reading the content of a comment thread, however, it is often unclear why some comments succeed and receive high Karma while other comments lose Karma. This project uses Recursive Neural Networks for sentiment analysis of Reddit comments, to give better insight into Reddit communities, and Recurrent Neural Networks to characterize the voting patterns of Reddit users and determine the Karma strength of comments by identifying comments with positive and negative Karma.

Introduction

Reddit is an entertainment, social networking, and news website where registered community members can submit content, such as text posts or direct links to websites. Content entries are organized by topics of interest called subreddits. Users can then vote submissions up or down to determine their position on the site's pages, based on the net positive votes that a post accumulates, called Karma. Furthermore, users can comment on these submissions and vote on those comments, which likewise determines their position on the page. The website has gained massive popularity over the years, receiving over 7.5 billion total page views and 17 million unique visitors a month. Moreover, Reddit users vote over 24 million times on the website each day [1].
The enormous popularity of Reddit makes it a fascinating website for social analysis. With such a large number of users writing comments and responses to posts and voting on content, there are plenty of insightful patterns of social interaction characterized by the posted comments and the Karma associated with each post [2]. Since users upvote content they believe is good and downvote content they believe is bad, comments with positive Karma can be associated with positive sentiment, and comments with negative Karma with negative sentiment, in the eyes of the users of their respective subreddit. The primary motivation of this project is to gain better insight into what brings people together in online communities, through sentiment analysis of subreddit comments using Recursive Neural Networks (RecNN), and into how users perceive sentiment in the context of their own community, using Recurrent Neural Networks (RNN) to classify which comments have net positive or negative Karma in their respective community.

Background/Related Work

While neural networks have not been applied to analyzing Reddit comments before, there has been past work using Reddit's data to analyze the website. Lakkaraju [3] sought to evaluate Reddit submissions based on the title of the submission. That paper studied how the title, submission time, and community choice of image submissions affect the success of the content by investigating resubmitted images on Reddit. The language models used include modeling good and bad words, LDA, part-of-speech tagging, title length, sentiment analysis, and weighted Jaccard similarity. Using these language models, the authors were able to account for how the success of each submission was influenced by its title and by how well that title was targeted to the community. Although the language model is an important factor in the success of a post, the quality of the content, the submission time, and the community also contributed greatly to its success, especially for posts that accumulated a lot of Karma.

Experiment Data

The Reddit API and code are open sourced, so Reddit's comments can be readily scraped. Initially, comments were going to be scraped continuously over the period of a week, but that dataset took fresh comments off r/new and had no voting data associated with the posts. Furthermore, Reddit rate-limits requests, permitting users to grab data only every two seconds. To resolve these issues, I ran a MapReduce [4] job that uses multiple computers to fetch data from Reddit simultaneously, utilizing the praw library [5] to interact with the Reddit API. Using this method, the dataset consists of approximately 1 million comments from Reddit taken from the top 2 posts of 10 popular subreddits. Comments longer than a set threshold were filtered out. Some time series analysis of the scraped comments is shown in Figure 1.
[Figure 1 shows three plots: (a) Frequency of Comment Time, (b) Vote Decay Function over Time, and (c) Frequency of Negative Comment Time, each against comment time in hours from the parent post's creation.]

Figure 1: Time series analysis of Reddit comments. Figure 1(a) shows the frequency of comments over time, where the distribution is Gaussian with a right tail and the number of posts on a thread peaks 4 hours after the post is created. Figure 1(b) shows the vote decay over time, where comments posted within the first hour performed the best. Figure 1(c) shows the frequency of comments with negative Karma over time, where the distribution is Gaussian with a right tail similar in shape to Figure 1(a).
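The filtering and labeling step described in the Experiment Data section can be sketched in Python. The length threshold and the comment field names ("body", "score") are illustrative assumptions, since the paper does not specify the exact cutoff:

```python
# Sketch of the preprocessing described above: drop comments over a length
# threshold and label the rest by the sign of their net Karma. The dict
# fields ("body", "score") mirror a Reddit comment's text and net votes,
# and the threshold value is a hypothetical choice.
MAX_LEN = 500  # assumed length cutoff, in characters


def preprocess(comments):
    """Filter out overlong comments and attach a positive/negative Karma label."""
    examples = []
    for c in comments:
        if len(c["body"]) > MAX_LEN:
            continue  # filtered out, as in the paper
        if c["score"] == 0:
            continue  # zero net Karma carries no vote signal either way
        label = 1 if c["score"] > 0 else 0
        examples.append((c["body"], label))
    return examples
```

Dropping zero-Karma comments is an extra assumption here: a comment with no net votes gives no signal for the positive/negative classification task.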

Based on Figures 1(a) and 1(b), Reddit users most frequently comment soon after a post is created, and comments posted early after the post's creation generally do best. Just as Lakkaraju [3] concluded that submission time significantly contributes to the success of a post, comment posting time significantly contributes to the success of a comment, making it difficult to build a model that relies entirely on comment content to predict the total Karma a comment will receive. However, Figure 1(c) shows that the distribution of comments that receive negative Karma is similar to the distribution shown in Figure 1(a), thereby controlling for the time component along with other confounding factors and permitting sentiment-driven models to classify comments as having positive or negative Karma.

Evaluation Metric

The evaluation metric used for this project is the accuracy of comment classification, along with a confusion matrix of which comments are misclassified. Cross validation is used to train the models, with 70% of the data assigned to the training set, 15% to the validation set, and 15% to the test set. Each model is trained on each subreddit separately, and the reported results are the average performance on the training and test sets across the 10 scraped subreddits.

Approach

These neural network models should capture both the conventional sentiment of the comments and sentiment as perceived by the users of their respective subreddit. For the former, RecNNs have the capacity to learn from data with a nested hierarchical structure and to capture the sentiment of a phrase [6], so RecNNs are used for that task. For determining sentiment in the subreddit's own context, through the positive and negative Karma labels, RNNs are used.
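The evaluation protocol from the Evaluation Metric section above (a 70/15/15 split, plus accuracy and a confusion matrix) can be sketched as follows; the helper names are my own:

```python
import random


def split_70_15_15(examples, seed=0):
    """Shuffle and split into 70% train / 15% validation / 15% test."""
    rng = random.Random(seed)
    data = list(examples)
    rng.shuffle(data)
    n = len(data)
    n_train = int(0.70 * n)
    n_val = int(0.15 * n)
    return data[:n_train], data[n_train:n_train + n_val], data[n_train + n_val:]


def accuracy_and_confusion(y_true, y_pred, n_classes):
    """Accuracy plus an n_classes x n_classes confusion matrix
    (rows = true class, columns = predicted class)."""
    conf = [[0] * n_classes for _ in range(n_classes)]
    for t, p in zip(y_true, y_pred):
        conf[t][p] += 1
    acc = sum(conf[i][i] for i in range(n_classes)) / len(y_true)
    return acc, conf
```

Per the paper's setup, this split-and-score routine would be run once per subreddit, and the per-subreddit accuracies averaged.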
Since each comment is associated with a single positive/negative label for the full comment, the RNNs must learn long-term dependencies and cope with the vanishing gradient problem, where the gradient signal gets so small that learning either becomes very slow or stops working altogether. The Gated Recurrent Unit and Long Short-Term Memory are models that account for these problems and are best suited to the task. The RNNs are implemented with the Theano library [7] and trained with a cross-entropy loss function.

Baseline Model

To demonstrate the effectiveness of the RNNs, a baseline model is used to classify the Reddit comments. For features I used bag of words, with the supervised learning models Gaussian Naive Bayes and Support Vector Machines (SVM).

Recursive Neural Network

RecNNs compute compositional vector representations for phrases of variable length and syntactic type and use them to classify each phrase [6], as illustrated in Figure 2. Sentiment is classified into 5 classes, where the posterior distribution over labels at a node with vector a is defined as:

y^a = softmax(W_s a)

The data comes from Stanford's CoreNLP Sentiment Treebank [8]; I used the Stanford Parser to generate parse trees and binarized them using the TreeBinarizer class within lexparser, collapsing single-child nodes without labels.

Figure 2: A diagram [6] of a Recursive Neural Network for sentiment. The parent vectors are computed from the bottom up with a compositionality function g, and the node vectors are used as features for a classifier at each node.
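A minimal numpy sketch of the node-level computation described above: a parent vector is composed from its two children with a compositionality function g, and the node's class posterior is y^a = softmax(W_s a). The plain-tanh composition and the vector dimensions are simplifying assumptions; the models in [6] use richer composition functions.

```python
import numpy as np


def softmax(z):
    # Numerically stable softmax over a 1-D array.
    e = np.exp(z - z.max())
    return e / e.sum()


def compose(b, c, W):
    """Compositionality function g: parent vector from children [b; c]."""
    return np.tanh(W @ np.concatenate([b, c]))


def classify_node(a, W_s):
    """Posterior over the 5 sentiment classes at a node: y^a = softmax(W_s a)."""
    return softmax(W_s @ a)
```

With word-vector dimension d, W has shape (d, 2d) and W_s has shape (5, d); applying compose bottom-up over a binarized parse tree and classify_node at every node mirrors the structure in Figure 2.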

Figure 3: A diagram [9] of the GRU model in (a) and the LSTM model in (b). A description of the role of each equation is written above each component.

Gated Recurrent Unit

The Gated Recurrent Unit (GRU) has the capacity to let the gradient flow at different strengths depending on the inputs and to capture long-term dependencies [10]. The GRU can be broken down into four fundamental operational stages, each described by one of the following equations; the model is depicted in Figure 3(a):

Update Gate: z^(t) = σ(W^(z) x^(t) + U^(z) h^(t-1))
Reset Gate: r^(t) = σ(W^(r) x^(t) + U^(r) h^(t-1))
New Memory: h̃^(t) = tanh(r^(t) ⊙ U h^(t-1) + W x^(t))
Hidden State: h^(t) = (1 - z^(t)) ⊙ h̃^(t) + z^(t) ⊙ h^(t-1)

Long Short-Term Memory

Long Short-Term Memory (LSTM) is a more complex recurrent model that differs from the GRU. The LSTM can be broken down into six fundamental operational stages [11], each described by one of the following equations; the model is depicted in Figure 3(b):

Input Gate: i^(t) = σ(W^(i) x^(t) + U^(i) h^(t-1))
Forget Gate: f^(t) = σ(W^(f) x^(t) + U^(f) h^(t-1))
Output/Exposure Gate: o^(t) = σ(W^(o) x^(t) + U^(o) h^(t-1))
New Memory Cell: c̃^(t) = tanh(W^(c) x^(t) + U^(c) h^(t-1))
Final Memory Cell: c^(t) = f^(t) ⊙ c^(t-1) + i^(t) ⊙ c̃^(t)
Final Hidden State: h^(t) = o^(t) ⊙ tanh(c^(t))

Results

RecNN Comment Sentiment

The results are shown in Figure 4. Each model was tested using coarse hyperparameter tuning to find optimal learning rates and regularization. I implemented a two-layer fully connected neural network with a softmax classifier on top and did coarse hyperparameter tuning over learning rate, regularization, number of hidden-layer units, dropout rate, and number of epochs. I also added a depth-level index in the hidden layer.
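The GRU and LSTM equations above can be written as single-step functions. This numpy sketch keys the parameter names to the matrices in the equations; the dictionary layout and dimensions are illustrative, not the paper's implementation:

```python
import numpy as np


def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))


def gru_step(x, h_prev, p):
    """One GRU step; p holds the weight matrices named as in the equations."""
    z = sigmoid(p["Wz"] @ x + p["Uz"] @ h_prev)            # update gate
    r = sigmoid(p["Wr"] @ x + p["Ur"] @ h_prev)            # reset gate
    h_tilde = np.tanh(r * (p["U"] @ h_prev) + p["W"] @ x)  # new memory
    return (1 - z) * h_tilde + z * h_prev                  # hidden state


def lstm_step(x, h_prev, c_prev, p):
    """One LSTM step; returns (h, c)."""
    i = sigmoid(p["Wi"] @ x + p["Ui"] @ h_prev)        # input gate
    f = sigmoid(p["Wf"] @ x + p["Uf"] @ h_prev)        # forget gate
    o = sigmoid(p["Wo"] @ x + p["Uo"] @ h_prev)        # output/exposure gate
    c_tilde = np.tanh(p["Wc"] @ x + p["Uc"] @ h_prev)  # new memory cell
    c = f * c_prev + i * c_tilde                       # final memory cell
    h = o * np.tanh(c)                                 # final hidden state
    return h, c
```

In practice these steps are iterated over a comment's word vectors, and the final hidden state feeds the positive/negative Karma classifier; the forget and update gates are what let gradients flow across long spans.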
The best model was the 2-layer, 35-hidden-unit model, which achieved 81.26% classification accuracy on the training set and 81.23% on the test set, significantly better than the baseline model; the confusion matrix of the best model can be seen in Figure 5. Despite this accuracy, the confusion matrix showed that the misclassifications tend to

guess comments as more neutral than they are.

[Figure 4 is a bar chart, "RecNN Accuracies Averaged on Subreddits", comparing training- and test-set accuracy for a bag-of-features baseline and RecNN variants; Figure 5 shows the confusion-matrix counts of the best model.]

Figure 4: RecNN accuracies; L is the number of layers and HL is the number of hidden-layer units in the model.

Figure 5: Confusion matrix for the highest-performing RecNN model. Sentiment classes are ordered: very negative, negative, neutral, positive, very positive.

Using the best RecNN sentiment model, the distribution of sentiment for 10 popular subreddits can be seen in Figure 6.

RNN Comment Karma

The results are shown in Figure 7. Each model was tested using coarse hyperparameter tuning to find optimal learning rates and regularization; I tuned over learning rate, regularization, number of hidden-layer units, dropout rate, and number of epochs. The networks run with tuned parameters showed higher performance than the linear classifiers on the hand-built features. The best model was the LSTM with 128 hidden units, which achieved 93.6% classification accuracy on the training set and 92.79% on the test set, significantly better than the baseline model; the confusion matrix of the best model can be seen in Figure 8. The confusion matrix shows that the types of misclassifications are mostly even.

[Figure 6 is a stacked bar chart, "Subreddit Sentiment Breakdown", of the percentage of comments in each sentiment class per subreddit; Figure 7 is a bar chart, "RNN Accuracies Averaged on Subreddits", comparing training- and test-set accuracy for Naive Bayes, SVM, and GRU/LSTM models with 64, 128, and 256 hidden units.]

Figure 6: Percentage of comments in each sentiment class for 10 popular subreddits: news, politics, television, science, gaming, pics, movies, circlejerk, aww, and funny.

Figure 7: RNN accuracies; HL is the number of hidden-layer units in the model.

Conclusion

This project demonstrated that using RecNNs for sentiment analysis on Reddit comments is viable and works well, and that GRUs and LSTMs are reliable methods for measuring the quality of Reddit comments. The highest-performing RNN models were able to classify the Karma strength of comments

well despite being re-purposed from classifying sentiment, which indicates that there is some correlation between Karma and perceived sentiment. The best RecNN model classified comment sentiment with 81.23% accuracy; the best RNN model classified comment Karma with 92.79% accuracy. There are some issues with the approach that must be mentioned. Because Reddit is an internet forum, comments do not necessarily have to be context driven. For example, many users post a comment that is just a URL to an image serving as their reaction, or that references other users, which provides little to no information on sentiment. As a result, these types of comments are noise to the model, and this approach would not work well in subreddits where users frequently express their comments' content indirectly. Furthermore, the dataset consists of popular subreddits, so the models may not generalize to less popular subreddits or to subreddits with peculiar comments. The next step is to train the models on less popular subreddits to test how well they work more generally. The RNN model could also be improved, and could provide further analysis, by incorporating the other features that contribute significantly to the success of a comment, such as the time of the post, the post title, and the post content. By integrating these features, the model can become complex and modular enough to predict numerical Karma scores accurately.

[Figure 8 shows the 2x2 confusion-matrix counts of the best RNN model.]

Figure 8: Confusion matrix for the highest-performing RNN model. Classes are ordered: positive, negative.

References

[1] Reddit. https://www.reddit.com/about, 2015. [Online; accessed May 2015].
[2] Singer, Philipp, et al. Evolution of Reddit: from the front page of the internet to a self-referential community? Proceedings of the companion publication of the 23rd International Conference on World Wide Web.
International World Wide Web Conferences Steering Committee, 2014.
[3] H. Lakkaraju, J. McAuley, and J. Leskovec. What's in a name? Understanding the interplay between titles, content, and communities in social media. In Seventh International AAAI Conference on Weblogs and Social Media, 2013.
[4] Dean, Jeffrey, and Sanjay Ghemawat. MapReduce: simplified data processing on large clusters. Communications of the ACM 51.1 (2008): 107-113.
[5] praw. https://praw.readthedocs.org/, 2015. [Online; accessed May 2015].
[6] Socher, Richard, et al. Recursive deep models for semantic compositionality over a sentiment treebank. Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP). Vol. 1631. 2013.
[7] F. Bastien, P. Lamblin, R. Pascanu, J. Bergstra, I. Goodfellow, A. Bergeron, N. Bouchard, D. Warde-Farley and Y. Bengio. Theano: new features and speed improvements. NIPS 2012 Deep Learning Workshop.
[8] Manning, Christopher D., Surdeanu, Mihai, Bauer, John, Finkel, Jenny, Bethard, Steven J., and McClosky, David. 2014. The Stanford CoreNLP Natural Language Processing Toolkit. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics: System Demonstrations, pp. 55-60.
[9] Chung, Junyoung, et al. Gated feedback recurrent neural networks. arXiv preprint arXiv:1502.02367 (2015).
[10] CS 224D: Deep Learning for NLP, Lecture Notes: Part IV. http://cs224d.stanford.edu [Online; accessed May 2015].
[11] Hochreiter, Sepp, and Jürgen Schmidhuber. Long short-term memory. Neural Computation 9.8 (1997): 1735-1780.