SU-FMI: System Description for SemEval-2014 Task 9 on Sentiment Analysis in Twitter
Boris Velichkov, Borislav Kapukaranov, Ivan Grozev, Jeni Karanesheva, Todor Mihaylov, Yasen Kiprov, Georgi Georgiev, Ivan Koychev, Preslav Nakov
Sofia University; Ontotext; Qatar Computing Research Institute

Abstract

We describe the submission of the team of the Sofia University to SemEval-2014 Task 9 on Sentiment Analysis in Twitter. We participated in subtask B, where the participating systems had to predict whether a Twitter message expresses positive, negative, or neutral sentiment. We trained an SVM classifier with a linear kernel using a variety of features. We used publicly available resources only, and thus our results should be easily replicable. Overall, our system is ranked 20th out of 50 submissions (by 44 teams) based on the average F1-score over the three 2014 evaluation datasets: general tweets, sarcastic tweets, and LiveJournal messages.

1 Introduction

We describe the submission of the team of the Sofia University, Faculty of Mathematics and Informatics (SU-FMI) to SemEval-2014 Task 9 on Sentiment Analysis in Twitter (Rosenthal et al., 2014). This SemEval challenge had two subtasks: subtask A (term-level) asks to predict the sentiment of a phrase inside a tweet; subtask B (message-level) asks to predict the overall sentiment of a tweet message. In both subtasks, the sentiment can be positive, negative, or neutral. Here are some examples:

positive: Gas by my house hit $3.39!!!! I'm going to Chapel Hill on Sat. :)

neutral: New York Giants: Game-by-Game Predictions for the 2nd Half of the Season

negative: Why the hell does Selma have school tomorrow but Parlier clovis & others don't?

negative: wall to wall people on the platform at South Norwalk waiting for the 8:08. Thanks for the Sat. Sched. Great sense

Below we first describe our preprocessing, features and classifier in Section 2. Then, we discuss our experiments, results and analysis in Section 3. Finally, we conclude with possible directions for future work in Section 4.

2 Method

Our approach is inspired by that of the highest-scoring team in 2013, that of NRC Canada (Mohammad et al., 2013). In particular, we reused many of their resources.[1] Our system consists of two main submodules: (i) feature extraction in the framework of GATE (Cunningham et al., 2011), and (ii) machine learning using an SVM with a linear kernel as implemented in LIBLINEAR[2] (Fan et al., 2008).

[1] saif/webpages/abstracts/NRC-SentimentAnalysis.htm
[2] cjlin/liblinear/
2.1 Preprocessing

We integrated a pipeline of various resources for tweet analysis that are already available in GATE (Bontcheva et al., 2013), such as a Twitter tokenizer, a sentence splitter, a hashtag tokenizer, a Twitter POS tagger, a morphological analyzer, and the Snowball stemmer. We further implemented in GATE some shallow text processing components in order to handle negation contexts, emoticons, elongated words, all-caps words and punctuation. We also added components to find words and phrases contained in sentiment lexicons, as well as to annotate words with word cluster IDs using the lexicon built at CMU,[4] which uses the Brown clusters (Brown et al., 1992) as implemented[5] by Liang (2005).

[4] cluster_viewer.html
[5] brown-cluster

2.2 Features

2.2.1 Sentiment lexicon features

We used several pre-existing lexicons, both manually designed and automatically generated:

- Minqing Hu and Bing Liu opinion lexicon (Hu and Liu, 2004): 4,783 positive and 2,006 negative terms;
- MPQA Subjectivity Cues Lexicon (Wilson et al., 2005): 8,222 terms;
- Macquarie Semantic Orientation Lexicon (MSOL) (Mohammad et al., 2009): 30,458 positive and 45,942 negative terms;
- NRC Emotion Lexicon (Mohammad et al., 2013): 14,181 terms with specified emotion.

For each lexicon, we find in the tweet the terms that are listed in it, and then we calculate the following features:

- Negative terms count;
- Positive terms count;
- Positive negated terms count;
- Positive/negative terms count ratio;
- Sentiment of the last token;
- Overall sentiment terms count.

We further used the following lexicons:

- NRC Hashtag Sentiment Lexicon: a list of words and their associations with positive and negative sentiment (Mohammad et al., 2013): 54,129 unigrams, 316,531 bigrams, 480,010 pairs, and 78 high-quality positive and negative hashtag terms;
- Sentiment140 Lexicon: a list of words with associations to positive and negative sentiments (Mohammad et al., 2013): 62,468 unigrams, 677,698 bigrams, 480,010 pairs;
- Stanford Sentiment Treebank: contains 239,231 evaluated words and phrases. If a word or a phrase was found in the tweet, we took the given sentiment label.

For the NRC Hashtag Sentiment Lexicon and the Sentiment140 Lexicon, we calculated the following features for unigrams, bigrams and pairs:

- Sum of positive terms sentiment;
- Sum of negative terms sentiment;
- Sum of the sentiment for all terms in the tweet;
- Sum of negated positive terms sentiment;
- Negative/positive terms ratio;
- Max positive sentiment;
- Min negative sentiment;
- Max sentiment of a term.

We used different features for the two lexicon groups because their contents differ. The first four lexicons provide a discrete sentiment value for each word. In contrast, the NRC Hashtag Sentiment and Sentiment140 lexicons offer numeric sentiment scores, which allows for different feature types such as sums and min/max scores.

Finally, we manually built a new lexicon with all emoticons we could find, where we assigned to each emoticon a positive or a negative label. We then calculated four features: the number of positive and the number of negative emoticons in the tweet, and whether the last token is a positive or a negative emoticon.
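To make the above concrete, here is a minimal Python sketch of the two kinds of lexicon features. It is an illustration only, not our GATE implementation; the lexicon formats, the tokenization, and the per-token negation flags are simplified assumptions.

```python
# Illustrative sketch (not our GATE components): lexicon features for one tweet.

def polarity_lexicon_features(tokens, lexicon, negated):
    """Count-based features for a lexicon mapping token -> 'positive'/'negative'.
    `negated` holds one boolean per token marking negated contexts (assumed)."""
    pos = sum(1 for t in tokens if lexicon.get(t) == 'positive')
    neg = sum(1 for t in tokens if lexicon.get(t) == 'negative')
    pos_negated = sum(1 for t, n in zip(tokens, negated)
                      if n and lexicon.get(t) == 'positive')
    return {
        'pos_count': pos,
        'neg_count': neg,
        'pos_negated_count': pos_negated,
        'pos_neg_ratio': pos / neg if neg else float(pos),
        'last_token_sentiment': lexicon.get(tokens[-1], 'none') if tokens else 'none',
        'sentiment_terms_count': pos + neg,
    }

def score_lexicon_features(tokens, scores):
    """Score-based features for a lexicon mapping token -> real-valued score,
    as in the NRC Hashtag Sentiment and Sentiment140 lexicons."""
    vals = [scores[t] for t in tokens if t in scores]
    pos_vals = [v for v in vals if v > 0]
    neg_vals = [v for v in vals if v < 0]
    return {
        'pos_sum': sum(pos_vals),
        'neg_sum': sum(neg_vals),
        'total_sum': sum(vals),
        'max_positive': max(pos_vals, default=0.0),
        'min_negative': min(neg_vals, default=0.0),
        'max_term_sentiment': max(vals, default=0.0),
    }

if __name__ == '__main__':
    toks = ['not', 'a', 'perfect', 'day']
    lex = {'perfect': 'positive', 'hell': 'negative'}
    print(polarity_lexicon_features(toks, lex, [False, True, True, True]))
```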
2.2.2 Tweet-level features

We use the following tweet-level features:

- All caps: the number of words with all characters in upper case;
- Hashtags: the number of hashtags in the tweet;
- Elongated words: the number of words with character repetitions.

2.2.3 Term-level features

We used the following term-level features:

- Word n-grams: presence or absence of 1-grams, 2-grams, 3-grams, 4-grams, and 5-grams. We add an NGRAM prefix to each n-gram. Unfortunately, the n-grams increase the feature space greatly and contribute to higher sparseness. They also slow down training dramatically. That is why our final submission only includes 1-grams.
- Character n-grams: presence or absence of one-, two-, three-, four- and five-character prefixes and suffixes of all words. We add a PRE or SUF prefix to each character n-gram.
- Negations: the number of negated contexts. We define a negated context as a segment of a tweet that starts with a negation word (e.g., no, shouldn't) from our custom gazetteer and ends with one of the punctuation marks , . : ; ! ?. A negated context affects the n-gram and the lexicon features: we add a NEG suffix to each word following the negation word, e.g., perfect becomes perfect_NEG (see the sketch after this list).
- Punctuation: the number of contiguous sequences of exclamation marks, of question marks, of either exclamation or question marks, and of both exclamation and question marks. Also, whether the last token contains an exclamation or a question mark (excluding URLs).
- Stemmer: the stem of each word, excluding URLs. We add a STEM prefix to each stem.
- Lemmatizer: the lemma of each word, excluding URLs. We add a LEMMA prefix to each lemma. We use the built-in GATE Morphological analyser as our lemmatizer.
- Word and word bigram clusters: word clusters have been shown to improve the performance of supervised NLP models (Turian et al., 2010). We use the word clusters built by CMU's NLP toolkit, which were produced over a collection of 56 million English tweets (Owoputi et al., 2012) using Percy Liang's HMM-based implementation of Brown clustering (Liang, 2005; Brown et al., 1992), which groups the words into 1,000 hierarchical clusters. We use two features based on these clusters: presence/absence of a word in a word cluster, and presence/absence of a bigram in a bigram cluster.
- POS tagging: Social media are generally hard to process using standard NLP tools, which are typically developed with newswire text in mind. Such standard tools are not a good fit for Twitter messages, which are too brief and contain typos and special wordforms. Thus, we used a specialized POS tagger, TwitIE, which is available in GATE (Bontcheva et al., 2013), and which we integrated in our pipeline. It provides (i) a tokenizer specifically trained to handle smileys, user names, URLs, etc., (ii) a normalizer to correct slang and misspellings, and (iii) a POS tagger that uses the Penn Treebank tagset, but is optimized for tweets. Using the TwitIE toolkit, we performed POS tagging and extracted all POS tag types that occur in the tweet, together with their frequencies, as features.
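The negated-context marking can be illustrated with the following short sketch; the gazetteer shown is a tiny illustrative subset rather than our full list, and the input is assumed to be already tokenized.

```python
import re

# Every token after a negation word, up to the next of , . : ; ! ?,
# receives a _NEG suffix; we also count the negated contexts.
NEGATION_WORDS = {'no', 'not', 'never', "don't", "shouldn't", "won't"}  # tiny subset
CONTEXT_END = re.compile(r'^[,.:;!?]+$')

def mark_negation(tokens):
    out, in_neg, contexts = [], False, 0
    for tok in tokens:
        if CONTEXT_END.match(tok):
            in_neg = False
            out.append(tok)
        elif tok.lower() in NEGATION_WORDS:
            in_neg = True
            contexts += 1
            out.append(tok)
        else:
            out.append(tok + '_NEG' if in_neg else tok)
    return out, contexts

tokens, n_contexts = mark_negation(['this', 'is', 'not', 'a', 'perfect', 'day', '!', 'great'])
# tokens     -> ['this', 'is', 'not', 'a_NEG', 'perfect_NEG', 'day_NEG', '!', 'great']
# n_contexts -> 1
```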
2.3 Classifier

For classification, we used the above features and a support vector machine (SVM) classifier as implemented in LIBLINEAR. This is a very scalable implementation of SVM that does not support kernels and is suitable for classification on large datasets with a large number of features. This is particularly useful for text classification, where the number of features is very large, which means that the data is likely to be linearly separable, and thus using kernels is not really necessary. We scaled the SVM input and we used L2-regularization during training.
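As an illustration only (our actual system runs LIBLINEAR on features exported from GATE), the following sketch shows the classification step using scikit-learn's LinearSVC, which is backed by LIBLINEAR; the tiny feature dictionaries are placeholders for the features described in Section 2.2.

```python
from sklearn.feature_extraction import DictVectorizer
from sklearn.preprocessing import MaxAbsScaler
from sklearn.svm import LinearSVC  # backed by LIBLINEAR

# Placeholder input: one feature dictionary and one label per tweet.
feature_dicts = [{'pos_count': 2, 'NGRAM_good': 1}, {'neg_count': 1, 'NGRAM_bad': 1}]
labels = ['positive', 'negative']

vectorizer = DictVectorizer(sparse=True)
X = vectorizer.fit_transform(feature_dicts)

scaler = MaxAbsScaler()          # scale the SVM input (keeps the matrix sparse)
X = scaler.fit_transform(X)

clf = LinearSVC(C=0.012, penalty='l2')   # L2-regularized linear SVM
clf.fit(X, labels)

new_tweet = {'pos_count': 3}
print(clf.predict(scaler.transform(vectorizer.transform([new_tweet]))))
```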
3 Experiments, Results, Analysis

3.1 Experimental setup

At development time, we trained on train-2013, tuned the C value of the SVM on dev-2013, and evaluated on test-2013 (Nakov et al., 2013). For our submission, we trained on train-2013+dev-2013, and we evaluated on the 2014 test dataset provided by the organizers. This dataset contains two parts and a total of five datasets: (a) the progress test (the Twitter and SMS test datasets from 2013), and (b) the new test datasets (from Twitter, from Twitter with sarcasm, and from LiveJournal). We used C=0.012, which was best on development.

3.2 Official results

Due to our very late entry into the competition, we only managed to perform a small number of experiments, and we only participated in subtask B. We were ranked 20th out of 50 submissions; our official results are shown in Table 1, where for each dataset we report the difference (delta) between our F1-score and that of the best submission, together with our ranking. We have also included a ranking among the 2014 participating systems on the 2013 datasets, as released by the organizers.

Data category          F1 delta to best   Ranking
Twitter2014                 6.23             23
Twitter2014 sarcasm         9.82             19
LiveJournal2014             6.60             21
Twitter2013                 9.79             29
SMS2013                     8.61
mean (2014 datasets)        7.55             20

Table 1: Our submitted system for subtask B.

3.3 Analysis

Tables 2 and 3 analyze the impact of the individual features. They show the F1-scores and the loss when a feature or a group of features is removed; we show the impact on all test datasets, both from 2013 and from 2014. The exception here is the "all + ngrams" row, which contains our scores if we had used the n-grams feature group. The features are sorted by their impact on the Twitter2014 test set.

We can see that the three most important feature groups are POS tags, word/bigram clusters, and lexicons. We can further see that although the overall lexicon feature group is beneficial, some of the lexicons actually hurt the 2014 score and we would have been better off without them. These are the Sentiment140 lexicon, the Stanford Sentiment Treebank and the NRC Emotion lexicon. The highest gain comes from the lexicons of Minqing Hu and Bing Liu. It must be noted that whether a lexicon helps apparently depends on the context, e.g., the Sentiment140 lexicon seems to help a lot on the LiveJournal test dataset, but it hurts the Sarcasm score by a sizeable margin.

Another interesting observation is that even though the n-gram feature group performs notably better on the Twitter2013 test dataset, it actually worsens performance on all 2014 test sets. Had we included it in our submission, we would have scored lower. The negation context feature brings little for regular tweets or LiveJournal text, but it heavily improves our score on the Sarcasm tweets.

It is unclear why our results differ so much from those of the NRC-Canada team in 2013, since our features are quite similar. We attribute the difference to the fact that some of the lexicons we use actually hurt our score, as mentioned above. Another difference could be that last year's NRC system uses n-grams, which we have disabled as they lowered our scores. Last but not least, there could be bugs lurking in our feature representation that additionally lower our results.

Table 2: Ablation experiments on the 2013 test sets. Columns: Feature, Diff, SMS2013, SMS2013 delta, Twitter2013, Twitter2013 delta. Rows: submitted features; no POS tags; no word clusters; all lex removed; no Hu-Liu lex; all + ngrams; no NRC #lex; no MSOL lex; no Stanford lex; no negation cntx; no encodings; no NRC emo lex; no Sent140 lex.

Table 3: Ablation experiments on the 2014 test sets. Columns: Feature, Diff, LiveJournal, LJ delta, Twitter, Twitter delta, Sarcasm, Sarcasm delta. Rows as in Table 2.
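The C value used above was chosen by a simple grid search on dev-2013 (and re-validated in Section 3.4 below). A sketch of such a search follows; the featurized splits X_train, y_train, X_dev, y_dev, the candidate grid, and the scoring by macro-averaged F1 over the positive and negative classes are assumptions made for illustration.

```python
from sklearn.metrics import f1_score
from sklearn.svm import LinearSVC

def tune_C(X_train, y_train, X_dev, y_dev,
           grid=(0.005, 0.012, 0.05, 0.08, 0.1, 0.5, 1.0)):
    """Pick the C value that maximizes the averaged F1 of the positive and
    negative classes on the development set (illustrative grid)."""
    best_c, best_f1 = None, -1.0
    for c in grid:
        clf = LinearSVC(C=c).fit(X_train, y_train)
        f1 = f1_score(y_dev, clf.predict(X_dev),
                      labels=['positive', 'negative'], average='macro')
        if f1 > best_f1:
            best_c, best_f1 = c, f1
    return best_c, best_f1
```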
3.4 Post-submission improvements

First, we did more extensive experiments to validate our classifier's C value. We found that the best value for C is actually 0.08 instead of our original choice of 0.012. Then, we experimented further with our lexicon features and we removed the following ones, which resulted in a significant improvement over our submitted version:

- Sentiment of the last token for the NRC Emotion, MSOL, MPQA, and Bing Liu lexicons;
- Max term positive, negative and sentiment scores for unigrams of the Sentiment140 and NRC Sentiment lexicons;
- Max term positive, negative and sentiment scores for bigrams of the Sentiment140 and NRC Sentiment lexicons;
- Max term positive, negative and sentiment scores for hashtags of the Sentiment140 and NRC Sentiment lexicons.
The improved scores are shown in Table 4, together with the scores of our submitted system and of the best system.

Table 4: Our post-submission results. Columns: Test set, New F1, Old F1, Best. Rows: tweets (2014); sarcasm; LiveJournal; tweets (2013); SMS; mean.

4 Conclusion and Future Work

We have described the system built by the SU-FMI team for SemEval-2014 Task 9. Due to our late entry into the competition, we were only ranked 20th out of 50 submissions (from 44 teams). We have made some interesting observations about the impact of the different features. Among the best-performing feature groups were POS-tag counts, word and bigram cluster presence, the Hu-Liu lexicon and the NRC Hashtag Sentiment lexicon. These had the most sustainable performance over the 2013 and the 2014 test datasets. Other feature groups, which we did not use and which seem more context-dependent, appear to have been better suited to the 2013 test sets, e.g., the n-gram feature group.

Even though we made some improvements after submitting our initial version, we feel there is more to gain and optimize. There seem to be several low-hanging fruits in our experimental data, which could add a few points to our F1-scores. Going forward, our goal is to extend our experiments with more feature sub- and super-sets and to turn our classifier into a state-of-the-art performer.
References

Kalina Bontcheva, Leon Derczynski, Adam Funk, Mark Greenwood, Diana Maynard, and Niraj Aswani. 2013. TwitIE: An open-source information extraction pipeline for microblog text. In Proceedings of the International Conference on Recent Advances in Natural Language Processing, RANLP '13, pages 83-90, Hissar, Bulgaria.

Peter Brown, Peter deSouza, Robert Mercer, Vincent Della Pietra, and Jenifer Lai. 1992. Class-based n-gram models of natural language. Computational Linguistics, 18.

Hamish Cunningham, Diana Maynard, and Kalina Bontcheva. 2011. Text Processing with GATE. Gateway Press CA.

Rong-En Fan, Kai-Wei Chang, Cho-Jui Hsieh, Xiang-Rui Wang, and Chih-Jen Lin. 2008. LIBLINEAR: A library for large linear classification. Journal of Machine Learning Research, 9.

Minqing Hu and Bing Liu. 2004. Mining and summarizing customer reviews. In Proceedings of the Tenth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD '04, New York, NY, USA.

Percy Liang. 2005. Semi-supervised learning for natural language. Master's thesis, Massachusetts Institute of Technology.

Saif Mohammad, Cody Dunne, and Bonnie Dorr. 2009. Generating high-coverage semantic orientation lexicons from overtly marked words and a thesaurus. In Proceedings of the Conference on Empirical Methods in Natural Language Processing: Volume 2, EMNLP '09, Singapore.

Saif Mohammad, Svetlana Kiritchenko, and Xiaodan Zhu. 2013. NRC-Canada: Building the state-of-the-art in sentiment analysis of tweets. In Proceedings of the Seventh International Workshop on Semantic Evaluation Exercises, SemEval '13, Atlanta, Georgia, USA.

Preslav Nakov, Sara Rosenthal, Zornitsa Kozareva, Veselin Stoyanov, Alan Ritter, and Theresa Wilson. 2013. SemEval-2013 task 2: Sentiment analysis in Twitter. In Second Joint Conference on Lexical and Computational Semantics (*SEM), Volume 2: Proceedings of the Seventh International Workshop on Semantic Evaluation, SemEval '13, Atlanta, Georgia, USA.

Olutobi Owoputi, Brendan O'Connor, Chris Dyer, Kevin Gimpel, and Nathan Schneider. 2012. Part-of-speech tagging for Twitter: Word clusters and other advances. Technical Report CMU-ML, Carnegie Mellon University.

Sara Rosenthal, Alan Ritter, Veselin Stoyanov, and Preslav Nakov. 2014. SemEval-2014 task 9: Sentiment analysis in Twitter. In Proceedings of the Eighth International Workshop on Semantic Evaluation, SemEval '14, Dublin, Ireland.

Joseph Turian, Lev Ratinov, and Yoshua Bengio. 2010. Word representations: A simple and general method for semi-supervised learning. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, ACL '10, Uppsala, Sweden.

Theresa Wilson, Janyce Wiebe, and Paul Hoffmann. 2005. Recognizing contextual polarity in phrase-level sentiment analysis. In Proceedings of the Conference on Human Language Technology and Empirical Methods in Natural Language Processing, HLT '05, Vancouver, British Columbia, Canada.