Comparing IPL2 and Yahoo! Answers: A Case Study of Digital Reference and Community Based Question Answering
Comparing IPL2 and Yahoo! Answers: A Case Study of Digital Reference and Community Based Question Answering

Dan Wu (1) and Daqing He (2)
(1) School of Information Management, Wuhan University
(2) School of Information Sciences, University of Pittsburgh

Abstract
In this paper, we took IPL2 and Yahoo! Answers as the two samples for our case study of digital reference and community-based Q&A sites. We examined the services of the two systems based on 200 real questions raised in IPL2 and their similar questions found in Yahoo! Answers. Question type, topic classification, answer type, and answer time delay were compared between the similar questions on these two platforms. The result analysis showed that the two systems classify their questions differently, and that the types of questions asked are different too. It took much longer to obtain answers from IPL2, whereas different types of questions in Yahoo! Answers generated dramatically different response times. However, some differences also demonstrated that there is a need to consider integrating certain ideas from the two systems.

Keywords: digital reference, community-based question and answering, IPL2, Yahoo! Answers, case study

Citation: Wu, D., & He, D. (2014). Comparing IPL2 and Yahoo! Answers: A Case Study of Digital Reference and Community Based Question Answering. In iConference 2014 Proceedings. doi: /14315

Copyright: Copyright is held by the authors.

Acknowledgements: This paper is partly supported by the National Youth Top-notch Talent Support Program of China. We would like to thank the iSchool of Drexel University for providing the data.

Contact: [email protected], [email protected]

1 Introduction

With the wide usage of the Web, people increasingly seek information online in their professional and personal lives. Web search engines play an important role in satisfying people's information needs, but they still have limitations in handling people's complex questions and needs. Therefore, asking questions to and obtaining answers from a human being (rather than a machine, as in search engines) have also been extended to the Web, and have developed into two commonly used platforms: online digital reference, extended from traditional face-to-face library reference services (Pomerantz, Nicholson, Belanger, & Lankes, 2004), and community-based question and answering (Q&A), which resembles asking questions among friends (Liu, Bian, & Agichtein, 2008; Y. Liu et al., 2008). Many digital reference services and community-based Q&A systems have been developed over the years. Both platforms have also drawn a great amount of research interest, covering areas such as the characteristics and services of digital reference (Janes, 2002; Lankes, 2004; Pomerantz et al., 2004), questions and answers in community-based Q&A sites (Fichman, 2011; Gazan, 2007, 2011), and the comparison of services provided by the two platforms (Wang, 2007; Wu & He, 2013).

Our ultimate research goal is to explore the integration of these two services into one coherent framework, so that the strengths of one can compensate for the weaknesses of the other. As a preliminary study toward this goal, this note paper focuses on exploring the problem space with the following research question:

RQ: What are the similarities and differences, in terms of question type, topic classification, askers' intentions, answer types, and answer time delay, between similar questions actually asked on these two platforms?
Our motivation for concentrating on similar questions is that existing studies have examined the connections between the two platforms in general (Wang, 2007) and through a set of carefully crafted experiment questions (Wu & He, 2013). However, no study has examined the connections based on naturally occurring similar questions on these two sites. We emphasize naturally occurring questions because they tell us more accurately what is actually happening on those platforms. We paid attention to question type, topic classification, askers' intentions, answer types, and answer time delay because these are important (though not exhaustive) parameters for examining the possibility of integrating digital reference services and community-based Q&A services.

In this study, we adopted the case study as our main research method, and selected IPL2 and Yahoo! Answers as the two typical cases of collaborative digital reference and community-based question answering, respectively. IPL2 was developed in January 2010 by the School of Information Science and Technology of Drexel University by combining IPL (the Internet Public Library) and LII (the Librarians' Internet Index). Yahoo! Answers is one of the most popular English-language community-based Q&A sites, with a high reputation, a large user population, and active Q&A communities. Each represents a top-quality, well-known system in its own type of service. Although we acknowledge potential concerns about the generalizability of our results, given a case study of just two samples, we think the results are still valuable for a preliminary study.

To obtain similar questions from the two platforms, we first selected the 200 questions asked in the last month (June 1-30, 2011) of our IPL2 transaction logs. We then developed queries based on each of these 200 questions and searched Yahoo! Answers for similar questions. We manually judged the similarity between the returned questions and the original questions, and kept only those returned questions that were judged to be similar enough. Next, for each IPL2 question, we ranked the remaining Yahoo! Answers questions according to their similarity to the original question, and also according to the time at which they were posted in Yahoo! Answers. To keep the study manageable, we sampled at most three similar questions for each IPL2 question. That is, if three or fewer similar questions were found, we kept all of them. When there were more than three similar questions, we retained the one that was most similar, the one posted earliest in Yahoo! Answers, and the one posted closest in time to the IPL2 question. However, not all IPL2 questions have similar counterparts in Yahoo! Answers. In total, we located similar questions for 157 IPL2 questions by 15 August 2012, which gave us 157 pairs of similar questions between IPL2 and Yahoo! Answers.

We acknowledge that the above method is just one of many approaches for finding similar questions between the two platforms. For example, we could start with Yahoo! Answers questions and then look for similar questions in IPL2. However, in practice, since the number of questions in IPL2 is much smaller than that in Yahoo! Answers, we think our approach gives us a better chance of finding more similar question pairs between IPL2 and Yahoo! Answers.
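To make the sampling rule above concrete, the following minimal Python sketch selects up to three Yahoo! Answers candidates for one IPL2 question: the most similar one, the earliest posted one, and the one posted closest in time to the IPL2 question. The field names and similarity scores are hypothetical illustrations, not part of the original study.

```python
from datetime import datetime

def sample_candidates(ipl2_time, candidates):
    """Keep at most three similar Yahoo! Answers questions for one IPL2 question.

    `candidates` is a list of dicts with hypothetical keys:
    'question', 'similarity' (a manual judgment score), and 'posted' (datetime).
    """
    if len(candidates) <= 3:
        return candidates
    most_similar = max(candidates, key=lambda c: c["similarity"])
    earliest = min(candidates, key=lambda c: c["posted"])
    closest_in_time = min(candidates, key=lambda c: abs(c["posted"] - ipl2_time))
    # One candidate may satisfy several criteria, so keep only unique picks.
    kept = []
    for pick in (most_similar, earliest, closest_in_time):
        if pick not in kept:
            kept.append(pick)
    return kept

# Example usage with made-up data:
asked = datetime(2011, 6, 15)
pool = [
    {"question": "q1", "similarity": 0.9, "posted": datetime(2010, 3, 1)},
    {"question": "q2", "similarity": 0.7, "posted": datetime(2011, 6, 20)},
    {"question": "q3", "similarity": 0.5, "posted": datetime(2009, 1, 5)},
    {"question": "q4", "similarity": 0.8, "posted": datetime(2011, 7, 2)},
]
print([c["question"] for c in sample_candidates(asked, pool)])  # ['q1', 'q3', 'q2']
```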
2 Comparison and Discussion

As stated, we compare IPL2 and Yahoo! Answers based on question type, topic classification, askers' intentions, answer types, and answer time delay. These comparisons can be grouped into question comparison and answer comparison.

2.1 Comparison of Questions

Borrowing ideas from several existing works (Gazan, 2011; J. Pomerantz, 2005; Voorhees, 2002), we classified the questions into six types: Factual questions, List questions, Definition questions, Exploratory questions, Informational questions, and Navigational questions. Table 1 shows the distribution of the questions from both IPL2 and Yahoo! Answers.

Question Type              IPL2 Number   IPL2 %    Yahoo! Answers Number   Yahoo! Answers %
Exploratory questions
Factual questions
Informational questions    36            18%
Navigational questions                                                     35.3%
List questions                                      6                      1.3%
Definition questions       14            7%
Total                      200           100%

Table 1: Types of IPL2 and Yahoo! Answers Questions

The first impression is that the IPL2 questions and their similar Yahoo! Answers questions share similar distributions across many question types. For example, List questions and Definition questions are among the smallest in percentage on both platforms. However, we also notice that the most common question type in IPL2 is Exploratory questions, with Factual questions and Informational questions in second and third place. In Yahoo! Answers, by contrast, the most common questions are Informational questions (37.6%), with Navigational questions second and Exploratory questions third. This shows that IPL2 questions are in general more complex and difficult to answer, whereas Yahoo! Answers questions most often aim for readily available information.

Subject areas of the IPL2 questions: Science; History; Literature; Other; Library; Business; General Reference; Humanities; Education; Biography; Geography; Government; Health; Entertainment/Sport; Sociology; Computers; Internet; Music; Religion; Politics; Hobby; Household/Do-It-Yourself; Psychology; Military

Subject areas of the Yahoo! Answers questions: Arts & Humanities; Beauty & Style; Business & Finance; Cars & Transportation; Computers & Internet; Consumer Electronics; Dining Out; Education & Reference; Entertainment & Music; Environment; Family & Relationships; Food & Drink; Games & Recreation; Health; Home & Garden; Local Businesses; News & Events; Pets; Politics & Government; Pregnancy & Parenting; Science & Mathematics; Social Science; Society & Culture; Sports; Travel; Products

Table 2: Subject Categories and Numbers in IPL2 and Yahoo! Answers

Both IPL2 and Yahoo! Answers assign subject categories to questions (see Table 2). Based on the 157 pairs of similar questions, we compared the classifications given to the similar questions in the two systems. Using the IPL2 subject categories as the base, Table 3 shows the number of pairs whose subject labels are consistent only at the first (top) level of the Yahoo! Answers category hierarchy, only at the second level, at both levels, and at neither level. The results show that the majority of the questions received different category labels (106 out of 157), but some questions are classified similarly at both levels (13 out of 157) or at least at the top level (3 out of 157). Therefore, it would take a considerable amount of mapping effort to connect IPL2's subject categories with those of Yahoo! Answers, but some subject categories can serve as starting points for the integration.

Table 3: The Classification Comparison between IPL2 Questions and Yahoo! Answers Questions (counts of the 157 pairs, by question type, that are consistent with the first-level Yahoo! Answers category, with the second-level category, with both levels, or with neither)
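As a rough illustration of how such a two-level consistency check could be carried out, the sketch below counts, for each pair, whether the IPL2 subject label corresponds to the first-level Yahoo! Answers category, the second-level category, both, or neither. The equivalence mapping and the example pairs are made up for illustration; in the study itself these judgments were made manually.

```python
from collections import Counter

# Hypothetical equivalences between IPL2 subject areas and Yahoo! Answers
# categories; the real correspondence judgments were made by hand.
EQUIVALENT = {
    ("Health", "Health"): True,
    ("Science", "Science & Mathematics"): True,
    # ... further hand-crafted correspondences would go here
}

def labels_match(ipl2_label, yahoo_label):
    """True if the two labels are judged to denote the same subject."""
    return EQUIVALENT.get((ipl2_label, yahoo_label), ipl2_label == yahoo_label)

def consistency_bucket(ipl2_label, yahoo_level1, yahoo_level2):
    """Place one question pair into one of the four Table 3 buckets."""
    first = labels_match(ipl2_label, yahoo_level1)
    second = labels_match(ipl2_label, yahoo_level2)
    if first and second:
        return "both levels"
    if first:
        return "1st level only"
    if second:
        return "2nd level only"
    return "neither level"

# Example usage with made-up pairs (IPL2 label, Yahoo! 1st level, Yahoo! 2nd level):
pairs = [
    ("Health", "Health", "Diet & Fitness"),
    ("Science", "Science & Mathematics", "Biology"),
    ("Music", "Entertainment & Music", "Music"),
    ("History", "Pets", "Dogs"),
]
print(Counter(consistency_bucket(*p) for p in pairs))
# Counter({'1st level only': 2, '2nd level only': 1, 'neither level': 1})
```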
Morris et al. (2010) showed that a substantial share of their respondents used their social networks to ask for recommendations or other types of opinionated questions, whereas such opinionated questions are in general not common in library reference. We examined the intention behind the 200 IPL2 questions, and identified only three questions (1.5%) as opinionated. In contrast, we found that 54 of the 157 Yahoo! Answers questions (34.4%) were either seeking personal recommendations or subjective opinions. This result confirms Morris et al.'s findings. It seems that people often view digital reference and online social Q&A systems as two different services. Our further examination of these questions revealed that most of the opinionated questions in Yahoo! Answers are exploratory questions with open-ended answers. More study is needed on how to support such user information needs.

2.2 Comparison of Answers

Depending on whether the returned information contains direct answers or related references/links for finding answers, we divided the answers to the questions from IPL2 and Yahoo! Answers into five types: direct answers, related references/links, both, no answers, and others. Table 4 shows that the answers to the majority of IPL2 questions contain related references/links, which helps to establish the authenticity and authority of the answers. This is due to the professional training of the reference librarians in IPL2. In contrast, answers from Yahoo! Answers most often contain just direct answers without any references or links. This is useful for users who just want ready answers, but it makes it difficult for users to establish the correctness and authority of the answers.
This is true for both the selected best answers and the non-best answers in Yahoo! Answers. Therefore, Yahoo! Answers users need better support in determining answer quality.

There are 19 questions whose answers are classified as others. The most common instances of others include 1) pointing users to the FAQ available in IPL2 and online, 2) failing to find the answer and instead describing what searches had been done, and 3) pointing out that third parties (such as the original sales staff) should be contacted rather than IPL2. Therefore, we can see that the professional training of librarians helps users even when their questions cannot be answered well. In Yahoo! Answers, by contrast, there are cases where even the voted best answers are wrong or offensive.

Table 4: Answer Types of Questions in IPL2 and Yahoo! Answers (answer types — direct answers, related references/links, both, no answers, others — for IPL2 answers and for the best and non-best answers of Yahoo! Answers)

It takes time to answer a question, and the delay before a question is answered can affect people's impression of Q&A services. IPL2 usually gives an asker one answer back, which establishes the first answer time. Sometimes, however, the asker and the librarian conduct follow-up interactions until a last answer is given back; we view this last reply containing an answer as the best answer time in IPL2. If there is only one reply from IPL2 to the user, the first answer time is also the best answer time. Yahoo! Answers usually provides multiple answers, one of which has the earliest answering time (thus the first answer time) and one of which is voted as the best answer (thus the best answer time). In both IPL2 and Yahoo! Answers, the time difference between the first answer time (or the best answer time) and the time the question was asked is the time delay for the first answer (or the best answer).

Table 5 shows that the difference between IPL2's average first answer delay and its average best answer delay is 765 minutes (close to 13 hours), which is relatively small considering that the average delays for the first answer and the best answer are both roughly 8 days. This means that follow-up interactions after the initial answer are less common and usually short. Therefore, the roughly 8-day delay in obtaining the first answer is the biggest issue in IPL2. In contrast, Yahoo! Answers, with its participatory design, requires on average only about 8 hours to produce the first answer and about 15 hours for the best answer. Therefore, Yahoo! Answers has a very obvious advantage over IPL2 in replying quickly to askers. This is probably a clear angle for the integration of digital reference and community-based Q&A.

Table 5: Time Delay for Obtaining Answers in IPL2 and Yahoo! Answers (average, maximum, and minimum delays for the first answer and for the best answer)
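A minimal sketch of how these delays can be computed from timestamps is given below; the record layout is hypothetical, since the paper does not describe its log processing at this level of detail.

```python
from datetime import datetime

def answer_delays(asked_at, reply_times, best_index):
    """Return (first answer delay, best answer delay) in minutes for one question.

    `reply_times` holds the reply timestamps, and `best_index` marks the reply
    treated as the best answer (the voted best answer in Yahoo! Answers, or the
    last librarian reply in IPL2).
    """
    first_delay = (min(reply_times) - asked_at).total_seconds() / 60
    best_delay = (reply_times[best_index] - asked_at).total_seconds() / 60
    return first_delay, best_delay

# Example usage with made-up timestamps:
asked = datetime(2011, 6, 10, 9, 0)
replies = [datetime(2011, 6, 10, 17, 30), datetime(2011, 6, 11, 8, 0)]
print(answer_delays(asked, replies, best_index=1))  # (510.0, 1380.0)
```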
We further correlated time delay with question type. Interestingly, as shown in Table 6, different types of questions did not make a great difference in IPL2. The only noticeable differences are in the delay to the first answer for List questions and Navigational questions. If we had to pick the question type on which IPL2 spends the most time, it would be Exploratory questions, in both the first answer and the best answer cases. This probably makes sense, since Exploratory questions in general need more time to answer. In contrast, different question types make a dramatic difference in Yahoo! Answers. Definition questions had the quickest answer times for both first answers and best answers, taking less than 2 hours on average. Exploratory questions took a long time to answer, but they were not among the longest in Yahoo! Answers (ranked 4 for the best answer and 2 for the first answer), nor in a range comparable with IPL2 (9-11 hours vs. 8-9 days for both the first and the best answers). It is interesting to see that Navigational questions took the longest delays in Yahoo! Answers, for both the first and the best answers. We cannot figure out the reason, so further study is needed.

Table 6: Time Delay on Different Question Types (average delay to the first answer and to the best answer for each of the six question types)

We know that many questions in IPL2 were answered by library science students, who might not differ greatly from Yahoo! Answers users in most demographic parameters. However, it is the careful professional training in their studies, the mutual help among peers, and the close supervision by experienced librarians that give their answers the superior quality noted in the literature (Wu & He, 2013). We observed 83 such mutual-help and close-supervision cases among the 200 IPL2 questions. This could be a feature worth maintaining in IPL2 in the future, as well as one worth implementing in Yahoo! Answers.
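As an illustration of the per-type comparison summarized in Table 6 above, the following hedged sketch groups question-level delays by question type and averages them; the records are made up and merely stand in for the study's actual data.

```python
from collections import defaultdict

def average_delay_by_type(records):
    """Average first- and best-answer delays (in minutes) per question type.

    `records` is an iterable of (question_type, first_delay, best_delay) tuples,
    for example produced by the delay computation sketched earlier.
    """
    sums = defaultdict(lambda: [0.0, 0.0, 0])  # first sum, best sum, count
    for qtype, first_delay, best_delay in records:
        entry = sums[qtype]
        entry[0] += first_delay
        entry[1] += best_delay
        entry[2] += 1
    return {qtype: (first / n, best / n) for qtype, (first, best, n) in sums.items()}

# Example usage with made-up delays (in minutes):
records = [
    ("Definition", 60, 90),
    ("Definition", 100, 110),
    ("Exploratory", 540, 660),
]
print(average_delay_by_type(records))
# {'Definition': (80.0, 100.0), 'Exploratory': (540.0, 660.0)}
```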
3 Conclusion

In this paper, we took IPL2 and Yahoo! Answers as the two samples for our case study of digital reference and community-based Q&A sites. We examined the services of the two systems based on 200 real questions raised in IPL2 and their similar questions found in Yahoo! Answers. Our result analysis shows that the two systems classify their questions differently, and that the types of questions asked are different too. It took much longer to obtain answers from IPL2, whereas different types of questions in Yahoo! Answers generated dramatically different response times. However, some differences also demonstrate that there is a need to consider integrating some of the ideas from the two systems. Our further study lies in examining more cases of digital reference and community-based Q&A sites, as well as studying in detail the response quality and the time taken to respond in digital reference and community-based Q&A sites, the feedback from questioners, and the usability of the questions for other users. Ultimately, we want to research how to borrow insights from community-based question answering to improve digital reference and to exploit the development of hybrid tools.

4 References

Fichman, P. (2011). A comparative assessment of answer quality on four question answering sites. Journal of Information Science, 37(5).
Gazan, R. (2007). Seekers, sloths and social reference: Homework questions submitted to a question-answering community. New Review of Hypermedia and Multimedia, 13(2).
Gazan, R. (2011). Social Q&A. Journal of the American Society for Information Science and Technology, 62(12).
Janes, J. (2002). Digital reference: Reference librarians' experiences and attitudes. Journal of the American Society for Information Science and Technology, 53(7).
Lankes, R. D. (2004). The digital reference research agenda. Journal of the American Society for Information Science and Technology, 55(4).
Liu, Y., Bian, J., & Agichtein, E. (2008). Predicting information seeker satisfaction in community question answering. Paper presented at the Proceedings of the 31st Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, Singapore.
Liu, Y., Li, S., Cao, Y., Lin, C. Y., Han, D., & Yu, Y. (2008). Understanding and summarizing answers in community-based question answering services. Paper presented at COLING '08: Proceedings of the 22nd International Conference on Computational Linguistics, Manchester, UK.
Morris, M. R., Teevan, J., & Panovich, K. (2010). What do people ask their social networks, and why? A survey study of status message Q&A behavior. Paper presented at the Proceedings of the 28th International Conference on Human Factors in Computing Systems.
Pomerantz, J. (2005). A linguistic analysis of question taxonomies. Journal of the American Society for Information Science and Technology, 56(7).
Pomerantz, J., Nicholson, S., Belanger, Y., & Lankes, D. (2004). The current state of digital reference: Validation of a general digital reference model through a survey of digital reference services. Information Processing and Management, 40(2).
Voorhees, E. M. (2002). Overview of the TREC 2002 question answering track. Paper presented at the Proceedings of the Eleventh Text REtrieval Conference (TREC 2002).
Wang, J. M. (2007). Comparative analysis of ask-answer community and digital reference service. Library Magazine, (6), 16-18, 5.
Wu, D., & He, D. (2013, February 12-15). A study on Q&A services between community-based question answering and collaborative digital reference in two languages. Paper presented at the iConference 2013, Fort Worth, TX.

5 Table of Tables

Table 1: Types of IPL2 and Yahoo! Answers Questions
Table 2: Subject Categories and Numbers in IPL2 and Yahoo! Answers
Table 3: The Classification Comparison between IPL2 Questions and Yahoo! Answers Questions
Table 4: Answer Types of Questions in IPL2 and Yahoo! Answers
Table 5: Time Delay for Obtaining Answers in IPL2 and Yahoo! Answers
Table 6: Time Delay on Different Question Types