Mining Text Data: An Introduction



Chapter 10. Text and Web Mining  http://ceng.gazi.edu.tr/~ozdemir

Mining Text Data: An Introduction

Data Mining / Knowledge Discovery works over structured data, multimedia, free text, and hypertext. The same information can appear in each form, for example:

Structured Data:
    HomeLoan(
      Loanee: Frank Rizzo
      Lender: MWF
      Agency: Lake View
      Amount: $200,000
      Term: 15 years
    )
    Loans($200K, [map], ...)

Free Text:
    Frank Rizzo bought his home from Lake View Real Estate in 1992. He paid $200,000 under a 15-year loan from MW Financial.

Hypertext:
    <a href>Frank Rizzo</a> bought <a href>this home</a> from <a href>Lake View Real Estate</a> in <b>1992</b>. <p>...

Text Databases and IR

Text databases (document databases):
- Large collections of documents from various sources: news articles, research papers, books, digital libraries, e-mail messages, Web pages, library databases, etc.
- The data stored is usually semi-structured.
- Traditional information retrieval techniques become inadequate for the increasingly vast amounts of text data.

Information retrieval:
- A field developed in parallel with database systems.
- Information is organized into (a large number of) documents.
- The information retrieval problem: locating relevant documents based on user input, such as keywords or example documents.

Information Retrieval

Typical IR systems:
- Online library catalogs
- Online document management systems

Information retrieval vs. database systems:
- Some DB problems are not present in IR, e.g., update, transaction management, complex objects.
- Some IR problems are not addressed well in DBMS, e.g., unstructured documents, approximate search using keywords and relevance.

Basic Measures for Text Retrieval

[Figure: Venn diagram over all documents showing the Relevant set, the Retrieved set, and their intersection (Relevant & Retrieved).]

Precision: the percentage of retrieved documents that are in fact relevant to the query (i.e., correct responses):

    precision = |{Relevant} ∩ {Retrieved}| / |{Retrieved}|

Recall: the percentage of documents that are relevant to the query and were, in fact, retrieved:

    recall = |{Relevant} ∩ {Retrieved}| / |{Relevant}|

Information Retrieval Techniques

Basic Concepts:
- A document can be described by a set of representative keywords called index terms.
- Different index terms have varying relevance when used to describe document contents. This effect is captured through the assignment of numerical weights to each index term of a document (e.g., frequency, TF-IDF).

DBMS Analogy:
- Index terms correspond to attributes.
- Weights correspond to attribute values.
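As a quick illustration of these two measures, here is a minimal Python sketch (not from the original slides) that computes precision and recall from a set of retrieved document IDs and a set of relevant document IDs; the document IDs are made up for the example.

    def precision_recall(retrieved, relevant):
        """Compute precision and recall for one query from two sets of doc IDs."""
        hits = retrieved & relevant            # documents that are both retrieved and relevant
        precision = len(hits) / len(retrieved) if retrieved else 0.0
        recall = len(hits) / len(relevant) if relevant else 0.0
        return precision, recall

    # Hypothetical example: 3 of the 4 retrieved documents are relevant,
    # and there are 6 relevant documents in total.
    retrieved = {"d1", "d2", "d3", "d4"}
    relevant = {"d1", "d2", "d4", "d7", "d8", "d9"}
    print(precision_recall(retrieved, relevant))   # (0.75, 0.5)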

Information Retrieval Techniques

Index term (attribute) selection:
- Stop list
- Word stem
- Term-document frequency matrices

Information Retrieval Models:
- Boolean Model
- Vector Model

Boolean Model

- Index terms are considered either present or absent in a document; as a result, the index term weights are assumed to be all binary.
- A query is composed of index terms linked by three connectives: not, and, and or, e.g., "car and repair", "plane or airplane".
- The Boolean model predicts that each document is either relevant or non-relevant based on the match of the document to the query.

Keyword-Based Retrieval

- A document is represented by a string, which can be identified by a set of keywords.
- Queries may use expressions of keywords, e.g., "car and repair shop", "tea or coffee", "DBMS but not Oracle".
- Queries and retrieval should consider synonyms, e.g., repair and maintenance.

Major difficulties of the model:
- Synonymy: a keyword T may not appear anywhere in the document, even though the document is closely related to T, e.g., data mining.
- Polysemy: the same keyword may mean different things in different contexts, e.g., mining.
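A minimal sketch of Boolean retrieval over an inverted index, assuming a tiny hand-made document collection (the documents and queries below are invented for illustration):

    # Boolean retrieval over an inverted index: a minimal sketch.
    # The documents and the queries are invented for illustration.
    docs = {
        "d1": "car repair manual",
        "d2": "airplane maintenance guide",
        "d3": "car rental prices",
    }

    # Build an inverted index: term -> set of documents containing it.
    index = {}
    for doc_id, text in docs.items():
        for term in set(text.split()):
            index.setdefault(term, set()).add(doc_id)

    def term_docs(term):
        return index.get(term, set())

    # Query "car AND repair": intersect the posting sets.
    print(term_docs("car") & term_docs("repair"))          # {'d1'}
    # Query "car AND NOT repair": set difference.
    print(term_docs("car") - term_docs("repair"))          # {'d3'}
    # Query "plane OR airplane": union.
    print(term_docs("plane") | term_docs("airplane"))      # {'d2'}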

Information Retrieval Techniques

Information Retrieval Models:
- Boolean Model
- Vector Model

Similarity-Based Retrieval in Text Data

- Finds similar documents based on a set of common keywords.
- The answer should be based on the degree of relevance, judged from the nearness of the keywords, the relative frequency of the keywords, etc.

Basic techniques:
- Stop list: a set of words that are deemed irrelevant even though they may appear frequently, e.g., a, the, of, for, to, with, etc. Stop lists may vary when the document set varies.
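A minimal sketch of stop-list filtering; the stop list here is just the handful of example words listed above, while real stop lists are larger and depend on the document set:

    # Stop-list filtering: drop frequent but uninformative words before indexing.
    # The stop list below is only the small example set from the slide.
    STOP_LIST = {"a", "the", "of", "for", "to", "with"}

    def remove_stop_words(text):
        return [w for w in text.lower().split() if w not in STOP_LIST]

    print(remove_stop_words("A guide to the repair of cars"))
    # ['guide', 'repair', 'cars']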

Similarity-Based Retrieval in Text Data

- Word stem: several words are small syntactic variants of each other since they share a common word stem, e.g., drug, drugs, drugged.
- A term frequency table: each entry frequent_table(i, j) = number of occurrences of the word t_i in document d_j. Usually, the ratio rather than the absolute number of occurrences is used.
- Similarity metrics: measure the closeness of a document to a query (a set of keywords).
  - Relative term occurrences
  - Cosine distance:

        sim(v1, v2) = (v1 · v2) / (|v1| |v2|)

Vector Space Model

- Represent a document by a term vector.
  - Term: basic concept, e.g., word or phrase.
  - Each term defines one dimension; N terms define an N-dimensional space.
  - Each element of the vector corresponds to a term weight, e.g., d = (x_1, ..., x_N), where x_i is the importance of term i.
- A new document is assigned to the most likely category based on vector similarity.
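The sketch below (my own illustration, not part of the slides) builds raw term-frequency vectors for two short invented documents and computes their cosine similarity using the formula above:

    # Cosine similarity between term-frequency vectors: a minimal sketch.
    # The two documents are invented for illustration.
    import math
    from collections import Counter

    def term_vector(text):
        return Counter(text.lower().split())

    def cosine_sim(v1, v2):
        # Dot product over the shared terms, divided by the vector lengths.
        dot = sum(v1[t] * v2[t] for t in v1.keys() & v2.keys())
        norm1 = math.sqrt(sum(x * x for x in v1.values()))
        norm2 = math.sqrt(sum(x * x for x in v2.values()))
        return dot / (norm1 * norm2) if norm1 and norm2 else 0.0

    d1 = term_vector("text mining search engine text")
    d2 = term_vector("text mining")
    print(round(cosine_sim(d1, d2), 3))   # 0.802 for these two toy documents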

VS Model: Illustration

[Figure: a three-dimensional term space with axes "Java", "Starbucks", and "Microsoft"; documents cluster into Category 1 (C1), Category 2 (C2), and Category 3 (C3), and a new document is placed with the category whose vectors it is most similar to.]

What the VS Model Does Not Specify

- How to select terms to capture basic concepts:
  - Word stopping, e.g., a, the, always, along
  - Word stemming, e.g., computer, computing, computerize => compute
- How to assign weights: not all words are equally important; some are more indicative than others, e.g., algebra vs. science.
- How to measure the similarity.

How to Assign Weights

Two-fold heuristics based on frequency:
- TF (term frequency): more frequent within a document => more relevant to its semantics.
- IDF (inverse document frequency): less frequent among documents => more discriminative, e.g., algebra vs. science.

TF Weighting

- Weighting: more frequent => more relevant to the topic, e.g., "query" vs. "commercial".
- Raw TF = f(t, d): how many times term t appears in document d.
- Normalization: document length varies, so a relative frequency is preferred, e.g., maximum frequency normalization.
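A minimal sketch of maximum frequency normalization of raw term frequencies; one common form, assumed here, divides each raw count by the largest count in the document, and some systems add smoothing constants:

    # Maximum frequency normalization of raw term frequencies: a minimal sketch.
    # Here TF(t, d) = f(t, d) / max over t' of f(t', d); variants add smoothing constants.
    from collections import Counter

    def normalized_tf(text):
        counts = Counter(text.lower().split())
        max_count = max(counts.values())
        return {term: count / max_count for term, count in counts.items()}

    print(normalized_tf("text mining search engine text"))
    # {'text': 1.0, 'mining': 0.5, 'search': 0.5, 'engine': 0.5}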

IDF Weighting

- Idea: less frequent among documents => more discriminative.
- Formula (one common form):

      IDF(t) = log(n / k)

  where n is the total number of docs and k is the number of docs in which term t appears (the DF, document frequency).

TF-IDF Weighting

- TF-IDF weighting: weight(t, d) = TF(t, d) * IDF(t)
  - Frequent within a doc => high TF => high weight.
  - Selective among docs => high IDF => high weight.
- In the VS model, each selected term represents one dimension and each doc is represented by a feature vector whose coordinate for term t is the TF-IDF weight. This is more reasonable.
- Just for illustration: many more complex and effective weighting variants exist in practice.
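A minimal sketch of TF-IDF weighting over a tiny invented corpus, combining maximum-frequency-normalized TF with the IDF form shown above (real systems use many variants):

    # TF-IDF weighting over a small corpus: a minimal sketch.
    # TF uses maximum frequency normalization; IDF(t) = log(n / k) as above.
    import math
    from collections import Counter

    docs = {
        "d1": "text mining search engine text",
        "d2": "travel text map travel",
        "d3": "government president congress",
    }

    def tf(text):
        counts = Counter(text.lower().split())
        max_count = max(counts.values())
        return {t: c / max_count for t, c in counts.items()}

    def idf(docs):
        n = len(docs)
        df = Counter(t for text in docs.values() for t in set(text.lower().split()))
        return {t: math.log(n / k) for t, k in df.items()}

    idf_weights = idf(docs)
    tfidf = {
        doc_id: {t: w * idf_weights[t] for t, w in tf(text).items()}
        for doc_id, text in docs.items()
    }
    print(tfidf["d1"])   # 'text' gets a lower IDF because it appears in two of the three docs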

How to Measure Similarity?

Given two documents, similarity can be defined as:
- the dot product, or
- the normalized dot product (i.e., cosine).

Illustrative Example

Documents:
- doc1: text mining search engine text
- doc2: travel text map travel
- doc3: government president congress
- newdoc: text mining

TF*IDF weights, written as count(TF*IDF):

    Term          text    mining  travel  map     search  engine  govern  president  congress
    IDF (faked)   2.4     4.5     2.8     3.3     2.1     5.4     2.2     3.2        4.3
    doc1          2(4.8)  1(4.5)                  1(2.1)  1(5.4)
    doc2          1(2.4)          2(5.6)  1(3.3)
    doc3                                                          1(2.2)  1(3.2)     1(4.3)
    newdoc        1(2.4)  1(4.5)

To which document is newdoc more similar (dot product of the TF*IDF weights)?
- Sim(newdoc, doc1) = 4.8*2.4 + 4.5*4.5
- Sim(newdoc, doc2) = 2.4*2.4
- Sim(newdoc, doc3) = 0
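The same worked example can be reproduced in a few lines of Python (a sketch using the faked IDF values from the table; similarity is the dot product over TF*IDF weights):

    # Reproducing the illustrative example: dot-product similarity over TF*IDF weights.
    # IDF values are the "faked" ones from the table above; doc3 terms are already
    # stemmed ("govern" for "government") to match the IDF table.
    from collections import Counter

    idf = {"text": 2.4, "mining": 4.5, "travel": 2.8, "map": 3.3, "search": 2.1,
           "engine": 5.4, "govern": 2.2, "president": 3.2, "congress": 4.3}

    docs = {
        "doc1": "text mining search engine text",
        "doc2": "travel text map travel",
        "doc3": "govern president congress",
        "newdoc": "text mining",
    }

    def tfidf(text):
        return {t: c * idf[t] for t, c in Counter(text.split()).items()}

    vectors = {name: tfidf(text) for name, text in docs.items()}

    def dot(v1, v2):
        return sum(v1[t] * v2[t] for t in v1.keys() & v2.keys())

    for name in ("doc1", "doc2", "doc3"):
        print(name, round(dot(vectors["newdoc"], vectors[name]), 2))
    # doc1 31.77 (= 4.8*2.4 + 4.5*4.5), doc2 5.76 (= 2.4*2.4), doc3 0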

Web Mining

The Web is huge, complex, and dynamic, and only a small portion of it is relevant to a given user.

Layout Structure

- Compared to plain text, a web page is a 2D presentation, with rich visual effects created by different term types, formats, separators, blank areas, colors, pictures, etc.
- Different parts of a page are not equally important. Example elements of a CNN page:
  - Title: CNN.com International
  - H1: IAEA: Iran had secret nuke agenda
  - H3: EXPLOSIONS ROCK BAGHDAD
  - Text body (with position and font type): "The International Atomic Energy Agency has concluded that Iran has secretly produced small amounts of nuclear materials including low enriched uranium and plutonium that could be used to develop nuclear weapons according to a confidential report obtained by CNN"
  - Hyperlink: URL: http://www.cnn.com/... Anchor text: Al Qaeda
  - Image: URL: http://www.cnn.com/image/... Alt & caption: Iran nuclear
  - Anchor text: CNN Homepage News
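To illustrate how the structurally different parts of a page can be pulled apart before being weighted differently, here is a minimal sketch using Python's standard html.parser; the tiny HTML snippet is invented, and a real system would work on fetched pages and also record position and font information:

    # Extracting structurally distinct parts of a page (title, headings, anchor text).
    # A sketch on an invented HTML snippet.
    from html.parser import HTMLParser

    class PageElementExtractor(HTMLParser):
        def __init__(self):
            super().__init__()
            self.elements = []        # list of (element type, text)
            self._current = None      # tag currently being collected

        def handle_starttag(self, tag, attrs):
            if tag in ("title", "h1", "h2", "h3", "a"):
                self._current = tag

        def handle_endtag(self, tag):
            if tag == self._current:
                self._current = None

        def handle_data(self, data):
            if self._current and data.strip():
                self.elements.append((self._current, data.strip()))

    html = """<html><head><title>CNN.com International</title></head>
    <body><h1>IAEA: Iran had secret nuke agenda</h1>
    <p>Body text ... <a href="http://www.cnn.com/">CNN Homepage News</a></p></body></html>"""

    extractor = PageElementExtractor()
    extractor.feed(html)
    print(extractor.elements)
    # [('title', 'CNN.com International'), ('h1', 'IAEA: Iran had secret nuke agenda'),
    #  ('a', 'CNN Homepage News')]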

Web Page Block: a Better Information Unit

[Figure: a web page divided into blocks annotated with Importance = Low, Importance = Med, and Importance = High.]

Motivation for VIPS (VIsion-based Page Segmentation)

- Problems of treating a web page as an atomic unit:
  - A web page usually contains not only pure content (noise: navigation, decoration, interaction, ...).
  - Multiple topics.
  - Different parts of a page are not equally important.
- A web page has an internal structure:
  - A two-dimensional logical structure and a visual layout presentation.
  - More structured than a free text document, less structured than a structured document.
- Layout is the 3rd dimension of a web page (1st dimension: content; 2nd dimension: hyperlink).

Is DOM a Good Representation of Page Structure?

- Page segmentation using DOM: extract structural tags such as P, TABLE, UL, TITLE, H1~H6, etc.
- DOM is more related to content display; it does not necessarily reflect the semantic structure.
- How about XML? It has a long way to go to replace HTML.

VIPS Algorithm

- Motivation: in many cases, topics can be distinguished with visual clues such as position, distance, font, color, etc.
- Goal: extract the semantic structure of a web page based on its visual presentation.
- Procedure: top-down partition of the web page based on the separators.
- Result: a tree structure in which each node corresponds to a block in the page.
  - Each node is assigned a value (Degree of Coherence) indicating how coherent the content in the block is, based on visual perception.
  - Each block is assigned an importance value.
  - The structure can be a hierarchy or flat.

VIPS Algorithm

The VIPS algorithm makes full use of page layout features. It first extracts all the suitable blocks from the HTML DOM tree, then it tries to find the separators between these extracted blocks. Separators denote the horizontal or vertical lines in a web page that visually cross no blocks. Based on these separators, the semantic structure of the web page is constructed. The VIPS algorithm employs a top-down approach, which is very effective.

VIPS: An Example

- A hierarchical structure of layout blocks.
- A Degree of Coherence (DoC) is defined for each block to show the intra-block coherence; the DoC of a child block must be no less than its parent's.
- The Permitted Degree of Coherence (PDoC) can be pre-defined to achieve different granularities for the content structure: the segmentation stops only when every block's DoC is no less than the PDoC. The smaller the PDoC, the coarser the content structure.
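The following sketch is only a schematic illustration of the top-down, PDoC-controlled recursion described above; it is not the real VIPS implementation. The Block class, its children, and the doc values stand in for the visual block extraction and Degree of Coherence estimation steps:

    # Schematic sketch of the top-down, PDoC-controlled segmentation loop (not real VIPS code).
    # Block, children, and doc are hypothetical stand-ins for visual block extraction and
    # Degree of Coherence estimation.
    from dataclasses import dataclass, field

    @dataclass
    class Block:
        name: str
        doc: float                      # estimated Degree of Coherence of this block
        children: list = field(default_factory=list)

    def segment(block, pdoc):
        """Recursively partition a block until its DoC is no less than PDoC (or it is atomic)."""
        if block.doc >= pdoc or not block.children:
            return {"block": block.name, "doc": block.doc}
        return {
            "block": block.name,
            "doc": block.doc,
            "children": [segment(child, pdoc) for child in block.children],
        }

    # A made-up page: the root is incoherent, its sub-blocks are more coherent.
    page = Block("page", 0.3, [
        Block("header", 0.9),
        Block("content", 0.5, [Block("article", 0.8), Block("sidebar", 0.7)]),
    ])
    print(segment(page, pdoc=0.6))   # a smaller PDoC stops earlier, giving a coarser structure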

Example of Web Page Segmentation

[Figure: the same page segmented by DOM structure vs. by VIPS structure.]

- Can be applied to web image retrieval: surrounding text extraction.