Information Retrieval: Document Parsing

Basic indexing pipeline
- Documents to be indexed: "Friends, Romans, countrymen."
- Tokenizer -> token stream: Friends Romans Countrymen
- Linguistic modules -> modified tokens: friend roman countryman
- Indexer -> inverted index
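The pipeline above can be sketched end to end in a few lines of Python. This is a minimal illustration, not the slides' implementation: the toy tokenizer, the case-folding "linguistic module", and the second document are my own additions.

```python
from collections import defaultdict

def tokenize(text):
    # Toy tokenizer: split on whitespace, strip surrounding punctuation.
    return [tok.strip(".,;:!?") for tok in text.split()]

def normalize(token):
    # Stand-in for the linguistic modules: case folding only.
    return token.lower()

def build_index(documents):
    # Indexer: map each term to the sorted list of doc IDs containing it.
    index = defaultdict(set)
    for doc_id, text in enumerate(documents):
        for token in tokenize(text):
            index[normalize(token)].add(doc_id)
    return {term: sorted(ids) for term, ids in index.items()}

docs = ["Friends, Romans, countrymen.", "Romans go home."]
index = build_index(docs)
print(index["romans"])  # -> [0, 1]: both documents contain "Romans"
```

A real indexer stores postings on disk and compresses them, but the term -> sorted-doc-ID mapping is the same idea.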
Parsing a document
- What character set is in use? Plain ASCII, UTF-8, UTF-16, ...
- What format is it in? PDF, Word, Excel, HTML?
- What language is it in?
Each of these is a classification problem, with many complications.

Tokenization: issues
- Chinese/Japanese have no spaces between words, so a unique tokenization is not always guaranteed.
- Dates and amounts appear in multiple formats.
- A single Japanese sentence can mix four scripts (Katakana, Hiragana, Kanji, Romaji), e.g.:
  フォーチュン500社は情報不足のため時間あた$500K(約6,000万円)
  (roughly: "Fortune 500 companies, for lack of information, [...] $500K per hour (about 60 million yen)")
- What about DNA sequences? ACCCGGTACGCAC...
The definition of tokens determines what you can search!
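A quick way to see why tokenization is language-dependent: a naive whitespace tokenizer produces one opaque "token" for unsegmented Chinese text, and keeps dates and amounts as single unanalyzed strings. The example strings below are my own.

```python
# Naive whitespace tokenizer: fine for many English texts, useless for
# scripts that do not separate words with spaces.
def whitespace_tokenize(text):
    return text.split()

# Unsegmented Chinese: no spaces, so the whole phrase is one token.
print(whitespace_tokenize("莎士比亚的作品"))   # -> ['莎士比亚的作品']

# Dates and amounts survive as opaque strings; "3/12/2024" could be
# March 12 or December 3 depending on locale.
print(whitespace_tokenize("3/12/2024 $500K"))  # -> ['3/12/2024', '$500K']
```

Real systems use dictionary- or model-based segmenters for Chinese/Japanese and dedicated normalizers for dates and amounts.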
Case folding
- Reduce all letters to lower case.
- Many exceptions, e.g. General Motors; USA vs. usa.
- "Morgen will ich in MIT ...": is this MIT the institute, or the German word "mit" ("with")?

Stemming
- Reduce terms to their roots; language dependent.
- e.g. automate(s), automatic, automation all reduce to automat.
- e.g. (Italian) casa, casalinga, casata, casamatta, casolare, casamento, casale, rincasare, case all reduce to cas.
- Originally used to reduce the dictionary size; now used mainly to conflate related word forms and improve recall.
Porter's algorithm
- The commonest algorithm for stemming English.
- Conventions + 5 phases of reductions; phases are applied sequentially, and each phase consists of a set of commands.
- Sample convention: of the rules in a compound command, select the one that applies to the longest suffix.
  e.g. sses -> ss, ies -> i, ational -> ate, tional -> tion
- Full morphological analysis gives only a modest extra benefit!

Thesauri
- Handle synonyms and polysemy via hand-constructed equivalence classes.
  e.g. car = automobile; (Italian) macchina = automobile = spider.
- For each word, a thesaurus specifies a list of correlated words (usually synonyms, polysemous terms, or phrases for complex concepts).
- Co-occurrence pattern: BT (broader term), NT (narrower term), e.g. Vehicle (BT) > Car > Fiat 500 (NT).
- How can a search engine use it?
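The longest-suffix convention above can be sketched directly. This is not the full Porter stemmer (which adds measure conditions and five sequential phases); it only shows how one compound command picks among its rules. The rule list is the one from the slide.

```python
# One Porter-style compound command: among the rules whose suffix
# matches, apply the one with the longest suffix.
RULES = [("sses", "ss"), ("ies", "i"), ("ational", "ate"), ("tional", "tion")]

def apply_compound(word, rules=RULES):
    matching = [(suf, rep) for suf, rep in rules if word.endswith(suf)]
    if not matching:
        return word  # no rule applies; word is unchanged
    # Longest matching suffix wins.
    suf, rep = max(matching, key=lambda r: len(r[0]))
    return word[: -len(suf)] + rep

print(apply_compound("caresses"))    # -> "caress"
# "relational" ends with both "ational" and "tional";
# the longer suffix "ational" is chosen, giving "relate".
print(apply_compound("relational"))  # -> "relate"
```

Without the longest-suffix convention, "relational" could wrongly match "tional" and become "relation" instead of "relate".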
Dmoz directory (screenshot)
Yahoo! directory (screenshot)

Information Retrieval: Statistical Properties of Documents
Statistical properties of texts
- Tokens are not distributed uniformly: they follow the so-called Zipf law.
- A few tokens are very frequent, a middle-sized set has medium frequency, and many are rare.
- The 100 most frequent tokens account for about 50% of the text; many of these tokens are stopwords.
- (Figure: an example of a Zipf curve.)
Zipf's law (log-log plot)

The Zipf law, in detail
- The k-th most frequent term has frequency approximately proportional to 1/k; equivalently, the product of the frequency f of a token and its rank r is almost a constant c(T) depending on the text T:

  r * f = c(T)   i.e.   f = c(T) / r

- General law: f = c(T) / r^s, with s typically between 1.5 and 2.0.
- The general law is scale-invariant: f(b * r) = b^(-s) * f(r).
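The scale-invariance claim follows directly from the power-law form: f(b*r) = c / (b*r)^s = b^(-s) * c / r^s. A tiny numerical check (the constants c, s, b, r below are illustrative values of my choosing):

```python
# Check f(b*r) == b**(-s) * f(r) for a pure power law f(r) = c / r**s.
def zipf_freq(r, c=1000.0, s=1.5):
    return c / r ** s

b, r, s = 2.0, 5.0, 1.5
lhs = zipf_freq(b * r, s=s)
rhs = b ** (-s) * zipf_freq(r, s=s)
print(abs(lhs - rhs) < 1e-9)  # -> True: the law has no characteristic scale
```

Scale invariance is why a Zipf distribution plots as a straight line of slope -s on log-log axes.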
Distribution vs. cumulative distribution
- The cumulative distribution is a power law with a smaller exponent.
- For a power law with exponent s, the sum of the frequencies after the k-th element is at most f_k * k / (s - 1), while the sum up to the k-th element is at least f_k * k.

Consequences of the Zipf law
- There exist many very frequent tokens that do not discriminate: the so-called stop words.
  English: to, from, on, and, the, ...
  Italian: a, per, il, in, un, ...
- There exist many tokens that occur only once in a text and thus discriminate poorly (possibly errors).
  English: Calpurnia
  Italian: precipitevolissimevolmente
- Words of medium frequency are the words that discriminate.
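The two bounds above come from comparing the sum with an integral: the tail sum beyond rank k is at most c * k^(1-s) / (s-1) = f_k * k / (s-1), and each of the first k terms is at least f_k, so the head sum is at least f_k * k. A numerical check on a synthetic power law (the values of n, c, s, k are mine):

```python
# Numerically verify the head/tail-sum bounds for f_r = c / r**s, s > 1.
def freqs(n, c=1.0, s=1.5):
    return [c / r ** s for r in range(1, n + 1)]

f = freqs(100000, s=1.5)
k = 100
tail = sum(f[k:])   # sum of frequencies after rank k
head = sum(f[:k])   # sum of frequencies up to rank k

print(tail <= f[k - 1] * k / (1.5 - 1))  # -> True
print(head >= f[k - 1] * k)              # -> True
```

The tail bound is what makes the cumulative distribution itself a power law with exponent smaller by one.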
Other statistical properties of texts
- The number of distinct tokens grows as T^β with β < 1: the so-called Heaps law.
- Hence the token length is Ω(log T).
- The interesting words are the ones with medium frequency (Luhn).
- (Figure: frequency vs. term significance, after Luhn.)
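Heaps' law can be observed empirically: draw a synthetic Zipf-distributed "text" and watch the vocabulary grow sublinearly with the text length. Everything below (vocabulary size, exponents, sample points, the two-point estimate of β) is an illustrative setup of my own, not a fit from real data.

```python
import math
import random

# Synthetic text: 200,000 tokens drawn from a Zipf-like distribution
# over a 50,000-word vocabulary.
random.seed(0)
vocab_size, s = 50000, 1.2
weights = [1 / r ** s for r in range(1, vocab_size + 1)]
text = random.choices(range(vocab_size), weights=weights, k=200000)

# Record the number of distinct tokens V(T) at two text lengths.
seen, growth = set(), {}
for i, tok in enumerate(text, 1):
    seen.add(tok)
    if i in (1000, 200000):
        growth[i] = len(seen)

# Crude two-point estimate of beta from V = K * T**beta.
beta = math.log(growth[200000] / growth[1000]) / math.log(200000 / 1000)
print(0 < beta < 1)  # -> True: vocabulary growth is sublinear
```

With β < 1, a vocabulary of size T^β needs token identifiers of Ω(log T^β) = Ω(log T) bits, which is the token-length bound on the slide.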