Word Meaning & Word Sense Disambiguation


Word Meaning & Word Sense Disambiguation
CMSC 723 / LING 723 / INST 725
Marine Carpuat
marine@cs.umd.edu

Today
- Representing word meaning
- Word sense disambiguation as supervised classification
- Word sense disambiguation without annotated examples

Drunk gets nine months in violin case. http://www.ling.upenn.edu/~beatrice/humor/headlines.html

How do we know that a word (lemma) has distinct senses? Linguists often design tests for this purpose, e.g., zeugma, which combines distinct senses in an uncomfortable way:
Which flight serves breakfast?
Which flights serve Tucson?
*Which flights serve breakfast and Tucson?

Where can we look up the meaning of words? Dictionary?

Word Senses
Word sense = distinct meaning of a word
Same word, different senses:
- Homonyms (homonymy): unrelated senses; identical orthographic form is coincidental
- Polysemes (polysemy): related but distinct senses
- Metonyms (metonymy): stand-in; technically a subcase of polysemy
Different word, same sense:
- Synonyms (synonymy)

- Homophones: same pronunciation, different orthography, different meaning. Examples: would/wood, to/too/two
- Homographs: distinct senses, same orthographic form, different pronunciation. Examples: bass (fish) vs. bass (instrument)

Relationships Between Senses
IS-A relationships:
- From specific to general (up): hypernym (hypernymy)
- From general to specific (down): hyponym (hyponymy)
Part-whole relationships:
- wheel is a meronym of car (meronymy)
- car is a holonym of wheel (holonymy)

WordNet: a lexical database for English (https://wordnet.princeton.edu/)
- Includes most English nouns, verbs, adjectives, and adverbs
- Its electronic format makes it amenable to automatic manipulation; it is used in many NLP applications
- "WordNet" is also used generically for similar resources in other languages

WordNet: History
Grew out of research in artificial intelligence: how do humans store and access knowledge about concepts?
Hypothesis: concepts are interconnected via meaningful relations, which is useful for reasoning.
The WordNet project started in 1986: can most (all?) of the words in a language be interlinked by meaning? If so, the result would be a large semantic network.

Synonymy in WordNet
WordNet is organized in terms of synsets:
- Unordered sets of (roughly) synonymous words (or multi-word phrases)
- Each synset expresses a distinct meaning/concept
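Conceptually, a synset-based lexicon maps each lemma to the set of synsets that contain it. A minimal sketch in Python, with a toy Synset type and glosses paraphrased from the pipe example on the next slide (the names and data are illustrative, not WordNet's actual API):

```python
from dataclasses import dataclass

# Toy sketch of WordNet-style synsets (illustrative, not the real WordNet API):
# a synset is an unordered set of lemmas plus a gloss.
@dataclass(frozen=True)
class Synset:
    lemmas: frozenset
    gloss: str

SYNSETS = [
    Synset(frozenset({"pipe", "tobacco pipe"}),
           "a tube with a small bowl at one end; used for smoking tobacco"),
    Synset(frozenset({"pipe", "pipage", "piping"}),
           "a long tube made of metal or plastic used to carry water, oil, or gas"),
    Synset(frozenset({"pipe", "tube"}),
           "a hollow cylindrical shape"),
]

def senses(word):
    """Each synset containing the lemma corresponds to one distinct sense."""
    return [s for s in SYNSETS if word in s.lemmas]

print(len(senses("pipe")))   # 3 senses of "pipe" in this toy lexicon
print(len(senses("tube")))   # 1 sense
```

For the real resource, NLTK exposes the same idea via nltk.corpus.wordnet.synsets('pipe'), which returns one Synset object per sense.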

WordNet: Example
Noun
- {pipe, tobacco pipe} (a tube with a small bowl at one end; used for smoking tobacco)
- {pipe, pipage, piping} (a long tube made of metal or plastic that is used to carry water or oil or gas etc.)
- {pipe, tube} (a hollow cylindrical shape)
- {pipe} (a tubular wind instrument)
- {organ pipe, pipe, pipework} (the flues and stops on a pipe organ)
Verb
- {shriek, shrill, pipe up, pipe} (utter a shrill cry)
- {pipe} (transport by pipeline): "pipe oil, water, and gas into the desert"
- {pipe} (play on a pipe): "pipe a tune"
- {pipe} (trim with piping): "pipe the skirt"
What do you think of the sense granularity?

The Net Part of WordNet
[Figure: a fragment of the WordNet noun graph around {car; auto; automobile; machine; motorcar}]
- Hypernym chain: {car; auto; automobile; machine; motorcar} -> {motor vehicle; automotive vehicle} -> {vehicle} -> {conveyance; transport}
- Hyponyms: {cruiser; squad car; patrol car; police car; prowl car}, {cab; taxi; hack; taxicab}
- Meronyms: {bumper}, {car door}, {car window}, {car mirror}, {armrest}
- {car door} in turn has meronyms {doorlock} and {hinge; flexible joint}

WordNet 3.0: Size
Part of speech   Word forms   Synsets
Noun                117,798    82,115
Verb                 11,529    13,767
Adjective            21,479    18,156
Adverb                4,481     3,621
Total               155,287   117,659
http://wordnet.princeton.edu/

Word Sense
From WordNet:
Noun
- {pipe, tobacco pipe} (a tube with a small bowl at one end; used for smoking tobacco)
- {pipe, pipage, piping} (a long tube made of metal or plastic that is used to carry water or oil or gas etc.)
- {pipe, tube} (a hollow cylindrical shape)
- {pipe} (a tubular wind instrument)
- {organ pipe, pipe, pipework} (the flues and stops on a pipe organ)
Verb
- {shriek, shrill, pipe up, pipe} (utter a shrill cry)
- {pipe} (transport by pipeline): "pipe oil, water, and gas into the desert"
- {pipe} (play on a pipe): "pipe a tune"
- {pipe} (trim with piping): "pipe the skirt"

WORD SENSE DISAMBIGUATION

Word Sense Disambiguation
Task: automatically select the correct sense of a word
- Input: a word in context
- Output: sense of the word
Can be framed as lexical sample (focus on one word type at a time) or all-words (disambiguate all content words in a document)
Motivated by many applications:
- Information retrieval
- Machine translation

How big is the problem?
Most words in English have only one sense:
- 62% in Longman's Dictionary of Contemporary English
- 79% in WordNet
But the others tend to have several senses:
- Average of 3.83 in LDOCE
- Average of 2.96 in WordNet
Ambiguous words are used more frequently:
- In the British National Corpus, 84% of instances have more than one sense
Some senses are more frequent than others

Ground Truth
Which sense inventory do we use?

Existing Corpora
Lexical sample:
- line-hard-serve corpus (4k sense-tagged examples)
- interest corpus (2,369 sense-tagged examples)
All-words:
- SemCor (234k words, subset of the Brown Corpus)
- Senseval/SemEval (2,081 tagged content words from 5k total words)

Evaluation
Intrinsic: measure accuracy of sense selection with respect to ground truth
Extrinsic: integrate WSD as part of a bigger end-to-end system, e.g., machine translation or information retrieval, and compare end-to-end performance with and without the WSD component

Baseline Performance
Baseline: most frequent sense, equivalent to taking the first sense listed in WordNet
Does surprisingly well! 62% accuracy in this case!

Upper Bound Performance
Upper bound: human agreement
- Fine-grained WordNet senses: 75-80% human agreement
- Coarser-grained inventories: 90% human agreement possible

WSD as Supervised Classification
[Diagram: at training time, labeled training data passes through feature functions into a supervised machine learning algorithm, which produces a classifier; at test time, the classifier assigns one of the labels to an unlabeled document.]
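This pipeline can be sketched end to end with a tiny multinomial Naive Bayes classifier over bag-of-words context features. The sense-tagged examples for "bass" below are invented for illustration; a real system would use a corpus like line-hard-serve and richer features:

```python
import math
from collections import Counter, defaultdict

# Toy sense-tagged training data for "bass" (invented for illustration).
TRAIN = [
    ("he caught a huge bass while fishing in the lake", "fish"),
    ("the bass fought hard on the line near the river", "fish"),
    ("grilled bass with lemon is on the menu", "fish"),
    ("she plays bass in a jazz band", "music"),
    ("turn up the bass on the stereo", "music"),
    ("the bass guitar carries the groove", "music"),
]

def train_nb(examples):
    """Collect sense priors and per-sense word counts from (context, sense) pairs."""
    sense_counts = Counter(sense for _, sense in examples)
    word_counts = defaultdict(Counter)
    vocab = set()
    for text, sense in examples:
        for w in text.split():
            word_counts[sense][w] += 1
            vocab.add(w)
    return sense_counts, word_counts, vocab

def classify(context, model):
    """Pick the sense maximizing log P(sense) + sum of log P(word | sense)."""
    sense_counts, word_counts, vocab = model
    total = sum(sense_counts.values())
    best, best_lp = None, float("-inf")
    for sense, n in sense_counts.items():
        lp = math.log(n / total)                     # log prior P(sense)
        denom = sum(word_counts[sense].values()) + len(vocab)
        for w in context.split():
            if w in vocab:                           # skip unseen words
                lp += math.log((word_counts[sense][w] + 1) / denom)  # add-one smoothing
        if lp > best_lp:
            best, best_lp = sense, lp
    return best

model = train_nb(TRAIN)
print(classify("the band needs a new bass player", model))   # -> music
```

Context words like "band" push the classifier toward the music sense even though "bass" itself is equally frequent under both labels.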

WSD Approaches
Depending on use of manually created knowledge sources:
- Knowledge-lean
- Knowledge-rich
Depending on use of labeled data:
- Supervised
- Semi- or minimally supervised
- Unsupervised

Simplest WSD algorithm: Lesk's Algorithm
Intuition: note word overlap between the context and the dictionary entries (e.g., WordNet glosses)
Unsupervised, but knowledge-rich
Example context: "The bank can guarantee deposits will eventually cover future tuition costs because it invests in adjustable-rate mortgage securities."

Lesk's Algorithm
Simplest implementation: count overlapping content words between glosses and context
Lots of variants:
- Include the examples in dictionary definitions
- Include hypernyms and hyponyms
- Give more weight to larger overlaps (e.g., bigrams)
- Give extra weight to infrequent words
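The simplest implementation fits in a few lines. In this sketch the two glosses paraphrase WordNet's entries for "bank", and the tiny stopword list is a stand-in for real content-word filtering:

```python
# Simplified Lesk: pick the sense whose gloss shares the most content words
# with the target word's context. Glosses and stopword list are illustrative.
STOP = {"a", "an", "the", "of", "in", "on", "to", "and", "or",
        "that", "it", "is", "by", "for", "will"}

GLOSSES = {
    "bank/financial": "a financial institution that accepts deposits and "
                      "channels the money into lending activities",
    "bank/river": "sloping land beside a body of water",
}

def simplified_lesk(context, glosses):
    ctx = {w for w in context.lower().split() if w not in STOP}
    def overlap(gloss):
        return len(ctx & {w for w in gloss.split() if w not in STOP})
    return max(glosses, key=lambda sense: overlap(glosses[sense]))

sent = ("The bank can guarantee deposits will eventually cover future tuition "
        "costs because it invests in adjustable-rate mortgage securities")
print(simplified_lesk(sent, GLOSSES))   # -> bank/financial (overlap: "deposits")
```

Here a single overlapping word ("deposits") decides the sense; the variants listed above exist precisely because such overlaps are sparse.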

WSD Accuracy
Supervised approaches generally yield ~70-80% accuracy
However, results depend on the actual word, the sense inventory, the amount of training data, the number of features, etc.

WORD SENSE DISAMBIGUATION: MINIMIZING SUPERVISION

Minimally Supervised WSD
Problem: annotations are expensive!
Solution 1: bootstrapping or co-training (Yarowsky 1995)
- Start with a (small) seed, learn a classifier
- Use the classifier to label the rest of the corpus
- Retain confident labels, add them to the training set
- Learn a new classifier
- Repeat
Heuristics (derived from observation):
- One sense per discourse
- One sense per collocation
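The bootstrapping loop can be sketched with a deliberately crude classifier: single context words stand in for collocations (per the one-sense-per-collocation heuristic), and every harvested cue is treated as confident. The corpus and seeds below are invented for illustration:

```python
from collections import defaultdict

# Toy Yarowsky-style bootstrapping for "plant" (living thing vs. factory).
# Seed collocations label a few instances; unambiguous new cues are harvested
# from those labels; the loop repeats until coverage stops growing.
CORPUS = [
    "plant and animal life in the forest",
    "the animal species near the plant habitat",
    "the manufacturing plant closed its assembly line",
    "workers at the plant ran the assembly machines",
    "the plant closed after the manufacturing slump",
]
SEEDS = {"life": "living", "manufacturing": "factory"}

def bootstrap(corpus, seeds, rounds=3):
    colloc = dict(seeds)            # context word -> sense it signals
    labels = {}                     # instance index -> assigned sense
    for _ in range(rounds):
        # 1. Label every instance containing a known collocation.
        for i, text in enumerate(corpus):
            for w in text.split():
                if w in colloc:
                    labels[i] = colloc[w]
        # 2. Harvest words that occur only in instances of a single sense.
        occurs = defaultdict(set)
        for i, sense in labels.items():
            for w in corpus[i].split():
                occurs[w].add(sense)
        for w, senses in occurs.items():
            if len(senses) == 1:
                colloc.setdefault(w, next(iter(senses)))
    return labels

print(bootstrap(CORPUS, SEEDS))
```

After the first round, harvested cues such as "animal" and "assembly" let the classifier label instances the seeds never touched. A real implementation would rank cues by log-likelihood and keep only confident ones; here the snowballing-error risk mentioned below is ignored for brevity.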

One Sense per Discourse
A word tends to preserve its meaning across all its occurrences in a given discourse
Gale et al. 1992: for 8 words with two-way ambiguity (e.g., plant, crane), 98% of occurrences in the same discourse carry the same meaning
Krovetz 1998: the heuristic holds mostly for coarse-grained senses and for homonymy rather than polysemy; performance of one sense per discourse measured on SemCor is approximately 70%

One Sense per Collocation
A word tends to preserve its meaning when used in the same collocation:
- Strong for adjacent collocations
- Weaker as the distance between words increases
Evaluation: 97% precision on words with two-way ambiguity
Again, accuracy depends on granularity: 70% precision on SemCor words

Yarowsky's Method: Stopping
Stop when:
- Error on training data is less than a threshold
- No more training data is covered

Yarowsky's Method: Discussion
Advantages:
- Accuracy is about as good as a supervised algorithm's
- Bootstrapping: far less manual effort
Disadvantages:
- Seeds may be tricky to construct
- Works only for coarse-grained sense distinctions
- Snowballing error with co-training

WSD with a 2nd Language as Supervision
Problems: annotations are expensive! What's the proper sense inventory? How fine- or coarse-grained? Application-specific?
Observation: different senses often translate to different words in other languages
- Use the foreign language as the sense inventory
- Added bonus: annotations for free! (using machine translation data)

Today
- Representing word senses & word relations: WordNet
- Word Sense Disambiguation: Lesk algorithm, supervised classification
- Minimizing supervision: co-training (combining 2 views of the data), using parallel text
Next: general techniques for learning without annotated training examples
If needed: review probability and expectations