Tree Edit Distance for Recognizing Textual Entailment: Estimating the Cost of Insertion


Milen Kouylekov (1,2) and Bernardo Magnini (1)
(1) ITC-irst, Centro per la Ricerca Scientifica e Tecnologica
(2) University of Trento
38050 Povo, Trento, Italy
milen@kouylekov.net, magnini@itc.it

Abstract

The focus of our participation in PASCAL RTE2 was estimating the cost of the information in the hypothesis which is missing from the text and cannot be matched with entailment rules. We tested different system settings for calculating the importance of the words of the hypothesis and investigated the possibility of combining them with a machine learning algorithm.

1 Introduction

For our participation in the first edition of the PASCAL Recognizing Textual Entailment Challenge (PASCAL RTE1) (Kouleykov and Magnini 2005) we implemented an approach based on a Tree Edit Distance (TED) algorithm, applied to the dependency trees of the text (T) and hypothesis (H), for recognizing textual entailment. We estimated that the probability of an entailment relation between T and H is related to the ability to show that the whole content of H can be mapped into the content of T. We investigated resources for entailment rules, defined in (Dagan and Glickman 2004) as language expressions with syntactic analysis and optional variables replacing subparts of the structure. We experimented with the TED approach using three linguistic resources: (i) a non-annotated document collection, from which we estimated the relevance of words; (ii) a database of similarity relations among words estimated over a corpus of dependency trees; (iii) WordNet, from which we extracted entailment rules based on lexical relations. The experiments we carried out show that such resources, coupled with the edit distance algorithm, can be used to successfully recognize textual entailment.

This year our focus was estimating the cost of the information in H which is missing from T and cannot be matched with entailment rules. We tested different system settings for calculating the importance of the words of the hypothesis and investigated the possibility of combining them with a machine learning algorithm. Our hypothesis was that different approaches to calculating the edit cost can perform in a complementary manner.

The paper is organized as follows. In Section 2 we review some of the relevant approaches proposed by groups participating in the PASCAL-RTE challenge. Section 3 presents the Tree Edit Distance algorithm we adopted and its application to dependency trees. Section 4 describes the architecture of the system. Section 5 presents the experimental settings and the results we obtained, while Section 6 contains a general discussion and describes some directions for future work.

2 Relevant Approaches

The most basic inference technique used by participants at PASCAL-RTE is the degree of overlap between T and H. Such overlap is computed using a number of different approaches, ranging from statistical measures like idf to deep syntactic processing and semantic reasoning. The difficulty of the task explains the poor performance of all the systems, which achieved accuracies between 50% and 60%. In the rest of this section we briefly mention some of the systems which are relevant to the approach we describe in this paper.

A similar approach to recognizing textual entailment is implemented in a system participating in PASCAL-RTE (Herrera et al. 2005), which relies on dependency parsing and extracts lexical rules from WordNet. A decision tree based algorithm is used to separate the positive from the negative examples.

In (Bayer et al. 2005) the authors describe two systems for recognizing textual entailment. The first one is based on deep syntactic processing: both T and H are parsed and converted into a logical form, and an event-oriented statistical inference engine is used to separate the TRUE from the FALSE pairs. The second system is based on statistical machine translation models.

A method for recognizing textual entailment based on graph matching is described in (Raina et al. 2005). To handle language variability problems the system uses a maximum entropy coreference classifier and calculates term similarities using WordNet.

3 Tree Edit Distance on Dependency Trees

We adopted a tree edit distance algorithm applied to the syntactic representations (i.e. dependency trees) of both T and H. A similar use of tree edit distance has been presented by (Punyakanok et al. 2004) for a Question Answering system, showing that the technique outperforms a simple bag-of-words approach. While the cost function they presented is quite simple, for the RTE challenge we tried to elaborate more complex and task-specific measures.

According to our approach, T entails H if there exists a sequence of transformations applied to T such that we can obtain H with an overall cost below a certain threshold. The underlying assumption is that pairs that exhibit an entailment relation have a low cost of transformation. The kinds of transformations we can apply (i.e. deletion, insertion and substitution) are determined by a set of predefined entailment rules, which also determine a cost for each edit operation.

We have implemented the tree edit distance algorithm described in (Zhang and Shasha 1990) and apply it to the dependency trees derived from T and H. Edit operations are defined at the level of single nodes of the dependency tree (i.e. transformations on subtrees are not allowed in the current implementation). Since the (Zhang and Shasha 1990) algorithm does not consider labels on edges, while dependency trees provide them, each dependency relation R from a node A to a node B has been rewritten as a complex label B-R, concatenating the name of the destination node and the name of the relation. All nodes except the root of the tree are relabeled in this way. The algorithm is directional: we aim to find the best (i.e. least costly) sequence of edit operations that transforms T (the source) into H (the target). According to the constraints described above, the following transformations are allowed:

Insertion: insert a node from the dependency tree of H into the dependency tree of T. When a node is inserted it is attached with the dependency relation of the source label.

Deletion: delete a node N from the dependency tree of T. When N is deleted all its children are attached to the parent of N. It is not required to explicitly delete the children of N, as they are going to be either deleted or substituted in a following step.

Substitution: change the label of a node N1 in the source tree (the dependency tree of T) into a label of a node N2 of the target tree (the dependency tree of H). Substitution is allowed only if the two nodes share the same part-of-speech. In case of substitution the relation attached to the substituted node is changed with the relation of the new node.
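To make the relabeling step concrete, here is a minimal Python sketch (our own construction, not the authors' code); the DepNode class and the Minipar-style relation names are assumptions made for the example:

    class DepNode:
        def __init__(self, word, relation=None):
            self.word = word          # surface word at this node
            self.relation = relation  # dependency relation to the parent; None for the root
            self.children = []

        def label(self):
            # complex label "B-R": destination word concatenated with the relation name
            return self.word if self.relation is None else f"{self.word}-{self.relation}"

    def relabel(node):
        """Flatten the tree into (label, children) pairs, the only view a
        label-based TED algorithm such as (Zhang and Shasha 1990) needs."""
        return (node.label(), [relabel(c) for c in node.children])

    # Hypothesis of Figure 2: "King Edward VIII abdicated in December 1936."
    root = DepNode("abdicated")
    root.children.append(DepNode("Edward VIII", "s"))
    prep = DepNode("in", "mod")
    prep.children.append(DepNode("December", "pcomp-n"))
    root.children.append(prep)
    print(relabel(root))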
4 System Architecture

The system is composed of the following modules, shown in Figure 1: (i) a text processing module, for the preprocessing of the input T/H pair; (ii) a matching module, which performs the mapping between T and H; (iii) a cost module, which computes the cost of the edit operations.

[Figure 1: System Architecture]

4.1 Text Processing Module

The text processing module creates a syntactic representation of a T/H pair and relies on a sentence splitter and a syntactic parser. For parsing we used Minipar, a principle-based English parser (Lin 1998a) which has high processing speed and good precision.
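A minimal sketch of how the three modules interact (all names are ours, not the authors'; parse stands in for the sentence splitter plus Minipar, ted for the Zhang-Shasha edit distance, and cost for the cost module; the normalized score anticipates formula (1) in Section 4.2):

    def entails(text, hypothesis, parse, ted, cost, threshold):
        """Assign an entailment relation when transforming T into H is cheap enough."""
        t_tree, h_tree = parse(text), parse(hypothesis)
        ed_t_h = ted(t_tree, h_tree, cost)      # cheapest edit sequence T -> H
        ed_empty_h = ted(None, h_tree, cost)    # cost of inserting the entire tree H
        return ed_t_h / ed_empty_h < threshold  # threshold estimated on training data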

4.2 Matching Module

The matching module implements the edit distance algorithm described in Section 3 and finds the best sequence (i.e. the sequence with the lowest cost) of edit operations between the dependency trees obtained from T and H. The entailment score of a given pair is calculated in the following way:

score(t,h) = ed(t,h) / ed(∅,h)   (1)

where ed(t,h) is the function that calculates the edit distance cost between T and H and ed(∅,h) is the cost of inserting the entire tree H.

4.3 Cost Module

The matching module makes requests to the cost module in order to receive the cost of the single edit operations needed to transform T into H. We have different cost strategies for the three edit operations.

Insertion. The intuition underlying insertion is that its cost is proportional to the relevance of the word w to be inserted (i.e. inserting an informative word has a higher cost than inserting a less informative word). More precisely:

Cost[ed(∅,w)] = Rel(w)   (2)

where Rel(w), in the current version of the system, is computed on a document collection as the inverse document frequency (idf) of w, a measure commonly used in Information Retrieval. If N is the number of documents in a text collection and N_w is the number of documents of the collection that contain w, then the idf of w is given by the formula:

idf(w) = log(N / N_w)   (3)

The most frequent words (e.g. stop words) have a zero cost of insertion.

We have also considered measures that calculate the relevance of a word according to its position in the dependency tree of the hypothesis. The words with a higher position in the tree (i.e. closer to the root), or with more children, are considered more relevant to the meaning expressed by a certain phrase. Accordingly, two alternative measures for calculating the cost of an insertion are:

Rel(w) = #children(w)   (4)

Rel(w) = 10 - #parents(w)   (5)

where #children(w) is the number of children of w and #parents(w) is the number of parents of w in the dependency tree of the hypothesis. The maximum possible depth of a dependency tree estimated on the development set is 10.

Substitution. The cost of substituting a word w1 with a word w2 can be estimated by considering the semantic entailment between the words. The more the two words are entailed, the lower the cost of substituting one word with the other. We have used the following formula:

Cost[ed(w1,w2)] = Ins(w2) * (1 - Ent(w1,w2))   (6)

where Ins(w2) is calculated using (4) and Ent(w1,w2) can be approximated with a variety of relatedness functions between w1 and w2. There are two crucial issues for the definition of an effective function for lexical entailment: first, it is necessary to have a database of entailment relations with enough coverage; second, we have to estimate a quantitative measure for such relations.
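A minimal sketch of the insertion-relevance measures in formulas (3)-(5) (our own code, reusing the DepNode shape from the Section 3 sketch; the handling of unseen words is an assumption):

    import math

    def idf(word, doc_freq, n_docs):
        """Formula (3): idf(w) = log(N / N_w); doc_freq maps a word to N_w."""
        n_w = doc_freq.get(word, 1)  # unseen words treated as maximally rare
        return math.log(n_docs / n_w)

    def rel_children(node):
        """Formula (4): relevance as the number of children of w."""
        return len(node.children)

    def rel_parents(n_parents):
        """Formula (5): 10 - #parents(w); 10 is the maximum dependency tree
        depth observed on the development set, so words near the root cost more."""
        return 10 - n_parents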

We have defined a set of entailment rules over the WordNet relations among synsets, with their respective probabilities. If A and B are synsets in WordNet 2.0, then we derived an entailment rule in the following cases: A is a hypernym of B; A is a synonym of B; A entails B; A pertains to B. For all the relations between the synsets of two words, the probability of entailment is estimated with the following formula:

Ent_wordnet(w1,w2) = (1 / S_w1) * (1 / S_w2)   (7)

where S_wi is the number of senses of w_i; 1/S_wi is the probability that w_i is used in the sense which participates in the relation; Ent_wordnet(w1,w2) is the joint probability. The proposed formula is simplistic and does not take into account the frequency of senses or the length of the relation chain between the synsets.

Deletion. In the PASCAL-RTE2 dataset H is typically shorter than T. As a consequence, we expect that many more deletions are necessary to transform T into H than insertions or substitutions. Given this bias toward deletion, in the current version of the system we set the cost of deletion to 0. Deleted words influence the meaning of already matched words; this requires that the evaluation of the cost of a deleted word is done after the matching is finished. In the future we plan to implement a module that calculates the cost of deletion separately from the matching module.

An example of mapping between two dependency trees is depicted in Figure 2. The tree on the left is the text: "Edward VIII became King in January of 1936 and abdicated in December." The tree on the right corresponds to the hypothesis: "King Edward VIII abdicated in December 1936." The algorithm finds as the best mapping the subtree with root abdicated. The verb became is substituted by the verb abdicated because there exists an entailment rule between them, extracted from one of the resources. Lines connect the nodes that are exactly matched and the nodes that are substitutions (became-abdicated) for which an entailment rule is used. They represent the minimal cost match. Nodes in the text that do not participate in a mapping are removed. The lexical modifier 1936 of the noun December is inserted.

[Figure 2: Example]
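For concreteness, the WordNet-based substitution score of formulas (6) and (7) can be sketched as follows. We use NLTK's WordNet interface purely as an illustration (the paper does not name the WordNet API it used), we cover only a subset of the relations (synonymy, hypernymy, verb entailment), and the rule direction and the 0.2 cut-off applied below are our reading of the settings in Section 5.1:

    from nltk.corpus import wordnet as wn

    def ent_wordnet(w1, w2):
        """Formula (7): (1/S_w1) * (1/S_w2) when some WordNet relation links a
        sense of w1 to a sense of w2, else 0. Sense frequencies and the length
        of the relation chain are deliberately ignored, as in the paper."""
        s1, s2 = wn.synsets(w1), wn.synsets(w2)
        if not s1 or not s2:
            return 0.0
        related = any(a == b                    # synonymy: shared synset
                      or b in a.hypernyms()     # hypernymy (assumed direction)
                      or b in a.entailments()   # verb entailment
                      for a in s1 for b in s2)
        return (1.0 / len(s1)) * (1.0 / len(s2)) if related else 0.0

    def substitution_cost(w1, w2, ins_w2):
        """Formula (6) with the Section 5.1 settings: 0 for identical words,
        Ins(w2) * (1 - Ent(w1,w2)) when a rule with score > 0.2 fires,
        infinite otherwise."""
        if w1 == w2:
            return 0.0
        ent = ent_wordnet(w1, w2)
        return ins_w2 * (1.0 - ent) if ent > 0.2 else float("inf")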

5 Experiments and Results

In this section we report on the dataset, the experiments and the results we have obtained.

5.1 Experiments

We ran six systems with different settings. In all the system variants we tested we used the following settings for substitution and deletion:

Deletion: always 0.

Substitution: 0 if w1 = w2; the WordNet-based rule score when it is above 0.2; infinite in all other cases.

These settings correspond to the substitution and deletion functions of the best system reported in (Kouleykov and Magnini 2005). We made experiments with the following system settings:

System 1: Insertion as idf. In this configuration, considered as a baseline for the Tree Edit Distance approach, the cost of an insertion is set to the idf of the word to be inserted. In this configuration the system needs a non-annotated corpus. The corpus we used contains 4.5 million news documents from the CLEF-QA (Cross Language Evaluation Forum) and TREC (Text Retrieval Conference) collections.

System 2: Fixed insertion cost. In this configuration we wanted to fix the insertion cost and compare the system performance against the baseline strategy based on idf calculated on a local corpus. The cost was fixed to 200.

System 3: Number of parents. In this configuration we used the number-of-parents formula described in Section 4 for calculating the insertion cost.

System 4: Number of children. In this configuration we used the number-of-children formula described in Section 4 for calculating the insertion cost.

System 5: Number of children + number of parents. In this configuration we used the sum of the number-of-children formula and the number-of-parents formula described in Section 4 for insertion.

For systems 1-5 an entailment relation is assigned to a T-H pair if the overall cost of the transformation is below a certain threshold, empirically estimated on the training data for each task of the training set. Such estimation is a simple learning algorithm with two features: the task of the example and the calculated distance.

System 6: Combined. In this configuration we used the distances calculated by all the previous systems as features of the sequential minimal optimization (SMO) algorithm, described in (Smola and Scholkopf 1998) and implemented in (Witten and Frank 2005), for training a support vector classifier. We used this run to test whether the different approaches to calculating the edit cost can perform in a complementary manner. The feature vector for a T-H pair contains the distances calculated by each system and the task to which the pair belongs. An entailment relation is assigned to a T-H pair if the example is classified as positive.

5.2 Results

Table 1 reports the accuracy calculated on the development and test sets using only the distance calculated by each separate system. The results of the baseline system on the test set represent the first submitted run; the combined system results are the results of the second submitted run.

                               System 1   System 2   System 3   System 4   System 5   System 6
    development                 0.581      0.591      0.600      0.579      0.598      0.637
    ten-fold cross-validation   0.578      0.560      0.590      0.579      0.590      0.613
    test                        0.572      0.570      0.582      0.541      0.571      0.605

    Table 1: Accuracy for the different systems on the development and test sets

Results show that the combined run performs better than the other systems. Combining different approaches for estimating the edit operation cost brings an improvement to the overall performance of the system. The different systems perform in a complementary manner: some T-H pairs are correctly assigned TRUE or FALSE because the majority of the systems classify them as such. The small difference in performance is due to the comparable performance of the systems used. In order to obtain optimal results, the system must run with a different set of cost functions on the different tasks of the dataset.

It is important to notice that System 3, based on the number of parents for insertion, outperforms the baseline System 1, which uses a corpus to estimate idf for the cost of insertion. This shows that using idf to estimate the cost of the insertion operation is not necessary to obtain good results.

Results also show that some of the systems overfit the training set. The distance calculated by System 2 depends on the average number of inserted words; thus, its lower performance on the training set is explained by the different value of this number for the two sets. The baseline system produces the most stable (non-overfitting) performance on the development and training sets.

Table 2 reports the results obtained by the two submitted runs.

                         IE       IR       QA       SUM      Total
    run1 (baseline)
      accuracy          0.5050   0.5500   0.5650   0.6700   0.5725
      precision         0.5095   0.4658   0.4658   0.7067   0.5249
    run2 (combined)
      accuracy          0.5200   0.6000   0.6000   0.7000   0.6050
      precision         0.4978   0.5352   0.5352   0.5240   0.5046

    Table 2: System Performance

Our system performs well on the Summarization task. Traditional summarization systems generate the report using words from the text they process; because of that, it was easy to distinguish the positive examples from the negative ones in the development and test sets. The main problem for the systems is represented by the Information Extraction task. Traditional IE systems approach the problem in a linear manner, in contrast to our parser-based approach. In contrast to the other three tasks, recognizing entailment for IE requires a large resource of complex entailment rules. The simple lexical entailment rules used in this version of the system cannot address the problem sufficiently. Although the combined run performs better than the baseline system, it has lower precision. This is due to the different algorithms used to calculate the overall score. A more careful combination of systems with respect to each task can improve the results.
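As a rough sketch of the System 6 combination step, the following uses scikit-learn's SVC in place of Weka's SMO implementation (an assumption made for the sake of a self-contained example; the feature values shown are invented for illustration):

    from sklearn.svm import SVC

    # One row per T-H pair: the distances of systems 1-5 plus the task,
    # here one-hot encoded as (IE, IR, QA, SUM). Values are illustrative only.
    X_train = [
        [0.41, 0.38, 0.35, 0.44, 0.39, 1, 0, 0, 0],  # an IE pair
        [0.72, 0.80, 0.77, 0.69, 0.75, 0, 0, 0, 1],  # a SUM pair
    ]
    y_train = [1, 0]  # 1 = entailment holds, 0 = it does not

    clf = SVC(kernel="linear")  # SMO solves the same SVM training problem
    clf.fit(X_train, y_train)

    new_pair = [[0.50, 0.55, 0.48, 0.52, 0.51, 0, 1, 0, 0]]  # an IR pair
    print(clf.predict(new_pair))  # entailment assigned iff classified positive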

6 Discussion and Future Work

We have presented an approach for recognizing textual entailment based on tree edit distance applied to the dependency trees of T and H. We have also investigated different ways of calculating the cost functions for the edit distance algorithm.

In the future we plan to extend the usage of WordNet as an entailment resource. Entailment rules found in entailment and paraphrasing resources can also be used.

A drawback of the tree edit distance approach presented here is that it is not able to observe the whole tree, but only the subtree of the processed node. For example, the cost of the insertion of a subtree in H could be smaller if the same subtree is deleted from T at a prior or later stage. A context-sensitive extension of the insertion and deletion modules would increase the performance of the system. In this direction, the negative examples (examples that do not have an entailment relation) in the development set on which the system reports a small distance can be used for extracting context-dependent rules that estimate the cost of the deletion operation.

In the future we plan to develop an evolutionary algorithm to combine the different functions for calculating the insertion and deletion costs.

References

Samuel Bayer, John Burger, Lisa Ferro, John Henderson and Alexander Yeh. MITRE's Submissions to the EU Pascal RTE Challenge. In Proceedings of the PASCAL Workshop on Recognizing Textual Entailment, Southampton, UK, 2005.

Ido Dagan and Oren Glickman. Generic Applied Modeling of Language Variability. In Proceedings of the PASCAL Workshop on Learning Methods for Text Understanding and Mining, Grenoble, 2004.

Jesus Herrera, Anselmo Peñas and Felisa Verdejo. Textual Entailment Recognition Based on Dependency Analysis and WordNet. In Proceedings of the PASCAL Workshop on Recognizing Textual Entailment, Southampton, UK, 2005.

Milen Kouleykov and Bernardo Magnini. Combining Lexical Resources with Tree Edit Distance for Recognizing Textual Entailment. In Proceedings of the First PASCAL Recognizing Textual Entailment Workshop, LNAI, Springer, 2005.

Dekang Lin. Dependency-based Evaluation of MINIPAR. In Proceedings of the Workshop on Evaluation of Parsing Systems at LREC-98, Granada, Spain, 1998.

Vasin Punyakanok, Dan Roth and Wen-tau Yih. Mapping Dependencies Trees: An Application to Question Answering. In Proceedings of AI & Math, 2004.

Rajat Raina, Aria Haghighi, Christopher Cox, Jenny Finkel, Jeff Michels, Kristina Toutanova, Bill MacCartney, Marie-Catherine de Marneffe, Christopher D. Manning and Andrew Y. Ng. Robust Textual Inference using Diverse Knowledge Sources. In Proceedings of the PASCAL Workshop on Recognizing Textual Entailment, Southampton, UK, 2005.

Alex J. Smola and Bernhard Scholkopf. A Tutorial on Support Vector Regression. NeuroCOLT2 Technical Report NC2-TR-1998-030, 1998.

Kaizhong Zhang and Dennis Shasha. Fast Algorithm for the Unit Cost Editing Distance Between Trees. Journal of Algorithms, vol. 11, pp. 1245-1262, December 1990.

Ian H. Witten and Eibe Frank. Data Mining: Practical Machine Learning Tools and Techniques, 2nd Edition. Morgan Kaufmann, San Francisco, 2005.