A Joint Sequence Translation Model with Integrated Reordering




A Joint Sequence Translation Model with Integrated Reordering. Nadir Durrani. Advisors: Alexander Fraser and Helmut Schmid. Institute for Natural Language Processing, University of Stuttgart

Machine Translation Problem: Automatic translation of the foreign text. (From Koehn 2008)

Machine Translation Problem: Automatic translation of the foreign text. Approach: The text is actually written in English, encoded with strange symbols; the problem is to decode these symbols. Warren Weaver (1949)

The Rosetta Stone. The Egyptian language was a mystery for centuries. The Rosetta Stone is written in three scripts: Hieroglyphic (used for religious documents), Demotic (the common script of Egypt), and Greek (the language of the rulers of Egypt at that time). (From Koehn 2008)

Parallel Data

Parallel Data. Parallel data is available for several language pairs: several million sentences for Arabic-English and Chinese-English, and for 11 European languages (Europarl). Monolingual data is available in even greater quantity.

Statistical Machine Translation (From Koehn 2008)

Statistical Machine Translation: Main Modules. Language Model: N-gram models are commonly used; hierarchical models can also serve as language models. Translation Model: modeling of translation units (words, phrases, trees) and reordering. Decoding: future cost estimation, search and pruning strategies.

Contributions of this Work. A new SMT model that handles local and long-distance reorderings uniformly, enables source-side discontinuities, enables source word deletion, overcomes the phrasal segmentation problem, considers all possible orderings during decoding (removing the distortion limit), and integrates reordering inside the translation model. A study of joint models for translation, comparing with: Moses (a phrase-based system, noisy channel model) and the N-gram joint source-channel model.

Road Map. Literature Review: Phrase-based SMT (overview, problems); Joint Models: N-gram based (overview, problems). This Work: model, contributions, shortcomings and future work. Summary.

Phrase-based SMT. State of the art for many language pairs. Morgen fliege ich nach Kanada zur Konferenz → Tomorrow I will fly to the conference in Canada (From Koehn 2008)

Phrase-based SMT: Local and Non-local Reorderings. State of the art for many language pairs. Morgen fliege ich nach Kanada zur Konferenz → Tomorrow I will fly to the conference in Canada (From Koehn 2008)

Problems. Local and non-local dependencies are handled differently. Gappy units are not allowed outside phrases. Deletions and insertions are handled only inside phrases. Weak reordering model. A hard reordering limit is necessary during decoding. Spurious phrasal segmentation.

Inconsistent Handling of Local and Non-local Dependencies. Er hat eine Pizza gegessen → He has eaten a pizza

Inconsistent Handling of Local and Non-local Dependencies. Er hat eine Pizza gegessen → He has eaten a pizza. Phrase Table: Er hat → He has; eine Pizza gegessen → eaten a pizza; eine Pizza → a pizza; gegessen → eaten

Inconsistent Handling of Local and Non-local Dependencies. Sie hat eine Pizza gegessen. Phrase Table: Er hat → He has; eine Pizza gegessen → eaten a pizza; eine Pizza → a pizza; gegessen → eaten. Reordering is internal to the phrase and is therefore handled by the translation model, independent of the reordering model!

Inconsistent Handling of Local and Non-local Dependencies. Sie hat eine Pizza gegessen. Er hat schon sehr viel Schoko gegessen. Phrase Table: Er hat → He has; eine Pizza gegessen → eaten a pizza; sehr viel → so much; schon → already; Schoko → chocolate. Non-local reorderings have to be handled by the reordering model!

Discontinuities Handled Only Inside Phrases. dann hat er ein Buch gelesen → then he read a book. Discontinuities can be handled inside a phrase!

Discontinuities Handled Only Inside Phrases. dann hat er ein Buch gelesen → then he read a book. Discontinuities can be handled inside a phrase! But for dann hat er eine Zeitung gelesen → then he has read a newspaper, the system cannot learn discontinuous phrases!

Deletions and Insertions Handled Only Inside Phrases. kommen sie mit → come with me: delete sie when followed by kommen or preceding mit. lesen sie bitte mit → please read with me: cannot delete sie in this context.

PBSMT Reordering Model (Galley et al. 2008). Er hat eine Pizza gegessen → He has eaten a pizza. Backward orientation (a_i vs. a_{i-1}): Orientation(gegessen, eaten) = Discontinuous. Forward orientation (a_i vs. a_{i+1}): Orientation(gegessen, eaten) = Swap.

PBSMT Reordering Model (Galley et al. 2008). Er hat schon sehr viel Schoko gegessen. Phrase Table: Er hat → He has; gegessen → eaten; sehr viel → so much; schon → already. Hypotheses: He has already eaten so much chocolate / He has eaten so much chocolate already / He has so much eaten chocolate already / He has already so much eaten chocolate. The orientation of (gegessen, eaten) does not distinguish these hypotheses; the language model is required to break the tie!

Distortion Limit. A hard distortion limit is required during decoding, or performance drops (reordering distance of &lt;= 6 words). Example: DE: ich(1) möchte(2) am(3) Montag(4) abend(5) für(6) unsere(7) Gäste(8) Truthahn(9) machen(10). EN: I [would like](2) [to make](10) turkey for our guests on Monday.

Spurious Phrasal Segmentation: Training

Spurious Phrasal Segmentation: Decoding. [Figure: phrase-based SMT stacks 1-5; the hypothesis He has eaten (a pizza) is reached through several different phrasal segmentations.]

Spurious Phrasal Segmentation: Decoding. [Figure: the same stacks compared with minimal units, where He has eaten a pizza has a single derivation.]

Road Map. Literature Review: Phrase-based SMT (overview, problems); Joint Models: N-gram based (overview, problems). This Work: model, contributions, shortcomings and future work. Summary.

Joint Probability Models for SMT. Noisy channel model: captures how the source string can be mapped to the target string. Joint model: how source and target strings can be generated simultaneously.

Joint Probability Models for SMT. Noisy channel model: captures how the source string can be mapped to the target string. Joint model: how source and target strings can be generated simultaneously. Direct phrase extraction through EM: Marcu and Wong (2002). Unigram orientation model: Tillmann and Xia (2003, 2004). N-gram based SMT: Marino et al. (2006). Many papers to date.

Joint Probability Models for SMT.
System           | Translation Units | Context | Reordering
Marcu and Wong   | Phrases           | Unigram | IBM Model 3
Tillmann and Xia | Phrases           | Unigram | Orientation Model
N-gram based SMT | Tuples            | Trigram | Source Linearization

Why Minimal Units? Modeling local and long-distance reorderings in a unified manner; avoiding the phrasal segmentation problem.

Road Map. Literature Review: Phrase-based SMT (overview, problems); Joint Models: N-gram based (overview, problems). This Work: model, contributions, shortcomings and future work. Summary.

N-gram based SMT. Based on bilingual n-grams over units called tuples. A tuple is extracted such that: a monotonic segmentation of each bilingual sentence pair is produced; no word inside a tuple is aligned to words outside the tuple; no smaller tuples can be extracted without violating the previous conditions.

Example. Er hat eine Pizza gegessen → He has eaten a pizza

Example. Er hat eine Pizza gegessen → He has eaten a pizza. Tuples: t1 = Er → He; t2 = hat → has; t3 = eine Pizza gegessen → eaten a pizza (Banchs et al. 2005)
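The three extraction conditions above can be sketched as a small routine. This is a minimal sketch, assuming every target word is aligned (as the alignment post-editing described later guarantees) and that unaligned source words fall into the enclosing tuple; the function and variable names are illustrative:

```python
def extract_tuples(src, tgt, links):
    """Extract minimal monotone bilingual tuples from a word-aligned pair.

    src, tgt: token lists; links: set of (src_index, tgt_index) pairs.
    A cut after source position i is valid when every alignment link lies
    entirely to the left or entirely to the right of the boundary, which
    yields the smallest tuples satisfying the monotonicity conditions.
    """
    tuples, s_start, t_start, max_t = [], 0, 0, -1
    for i in range(len(src)):
        # rightmost target position aligned to the source prefix 0..i
        max_t = max([t for s, t in links if s <= i], default=max_t)
        left = sum(1 for s, t in links if s <= i)
        right = sum(1 for s, t in links if t <= max_t)
        if left == right and max_t >= t_start:  # no link crosses this boundary
            tuples.append((src[s_start:i + 1], tgt[t_start:max_t + 1]))
            s_start, t_start = i + 1, max_t + 1
    return tuples

# The slide's example: gegessen aligns to eaten, so eine Pizza gegessen
# can only come out as one tuple.
tuples = extract_tuples(
    "Er hat eine Pizza gegessen".split(),
    "He has eaten a pizza".split(),
    {(0, 0), (1, 1), (4, 2), (2, 3), (3, 4)})
```

Running this yields the three tuples t1, t2, t3 from the slide.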

Translation Model

Tuple Unfolding (Source Linearization). t1 = Er → He; t2 = hat → has; t3 = eine Pizza gegessen → eaten a pizza. After unfolding the source into target order: t1 = Er → He; t2 = hat → has; t3 = gegessen → eaten; t4 = eine → a; t5 = Pizza → pizza.

Reordering: Re-write Rules. eine Pizza gegessen → gegessen eine Pizza (Crego et al. 2005). POS tags alleviate sparsity in the rules: DT NN VB → VB DT NN (Crego et al. 2006).
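Applying one such POS rule can be sketched as follows. This is a hedged illustration of the rule style, not the toolkit's actual implementation; it assumes the rule's left-hand side contains no repeated tags, and all names are illustrative:

```python
def apply_rewrite_rule(words, tags, lhs, rhs):
    """Reorder every span of `words` whose POS tags match `lhs` according
    to the permutation given by `rhs` (rules in the style of Crego et al.).
    Returns one reordered word sequence per matching span."""
    k = len(lhs)
    perm = [lhs.index(t) for t in rhs]  # assumes tags in lhs are unique
    results = []
    for i in range(len(tags) - k + 1):
        if tags[i:i + k] == lhs:
            results.append(words[:i] + [words[i + p] for p in perm] + words[i + k:])
    return results

# The rule DT NN VB -> VB DT NN applied to the slide's example sentence.
paths = apply_rewrite_rule(
    "Er hat einen Erdbeerkuchen gegessen".split(),
    ["PN", "AUX", "DT", "NN", "VB"],
    ["DT", "NN", "VB"], ["VB", "DT", "NN"])
```

Each returned sequence would become an additional path in the decoding lattice described next.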

Lattice-Based Decoding. The input sentence is handled as a word graph. A monotonic search graph consists of a single path: Er hat einen Erdbeerkuchen gegessen. The graph is extended with new arcs for reordering. Rule used: DT NN VB → VB DT NN.

Lattice-Based Decoding. The graph is extended with new arcs for reordering: Er hat (einen Erdbeerkuchen gegessen | gegessen einen Erdbeerkuchen).

Lattice-Based Decoding. The search graph is traversed to find the best path. Two paths for the discussed example: Er hat einen Erdbeerkuchen gegessen and Er hat gegessen einen Erdbeerkuchen.

Problems. Weak reordering model: search is performed only on pre-calculated reorderings, and the target side is ignored when pre-calculating them. Heavy reliance on POS tags for reordering; no ability to use lexical triggers. The extracted rule DT NN VB → VB DT NN fails for the example: Er hat schon sehr viel Schoko gegessen.

Road Map. Literature Review: Phrase-based SMT (overview, problems); Joint Models: N-gram based (overview, problems). This Work: model, contributions, shortcomings and future work. Summary.

This Work: Introduction. Generation of a bilingual sentence pair through a sequence of operations. Operation: translate or reorder. P(E,F,A) = probability of the operation sequence required to generate the bilingual sentence pair.

Joint Probability Models for SMT. An extension of N-gram based SMT: a sequence of operations rather than tuples; integrated reordering rather than source linearization + rule extraction.
System           | Translation Units               | Context | Reordering
Marcu and Wong   | Phrases                         | Unigram | IBM Model 3
Tillmann and Xia | Phrases                         | Unigram | Orientation Model
N-gram based SMT | Tuples                          | Trigram | Source Linearization
Our Model        | Operations encapsulating tuples | 9-gram  | Integrated Reordering

List of Operations. 4 translation operations: Generate (X, Y); Continue Source Cept; Generate Identical; Generate Source Only (X). 3 reordering operations: Insert Gap; Jump Back (N); Jump Forward.

Example. Er hat eine Pizza gegessen → He has eaten a pizza

Example. Er hat eine Pizza gegessen → He has eaten a pizza. Source and target are generated simultaneously; generation is done in the order of the target sentence; reorder when the source words are not in the same order.

Example. Er hat eine Pizza gegessen → He has eaten a pizza. Operation sequence: Generate (Er, He); Generate (hat, has); Insert Gap; Generate (gegessen, eaten); Jump Back (1); Generate (eine, a); Generate (Pizza, pizza).
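The operation sequence above can be replayed mechanically. Below is a minimal interpreter sketch covering Generate, Insert Gap and Jump Back; gappy cepts (Continue Source Cept) are omitted for brevity, and the class and method names are illustrative rather than taken from the actual system:

```python
class OperationSequence:
    """Replays translation/reordering operations to rebuild a sentence pair."""

    def __init__(self):
        self.src, self.tgt = [], []
        self.gaps = []   # insertion indices of open gaps, oldest first
        self.pos = 0     # index where the next source word is placed

    def generate(self, src_phrase, tgt_phrase):
        for w in src_phrase.split():
            self.src.insert(self.pos, w)
            # gaps strictly to the right of the insertion point shift along
            self.gaps = [g + 1 if g > self.pos else g for g in self.gaps]
            self.pos += 1
        self.tgt.extend(tgt_phrase.split())

    def insert_gap(self):
        self.gaps.append(self.pos)

    def jump_back(self, n):
        # move to the n-th closest open gap and close it
        self.pos = self.gaps.pop(-n)

# Replay the slide's operation sequence.
ops = OperationSequence()
ops.generate("Er", "He")
ops.generate("hat", "has")
ops.insert_gap()
ops.generate("gegessen", "eaten")
ops.jump_back(1)
ops.generate("eine", "a")
ops.generate("Pizza", "pizza")
```

After the replay, `ops.src` holds the source in its original order and `ops.tgt` the target, showing that the sequence indeed generates the pair jointly.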

Lexical Trigger. Er hat gegessen → He has eaten: Generate (Er, He); Generate (hat, has); Insert Gap; Generate (gegessen, eaten); Jump Back (1).

Generalizing to Unseen Context. Er hat einen Erdbeerkuchen gegessen → He has eaten a strawberry cake: Generate (Er, He); Generate (hat, has); Insert Gap; Generate (gegessen, eaten); Jump Back (1); Generate (einen, a); Generate (Erdbeerkuchen, strawberry cake).

Handling Non-local Reordering. Er hat schon sehr viel Schoko gegessen → He has eaten so much chocolate already: Generate (Er, He); Generate (hat, has); Insert Gap; Generate (gegessen, eaten); Jump Back (1); Insert Gap; Generate (sehr viel, so much); Continue Source Cept; Generate (Schoko, chocolate); Jump Back; Generate (schon, already).

Contributions Common to N-gram based SMT. Using minimal units to avoid spurious phrasal segmentation. Using source and target information with context. Handling local and long-distance reordering in a unified manner.

Contributions. Reordering decisions depend upon translation decisions and vice versa. Example: gap insertion is probable after Generate (hat, has), in order to move gegessen to the right position. Ability to learn lexical triggers, unlike N-gram based SMT, which uses POS-based re-write rules.

Source-Side Discontinuities. er hat ein Buch gelesen → he read a book

Source-Side Discontinuities. er hat ein Buch gelesen → he read a book. Operation sequence: Generate (er, he); Generate (hat [gelesen], read); Insert Gap; Continue Source Cept; Jump Back (1); Generate (ein, a); Generate (Buch, book).

Contribution: Handling Source-Side Discontinuities. hat ... gelesen → read: Generate (hat [gelesen], read); Insert Gap; Continue Source Cept.

Contribution: Handling Source-Side Discontinuities. Generate (hat [gelesen], read); Insert Gap; Continue Source Cept. Generalizing to unseen sentences: sie hat eine Zeitung gelesen. Compare phrases with gaps (Galley and Manning 2010).

Source Word Deletion Model. kommen sie mit → come with me

Source Word Deletion Model. kommen sie mit → come with me. Operations: Generate (kommen, come); Generate Source Only (sie); Generate (mit, with me).

Contribution. Ability to delete source words during decoding: lesen sie bitte mit → please read with me. Compare the models for deleting source words in Moses (Li et al. 2008).

Learning Phrases through Operation Sequences. über konkrete Zahlen nicht verhandeln wollen → do not want to negotiate on specific figures. Phrase pair: nicht verhandeln wollen ~ do not want to negotiate. Generate (nicht, do not); Insert Gap; Generate (wollen, want to); Jump Back (1); Generate (verhandeln, negotiate).

Model. A joint-probability model over operation sequences: an n-gram model P(E,F,A) = P(o_1, ..., o_J) = prod_j P(o_j | o_{j-n+1}, ..., o_{j-1}), where o_1 ... o_J is the operation sequence that generates the sentence pair.

Search. Search is defined as finding the target string (and its operation sequence) that maximizes the combination of the operation model and the language model. Language model (p_LM): 5-gram. Operation model and prior probability (p_pr): 9-gram. Implemented in a stack-based beam decoder.
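The decoder's score can be thought of as a weighted log-linear combination of the two models. This is a toy sketch under that assumption: the weights and per-event probability lists are placeholders (in the system the feature weights are tuned, see the tuning slide), and the function name is illustrative:

```python
import math

def hypothesis_score(op_probs, lm_probs, w_op=1.0, w_lm=1.0):
    """Weighted log-linear combination of the 9-gram operation model
    probabilities and the 5-gram language model probabilities for one
    hypothesis; higher is better."""
    op_logprob = sum(math.log(p) for p in op_probs)
    lm_logprob = sum(math.log(p) for p in lm_probs)
    return w_op * op_logprob + w_lm * lm_logprob
```

In the beam decoder, partial hypotheses in each stack would be compared (and pruned) by such a score plus a future cost estimate.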

Other Features. Length penalty: counts the number of target words produced. Deletion penalty: counts the number of source words deleted. Gap penalty: counts the number of gaps inserted. Open gap penalty: number of open gaps, paid once per translation operation. Reordering distance: distance from the last translated tuple. Gap width: distance from the first open gap. Lexical probabilities: source-to-target and target-to-source lexical translation probabilities.

Experimental Setup. Language pairs: German, Spanish and French to English. Data: 4th version of the Europarl corpus. Bilingual data: 200K parallel sentences (a reduced version of WMT 09), ~74K news commentary + ~126K Europarl. Monolingual data: 500K sentences = 300K from the monolingual news-commentary corpus + 200K from the English side of the bilingual corpus. Standard WMT 2009 sets for tuning and testing.

Training and Tuning. GIZA++ for word alignment. Heuristic modification of the alignments to remove target-side gaps and unaligned target words (see the paper for details). The word-aligned bilingual corpus is converted into an operation corpus (see the paper for details). SRILM toolkit to train n-gram language models with Kneser-Ney smoothing. Parameter tuning with Z-MERT.

Results. Baseline: Moses (with lexicalized reordering) with default settings and a 5-gram language model (same as ours); two baselines, one with no distortion limit and one with a reordering limit of 6. Two variations of our system: using no reordering limit, and using a gap width of 6 as a reordering limit.

Using Non-Gappy Source Cepts (BLEU)
Source  | Baseline, no reordering limit | Baseline, reordering limit 6 | This work, no reordering limit | This work, reordering limit 6
German  | 17.41 | 18.57 | 18.97 | 19.03
Spanish | 19.85 | 21.67 | 22.17 | 21.88
French  | 19.39 | 20.84 | 20.92 | 20.72
The Moses score without a reordering limit drops by more than a BLEU point. Our best system (no reordering limit) gives statistically significant improvements over the baseline with reordering limit 6 for German and Spanish, and comparable results for French.

Contribution: Removing the Hard Reordering Limit. All possible permutations are considered. Example: DE: 74% würden(3) gegen die studiengebühren, 79% gegen die praxisgebühr, und 84% gegen das krankenhaus-taggeld stimmen(20). EN: 74% would vote(20) against the tuition fees, 79% against the clinical practice, and 84% against the hospital daily allowance. Moses: 74% would against the tuition fees, 79% against the clinical practice, and 84% vote(20) against the hospital daily allowance. Compare increasing the distortion limit to 15 in Phrasal (Green et al. 2010).

Gappy + Non-Gappy Source Cepts (BLEU)
Source  | No reordering limit | Reordering limit 6 | No reordering limit + gappy units | Reordering limit 6 + gappy units
German  | 18.97 | 19.03 | 18.61 | 18.65
Spanish | 22.17 | 21.88 | 21.60 | 21.40
French  | 20.92 | 20.72 | 20.59 | 20.47

Sample Output. DE: letzte Woche kehrten die Kinder zu ihren Eltern zurück. EN: Last week, the children returned to their biological parents.

Why didn't Gappy Cepts Improve Performance? Using all source gaps explodes the search space. Number of tuples using 10-best translations:
Source  | Gaps      | No Gaps
German  | 965,515   | 256,992
Spanish | 1,705,156 | 313,690
French  | 1,473,798 | 343,220
The future cost is incorrectly estimated in the case of gappy cepts.

Heuristic. Use only the gappy cepts with scores better than the sum of their parts: log p(habe gemacht → made) > log p(habe → have) + log p(gemacht → made).
Source  | Gaps      | No Gaps | Heuristic
German  | 965,515   | 256,992 | 281,618
Spanish | 1,705,156 | 313,690 | 346,993
French  | 1,473,798 | 343,220 | 385,869
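The filtering condition can be written directly in log space; a one-function sketch in which the probabilities are placeholders and the function name is illustrative:

```python
import math

def keep_gappy_cept(p_joint, part_probs):
    """Keep a gappy cept only if its log probability beats the sum of the
    log probabilities of its non-gappy parts, e.g.
    log p(habe gemacht -> made) > log p(habe -> have) + log p(gemacht -> made)."""
    return math.log(p_joint) > sum(math.log(p) for p in part_probs)
```

For instance, a joint probability of 0.1 beats parts of 0.2 and 0.3 (whose product is 0.06), so that cept would be kept; a joint probability of 0.05 would not.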

With Gappy Source Cepts + Heuristic (BLEU)
Source  | No reordering limit + gappy | Reordering limit 6 + gappy | No reordering limit + gappy + heuristic | Reordering limit 6 + gappy + heuristic
German  | 18.61 | 18.65 | 18.91 | 19.23
Spanish | 21.60 | 21.40 | 21.93 | 21.79
French  | 20.59 | 20.47 | 20.87 | 20.75

Road Map. Literature Review: Phrase-based SMT (overview, problems); Joint Models: N-gram based (overview, problems). This Work: model, contributions, shortcomings and future work. Summary.

Shortcomings of this Model. A more difficult search problem (decoding, future cost estimation). Target unaligned words are not allowed. Target-side gaps are not allowed.

More Difficult Search: Decoding. [Figure: decoding stacks 1-3 for nach meine meinung → in my opinion; nach alone translates as after, to, by or in, and meine as my, so many more partial hypotheses compete than in phrase-based decoding.]

More Difficult Search: Future Cost Estimation. Future cost estimate available to phrase-based SMT: translation model: p(in my opinion | nach meine meinung); language model: p(in) * p(my | in) * p(opinion | in, my). Future cost estimate available to N-gram SMT: translation model: p(after, nach) * p(my, meine) * p(opinion, meinung); language model: p(after) * p(my) * p(opinion).

Shortcomings of this Model. A more difficult search problem (decoding, future cost estimation). Target unaligned words are not allowed. Model: introduce an operation Generate Target Only (X). Decoding: spurious generation of target words? Target-side gaps are not allowed.

Future Work: Handling of Target Gaps. German→English: hat ein Buch gelesen → read a book: Generate (hat [gelesen], read); Insert Gap; Continue Source Cept. English→German: read a book → hat ein Buch gelesen: Generate (read, hat ...). Problem: the target is generated left to right; when to generate gelesen?

Future Work: Handling Target Gaps. Solution 1: Generate (read, hat [gelesen]); Continue Target Cept (Galley and Manning 2010). Solution 2: use split rules (Crego and Yvon 2009): Generate (read1, hat); Generate (read2, gelesen). Problem: how to split read into read1 and read2 during decoding? (read a book → read1 a book read2.) Solution 3: introduce an Insert Gap operation on the target side (read → hat ... gelesen); parsing-based decoders.

Future Work: Factored Machine Translation (Koehn and Hoang 2007). Using POS tags and morphological information: er hat gegessen (PN AUX VB) → he has eaten (PN AUX VB). Helpful for translating: Ich habe eine Tasse Tee gemacht.

Using Source-Side Information. Source-side language model; source-side syntax.

Using Source-Side Information. Source-side language model: gestern hat er eine Pizza gegessen → he ate a pizza yesterday.

Source-Side Language Model. Using a source-side LM to guide reordering. S: gestern hat er eine Pizza gegessen → he ate a pizza yesterday. S': er hat gegessen eine Pizza gestern. Build a language model on S' and use it as a feature (Feng et al. 2010).

Source-Side Syntax. Finding the path of a jump.

Source-Side Syntax. Finding the path of a jump, e.g. (NN, ADV-MO): L VP-OC S ADV-MO. Features: POS tags of the source and target words; direction of the jump (left or right); the sequence of parse labels encountered on the traversal from start to end.

Summary. A new SMT model that integrates translation and reordering in a single generative story and uses bilingual context (like N-gram based SMT). Improved reordering mechanism: models local and non-local dependencies uniformly (unlike PBSMT); takes previously translated words into account (unlike PBSMT); has the ability to use lexical triggers (unlike N-gram SMT); considers all possible reorderings during search (unlike N-gram SMT).

Summary. A new SMT model that enables source-side gaps and source word deletion, does not suffer from spurious phrasal segmentation (like N-gram SMT), and does not use a hard distortion limit. Compared with the state-of-the-art Moses system, it gives significantly better results for German-to-English and Spanish-to-English, and comparable results for French-to-English.

Thank you. Questions?

Search and Future Cost Estimation. The search problem is much harder than in PBSMT; a larger beam is needed to produce translations similar to PBSMT. Example: zum Beispiel → for example vs. zum → for, Beispiel → example. Problems with future cost estimation: language model probability (phrase-based: p(for) * p(example | for); our model: p(for) * p(example)); future cost for reordering operations; future cost for the features gap penalty, gap width and reordering distance.

Future Cost Estimation with Source-Side Gaps. Future cost estimation with source-side gaps is problematic. Future cost for bigger spans: cost(I,K) = min over J in I..K-1 of ( cost(I,J) + cost(J+1,K) ). Example over positions 1..8: cost(1,8) = min( cost(1,1) + cost(2,8), cost(1,2) + cost(3,8), ..., cost(1,7) + cost(8,8) ).
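For contiguous spans this recursion is the standard bottom-up future-cost table of phrase-based decoding. A sketch seeded with per-word costs only (seeding with cheaper multi-word cepts works the same way; it is the gappy cepts that break the recursion, as the next slides argue); names are illustrative:

```python
def future_cost_table(word_costs):
    """cost[i][k] = cheapest way to cover the contiguous span i..k,
    computed bottom-up as cost(I,K) = min over J of cost(I,J) + cost(J+1,K)."""
    n = len(word_costs)
    cost = [[float("inf")] * n for _ in range(n)]
    for i in range(n):
        cost[i][i] = word_costs[i]
    for length in range(2, n + 1):          # span length
        for i in range(n - length + 1):     # span start
            k = i + length - 1              # span end
            for j in range(i, k):           # split point
                cost[i][k] = min(cost[i][k], cost[i][j] + cost[j + 1][k])
    return cost
```

With additive per-word costs the table simply sums them; the point is that every entry assumes the span is covered by contiguous pieces, which a cept like 1..4..8 violates.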

Future Cost Estimation with Source-Side Gaps. The recursion does not work for cepts with gaps. Suppose the best way to cover words 1, 4 and 8 is through the cept 1..4..8; then cost(1,8) = ? After the computation of cost(1,8), we do another pass to find min( cost(1,8), cost(2,3) + cost(5,7) + cost_of_cept(1..4..8), cost(3,7) + cost_of_cept(1..2..8), ... ).

Future Cost Estimation with Source-Side Gaps. Still problematic when gappy cepts interleave. Example: suppose the best way to cover 1 and 5 is through the cept 1..5. The modification cannot capture that the best cost = cost_of_cept(1..5) + cost_of_cept(2..4..8) + cost(3,3) + cost(6,7).

Future Cost Estimation with Source-Side Gaps. Gives an incorrect cost if the coverage vector already covers a word between the gappy cept. Example over positions 1..8: the decoder has covered 3. The future cost estimate cost(1,2) + cost(4,8) is wrong; the correct estimate is cost_of_cept(1..4..8) + cost(2,2) + cost(5,8). There is no efficient way to cover all possible permutations.

Target Side Gaps & Unaligned Words Our model allows neither target-side gaps nor target-unaligned words. Post-editing of the alignments is a 3-step process. Step I: remove all target-side gaps. For a gappy alignment, the link to the least frequent target word is identified, and the group of links that contains this word is retained. Example: source A B C D, target U V W X Y Z. Target-side discontinuity!

Target Side Gaps & Unaligned Words After Step I: source A B C D, target U V W X Y Z. No target-side gaps remain, but there are still target-unaligned words!
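Step I above can be sketched as a small helper (hypothetical names and signature, not the paper's code): split the target positions aligned to one source word into contiguous groups and keep only the group containing the least frequent target word.

```python
def remove_target_gap(target_positions, target_words, word_freq):
    """Step I sketch: resolve a target-side gap by keeping only the
    contiguous group of links that contains the least frequent
    target word. target_positions are the positions aligned to one
    source word; word_freq gives corpus frequencies."""
    groups = []
    for t in sorted(target_positions):
        if groups and t == groups[-1][-1] + 1:
            groups[-1].append(t)        # extends the current group
        else:
            groups.append([t])          # starts a new group
    if len(groups) == 1:                # no target-side gap
        return groups[0]
    rare = min(target_positions, key=lambda t: word_freq[target_words[t]])
    return next(g for g in groups if rare in g)
```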

Continued After Step I: source A B C D, target U V W X Y Z. Step II: count over the training corpus to find the attachment preference of each unaligned word: Count (U,V) = 1, Count (W,X) = 1, Count (X,Y) = 0.5, Count (Y,Z) = 0.5

Continued Step III: attach target-unaligned words to the right or to the left based on the collected counts. After Step III: source A B C D, target U V W X Y Z
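Steps II and III can be sketched together (illustrative names, not the paper's API): each unaligned target word attaches to whichever neighbour it was counted with more often over the training corpus.

```python
def attach_unaligned(words, aligned, pair_count):
    """Step III sketch: for each unaligned target word, attach it to
    its left or right neighbour, whichever adjacent word pair was
    counted more often in Step II. Returns a map from unaligned
    position to the neighbour position it attaches to."""
    attach_to = {}
    for i, w in enumerate(words):
        if i in aligned:
            continue
        left = pair_count.get((words[i - 1], w), 0) if i > 0 else -1
        right = pair_count.get((w, words[i + 1]), 0) if i + 1 < len(words) else -1
        attach_to[i] = i - 1 if left >= right else i + 1
    return attach_to
```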

List of Operations 4 translation operations: Generate (X,Y), Continue Source Cept, Generate Identical, Generate Source Only (X). 3 reordering operations: Insert Gap, Jump Back (N), Jump Forward. Example: Generate (gegessen, eaten)

List of Operations Example: Generate (Inflationsraten, inflation rate)

List of Operations Example (discontinuous source cept): kehrten zurück, returned: Generate (kehrten zurück, returned), Insert Gap, Continue Source Cept

List of Operations Example: Generate Identical is used instead of Generate (Portland, Portland) if count (Portland) = 1
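The Generate Identical condition above can be sketched as a tiny decision helper (hypothetical name; the singleton threshold follows the slide's count (Portland) = 1 condition):

```python
def translation_operation(src, tgt, corpus_count):
    """Sketch: a singleton word that translates as itself is emitted
    with Generate Identical, which lets the model generalise to
    unseen names; everything else uses plain Generate."""
    if src == tgt and corpus_count.get(src, 0) == 1:
        return "Generate Identical"
    return f"Generate ({src}, {tgt})"
```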

List of Operations Example: kommen Sie mit, come with me: Generate Source Only (Sie)

List of Operations Example: über konkrete Zahlen nicht verhandeln wollen, do not want to negotiate on specific figures. Gap #1 inserted; nicht generates "do not"

List of Operations Example continued: Gap #2 inserted; nicht ... wollen now cover "do not want to"; Jump Back (1)

List of Operations Example continued: Gap #1 remains; nicht verhandeln wollen now cover "do not want to negotiate"

List of Operations Example continued: Gap #1 remains; nicht verhandeln wollen cover "do not want to negotiate"; Jump Back (1)

List of Operations Example continued: über konkrete Zahlen nicht verhandeln wollen, do not want to negotiate on specific figures

List of Operations Example continued: über konkrete Zahlen nicht verhandeln wollen, do not want to negotiate on specific figures; Jump Forward

List of Operations Example complete: über konkrete Zahlen nicht verhandeln wollen . / do not want to negotiate on specific figures .
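The reordering walk-through above can be replayed with a minimal interpreter. This is a simplified sketch, not the paper's exact formalism: Generate Identical, Generate Source Only and Continue Source Cept are folded into plain Generate, and multi-word cepts are passed as single Generate calls.

```python
GAP = object()  # marker for an open gap on the source side

def replay(ops):
    """Replay an operation sequence, rebuilding both sides. Insert Gap
    leaves a marker; Jump Back (N) moves to the Nth rightmost open gap
    and closes it; Jump Forward moves past the last written word.
    Assumes every gap is eventually closed."""
    cells, target, pos = [], [], 0
    for op, *args in ops:
        if op == "Generate":                 # Generate (X, Y)
            src, tgt = args
            cells.insert(pos, src)
            pos += 1
            target.extend(tgt.split())
        elif op == "Insert Gap":
            cells.insert(pos, GAP)
            pos += 1
        elif op == "Jump Back":              # to Nth rightmost open gap
            n = args[0]
            gaps = [i for i, c in enumerate(cells) if c is GAP]
            pos = gaps[-n]
            cells.pop(pos)                   # close the gap
        elif op == "Jump Forward":
            pos = len(cells)
    return " ".join(cells), " ".join(target)

# The slides' example, as one operation sequence:
ops = [
    ("Insert Gap",),
    ("Generate", "nicht", "do not"),
    ("Insert Gap",),
    ("Generate", "wollen", "want to"),
    ("Jump Back", 1),
    ("Generate", "verhandeln", "negotiate"),
    ("Jump Back", 1),
    ("Generate", "über", "on"),
    ("Generate", "konkrete", "specific"),
    ("Generate", "Zahlen", "figures"),
    ("Jump Forward",),
    ("Generate", ".", "."),
]

src, tgt = replay(ops)
```

Replaying this sequence yields the source in its original order, "über konkrete Zahlen nicht verhandeln wollen .", and the target "do not want to negotiate on specific figures .", mirroring the gap and jump annotations on the slides.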