Pairwise distance methods

Peter Beerli, October 26, 2005

When we infer a tree from genetic data using parsimony, we minimize the amount of change along the branches of the tree. Similarly, when we use the likelihood principle, we minimize change conditional on a specific mutation model. The mutation model is crucial: it can take into account that we do not observe all substitution events, because recent events might hide ancient ones. Parsimony therefore undercounts the number of changes and may yield a tree that is shorter than the true tree. Likelihood does not escape this problem either: it gives a tree that is shorter than or the same length as the true tree.

An alternative to likelihood or parsimony is an approach based on evolutionary distances between pairs of sequences, where the distance accounts for all unseen events, for example by using mutation models similar to those used in likelihood. Pairwise distance methods are not so popular anymore because they are outperformed by likelihood methods. Pairwise methods evaluate all pairs of sequences and transform the differences into a distance. This is essentially a data reduction from a difference with possibly many states to a single number. Combining these distances to estimate a tree must therefore be less powerful than the full likelihood approach. In addition, an identical distance can be generated from different sequence pairs, and once we analyze only the distance matrix that difference is lost. Using the number of different sites as a distance measure makes it immediately clear that we can arrive at the same measure from different sequences. Distance methods still have their merit: once the distance matrix is calculated, tree building can be very fast, and under many circumstances the trees generated with such methods are not all that terrible and are often identical to the likelihood tree.

Table 1: Example of a problematic data set for distance methods; the distance used simply counts the sites that differ between pairs.

Individual  Sequence    one  two  three  four
one         ATTAGC       -    1    2     3
two         ATTGGC            -    1     2
three       ATGGGC                 -     1
four        GTGGGC                       -

1 Additive distances

If we could estimate branch lengths on a tree with absolute certainty, all distances on the tree would be additive. For example, the distances between all the tips of the tree in Figure 1 are

d_AB = v1 + v2
d_AC = v1 + v3 + v4
d_AD = v1 + v3 + v5
d_BC = v2 + v3 + v4
d_BD = v2 + v3 + v5
d_CD = v4 + v5

Additive trees satisfy the four-point metric condition: for any four taxa A, B, C, and D,

d_AB + d_CD <= max(d_AC + d_BD, d_AD + d_BC)

Figure 1: Additive tree.
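As a small sketch of these relations, the following builds the six pairwise distances of the tree in Figure 1 from hypothetical branch lengths v1..v5 (the numeric values are illustrative, not from the text) and checks the four-point condition:

```python
# Additive distances on the 4-taxon tree of Figure 1,
# using hypothetical branch lengths v1..v5.
v1, v2, v3, v4, v5 = 1.0, 2.0, 3.0, 4.0, 5.0

d = {
    ("A", "B"): v1 + v2,
    ("A", "C"): v1 + v3 + v4,
    ("A", "D"): v1 + v3 + v5,
    ("B", "C"): v2 + v3 + v4,
    ("B", "D"): v2 + v3 + v5,
    ("C", "D"): v4 + v5,
}

def four_point_ok(d):
    """Four-point condition: d_AB + d_CD <= max(d_AC + d_BD, d_AD + d_BC)."""
    s1 = d[("A", "B")] + d[("C", "D")]
    s2 = d[("A", "C")] + d[("B", "D")]
    s3 = d[("A", "D")] + d[("B", "C")]
    return s1 <= max(s2, s3)

print(four_point_ok(d))  # holds for any nonnegative branch lengths
```

Note that the two "crossing" sums d_AC + d_BD and d_AD + d_BC are always equal on an additive tree (both contain the internal branch twice), which is why the condition singles them out.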

2 Ultrametric trees

Ultrametric trees have additive properties and also obey the ultrametric condition. For the three-taxon tree in Figure 2,

d_AB = v1 + v2 + v3
d_AC = v1 + v2 + v4
d_BC = v3 + v4

and the ultrametric condition requires

v3 = v4
v1 = v2 + v3 = v2 + v4

Ultrametric trees are often described as molecular clock trees, although such trees do not necessarily assume a linear change of the mutation rate through time.

Figure 2: Ultrametric tree.

2.1 Additive tree method

The discussion of additive trees is all very well, but unfortunately, due to the finiteness of the data, random fluctuations will cause deviations from perfect additivity. Methods were derived that use this deviation (or distortion) from perfect additivity as an optimality criterion. Fitch and Margoliash and others derived a method that minimizes the following objective function:

E = sum_{i=1}^{T-1} sum_{j=i+1}^{T} w_ij |d_ij - p_ij|^alpha    (1)

where E is the error of fitting the distance estimates to the tree, T is the number of taxa, d_ij is a distance measure between the taxa i and j, p_ij is the length of the path connecting i and j, and w_ij is a

weighting for the pair of taxa i and j; alpha can take values such as 1 or 2. With a value of alpha = 2 one performs a least-squares minimization; if alpha = 1, the absolute differences are minimized. The weightings w_ij can accommodate previous knowledge about the data, but often it is unclear what we know. The most common weightings are:

w_ij = 1           all distance errors are the same
w_ij = 1/d_ij^2    the percentage error is the same
w_ij = 1/d_ij      the square root of the error is the same
w_ij = 1/sigma_ij^2    the error is inversely proportional to the expected variance of the measurement of d_ij

Missing data can easily be accommodated by setting the w_ij involving missing data to zero. If the variance sigma_ij^2 is known, that weighting would be preferred. Problems might arise when identical sequences are in the data set.

For an unrooted tree there are 2T - 3 independent branches defining the p_ij values, and there are T(T - 1)/2 distinct distances. We can represent the tree as an indicator matrix A with T(T - 1)/2 rows and 2T - 3 columns. An element of this matrix is 1 if branch k is part of the path connecting taxon i to taxon j, and zero otherwise. The p values can now be expressed as Av = p; for the tree in Figure 1,

| 1 1 0 0 0 |          | p_AB |
| 1 0 1 1 0 | | v1 |   | p_AC |
| 1 0 1 0 1 | | v2 |   | p_AD |
| 0 1 1 1 0 | | v3 | = | p_BC |    (2)
| 0 1 1 0 1 | | v4 |   | p_BD |
| 0 0 0 1 1 | | v5 |   | p_CD |

If the distances were additive, then p = d for all pairs and we could solve the equation directly. Due to the imperfection of the data, we instead use formula 2 to eliminate p in formula 1. Assuming alpha = 2 and w_ij = 1, we can find the branch lengths as

v = (A^T A)^{-1} (A^T d)

We still need an appropriate search strategy to find the best tree; any heuristic strategy can be used. Sometimes the solution of the above equation results in negative branch lengths. Several strategies for handling this have been described; most simply, the negative branch lengths are set to zero without adjusting the other branch lengths. If the data do not produce negative branch lengths, the error estimate E will be accurate, but when negative branch lengths are encountered, the estimate of E will be too low.
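The least-squares step can be sketched as follows. This is a minimal example using NumPy (assumed available), with the indicator matrix A of equation 2 and distances generated from hypothetical branch lengths; since these distances are exactly additive, the fit recovers the branch lengths perfectly:

```python
import numpy as np

# Indicator matrix A from equation 2: rows are the pairs AB, AC, AD, BC, BD, CD;
# columns are the five branches v1..v5 of the tree in Figure 1.
A = np.array([
    [1, 1, 0, 0, 0],   # p_AB = v1 + v2
    [1, 0, 1, 1, 0],   # p_AC = v1 + v3 + v4
    [1, 0, 1, 0, 1],   # p_AD = v1 + v3 + v5
    [0, 1, 1, 1, 0],   # p_BC = v2 + v3 + v4
    [0, 1, 1, 0, 1],   # p_BD = v2 + v3 + v5
    [0, 0, 0, 1, 1],   # p_CD = v4 + v5
], dtype=float)

# Hypothetical branch lengths; the observed distances are exactly additive here.
v_true = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
d = A @ v_true

# v = (A^T A)^{-1} (A^T d), computed with a numerically stable solver.
v_hat = np.linalg.lstsq(A, d, rcond=None)[0]

# Error E of equation 1 with alpha = 2 and w_ij = 1.
E = float(np.sum((d - A @ v_hat) ** 2))
print(v_hat, E)
```

With noisy (non-additive) distances, v_hat minimizes E, and any negative entries would be clipped to zero as described above.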

Errors in this procedure come from two sources: (1) we assume each pairwise distance is independent of the others, which is certainly untrue; (2) any error in the data is amplified by the pairwise use and underestimates similarity by state compared to similarity by descent (homoplasy).

2.2 Minimal evolution

Minimal evolution sets the weights to 1 and alpha = 2 and simply minimizes the sum of all branch lengths (instead of each individual branch by itself):

L = sum_{i=1}^{2T-3} v_i    (3)

This was described by Kidd and Sgaramella-Zonta (1971). It was (re)described in 1992 by Rzhetsky and Nei as minimal evolution in a very similar form:

L = sum_{i=1}^{2T-3} |v_i|    (4)

The newer version takes the absolute value, which seems a big drawback, but under realistic conditions this is often of no big concern. Some proponents claim that ME is superior to other techniques, although others have shown that the simple Fitch-Margoliash method works as well as ME with enough data.

3 Distance measures

To get a distance we need some model of change between the two sequences. One possible way to go about this is to look at the frequency of changes between the two taxa:

        | n_AA/N  n_AC/N  n_AG/N  n_AT/N |   | a b c d |
F_XY =  | n_CA/N  n_CC/N  n_CG/N  n_CT/N | = | e f g h |
        | n_GA/N  n_GC/N  n_GG/N  n_GT/N |   | i j k l |
        | n_TA/N  n_TC/N  n_TG/N  n_TT/N |   | m n o p |

Ambiguities are not coded correctly using the above scheme; the uncertainties might be counted more than once. For example, a purine would be an A or a G, and this will overestimate the similarity. The simplest distance is the p-distance, or dissimilarity D. It is

d_XY = b + c + d + e + g + h + i + j + l + m + n + o
     = 1 - (a + f + k + p)

The mutation models specified earlier can be used to generate distances. For example, with the dissimilarity D = 1 - (a + f + k + p), the Jukes-Cantor distance is

d_XY = -(3/4) ln(1 - (4/3) D)

4 Actual strategies to find optimal trees with distance methods

4.1 Neighbor-joining

The neighbor-joining technique is akin to clustering methods. It was developed by Saitou and Nei (1987) and uses a distance matrix to construct a tree. It assumes that the data are close to an additive tree, but it does not assume a molecular clock. NJ is a special case of the star decomposition algorithm described earlier. It starts with a star phylogeny and then uses the smallest rate-corrected distance in the distance matrix to find the next pair of nodes to move out of the multifurcation. One then needs to recalculate the distance matrix, which now contains one tip less.
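Before turning to the algorithm itself, the Jukes-Cantor correction defined above can be sketched as:

```python
import math

def jukes_cantor(D):
    """Jukes-Cantor distance d_XY = -(3/4) ln(1 - (4/3) D),
    where D is the p-distance (dissimilarity)."""
    if D >= 0.75:
        raise ValueError("Jukes-Cantor distance is undefined for D >= 3/4")
    return -0.75 * math.log(1.0 - (4.0 / 3.0) * D)

# The corrected distance always exceeds the raw dissimilarity, because it
# accounts for unseen multiple substitutions at the same site.
print(jukes_cantor(0.5))
```

Note the saturation at D = 3/4: under the Jukes-Cantor model, two random sequences agree at one quarter of the sites by chance, so dissimilarities at or above 3/4 carry no finite distance.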

Algorithm 1: Neighbor joining

1. Given a matrix of pairwise distances (d_ij), for each terminal node i calculate its net divergence r_i from all other taxa using the formula

   r_i = sum_{k=1}^{N} d_ik

   where N is the number of terminal nodes in the current matrix. Note the assumption that d_ii = 0; otherwise the summation would need to skip over k = i.

2. Create a rate-corrected distance matrix M in which the elements are defined as

   M_ij = d_ij - (r_i + r_j)/(N - 2)

   Only the entries with i != j are interesting; in fact, only the minimum needs to be known.

3. Define a new node u whose three branches join nodes i, j, and the rest of the tree. Define the lengths of the branches from u to i and from u to j as

   v_iu = d_ij/2 + (r_i - r_j)/(2(N - 2))
   v_ju = d_ij - v_iu

4. Define the distance from u to each other terminal node k as

   d_ku = (d_ik + d_jk - d_ij)/2

5. Remove the distances to nodes i and j from the data matrix and decrease N by 1.

6. If more than two nodes remain, go back to step 1. Otherwise the tree is fully defined except for the last branch length, which is v_ij = d_ij.
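The six steps above can be sketched as a direct Python implementation. This is a minimal sketch, assuming the input is a symmetric distance matrix with zero diagonal given as a dict of dicts keyed by taxon labels; the internal node names "u1", "u2", ... are invented here for illustration. It returns the tree as a list of (node, node, branch_length) edges:

```python
def neighbor_joining(d):
    d = {i: dict(row) for i, row in d.items()}   # work on a copy
    nodes = list(d)
    edges = []
    counter = 0
    while len(nodes) > 2:
        N = len(nodes)
        # Step 1: net divergences r_i (d_ii = 0, so no need to skip k = i).
        r = {i: sum(d[i][k] for k in nodes) for i in nodes}
        # Step 2: find the pair minimizing the rate-corrected distance M_ij.
        best = None
        for a in range(N):
            for b in range(a + 1, N):
                i, j = nodes[a], nodes[b]
                M = d[i][j] - (r[i] + r[j]) / (N - 2)
                if best is None or M < best[0]:
                    best = (M, i, j)
        _, i, j = best
        # Step 3: new node u and the two branch lengths joining i and j to it.
        counter += 1
        u = f"u{counter}"
        v_iu = d[i][j] / 2 + (r[i] - r[j]) / (2 * (N - 2))
        v_ju = d[i][j] - v_iu
        edges += [(u, i, v_iu), (u, j, v_ju)]
        # Step 4: distances from u to every remaining terminal node.
        d[u] = {u: 0.0}
        for k in nodes:
            if k not in (i, j):
                d[u][k] = d[k][u] = (d[i][k] + d[j][k] - d[i][j]) / 2
        # Step 5: remove i and j; the node count drops by one overall.
        nodes = [k for k in nodes if k not in (i, j)] + [u]
        for k in nodes:
            d[k].pop(i, None)
            d[k].pop(j, None)
    # Step 6: the last branch connects the two remaining nodes.
    i, j = nodes
    edges.append((i, j, d[i][j]))
    return edges

# Distances from the additive tree of Figure 1 with hypothetical branch
# lengths (1, 2, 3, 4, 5); NJ recovers exactly those five branch lengths.
dmat = {
    "A": {"A": 0, "B": 3, "C": 8, "D": 9},
    "B": {"A": 3, "B": 0, "C": 9, "D": 10},
    "C": {"A": 8, "B": 9, "C": 0, "D": 9},
    "D": {"A": 9, "B": 10, "C": 9, "D": 0},
}
edges = neighbor_joining(dmat)
print(edges)
```

On additive input such as this, the reconstructed branch lengths match the generating tree; on real (noisy) distances, NJ still returns a fully resolved unrooted tree, which is why it is the standard fast starting point among distance methods.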