Metodi Numerici per la Bioinformatica




Metodi Numerici per la Bioinformatica: Biclustering (A.A. 2008/2009)

Outline
- Motivation
- What is Biclustering?
- Why Biclustering and not just Clustering?
- Bicluster Types
- Algorithms

Motivations
Gene expression matrices have been extensively analyzed using clustering in one of two dimensions:
- The gene dimension: analysis of the expression patterns of genes by comparing rows in the matrix.
- The condition dimension: analysis of the expression patterns of samples by comparing columns in the matrix.

Motivations
Analysis via clustering makes several a priori assumptions that may not be adequate in all circumstances:
- Clustering can be applied to either genes or samples, implicitly directing the analysis to a particular aspect of the system under study (e.g., groups of patients or groups of co-regulated genes).
- Clustering algorithms usually seek a disjoint cover of the set of elements, requiring that no gene or sample belongs to more than one cluster.

Motivations
The results of applying standard clustering techniques to genes are limited by the existence of experimental conditions under which the activity of genes is uncorrelated. Many activation patterns are common to a group of genes only under specific experimental conditions, and discovering such local expression patterns may be the key to uncovering many genetic pathways that are not apparent otherwise. It is therefore highly desirable to move beyond the clustering paradigm and develop approaches capable of discovering local patterns in microarray data.

What is Biclustering?
BICLUSTER: a submatrix spanned by a set of genes (rows) and a set of samples (columns). Given a gene expression matrix, it is possible to characterize the biological phenomena it embodies by a collection of biclusters, each representing a different type of joint behavior of a set of genes in a corresponding set of samples.


What is Biclustering?
Given the matrix A = (X, Y), with X the set of rows and Y the set of columns, let I be a subset of rows and J a subset of columns:
- (I, Y) is a subset of rows that exhibit similar behavior across the set of all columns: a cluster of rows.
- (X, J) is a subset of columns that exhibit similar behavior across the set of all rows: a cluster of columns.

What is Biclustering?
Biclustering goals:
- Find a set of significant biclusters in a matrix: identify sub-matrices (subsets of rows and subsets of columns) with interesting properties.
- Perform simultaneous clustering on the row and column dimensions of the gene expression matrix, instead of clustering the rows and columns separately.
Gene expression data analysis: identify subgroups of genes and subgroups of conditions where the genes exhibit highly correlated activities for every condition.

Why Biclustering and not just Clustering?
Clustering (general models):
- Can be applied to either the rows or the columns of the data matrix, separately.
- Produces either clusters of rows (subgroups of rows) or clusters of columns (subgroups of columns).
Biclustering (local models):
- Performs simultaneous clustering of both rows and columns of the data matrix.
- Produces biclusters (subgroups of rows together with subgroups of columns).

Why Biclustering and not just Clustering?
Unlike clustering, biclustering identifies groups of genes that show similar activity patterns under a specific subset of the experimental conditions. Biclustering is the key technique to use when:
- Only a small set of the genes participates in a cellular process of interest.
- An interesting cellular process is active only in a subset of the conditions.
- A single gene may participate in multiple pathways that may or may not be co-active under all conditions.

Biclustering vs Clustering
[Figure: a gene expression matrix with genes A-M and conditions 1-10. Clustering selects genes {A, B, C, D, K, L} across all conditions; the bicluster covers genes {A, B, C, D, E, F} under conditions {1, 2, 3, 5, 7, 10}.]
Similarity does not exist over all attributes. Solution: cluster both rows and columns simultaneously, i.e., biclustering.

Biclustering characteristics
Biclustering algorithms should identify groups of genes and conditions obeying the following rules:
- A cluster of genes should be defined with respect to only a subset of the conditions.
- A cluster of conditions should be defined with respect to only a subset of the genes.
- The clusters should not be exclusive and/or exhaustive: there are no a priori constraints on the organization of biclusters, so a gene or condition should be able to belong to more than one bicluster or to no bicluster at all.
The lack of structural constraints on biclustering solutions allows greater freedom, but is consequently more vulnerable to overfitting: biclustering algorithms must guarantee that the output biclusters are meaningful, via an accompanying statistical model or a heuristic scoring method that defines which of the many possible submatrices represent significant biological behavior.

Biclustering: clinical application
In clinical applications, gene expression analysis is done on tissues taken from patients with a medical condition. Using such assays, biologists have identified molecular fingerprints that can help in the classification and diagnosis of the patient status and guide treatment protocols. The focus is to identify profiles of expression over a subset of the genes that can be associated with clinical conditions and treatment outcomes, where ideally the set of samples is equal in all but the subtype or the stage of the disease. However, a patient may be part of more than one clinical group, e.g., may suffer from syndrome A, have genetic background B, and be exposed to environment C. Biclustering analysis is thus highly appropriate for identifying and distinguishing the biological factors affecting the patients, along with the corresponding gene subsets.

Biclustering: functional genomics application
Goal: understand the functions of each of the genes operating in a biological system. The rationale is that genes with similar expression patterns are likely to be regulated by the same factors and therefore may share function. By collecting expression profiles from many different biological conditions and identifying joint patterns of gene expression among them, researchers have characterized transcriptional programs and assigned putative functions to thousands of genes. Since genes have multiple functions, and since transcriptional programs are often based on combinatorial regulation, biclustering is highly appropriate for these applications as well. An important aspect of gene expression data is its high noise level: biclustering algorithms should be robust enough to cope with significant levels of noise.

Bicluster Types
An interesting criterion for evaluating a biclustering algorithm concerns the type of biclusters the algorithm is able to find. We identify four major classes of biclusters:
1. Biclusters with constant values.
2. Biclusters with constant values on rows or columns.
3. Biclusters with coherent values.
4. Biclusters with coherent evolutions.

Bicluster Types
Depending on the specific properties of each problem, one or more of these types of biclusters are generally considered interesting, and a different type of merit function should be used to evaluate the quality of the biclusters identified. The choice of the merit function is strongly related to the characteristics of the biclusters each algorithm aims at finding.

Biclusters with constant values
The simplest biclustering algorithms identify subsets of rows and subsets of columns with constant values. A perfect constant bicluster is a sub-matrix (I, J) where all values within the bicluster are equal:

    a_ij = µ   for all i ∈ I and j ∈ J

The merit function used to compute and evaluate constant biclusters is, in general, the variance or some metric based on it.

Biclusters with constant values on rows
A perfect bicluster with constant rows is a sub-matrix (I, J) where all values within the bicluster can be obtained using one of the following expressions:

    a_ij = µ + α_i   (additive)
    a_ij = µ × α_i   (multiplicative)

where µ is the typical value within the bicluster and α_i is the adjustment for row i ∈ I.

A bicluster with constant values in the rows identifies a subset of genes with similar expression values across a subset of conditions, allowing the expression levels to differ from gene to gene.

Biclusters with constant values on columns
A perfect bicluster with constant columns is a sub-matrix (I, J) where all values within the bicluster can be obtained using one of the following expressions:

    a_ij = µ + β_j   (additive)
    a_ij = µ × β_j   (multiplicative)

where µ is the typical value within the bicluster and β_j is the adjustment for column j ∈ J.

A bicluster with constant values in the columns identifies a subset of conditions within which a subset of genes presents similar expression values, allowing the expression values to differ from condition to condition.

Biclusters with constant values on rows or columns
The straightforward approach to identifying non-constant biclusters is to normalize the rows or the columns of the data matrix using the row mean and the column mean, respectively. By doing this, biclusters with constant rows or columns are transformed into constant biclusters before the biclustering algorithm is applied.
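The normalization step above can be sketched as follows. This is a minimal illustration (the helper names `center_rows` and `scale_rows` are ours, not from the slides): subtracting each row mean flattens an additive constant-row bicluster, and dividing by it flattens a multiplicative one.

```python
def center_rows(matrix):
    """Subtract each row's mean: an additive constant-row bicluster
    (a_ij = mu + alpha_i) becomes a constant bicluster of zeros."""
    return [[v - sum(row) / len(row) for v in row] for row in matrix]

def scale_rows(matrix):
    """Divide by each row's mean: a multiplicative constant-row bicluster
    (a_ij = mu * alpha_i) becomes a constant bicluster of ones."""
    return [[v / (sum(row) / len(row)) for v in row] for row in matrix]

# A bicluster with constant rows: row i holds a single value.
constant_rows = [[1.0, 1.0, 1.0],
                 [2.0, 2.0, 2.0],
                 [3.0, 3.0, 3.0]]
print(center_rows(constant_rows))  # every entry becomes 0.0
```

Normalizing columns with the column mean works symmetrically for constant-column biclusters.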

Biclusters with coherent values
A perfect bicluster with coherent values is defined as a subset of rows and a subset of columns whose values are predicted using the ADDITIVE MODEL:

    a_ij = µ + α_i + β_j

where µ is the typical value within the bicluster, α_i is the adjustment for row i ∈ I, and β_j is the adjustment for column j ∈ J.

Biclusters with coherent values
Alternatively, the MULTIPLICATIVE MODEL:

    a_ij = µ × α_i × β_j

where µ is the typical value within the bicluster, α_i is the adjustment for row i ∈ I, and β_j is the adjustment for column j ∈ J.

Types of Biclusters: examples

    Constant values      Constant rows        Constant columns
    1.0 1.0 1.0 1.0      1.0 1.0 1.0 1.0      1.0 2.0 3.0 4.0
    1.0 1.0 1.0 1.0      2.0 2.0 2.0 2.0      1.0 2.0 3.0 4.0
    1.0 1.0 1.0 1.0      3.0 3.0 3.0 3.0      1.0 2.0 3.0 4.0
    1.0 1.0 1.0 1.0      4.0 4.0 4.0 4.0      1.0 2.0 3.0 4.0

    Coherent values            Coherent values
    (additive model)           (multiplicative model)
    1.0 2.0 5.0 0.0            1.0 2.0 0.5 1.5
    2.0 3.0 6.0 1.0            2.0 4.0 1.0 3.0
    4.0 5.0 8.0 3.0            4.0 8.0 2.0 6.0
    5.0 6.0 9.0 4.0            3.0 6.0 1.5 4.5
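As a check, the two coherent-values examples above can be regenerated directly from the models. Note that the decomposition into µ, α_i, β_j is not unique; the parameter values below are one choice (ours, not stated in the slides) that reproduces the tables.

```python
# Additive model: a_ij = mu + alpha_i + beta_j
mu, alpha, beta = 1.0, [0.0, 1.0, 3.0, 4.0], [0.0, 1.0, 4.0, -1.0]
additive = [[mu + a + b for b in beta] for a in alpha]
# rows: [1,2,5,0], [2,3,6,1], [4,5,8,3], [5,6,9,4]

# Multiplicative model: a_ij = mu * alpha_i * beta_j
mu, alpha, beta = 1.0, [1.0, 2.0, 4.0, 3.0], [1.0, 2.0, 0.5, 1.5]
multiplicative = [[mu * a * b for b in beta] for a in alpha]
# rows: [1,2,0.5,1.5], [2,4,1,3], [4,8,2,6], [3,6,1.5,4.5]

print(additive)
print(multiplicative)
```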

General additive models
For every element a_ij, the general additive model represents a sum of models, each giving the contribution of bicluster B_k to the value of a_ij when i ∈ I and j ∈ J:

    a_ij = Σ_{k=0..K} θ_ijk ρ_ik κ_jk

where K is the number of biclusters, and ρ_ik and κ_jk are binary values representing memberships: ρ_ik is the membership of row i in bicluster k, and κ_jk is the membership of column j in bicluster k.

General additive models
The value of θ_ijk specifies the contribution of each bicluster k and can be one of the following expressions, representing different types of biclusters:
- µ_k (constant biclusters)
- µ_k + α_ik (biclusters with constant rows)
- µ_k + β_jk (biclusters with constant columns)
- µ_k + α_ik + β_jk (biclusters with additive model)

General additive models: examples
[Figure: example matrices showing constant, constant-row, constant-column, and coherent (additive) biclusters, including overlapping biclusters.]

General multiplicative models
Similarly, we can also define a general multiplicative model:

    a_ij = Σ_{k=0..K} θ_ijk ρ_ik κ_jk

where K is the number of biclusters, and ρ_ik and κ_jk are binary values representing memberships: ρ_ik is the membership of row i in bicluster k, and κ_jk is the membership of column j in bicluster k.

General multiplicative models
The value of θ_ijk specifies the contribution of each bicluster k and can be one of the following expressions, representing different types of biclusters:
- µ_k (constant biclusters)
- µ_k × α_ik (biclusters with constant rows)
- µ_k × β_jk (biclusters with constant columns)
- µ_k × α_ik × β_jk (biclusters with multiplicative model)

General multiplicative models: examples
[Figure: example matrices showing constant, constant-row, constant-column, and coherent (multiplicative) biclusters, including overlapping biclusters.]

BICLUSTERING ALGORITHMS

Algorithms
Different objectives:
- Identify one bicluster.
- Identify a given number of biclusters.
Different approaches:
- Discover one bicluster at a time.
- Discover one set of biclusters at a time.
- Discover all biclusters at the same time (simultaneous bicluster identification).

Algorithms
Iterative row and column clustering combination:
- Apply clustering algorithms to the rows and columns of the data matrix, separately.
- Combine the results using some sort of iterative procedure that merges the two cluster arrangements.
Divide and conquer:
- Break the problem into several sub-problems that are similar to the original problem but smaller in size.
- Solve the sub-problems recursively, then combine the intermediate solutions to create a solution to the original problem.
- Usually, break the matrix into submatrices (biclusters) based on a certain criterion, then continue the biclustering process on the new submatrices.

Algorithms
Greedy iterative search:
- Make a locally optimal choice in the hope that this choice will lead to a globally good solution.
- Usually performs greedy row/column addition/removal (e.g., the Cheng & Church algorithm).
Exhaustive bicluster enumeration:
- The best biclusters are identified using an exhaustive enumeration of all possible biclusters existing in the data, in exponential time.

Overview of the Biclustering Algorithms

- Cheng & Church (ISMB 2000). Cluster model: background + row effect + column effect. Goal: minimize the mean squared residue of the biclusters.
- Getz et al., CTWC (PNAS 2000). Cluster model: depends on the plug-in clustering algorithm. Goal: depends on the plug-in clustering algorithm.
- Lazzeroni & Owen, Plaid Models (Bioinformatics 2000). Cluster model: background + row effect + column effect. Goal: minimize the modeling error.
- Ben-Dor et al., OPSM (RECOMB 2002). Cluster model: all genes have the same order of expression values. Goal: minimize the p-values of the biclusters.
- Tanay et al., SAMBA (Bioinformatics 2002). Cluster model: maximum bounded bipartite subgraph. Goal: minimize the p-values of the biclusters.
- Yang et al., FLOC (BIBE 2003). Cluster model: background + row effect + column effect. Goal: minimize the mean squared residue of the biclusters.
- Kluger et al., Spectral (Genome Res. 2003). Cluster model: background × row effect × column effect. Goal: find checkerboard structures.

Taken from Kevin Yip, 2003.

Overview of the Biclustering Algorithms

- Cheng & Church. Allows overlap: yes (rare in reality). Discovery: one at a time. Complexity: O(MN) or O(M log N). Testing data: yeast (2884 × 17), lymphoma (4026 × 96).
- Getz et al. (CTWC). Allows overlap: yes. Discovery: one set at a time. Complexity: exponential. Testing data: leukemia (1753 × 72), colon cancer (2000 × 62).
- Lazzeroni & Owen (Plaid Models). Allows overlap: yes. Discovery: one at a time. Complexity: polynomial. Testing data: food (961 × 6), forex (276 × 18), yeast (2467 × 79).
- Ben-Dor et al. (OPSM). Allows overlap: yes. Discovery: all at the same time. Complexity: O(NM³l). Testing data: breast tumor (3226 × 22).
- Tanay et al. (SAMBA). Allows overlap: yes. Discovery: all at the same time. Complexity: O((N 2^(d+1)) log^((r+1)/r)(rd)). Testing data: lymphoma (4026 × 96), yeast (6200 × 515).
- Yang et al. (FLOC). Allows overlap: yes. Discovery: all at the same time. Complexity: O((N+M)² k p). Testing data: yeast (2884 × 17).
- Kluger et al. (Spectral). Allows overlap: no. Discovery: all at the same time. Complexity: polynomial. Testing data: lymphoma (1 rel., 1 abs.), leukemia, breast cell line, CNS embryonal tumor.

Cheng and Church's Algorithm
Cheng and Church were the first to introduce biclustering to gene expression analysis. Their algorithmic framework represents the biclustering problem as an optimization problem, defining a score for each candidate bicluster and developing heuristics to solve the constrained optimization problem defined by this score function. The constraints force the uniformity of the matrix, and the procedure gives preference to larger submatrices. Cheng and Church implicitly assume that (gene, condition) pairs in a good bicluster have a constant expression level, plus possibly additive row- and column-specific effects.

Reference: "Biclustering of Expression Data", Y. Cheng and G. M. Church, ISMB 2000.

Cheng and Church's Algorithm
Model: a bicluster is represented by a submatrix A of the whole expression matrix (the involved rows and columns need not be contiguous in the original matrix). Each entry a_ij in the bicluster is the sum of:
1. The background level.
2. The row (gene) effect.
3. The column (condition) effect.
A dataset contains a number of biclusters, which are not necessarily disjoint.

Cheng and Church's Algorithm: residue
In the matrix A, with row subset I and column subset J, define:

    a_iJ = (1/|J|) Σ_{j∈J} a_ij           (mean of row i)
    a_Ij = (1/|I|) Σ_{i∈I} a_ij           (mean of column j)
    a_IJ = (1/(|I||J|)) Σ_{i∈I, j∈J} a_ij  (mean of the whole submatrix)

The residue score of element a_ij is:

    R(a_ij) = a_ij − a_iJ − a_Ij + a_IJ

Biological meaning: the genes have the same (amount of) response to the conditions.

Cheng and Church's Algorithm: mean squared residue
The mean squared residue is the variance of the set of all elements in the bicluster, plus the mean row variance and the mean column variance:

    H(I, J) = (1/(|I||J|)) Σ_{i∈I, j∈J} (a_ij − a_iJ − a_Ij + a_IJ)² = Σ_{i∈I, j∈J} R_ij² / (|I||J|)

A submatrix A_IJ is called a δ-bicluster if H(I, J) ≤ δ for some δ ≥ 0.

GOAL: find biclusters with a low mean squared residue; in particular, large and maximal ones with scores below a certain threshold δ.
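The score H(I, J) above is straightforward to compute. A minimal sketch (the function name `mean_squared_residue` is ours): for a perfect additive bicluster, every residue is zero, so H = 0.

```python
def mean_squared_residue(A, rows, cols):
    """H(I, J): mean of the squared residues over the submatrix (rows, cols)."""
    n = len(rows) * len(cols)
    row_mean = {i: sum(A[i][j] for j in cols) / len(cols) for i in rows}
    col_mean = {j: sum(A[i][j] for i in rows) / len(rows) for j in cols}
    total_mean = sum(A[i][j] for i in rows for j in cols) / n
    return sum((A[i][j] - row_mean[i] - col_mean[j] + total_mean) ** 2
               for i in rows for j in cols) / n

# The coherent-values (additive model) example matrix from the slides:
coherent = [[1.0, 2.0, 5.0, 0.0],
            [2.0, 3.0, 6.0, 1.0],
            [4.0, 5.0, 8.0, 3.0],
            [5.0, 6.0, 9.0, 4.0]]
print(mean_squared_residue(coherent, [0, 1, 2, 3], [0, 1, 2, 3]))  # → 0.0
```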

Cheng & Church algorithm
A score of H(I, J) = 0 means we are in the case of a constant bicluster, whose elements all take a single value (the gene expression levels fluctuate in unison). With a score of H(I, J) ≠ 0, it is always possible to remove a row or a column to lower the score, until the remaining bicluster becomes constant. The global H score indicates how well the data fit together within the matrix, i.e., whether there is some coherence or the data are random:
- A high H value signifies that the data are uncorrelated.
- A low H value means that there is a correlation in the matrix.

Mean squared residue: example
If the value 5 in the example matrix M2 were replaced with 3, the score would change to H(M2) = 2.06. A matrix M3 with elements randomly and uniformly generated in the range [a, b] (here a = 1, b = 12) has an expected score of (b − a)²/12; in this case, H(M3) = (12 − 1)²/12 ≈ 10.08.
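The (b − a)²/12 expectation (the variance of a uniform distribution) can be checked empirically. A small simulation sketch, assuming a 120 × 120 matrix is large enough for the sample score to be close to the expectation:

```python
import random

def msr(A):
    """Mean squared residue of a full matrix."""
    rows, cols = range(len(A)), range(len(A[0]))
    n = len(A) * len(A[0])
    rm = [sum(row) / len(row) for row in A]
    cm = [sum(A[i][j] for i in rows) / len(A) for j in cols]
    tm = sum(sum(row) for row in A) / n
    return sum((A[i][j] - rm[i] - cm[j] + tm) ** 2 for i in rows for j in cols) / n

random.seed(42)
a, b = 1.0, 12.0
M = [[random.uniform(a, b) for _ in range(120)] for _ in range(120)]
print(round(msr(M), 2))  # close to (b - a)**2 / 12 ≈ 10.08
```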

Cheng & Church algorithm
Constraints:
- 1 × M and N × 1 matrices always give zero residue, so the task is to find biclusters of maximum size with residue not exceeding a threshold δ (largest δ-biclusters).
- Constant matrices always give zero residue, so the average row variance is used to evaluate the interestingness of a bicluster; biologically, it represents genes that have a large change in expression values over the different conditions.

Cheng & Church algorithm
Objective function for heuristic methods (to be minimized):

    H(I, J) = (1/(|I||J|)) Σ_{i∈I, j∈J} (a_ij − a_iJ − a_Ij + a_IJ)² = Σ_{i∈I, j∈J} R_ij² / (|I||J|)

H(I, J) is a sum of components from each row and column, which suggests simple greedy algorithms that evaluate each row and column independently.

Cheng and Church's Algorithm
A greedy approach is used to rapidly converge to a maximal bicluster:
- In phase I, rows/columns with a large contribution to the mean squared residue (MSR) are removed.
- In phase II, rows/columns with a low contribution to the MSR are added, without exceeding δ.
After a bicluster is identified, its values are randomized to prevent it from showing up again.

Cheng and Church's Algorithm
Given the threshold parameter δ, the algorithm runs in two phases.

FIRST PHASE: the algorithm removes rows and columns from the full matrix. At each step, where the current submatrix has row set I and column set J, the algorithm examines the set of possible moves:

    for rows:    d(i) = (1/|J|) Σ_{j∈J} RS_{I,J}(i, j)
    for columns: e(j) = (1/|I|) Σ_{i∈I} RS_{I,J}(i, j)

where RS_{I,J}(i, j) is the squared residue of element (i, j). It then selects the highest-scoring row or column and removes it from the current submatrix, as long as H(I, J) > δ. The idea is that rows/columns with a large contribution to the score can be removed with a guaranteed improvement (decrease) in the total mean squared residue. A possible variation of this heuristic removes, at each step, all rows/columns with a contribution to the residue score that is higher than some threshold.

Cheng and Church's Algorithm
SECOND PHASE: the goal is to increase the matrix size without crossing the threshold δ. Rows and columns are added using the same scoring scheme, but this time looking for the lowest squared residue scores d(i) and e(j) at each move, and terminating when none of the possible moves increases the matrix size without crossing the threshold δ. Upon convergence, the algorithm outputs a submatrix with a low mean residue and a locally maximal size.

To discover more than one bicluster, Cheng and Church suggested repeated application of the biclustering algorithm on modified matrices. The modification randomizes the values in the cells of the previously discovered biclusters, preventing the correlative signal in them from benefiting any other bicluster in the matrix. This has the obvious effect of precluding the identification of biclusters with significant overlaps.
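The phase-I single-node deletion loop described above can be sketched as follows. This is a simplified illustration (function names are ours, and the phase-II addition step and the multiple-bicluster randomization are omitted): at each step it drops whichever row or column contributes most to H(I, J), until the score falls below δ.

```python
def _stats(A, rows, cols):
    """Row means, column means, overall mean, and H(I, J) of a submatrix."""
    n = len(rows) * len(cols)
    rm = {i: sum(A[i][j] for j in cols) / len(cols) for i in rows}
    cm = {j: sum(A[i][j] for i in rows) / len(rows) for j in cols}
    tm = sum(A[i][j] for i in rows for j in cols) / n
    h = sum((A[i][j] - rm[i] - cm[j] + tm) ** 2 for i in rows for j in cols) / n
    return h, rm, cm, tm

def single_node_deletion(A, rows, cols, delta):
    """Phase I sketch: greedily remove the row or column with the largest
    mean squared residue contribution until H(I, J) <= delta."""
    rows, cols = set(rows), set(cols)
    while True:
        h, rm, cm, tm = _stats(A, rows, cols)
        if h <= delta or len(rows) == 1 or len(cols) == 1:
            return rows, cols
        d = {i: sum((A[i][j] - rm[i] - cm[j] + tm) ** 2 for j in cols) / len(cols)
             for i in rows}
        e = {j: sum((A[i][j] - rm[i] - cm[j] + tm) ** 2 for i in rows) / len(rows)
             for j in cols}
        i_star, j_star = max(d, key=d.get), max(e, key=e.get)
        if d[i_star] >= e[j_star]:
            rows.remove(i_star)   # drop the worst row
        else:
            cols.remove(j_star)   # drop the worst column

# Two uniform rows plus one incoherent row: the noisy row is removed.
A = [[1.0, 1.0, 1.0],
     [1.0, 1.0, 1.0],
     [9.0, 0.0, 5.0]]
print(single_node_deletion(A, [0, 1, 2], [0, 1, 2], delta=0.01))
```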

Evolutionary biclustering
- Binary encoding for rows/columns.
- Fitness: mean squared residue, row variance, large volume, penalty (exponential).
- Typical genetic operators.

Reference: "Evolutionary Biclustering of Gene Expressions", H. Banka and S. Mitra, ACM Ubiquity, 7 (42), 2006.

Genetic Algorithms: a brief introduction
The idea of the genetic algorithm (GA) was first introduced by John Holland in the early 1970s: an adaptive global search heuristic inspired by natural evolution and genetics, with a survival-of-the-fittest strategy. It is a stochastic, population-based search strategy built on the biological mechanisms of natural selection, crossover, and mutation. GAs are executed iteratively on a set of coded solutions, called a population, with three basic operators: selection, crossover, and mutation. To solve a problem, a GA starts with a set of encoded random solutions (chromosomes) and evolves better sets of solutions over generations (iterations) by applying the basic GA operators. Better solutions are identified by objective values (fitness functions) that determine their suitability for reproduction: better solutions are selected, whereas bad ones are eliminated from the population at each generation.

Simple Genetic Algorithm

    initialize population;
    evaluate population;
    while (Termination Criteria Not Satisfied) {
        select parents for reproduction;
        perform recombination and mutation;
        evaluate population;
    }

Evolutionary biclustering: representation
An encoded solution represents a bicluster: each bicluster is represented by a fixed-size binary string, called a chromosome or individual, with a bit string for genes appended by another bit string for conditions. The chromosome corresponds to a solution of the optimal bicluster generation problem. A bit is set to one if the corresponding gene and/or condition is present in the bicluster, and reset to zero otherwise.
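The encoding above can be sketched in a few lines (the helper name `decode` is ours): the first n_genes bits select the genes, the remaining n_conds bits select the conditions.

```python
def decode(chromosome, n_genes, n_conds):
    """Split a fixed-length bit string into its gene part and condition
    part; a 1-bit means membership in the bicluster."""
    genes = [i for i in range(n_genes) if chromosome[i] == 1]
    conds = [j for j in range(n_conds) if chromosome[n_genes + j] == 1]
    return genes, conds

# 5 genes + 3 conditions: this chromosome encodes genes {0, 2, 3}
# and conditions {1, 2}.
chrom = [1, 0, 1, 1, 0, 0, 1, 1]
print(decode(chrom, 5, 3))  # → ([0, 2, 3], [1, 2])
```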

Evolutionary biclustering: fitness function
Goal: generate a maximal set of genes and conditions while maintaining the homogeneity of the biclusters (multi-objective optimization). The fitness functions to be maximized are defined in terms of:
- g and c, the numbers of ones in the gene and condition parts of the bicluster;
- G(g, c), its mean squared residue score;
- δ, the user-defined threshold for the maximum acceptable dissimilarity, or mean squared residue score, of the bicluster;
- G and C, the total numbers of genes and conditions of the original gene expression array.
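The slide does not reproduce the exact fitness expressions, so as an illustration only, here is one hypothetical shape such a pair of objectives could take (the name `fitness` and both formulas are our assumptions, not the formulas of Banka and Mitra): reward normalized bicluster volume, and reward homogeneity once the residue drops below δ.

```python
def fitness(msr_score, g, c, G, C, delta):
    """Hedged sketch of a two-objective bicluster fitness (both formulas
    are assumed for illustration): f1 rewards size, f2 rewards a mean
    squared residue at or below the threshold delta."""
    f1 = (g / G) * (c / C)                                # volume objective
    f2 = delta / msr_score if msr_score > delta else 1.0  # homogeneity objective
    return f1, f2

# A bicluster covering 50 of 100 genes and 5 of 10 conditions:
print(fitness(msr_score=20.0, g=50, c=5, G=100, C=10, delta=10.0))  # → (0.25, 0.5)
```

Both objectives are then handled by the dominance-based ranking described on the following slides.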

Evolutionary biclustering: local search
Since the initial biclusters are generated randomly, it may happen that some irrelevant genes and/or conditions get included, in spite of their expression values lying far apart in the feature space. An analogous situation may also arise during crossover and mutation in each generation. These genes and conditions, with dissimilar values, need to be eliminated deterministically. Furthermore, for good biclustering, some genes and/or conditions having similar expression values need to be incorporated as well. The algorithm starts with a given bicluster and an initial gene expression array (G, C); the irrelevant genes or conditions having mean squared residue above (or below) a certain threshold are then selectively eliminated (or added).

Evolutionary biclustering
Domination: with M objective functions, a solution x(1) is said to dominate another solution x(2) if both of the following conditions hold:
- x(1) is no worse than x(2) in all M objective functions, and
- x(1) is strictly better than x(2) in at least one of the M objective functions.

Crowding distance: assigns the highest value to the boundary solutions, and to every other solution i the average distance of the two solutions [(i+1)-th and (i−1)-th] on either side of solution i along each of the objectives.

Crowding selection: a solution i wins a tournament against another solution j if:
- solution i has a better rank, i.e., r_i < r_j, or
- both solutions are in the same front, i.e., r_i = r_j, but solution i is less densely located in the search space, i.e., d_i > d_j.
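The dominance test and the crowding tournament above translate directly into code (function names are ours; objectives are assumed to be maximized, and rank/crowding distance are assumed to be precomputed):

```python
def dominates(x, y):
    """x dominates y (maximization): no worse in every objective and
    strictly better in at least one."""
    return (all(a >= b for a, b in zip(x, y)) and
            any(a > b for a, b in zip(x, y)))

def crowded_compare(rank_i, dist_i, rank_j, dist_j):
    """Crowding tournament: the lower rank (better front) wins; within
    the same front, the larger crowding distance (less crowded) wins."""
    if rank_i != rank_j:
        return rank_i < rank_j
    return dist_i > dist_j

print(dominates((0.25, 1.0), (0.25, 0.5)))   # → True
print(dominates((0.25, 0.5), (0.30, 0.4)))   # → False (neither dominates)
```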

Evolutionary biclustering: the algorithm
The main steps of the proposed algorithm, repeated over a specified number of generations, are:
1. Generate a random population of size P.
2. Delete or add multiple nodes (genes and conditions) from each individual of the population.
3. Calculate the multi-objective fitness functions f1 and f2.
4. Rank the population using the dominance criteria.
5. Calculate the crowding distance.
6. Perform selection using crowding tournament selection.
7. Perform crossover and mutation (as in a conventional GA) to generate an offspring population of size P.
8. Combine the parent and offspring populations.
9. Rank the mixed population using the dominance criteria and crowding distance, as above.
10. Replace the parent population by the best P members of the combined population.
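The generational loop above can be condensed into a runnable sketch. This is a deliberately simplified stand-in, not the authors' implementation: individuals are plain bit strings, the rank of a solution is just the number of solutions dominating it (instead of full fast nondominated sorting), and crowding distance is omitted, with ties broken on rank alone:

```python
import random

def dominates(f1, f2):
    """Minimization: no worse everywhere, strictly better somewhere."""
    return (all(a <= b for a, b in zip(f1, f2))
            and any(a < b for a, b in zip(f1, f2)))

def evolve(pop, fitness, generations=30, pmut=0.05):
    """Simplified generational loop: evaluate, rank by dominance count,
    tournament-select, recombine, then keep the best P of parents
    plus offspring (steps 1-10 of the slide, crowding omitted)."""
    P = len(pop)
    for _ in range(generations):
        # steps 3-4: evaluate and rank (rank = number of dominators)
        fits = [fitness(ind) for ind in pop]
        ranks = [sum(dominates(g, f) for g in fits) for f in fits]

        # step 6: binary tournament selection on rank
        def pick():
            i, j = random.randrange(P), random.randrange(P)
            return pop[i] if ranks[i] <= ranks[j] else pop[j]

        # step 7: one-point crossover and bit-flip mutation
        offspring = []
        while len(offspring) < P:
            a, b = pick(), pick()
            cut = random.randrange(1, len(a))
            child = a[:cut] + b[cut:]
            child = [1 - bit if random.random() < pmut else bit
                     for bit in child]
            offspring.append(child)

        # steps 8-10: combine and keep the best P by rank
        combined = pop + offspring
        cfits = [fitness(ind) for ind in combined]
        cranks = [sum(dominates(g, f) for g in cfits) for f in cfits]
        pop = [ind for _, ind in
               sorted(zip(cranks, combined), key=lambda t: t[0])[:P]]
    return pop
```

For biclustering, the bit string would encode the selected genes and conditions, and `fitness` would return the size and residue objectives described on the previous slides.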

Biclustering advantages
1. It automatically selects genes and conditions with more coherent measurements.
2. It groups items based on a similarity measure that depends on a context, which is best defined as a subset of the attributes. It discovers not only the grouping but the context as well; to some extent the two become inseparable and exchangeable, which is a major difference between biclustering and clustering rows after clustering columns.
3. It allows rows and columns to be included in multiple biclusters, and thus allows one gene or one condition to be identified with more than one functional category. This added flexibility correctly reflects the reality of multi-functional genes and of overlapping factors in tissue samples and experimental conditions.

Biclustering: observations
The algorithms presented demonstrate some of the approaches developed for the identification of bicluster patterns in large matrices, and in gene expression matrices in particular. The different methods can be classified:
a) by their model and scoring schemes
b) by the type of algorithm used for detecting biclusters

Biclustering: models and scores
To ensure that the biclusters are statistically significant, each biclustering method defines either a scoring scheme to assess the quality of candidate biclusters, or a constraint that determines which submatrices represent significant bicluster behavior.
Constraint-based methods: search for gene (property) sets that define stable subsets of properties. Algorithms: the iterative signature algorithm, the coupled two-way clustering method, and the spectral algorithm of Kluger et al.
Scoring-based methods: rely on a background model for the data. The basic model assumes that biclusters are essentially uniform submatrices and scores them according to their deviation from such uniform behavior. More elaborate models allow different distributions for each condition and gene, usually in a linear way. Algorithms: the Cheng-Church algorithm and the Plaid model.

Biclustering: algorithmic approaches
The algorithmic approaches for detecting biclusters in the data are greatly affected by the type of score/constraint model in use:
- Several algorithms alternate between phases of gene-set and condition-set optimization (the iterative signature algorithm and the coupled two-way clustering algorithm).
- Others use standard linear algebra or optimization algorithms to solve key subproblems (the Plaid model and the spectral algorithm).
- A heuristic hill-climbing algorithm is used in the Cheng-Church algorithm.

Research Opportunities
Many issues in biclustering algorithm design remain open and should be addressed by the scientific community:
- Propose other bicluster models.
- Based on the current models, propose new algorithms that improve bicluster quality (validated statistically or biologically) and/or time complexity.
- Combine the strengths of multiple studies.
- Investigate the effects of normalization on the models/algorithms.
- Compare the different methods on other real datasets.
- Make better use of domain knowledge.