Examples and Proofs of Inference in Junction Trees


Peter Lucas
LIACS, Leiden University
February 4

1 Representation and notation

Let P(V) be a joint probability distribution, where V stands for a set of random variables. Now assume that V = W ∪ U, where W and U are disjoint sets of random variables. Then we use the notation

    ∑_{V\W} P(V)

to represent the (marginalised) joint probability distribution obtained by summing out the variables in V with the exception of those in W, i.e.

    ∑_{V\W} P(V) = P(W)

Sometimes we use potentials, denoted by ϕ, rather than probability distributions. Potentials are functions that look very similar to probability distributions. However, there are two differences with probability distributions: (1) they need not sum to 1; (2) there is no notation for conditioning as in probability distributions (i.e. P(· | ·)). For example, if ϕ(V1,V2) is defined in terms of a probability distribution, it may represent P(V1 | V2), but also P(V1,V2) or P(V2 | V1), and even P(V1) or P(V2). Simply look at how it is defined to find out which of these is the case.

2 Motivation

We illustrate the basic ideas of probabilistic inference in junction trees by means of a simple example. Consider the Bayesian network shown in Figure 1(a). Assume we have the following probability distribution associated with the Bayesian network (following the structure of the graph, i.e. conditioning a variable on its parents):

    P(v1 | v2) = 0.2     P(v1 | ¬v2) = 0.6
    P(¬v1 | v2) = 0.8    P(¬v1 | ¬v2) = 0.4
    P(v3 | v2) = 0.5     P(v3 | ¬v2) = 0.7
    P(¬v3 | v2) = 0.5    P(¬v3 | ¬v2) = 0.3
    P(v2) = 0.9          P(¬v2) = 0.1
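As a quick sanity check, the table above can be encoded and verified to consist of proper conditional distributions. A minimal Python sketch (the dictionary names are mine, not from the notes; True stands for v_i, False for ¬v_i):

```python
# CPTs of the example network; True = v_i, False = the negation ¬v_i.
p_v2 = {True: 0.9, False: 0.1}                        # P(V2)
p_v1_v2 = {(True, True): 0.2, (False, True): 0.8,     # P(V1 | v2)
           (True, False): 0.6, (False, False): 0.4}   # P(V1 | ¬v2)
p_v3_v2 = {(True, True): 0.5, (False, True): 0.5,     # P(V3 | v2)
           (True, False): 0.7, (False, False): 0.3}   # P(V3 | ¬v2)

# Each conditional distribution must sum to 1 for every value of the parent V2.
for v2 in (True, False):
    assert abs(sum(p_v1_v2[(v1, v2)] for v1 in (True, False)) - 1.0) < 1e-12
    assert abs(sum(p_v3_v2[(v3, v2)] for v3 in (True, False)) - 1.0) < 1e-12
assert abs(sum(p_v2.values()) - 1.0) < 1e-12
```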

Figure 1: Example Bayesian network, triangulated graph and junction tree: (a) the Bayesian network V1 ← V2 → V3; (b) the triangulated (undirected) graph V1 – V2 – V3; (c) the junction tree with cliques {V1,V2} and {V2,V3} joined by the separator {V2}.

We know that a Bayesian network defines a joint probability distribution on its associated variables, i.e.

    P(X_V) = ∏_{v ∈ V} P(X_v | X_π(v))

with π(v) the set of parents of vertex v ∈ V, and V the set of vertices of the associated acyclic directed graph G = (V,A). So in this case we have:

    P(V1,V2,V3) = P(V1 | V2) P(V2) P(V3 | V2)

For example:

    P(v1, ¬v2, v3) = P(v1 | ¬v2) P(¬v2) P(v3 | ¬v2) = 0.6 × 0.1 × 0.7 = 0.042

The algorithm to transform the Bayesian network into a junction tree first transforms the Bayesian network into a triangulated graph (see the slides of the lecture on probabilistic inference). This is an undirected graph, where the parents in a converging connection are connected by a line (called moralisation), and chords are added to cut short any cycle of more than three vertices (called triangulation). In this example, there is no need for moralisation (no converging connection) or triangulation (no cycles), and thus the resulting triangulated graph is just the original directed graph with the direction of the edges removed. Check that Figures 1(a) and 1(b) represent exactly the same independence statements (slides of lectures 4 and 5).

We finally transform the triangulated graph into a junction tree by first constructing the cliques C_i: C1 = {V1,V2} and C2 = {V2,V3}, and their intersections, called separators, here S = {V2}. Finally we have to define probability distributions on the cliques and separators using potentials. Here we have different options. (Choice 1) One choice is for example to define:

    ϕ_{V1,V2} = P(V1 | V2)
    ϕ_{V2} = 1
    ϕ_{V2,V3} = P(V3 | V2) P(V2)
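Under Choice 1, the clique potentials divided by the separator potential multiply back to the joint distribution, and this is easy to verify exhaustively. A small sketch (function and dictionary names are mine; True = v_i, False = ¬v_i):

```python
# CPTs of the example network.
p_v2 = {True: 0.9, False: 0.1}
p_v1_v2 = {(True, True): 0.2, (False, True): 0.8,
           (True, False): 0.6, (False, False): 0.4}
p_v3_v2 = {(True, True): 0.5, (False, True): 0.5,
           (True, False): 0.7, (False, False): 0.3}

# Choice 1 potentials: phi_{V1,V2} = P(V1|V2), phi_{V2} = 1, phi_{V2,V3} = P(V3|V2) P(V2).
def phi_v1v2(v1, v2): return p_v1_v2[(v1, v2)]
def phi_v2(v2): return 1.0
def phi_v2v3(v2, v3): return p_v3_v2[(v3, v2)] * p_v2[v2]

def joint(v1, v2, v3):
    """P(V1,V2,V3) = P(V1|V2) P(V2) P(V3|V2)."""
    return p_v1_v2[(v1, v2)] * p_v2[v2] * p_v3_v2[(v3, v2)]

# P(V1,V2,V3) = phi_{V1,V2} * phi_{V2,V3} / phi_{V2} for every assignment.
for v1 in (True, False):
    for v2 in (True, False):
        for v3 in (True, False):
            lhs = joint(v1, v2, v3)
            rhs = phi_v1v2(v1, v2) * phi_v2v3(v2, v3) / phi_v2(v2)
            assert abs(lhs - rhs) < 1e-12

# The worked entry from the text: P(v1, ¬v2, v3) = 0.6 * 0.1 * 0.7 = 0.042.
assert abs(joint(True, False, True) - 0.042) < 1e-12
```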

Note that, given the probabilities above,

    ∑_{V1,V2} ϕ_{V1,V2}(V1,V2) = ∑_{V1,V2} P(V1 | V2) = 2

and so this potential is not normalised. Actually, it would not make sense to normalise it in this case, as we are dealing with two probability distributions represented in the potential (P(V1 | v2) and P(V1 | ¬v2)). However, when a potential represents a joint probability distribution, as for example ϕ_{V2,V3} does, and it is not normalised, it is possible to normalise it. For ϕ_{V2,V3} this is not needed, as we already have that ∑_{V2,V3} ϕ_{V2,V3}(V2,V3) = 1.

Clearly, given the potentials defined above, we have that the joint probability distribution P(V1,V2,V3) can be obtained as follows:

    P(V1,V2,V3) = ϕ_{V1,V2} ϕ_{V2,V3} / ϕ_{V2} = P(V1 | V2) P(V2) P(V3 | V2)

However, we might also have defined the potentials as follows (Choice 2):

    ϕ_{V1,V2} = P(V1 | V2) P(V2)
    ϕ_{V2} = 1
    ϕ_{V2,V3} = P(V3 | V2)

and even as follows (Choice 3):

    ϕ_{V1,V2} = P(V1 | V2) P(V2)
    ϕ_{V2} = P(V2)
    ϕ_{V2,V3} = P(V3 | V2) P(V2)

Note that in all cases P(V1,V2,V3) will be the same. Let us now assume that we adopt the first definition of the potentials. In that case the clique C1 only knows the family of probability distributions P(V1 | V2) (one probability distribution P(V1 | v2) and one probability distribution P(V1 | ¬v2)). However, in order to compute P(V1) or P(V2) we also need P(V2), as:

    P(V1) = ∑_{V2} P(V1 | V2) P(V2)   and   P(V2) = ∑_{V1} P(V1 | V2) P(V2)

This information is not available in C1, but it is in clique C2. This explains the idea of using message passing to obtain this information from other cliques. For this definition of the potentials, C1 needs information from C2; for the second example definition of the potentials it appears that C2 needs information from C1. In general, it is not clear beforehand whether a clique will or will not need information from its neighbours. Hence, we need a general-purpose algorithm.
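Once P(V2) is available, the two marginalisation sums above are one line each. A sketch under the same encoding as before (names are mine; True = v_i, False = ¬v_i):

```python
# CPTs: P(V2) and P(V1 | V2).
p_v2 = {True: 0.9, False: 0.1}
p_v1_v2 = {(True, True): 0.2, (False, True): 0.8,
           (True, False): 0.6, (False, False): 0.4}

# P(V1) = sum_{V2} P(V1|V2) P(V2)
p_v1 = {v1: sum(p_v1_v2[(v1, v2)] * p_v2[v2] for v2 in (True, False))
        for v1 in (True, False)}

# P(V2) = sum_{V1} P(V1|V2) P(V2); this just recovers the prior on V2.
p_v2_marg = {v2: sum(p_v1_v2[(v1, v2)] * p_v2[v2] for v1 in (True, False))
             for v2 in (True, False)}
```

With the numbers of the example this gives P(v1) = 0.2·0.9 + 0.6·0.1 = 0.24 and P(¬v1) = 0.76.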

Figure 2: Schematic structure of the message passing scheme: cliques V and W, with potentials ϕ_V and ϕ_W, connected by the separator S with potential ϕ_S.

3 Inference in junction trees

The general scheme for message passing in junction trees is shown in Figure 2. The example above can easily be translated into this general scheme, as we have:

    ϕ_V = ϕ_{V1,V2}    ϕ_S = ϕ_{V2}    ϕ_W = ϕ_{V2,V3}

The algorithm for message passing consists of two phases:

1. Updating from V to W (forward pass):

    ϕ*_S = ∑_{V\S} ϕ_V
    ϕ*_W = (ϕ*_S / ϕ_S) ϕ_W

This sets the separator to the marginal over S in ϕ_V.

2. Then, from W to V (backward pass):

    ϕ**_S = ∑_{W\S} ϕ*_W
    ϕ*_V = (ϕ**_S / ϕ*_S) ϕ_V

Let us first see how this works for the examples above. Let us consider Choice 1 for the potentials (see above). We get:

    ϕ*_S = ∑_{V\S} ϕ_V = ∑_{V1} ϕ_{V1,V2} = ∑_{V1} P(V1 | V2) = 1 = ϕ_{V2}

Next, we update ϕ_W = ϕ_{V2,V3}:

    ϕ*_W = (ϕ*_S / ϕ_S) ϕ_W = (1/1) ϕ_{V2,V3} = ϕ_{V2,V3}

So, here nothing has changed, as ϕ*_W = ϕ_{V2,V3}. Subsequently, a message is passed back from W = C2 to V = C1:

    ϕ**_S = ∑_{W\S} ϕ*_W = ∑_{V3} ϕ_{V2,V3} = ∑_{V3} P(V3 | V2) P(V2) = P(V2)
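The two passes can be run directly on the example with the Choice 1 potentials. A sketch (the phi_* names are mine: phi_S1 and phi_S2 stand for the first and second update of the separator potential, phi_W1 and phi_V1 for the updated clique potentials; True = v_i, False = ¬v_i):

```python
# CPTs of the example network.
p_v2 = {True: 0.9, False: 0.1}
p_v1_v2 = {(True, True): 0.2, (False, True): 0.8,
           (True, False): 0.6, (False, False): 0.4}
p_v3_v2 = {(True, True): 0.5, (False, True): 0.5,
           (True, False): 0.7, (False, False): 0.3}

# Choice 1 potentials: phi_V on (V1,V2), phi_S on V2, phi_W on (V2,V3).
phi_V = {(v1, v2): p_v1_v2[(v1, v2)]
         for v1 in (True, False) for v2 in (True, False)}
phi_S = {v2: 1.0 for v2 in (True, False)}
phi_W = {(v2, v3): p_v3_v2[(v3, v2)] * p_v2[v2]
         for v2 in (True, False) for v3 in (True, False)}

# Forward pass: marginalise phi_V onto the separator, then scale phi_W.
phi_S1 = {v2: sum(phi_V[(v1, v2)] for v1 in (True, False)) for v2 in (True, False)}
phi_W1 = {k: phi_S1[k[0]] / phi_S[k[0]] * phi_W[k] for k in phi_W}

# Backward pass: marginalise the updated phi_W onto the separator, then scale phi_V.
phi_S2 = {v2: sum(phi_W1[(v2, v3)] for v3 in (True, False)) for v2 in (True, False)}
phi_V1 = {k: phi_S2[k[1]] / phi_S1[k[1]] * phi_V[k] for k in phi_V}
```

After the backward pass, phi_S2 holds P(V2) (0.9 and 0.1) and phi_V1 holds P(V1 | V2) P(V2), i.e. the joint over clique C1.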

Finally, we compute:

    ϕ*_V = (ϕ**_S / ϕ*_S) ϕ_V = (P(V2)/1) P(V1 | V2) = P(V1 | V2) P(V2)

So, now clique C1 knows P(V2) and we can compute P(V1) and P(V2). Investigate what happens when we had used Choice 2 or Choice 3 for the definition of the potentials.

Using the probabilities given above for the Bayesian network in Figure 1, we would get:

    ϕ*_S(v2) = ϕ*_S(¬v2) = 1,
    ϕ_{V2,V3}(v2, v3) = 0.5 × 0.9 = 0.45,    ϕ_{V2,V3}(v2, ¬v3) = 0.5 × 0.9 = 0.45,
    ϕ_{V2,V3}(¬v2, v3) = 0.7 × 0.1 = 0.07,   ϕ_{V2,V3}(¬v2, ¬v3) = 0.3 × 0.1 = 0.03,
    ϕ**_S(v2) = 0.9,    ϕ**_S(¬v2) = 0.1,
    ϕ*_{V1,V2}(v1, v2) = 0.2 × 0.9 = 0.18,   ϕ*_{V1,V2}(¬v1, v2) = 0.8 × 0.9 = 0.72,
    ϕ*_{V1,V2}(v1, ¬v2) = 0.6 × 0.1 = 0.06,  ϕ*_{V1,V2}(¬v1, ¬v2) = 0.4 × 0.1 = 0.04.

Note that now:

    ∑_{V1,V2} ϕ*_{V1,V2}(V1,V2) = 0.18 + 0.72 + 0.06 + 0.04 = 1

4 Proof of correctness

Although the formulas for message passing given above appear to work correctly, we would also like to know whether the algorithm is correct in general. This does require some derivations, but in the end it is straightforward to obtain them.

4.1 Basics

Refer again to Figure 2. The following definitions will be used:

    P(V,W) = ϕ_V ϕ_W / ϕ_S

This is the definition of the joint probability for junction trees. The general formula is as follows (see slides):

    P(V) = ∏_{C ∈ C} ϕ_C / ∏_{S ∈ S} ϕ_S

where C is the set of cliques and S the set of separators of a junction tree, and V = ∪_{C ∈ C} C. Using marginalisation, we obtain the following formulas:

    P(V) = ∑_{W\S} P(V,W)    (1)

and

    P(W) = ∑_{V\S} P(V,W)

and finally,

    P(S) = ∑_{(V∪W)\S} P(V,W) = ∑_{V\S} P(V) = ∑_{W\S} P(W)

Also recall the message passing formulas:

    ϕ*_S = ∑_{V\S} ϕ_V        ϕ**_S = ∑_{W\S} ϕ*_W
    ϕ*_W = (ϕ*_S / ϕ_S) ϕ_W   ϕ*_V = (ϕ**_S / ϕ*_S) ϕ_V

4.2 Forward pass

The derivation of the formula for the forward pass stage of message passing is easy, using Equation (1):

    P(W) = ∑_{V\S} P(V,W)
         = ∑_{V\S} ϕ_V ϕ_W / ϕ_S
         = (ϕ_W / ϕ_S) ∑_{V\S} ϕ_V
         = (ϕ_W / ϕ_S) ϕ*_S
         = (ϕ*_S / ϕ_S) ϕ_W
         = ϕ*_W    (2)

Thus, after the forward pass the updated potential of W, i.e. ϕ*_W, is equal to the joint probability distribution P(W). The ratio ϕ*_S / ϕ_S is often called the update ratio.

4.3 Backward pass

We start by using the basic result from Subsection 4.1 that

    P(S) = ∑_{W\S} P(W) = ∑_{W\S} ϕ*_W

according to Equation (2). The algorithm uses the notation

    ϕ**_S = ∑_{W\S} ϕ*_W    (3)

for this result. Using the notation used in the algorithm:

    ϕ*_V = (ϕ**_S / ϕ*_S) ϕ_V    (4)

We wish to prove that

    ϕ*_V = P(V)

as this means, together with the result above, that the algorithm is correct. We start as follows:

    P(V,W) = ϕ_V ϕ_W / ϕ_S = ϕ_V ϕ*_W / ϕ*_S    (5)

using Equation (2). From Equation (4), we derive, by switching terms, that

    ϕ_V / ϕ*_S = ϕ*_V / ϕ**_S
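Equation (2) states that after the forward pass the updated potential ϕ*_W equals the marginal P(W) = P(V2,V3). This can be checked numerically on the running example (a sketch; the phi_* names are mine, with phi_S1 and phi_W1 the updated potentials):

```python
# CPTs of the example network; True = v_i, False = ¬v_i.
p_v2 = {True: 0.9, False: 0.1}
p_v1_v2 = {(True, True): 0.2, (False, True): 0.8,
           (True, False): 0.6, (False, False): 0.4}
p_v3_v2 = {(True, True): 0.5, (False, True): 0.5,
           (True, False): 0.7, (False, False): 0.3}

def joint(v1, v2, v3):
    """P(V1,V2,V3) = P(V1|V2) P(V2) P(V3|V2)."""
    return p_v1_v2[(v1, v2)] * p_v2[v2] * p_v3_v2[(v3, v2)]

# Choice 1 potentials and the forward pass.
phi_V = {(v1, v2): p_v1_v2[(v1, v2)]
         for v1 in (True, False) for v2 in (True, False)}
phi_S = {v2: 1.0 for v2 in (True, False)}
phi_W = {(v2, v3): p_v3_v2[(v3, v2)] * p_v2[v2]
         for v2 in (True, False) for v3 in (True, False)}
phi_S1 = {v2: sum(phi_V[(v1, v2)] for v1 in (True, False)) for v2 in (True, False)}
phi_W1 = {k: phi_S1[k[0]] / phi_S[k[0]] * phi_W[k] for k in phi_W}

# Equation (2): phi_W*(v2, v3) == P(v2, v3) = sum_{V1} P(V1, v2, v3).
for v2 in (True, False):
    for v3 in (True, False):
        p_w = sum(joint(v1, v2, v3) for v1 in (True, False))
        assert abs(phi_W1[(v2, v3)] - p_w) < 1e-12
```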

Substituting this result into Equation (5) gives:

    P(V,W) = ϕ*_W ϕ*_V / ϕ**_S = P(W) ϕ*_V / P(S)

using Equation (3). Now we proceed as follows:

    P(V) = ∑_{W\S} P(V,W)
         = ∑_{W\S} P(W) ϕ*_V / P(S)
         = (ϕ*_V / P(S)) ∑_{W\S} P(W)
         = (ϕ*_V / P(S)) P(S)
         = ϕ*_V

So after termination of the algorithm we have that ϕ*_V = P(V), and we already had that ϕ*_W = P(W); in other words, the algorithm is correct.

4.4 Local consistency

If we now pass information back again from V to W, then of course nothing will change, as the situation is already stable. Passing back again means computing:

    ϕ***_S = ∑_{V\S} ϕ*_V

This must be the same as ϕ**_S = ∑_{W\S} ϕ*_W, and that is what we need to prove. So, to be proved is whether ϕ***_S = ϕ**_S holds. The proof goes as follows:

    ϕ***_S = ∑_{V\S} ϕ*_V
           = ∑_{V\S} (ϕ**_S / ϕ*_S) ϕ_V
           = (ϕ**_S / ϕ*_S) ∑_{V\S} ϕ_V
           = (ϕ**_S / ϕ*_S) ϕ*_S
           = ϕ**_S

So, that was pretty easy. To conclude, after finishing with message passing, the probabilistic information about the separator obtained from V is exactly the same as the probabilistic information obtained from W; this is called local consistency.
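Local consistency can also be checked numerically: after both passes, marginalising either clique potential onto the separator variable V2 gives the same result. A closing sketch on the running example (the phi_* names are mine, as before):

```python
# CPTs and Choice 1 potentials of the example network; True = v_i, False = ¬v_i.
p_v2 = {True: 0.9, False: 0.1}
p_v1_v2 = {(True, True): 0.2, (False, True): 0.8,
           (True, False): 0.6, (False, False): 0.4}
p_v3_v2 = {(True, True): 0.5, (False, True): 0.5,
           (True, False): 0.7, (False, False): 0.3}

phi_V = {(v1, v2): p_v1_v2[(v1, v2)]
         for v1 in (True, False) for v2 in (True, False)}
phi_S = {v2: 1.0 for v2 in (True, False)}
phi_W = {(v2, v3): p_v3_v2[(v3, v2)] * p_v2[v2]
         for v2 in (True, False) for v3 in (True, False)}

# Forward and backward pass.
phi_S1 = {v2: sum(phi_V[(v1, v2)] for v1 in (True, False)) for v2 in (True, False)}
phi_W1 = {k: phi_S1[k[0]] / phi_S[k[0]] * phi_W[k] for k in phi_W}
phi_S2 = {v2: sum(phi_W1[(v2, v3)] for v3 in (True, False)) for v2 in (True, False)}
phi_V1 = {k: phi_S2[k[1]] / phi_S1[k[1]] * phi_V[k] for k in phi_V}

# Local consistency: both cliques agree on the separator marginal.
for v2 in (True, False):
    from_V = sum(phi_V1[(v1, v2)] for v1 in (True, False))
    from_W = sum(phi_W1[(v2, v3)] for v3 in (True, False))
    assert abs(from_V - from_W) < 1e-12
```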

Graphs and Network Flows IE411 Lecture 1 Dr. Ted Ralphs IE411 Lecture 1 1 References for Today s Lecture Required reading Sections 17.1, 19.1 References AMO Chapter 1 and Section 2.1 and 2.2 IE411 Lecture

Homework 13 Solutions. X(n) = 3X(n 1) + 5X(n 2) : n 2 X(0) = 1 X(1) = 2. Solution: First we find the characteristic equation

Homework 13 Solutions PROBLEM ONE 1 Solve the recurrence relation X(n) = 3X(n 1) + 5X(n ) : n X(0) = 1 X(1) = Solution: First we find the characteristic equation which has roots r = 3r + 5 r 3r 5 = 0,

4 Basics of Trees. Petr Hliněný, FI MU Brno 1 FI: MA010: Trees and Forests

4 Basics of Trees Trees, actually acyclic connected simple graphs, are among the simplest graph classes. Despite their simplicity, they still have rich structure and many useful application, such as in

Math 4707: Introduction to Combinatorics and Graph Theory

Math 4707: Introduction to Combinatorics and Graph Theory Lecture Addendum, November 3rd and 8th, 200 Counting Closed Walks and Spanning Trees in Graphs via Linear Algebra and Matrices Adjacency Matrices

Boolean Representations and Combinatorial Equivalence

Chapter 2 Boolean Representations and Combinatorial Equivalence This chapter introduces different representations of Boolean functions. It then discuss the applications of these representations for proving

SEMITOTAL AND TOTAL BLOCK-CUTVERTEX GRAPH

CHAPTER 3 SEMITOTAL AND TOTAL BLOCK-CUTVERTEX GRAPH ABSTRACT This chapter begins with the notion of block distances in graphs. Using block distance we defined the central tendencies of a block, like B-radius

Discrete Mathematics and Probability Theory Fall 2009 Satish Rao, David Tse Note 1

CS 70 Discrete Mathematics and Probability Theory Fall 2009 Satish Rao, David Tse Note 1 Course Outline CS70 is a course on Discrete Mathematics and Probability for EECS Students. The purpose of the course

Exam 3 April 14, 2015

CSC 1300-003 Discrete Structures Exam 3 April 14, 2015 Name: KEY Question Value Score 1 10 2 30 3 30 4 20 5 10 TOTAL 100 No calculators or any reference except your one note card is permitted. If an answer