Bayesian vs. Markov Networks


1 Bayesian vs. Markov Networks Le Song Machine Learning II: Advanced Topics CSE 8803ML, Spring 2012

2 Conditional Independence Assumptions. BN: Local Markov Assumption, X ⊥ NonDescendants_X | Pa_X, with d-separation and active trails as the derived criterion (e.g., A ⊥ F but not (A ⊥ F | S)). MN: Global Markov Assumption, A ⊥ B | C whenever sep_G(A, B; C), with derived local and pairwise assumptions: X ⊥ TheRest | MB_X, and X ⊥ Y | TheRest when there is no edge X–Y (e.g., N ⊥ H | S but not (N ⊥ H); in the example graph, MB_X = {A, B, C, D}).

3 Distribution Factorization. Bayesian Networks (Directed Graphical Models): I-map: I_l(G) ⊆ I(P); P(X_1, …, X_n) = ∏_{i=1}^n P(X_i | Pa_{X_i}), a product of Conditional Probability Tables (CPTs). Markov Networks (Undirected Graphical Models): for strictly positive P, I-map: I(G) ⊆ I(P); P(X_1, …, X_n) = (1/Z) ∏_{i=1}^m Ψ_i(D_i), a normalized product of (maximal) clique potentials, with partition function Z = Σ_{x_1, …, x_n} ∏_{i=1}^m Ψ_i(D_i).
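To make the two factorizations concrete, here is a minimal Python sketch (not from the slides; the toy CPT and potential values are made up) contrasting a two-variable BN, which needs no global normalization, with a two-variable MN, whose joint requires the partition function Z.

```python
import itertools

# --- Bayesian network A -> B: P(A, B) = P(A) * P(B | A) ---
P_A = {0: 0.6, 1: 0.4}
P_B_given_A = {(0, 0): 0.9, (1, 0): 0.1, (0, 1): 0.3, (1, 1): 0.7}  # key: (b, a)

def bn_joint(a, b):
    # BN factorization: product of local CPTs, already normalized.
    return P_A[a] * P_B_given_A[(b, a)]

# --- Markov network A - B: one unnormalized clique potential psi(A, B) ---
psi_AB = {(0, 0): 5.0, (0, 1): 1.0, (1, 0): 2.0, (1, 1): 4.0}

# Partition function Z: sum of the unnormalized product over all assignments.
Z = sum(psi_AB[(a, b)] for a, b in itertools.product([0, 1], repeat=2))

def mn_joint(a, b):
    # MN factorization: clique potential divided by the global normalizer Z.
    return psi_AB[(a, b)] / Z

grid = list(itertools.product([0, 1], repeat=2))
assert abs(sum(bn_joint(a, b) for a, b in grid) - 1.0) < 1e-9
assert abs(sum(mn_joint(a, b) for a, b in grid) - 1.0) < 1e-9
```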

4 Representation Power? How do the distributions representable by MNs and BNs relate, and can we convert between them? BN: the minimal I-map is not unique, and a P-map does not always exist (e.g., no BN is a P-map for the swing-couple independencies X_1 ⊥ X_3 | {X_2, X_4} and X_2 ⊥ X_4 | {X_1, X_3}). MN: the minimal I-map is unique, but a P-map still does not always exist (e.g., no MN is a P-map for the V-structure independencies A ⊥ F and not (A ⊥ F | S)).

5 Is there a BN that is a P-map for a given MN? The MN for a swing couple of variables, the 4-cycle X_1 – X_2 – X_3 – X_4 – X_1 with independencies X_1 ⊥ X_3 | {X_2, X_4} and X_2 ⊥ X_4 | {X_1, X_3}, does not have a P-map as a BN: every candidate orientation of the cycle either drops one of these independencies or introduces one that the MN does not have.

6 Is there an MN that is a P-map for a given BN? The BN for a V-structure A → S ← F, which encodes A ⊥ F but not (A ⊥ F | S), does not have a P-map as an MN: any undirected graph over {A, F, S} either contains the edge A–F (losing A ⊥ F) or asserts A ⊥ F | S, which the BN does not satisfy.

7 Conversion using Minimal I-maps instead. Instead of attempting P-maps between BNs and MNs, we can use minimal I-maps for conversion. Recall: G is a minimal I-map for P if I(G) ⊆ I(P) and removal of any single edge from G renders it no longer an I-map. Note: if G is a minimal I-map of P, G need not satisfy all conditional independence relations in P.

8 Conversion from BN to MN. In an MN, the Markov blanket MB_{X_i} of X_i is the set of immediate neighbors of X_i in the graph, and X_i ⊥ V \ {X_i} \ MB_{X_i} | MB_{X_i}. What is the Markov blanket for a BN? In the example MN, MB_X = {A, B, C, D}; in the BN version of the example, conditioning on {A, B, C, D} alone does not make X independent of the remaining variables.

9 Markov blanket for BN. Strategy: go outward from X and try to block all active trails into X. In the example, MB_X = {A, B, C, D, E, F}, and X ⊥ V \ {X} \ MB_X | MB_X.

10 Markov Blanket for BN. MB_X in a BN is the set of nodes consisting of X's parents, X's children, and the other parents of X's children. The moral graph M(G) of a BN G is the undirected graph containing an undirected edge between X and Y if there is a directed edge between them in either direction, or X and Y are parents of a common child. The moral graph ensures that MB_X is the set of neighbors of X in the undirected graph M(G).
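As a concrete illustration, the sketch below (a toy DAG chosen by hand to roughly mirror the slide's example, so the adjacency is an assumption) reads off a node's Markov blanket in a BN as parents ∪ children ∪ co-parents.

```python
# Toy DAG, given as node -> set of parents.
bn_parents = {
    "A": set(), "B": set(),
    "X": {"A", "B"},
    "C": {"X", "E"}, "D": {"X", "F"},
    "E": set(), "F": set(),
}

def markov_blanket(node, parents):
    # Parents, children, and the other parents of the children (co-parents).
    children = {v for v, ps in parents.items() if node in ps}
    co_parents = {p for c in children for p in parents[c]} - {node}
    return parents[node] | children | co_parents

print(sorted(markov_blanket("X", bn_parents)))   # ['A', 'B', 'C', 'D', 'E', 'F']
```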

11 Minimal I-map from BNs to MNs. The moral graph M(G) of any BN G is a minimal I-map for G. Moralization turns each {X} ∪ Pa_X into a fully connected component, so the CPTs associated with the BN can be used directly as clique potentials. The moral graph loses some independence relations: e.g., the BN E → A ← B encodes E ⊥ B, but this marginal independence cannot be read from the MN, where moralization adds the edge E–B.
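A minimal moralization sketch, assuming the BN is given as a node → parents dictionary (the E → A ← B example mirrors the slide's alarm-style figure): connect every node to its parents and marry the parents of each child. The moralizing edge E–B is exactly what hides the marginal independence E ⊥ B.

```python
from itertools import combinations

def moralize(parents):
    """Return the moral graph M(G) as a dict: node -> set of undirected neighbors."""
    nodes = set(parents) | {p for ps in parents.values() for p in ps}
    moral = {v: set() for v in nodes}
    for child, ps in parents.items():
        for p in ps:                       # keep every original edge, now undirected
            moral[child].add(p)
            moral[p].add(child)
        for u, v in combinations(ps, 2):   # "marry" the parents of each child
            moral[u].add(v)
            moral[v].add(u)
    return moral

# Alarm-style example: E -> A <- B, A -> C, A -> D.
dag = {"A": {"E", "B"}, "B": set(), "C": {"A"}, "D": {"A"}, "E": set()}
moral = moralize(dag)
print(sorted(moral["A"]))    # ['B', 'C', 'D', 'E'] -- exactly MB_A, the neighbors in M(G)
print("E" in moral["B"])     # True: the moralizing edge E-B hides E ⊥ B
```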

12 Perfect I-maps from BNs to MNs. If a BN G is already moral, then its moral graph M(G) is a perfect I-map of G. Proof sketch: I(M(G)) = I(G), because the only independence relations that could be lost in going from G to M(G) are those arising from V-structures; since G has no V-structures (it is already moral), no independencies are lost in M(G).

13 How about d-separation? D-separation and active trails (e.g., N ⊥ H | S but not (N ⊥ H), A ⊥ F but not (A ⊥ F | S)) can also be judged using a moral graph. Let U = X ∪ Y ∪ Z be three disjoint sets of nodes in a BN G, and let G+ be the ancestral graph: the induced BN over U ∪ Ancestors(U). Then d-sep_G(X; Y | Z) iff sep_{M(G+)}(X; Y | Z). Example: d-sep_G(A; B | E) and d-sep_G(A; B | D, F) are decided by checking sep_{M(G+)}(A; B | E) and sep_{M(G+)}(A; B | D, F) in the moral graphs of the corresponding ancestral graphs.
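The test on this slide is easy to prototype. The sketch below (toy DAG, illustrative only) restricts the BN to the ancestral graph over X ∪ Y ∪ Z, moralizes it, and then checks graph separation with a simple search.

```python
from itertools import combinations

def d_separated(X, Y, Z, parents):
    # 1. Ancestral graph: keep X, Y, Z and all of their ancestors.
    keep = set(X) | set(Y) | set(Z)
    stack = list(keep)
    while stack:
        for p in parents.get(stack.pop(), set()):
            if p not in keep:
                keep.add(p)
                stack.append(p)
    # 2. Moralize the ancestral graph: child-parent edges plus married co-parents.
    nbrs = {v: set() for v in keep}
    for child in keep:
        ps = parents.get(child, set()) & keep
        for p in ps:
            nbrs[child].add(p)
            nbrs[p].add(child)
        for u, v in combinations(ps, 2):
            nbrs[u].add(v)
            nbrs[v].add(u)
    # 3. Separation test: is there a path from X to Y that avoids Z?
    frontier = [v for v in X if v not in Z]
    seen = set(X)
    while frontier:
        v = frontier.pop()
        if v in Y:
            return False
        for w in nbrs[v] - set(Z):
            if w not in seen:
                seen.add(w)
                frontier.append(w)
    return True

dag = {"A": {"E", "B"}, "B": set(), "C": {"A"}, "D": {"A"}, "E": set()}
print(d_separated({"E"}, {"B"}, set(), dag))    # True:  E and B are d-separated marginally
print(d_separated({"E"}, {"B"}, {"A"}, dag))    # False: conditioning on A activates E -> A <- B
```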

14 Why does M(G+) work? Key: information blocked through a common child in G that is not among the conditioning variables is simulated by ignoring all such children when forming the ancestral graph. For the diamond X_1 → X_2 → X_3 ← X_4 ← X_1: in G, X_2 ⊥ X_4 | X_1; in M(G), the moralizing edge X_2–X_4 prevents reading (X_2 ⊥ X_4 | X_1); in M(G+), where X_3 is dropped, X_2 ⊥ X_4 | X_1 can be read again.

15 Summary: Minimal I-maps from BNs to MNs. The moral graph M(G) is a minimal I-map of G. If G is already moral, then M(G) is a perfect I-map of G. d-sep_G(X; Y | Z) iff sep_{M(G+)}(X; Y | Z).

16 Minimal I-maps from MNs to BNs. Any BN I-map for an MN must add triangulating edges to the graph. Intuition: V-structures in a BN introduce immoralities; these immoralities were not present in the Markov network, and triangulation eliminates them. Example: the 4-cycle X_1 – X_2 – X_3 – X_4 is triangulated by adding a chord.

17 Chordal graphs. Let X_1 – X_2 – … – X_k – X_1 be a loop in a graph; a chord in the loop is an edge connecting non-consecutive X_i and X_j. An undirected graph G is chordal if every loop X_1 – X_2 – … – X_k – X_1 with k ≥ 4 has a chord. A directed graph G is chordal if its underlying undirected graph is chordal.
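Chordality can be checked with maximum cardinality search, one standard test consistent with this definition (the 4-cycle example below is illustrative, not from the slides): visit vertices in order of how many already-visited neighbors they have, then verify the resulting elimination structure.

```python
def is_chordal(adj):
    """Maximum cardinality search + perfect-elimination check (Tarjan-Yannakakis style)."""
    order = []
    weight = {v: 0 for v in adj}
    while weight:
        v = max(weight, key=weight.get)          # visit the vertex with most visited neighbors
        del weight[v]
        for w in adj[v]:
            if w in weight:
                weight[w] += 1
        order.append(v)
    pos = {v: i for i, v in enumerate(order)}
    for v in order:
        earlier = {w for w in adj[v] if pos[w] < pos[v]}
        if earlier:
            u = max(earlier, key=pos.get)        # most recently visited earlier neighbor
            if not (earlier - {u}).issubset(adj[u]):   # the rest must all be adjacent to u
                return False
    return True

cycle4 = {1: {2, 4}, 2: {1, 3}, 3: {2, 4}, 4: {1, 3}}   # loop of length 4, no chord
print(is_chordal(cycle4))                               # False
cycle4[1].add(3)
cycle4[3].add(1)                                        # add the chord 1-3
print(is_chordal(cycle4))                               # True
```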

18 Minimal I-maps from MNs to BNs. Let H be an MN and G any BN minimal I-map for H. Then G can have no immoralities; intuitively, immoralities would introduce additional independencies that are not in the original MN (as in the V-structure example A ⊥ F, not (A ⊥ F | S)). Consequently, any BN minimal I-map G for H is necessarily chordal, because any non-triangulated loop of length at least 4 in a Bayesian network necessarily contains an immorality. The process of adding edges is called triangulation.
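Triangulation itself can be sketched as node elimination with fill-in edges; the greedy min-fill heuristic below is one common choice (an assumption, not necessarily the course's), and the example graph is the 4-cycle from the previous slides.

```python
from itertools import combinations

def triangulate(adj):
    """Return the set of fill-in edges added by min-fill node elimination."""
    adj = {v: set(ns) for v, ns in adj.items()}      # work on a copy
    remaining = set(adj)
    fill = set()

    def fill_cost(v):                                # edges needed to connect v's neighbors
        nbrs = adj[v] & remaining
        return sum(1 for a, b in combinations(nbrs, 2) if b not in adj[a])

    while remaining:
        v = min(remaining, key=fill_cost)            # greedy min-fill choice
        nbrs = adj[v] & remaining
        for a, b in combinations(nbrs, 2):           # connect v's neighbors, record new edges
            if b not in adj[a]:
                adj[a].add(b)
                adj[b].add(a)
                fill.add(frozenset((a, b)))
        remaining.remove(v)
    return fill

square = {1: {2, 4}, 2: {1, 3}, 3: {2, 4}, 4: {1, 3}}
print(triangulate(square))    # a single chord (1-3 or 2-4) makes the 4-cycle chordal
```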

19 Minimal I-maps from MNs to BNs. Let H be a non-chordal MN. Then there is no BN G that is a perfect I-map for H. Proof sketch: a minimal I-map G for H is chordal, so it must contain additional edges not present in H, and each additional edge eliminates some independence assumptions. How about perfect I-maps from MNs to BNs?

20 Clique trees. Notation: let G be a connected undirected graph, and let D_1, …, D_k be the set of maximal cliques in G. Let T be a tree-structured graph whose nodes are D_1, …, D_k. For two cliques D_i and D_j connected by an edge of T, let S_ij = D_i ∩ D_j be the separator set between D_i and D_j, and W_ij = D_i \ S_ij the residue set. Example: D_1 = {A, B, C}, D_2 = {B, C, D}, S_12 = D_1 ∩ D_2 = {B, C}.

21 Clique trees (cont.). A tree T is a clique tree for G if each node of T corresponds to a clique of G and each maximal clique of G is a node of T, and each separator set S_ij separates W_ij and W_ji in G. Every undirected chordal graph G has a clique tree T (proof by induction: start from a triangle and add nodes).

22 Clique tree example. For a chordal graph over A, B, C, D, E, F, L with maximal cliques {A, B, C}, {B, C, D}, {C, D, E}, {C, E, L}, {D, E, F}, a clique tree connects ABC – BCD (separator BC), BCD – CDE (separator CD), CDE – CEL (separator CE), and CDE – DEF (separator DE).
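One standard way to build such a clique tree, consistent with the construction here but using a different, hypothetical chordal graph, is to take the maximal cliques as nodes and connect them by a maximum-weight spanning tree with separator sizes as weights; a networkx sketch:

```python
from itertools import combinations
import networkx as nx

# A hypothetical chordal graph over A..F (not the slide's figure).
G = nx.Graph([("A", "B"), ("A", "C"), ("B", "C"), ("B", "D"), ("C", "D"),
              ("C", "E"), ("D", "E"), ("D", "F"), ("E", "F")])
assert nx.is_chordal(G)

cliques = [frozenset(c) for c in nx.find_cliques(G)]   # maximal cliques D_1, ..., D_k
T = nx.Graph()
T.add_nodes_from(cliques)
for Di, Dj in combinations(cliques, 2):
    if Di & Dj:                                        # candidate edge, weighted by |separator|
        T.add_edge(Di, Dj, weight=len(Di & Dj))

clique_tree = nx.maximum_spanning_tree(T)              # max total separator weight gives a valid clique tree
for Di, Dj in clique_tree.edges():
    print(sorted(Di), "---", sorted(Di & Dj), "---", sorted(Dj))
```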

23 Perfect I-maps from MNs to BNs. Let H be a chordal MN. Then there exists a BN G such that I(H) = I(G). Proof sketch: since H is a chordal MN, it has a clique tree. Number the nodes consistently with a clique ordering, e.g., A → 1, B → 2, C → 3, D → 4, E → 5, F → 6, L → 7.

24 Perfect I-maps from MNs to BNs (cont.). Let H be a chordal MN. Then there exists a BN G such that I(H) = I(G). Proof sketch (cont.): for each node X_i, let D_k be the first clique in which it occurs, and define Pa_{X_i} = D_k ∩ {X_1, …, X_{i-1}}. The MN H and the BN G then have the same edges, and all parents of each X_i lie in the same clique, so they are connected to one another: there are no immoralities in G. (In the example, the ordering is A1, B2, C3, D4, E5, F6, L7.)
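A short sketch of this construction, with the clique ordering hard-coded to my reading of the running example (so the cliques themselves are an assumption): number variables by first appearance in the clique ordering and take as parents the earlier variables of that first clique.

```python
# Clique ordering for the running example as reconstructed above (an assumption).
clique_order = [("A", "B", "C"), ("B", "C", "D"), ("C", "D", "E"),
                ("D", "E", "F"), ("C", "E", "L")]

node_order = []          # X_1, X_2, ...: variables numbered by first appearance
parents = {}             # Pa(X_i) = (first clique containing X_i) ∩ {X_1, ..., X_{i-1}}
for clique in clique_order:
    for x in clique:
        if x not in parents:
            parents[x] = set(clique) & set(node_order)
            node_order.append(x)

print(node_order)        # ['A', 'B', 'C', 'D', 'E', 'F', 'L']
print(parents["F"])      # {'D', 'E'}: all parents lie in one clique, so no immoralities arise
```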

25 Summary: Minimal I-maps from MNs to BNs. A minimal I-map BN of an MN is chordal and is obtained by triangulating the MN. If the MN is already chordal, there is a perfect BN I-map for the MN, obtained from the corresponding clique tree.

26 Partially Directed Acyclic Graphs. Also called chain graphs (a superset of MNs and BNs). The nodes can be disjointly partitioned into several chain components; an edge within a chain component must be undirected, while an edge between two nodes in different chain components must be directed. Example: a chain graph over the nodes A, B, C, D, E, F, G, H, I partitioned into chain components.

27 MN: Gaussian Graphical Models. A Gaussian distribution can be represented by a fully connected graph with pairwise edge potentials over continuous variable nodes. The overall exponential form is P(X_1, …, X_n) ∝ exp(− Σ_{(i,j) ∈ E} (X_i − μ_i) (Σ⁻¹)_{ij} (X_j − μ_j)) = exp(−(X − μ)ᵀ Σ⁻¹ (X − μ)). This representation is also known as a Gaussian graphical model (GGM).

28 Sparse precision vs. sparse covariance in GGM. For the chain X_1 – X_2 – X_3 – X_4 – X_5, the precision matrix Σ⁻¹ is sparse (tridiagonal) while the covariance Σ is dense. A zero in the precision matrix encodes a conditional independence, (Σ⁻¹)_{15} = 0 ⟺ X_1 ⊥ X_5 | TheRest, whereas a zero in the covariance encodes a marginal independence, Σ_{15} = 0 ⟺ X_1 ⊥ X_5.
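A small numpy sketch of this point (arbitrary numerical values; only the zero pattern matters): build a tridiagonal precision matrix for the chain, invert it, and observe that the covariance has no zeros while the precision does.

```python
import numpy as np

n = 5
precision = 2.0 * np.eye(n)                            # Sigma^{-1}: diagonal entries ...
for i in range(n - 1):
    precision[i, i + 1] = precision[i + 1, i] = -0.8   # ... plus entries for chain edges only

covariance = np.linalg.inv(precision)

print(np.round(precision, 2))    # zeros off the tridiagonal: X_i ⊥ X_j | rest whenever |i-j| > 1
print(np.round(covariance, 2))   # no zeros: every pair is marginally dependent
print(precision[0, 4] == 0.0, abs(covariance[0, 4]) > 1e-6)   # True True
```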

29 Summary. Within the space of distributions P, the families representable by BNs and by MNs overlap; their intersection corresponds to undirected chordal graphs (including undirected trees). To convert a BN to an MN, moralize the BN; to convert an MN to a BN, triangulate the MN.
