Maximum Likelihood Graph Structure Estimation with Degree Distributions
Bert Huang
Computer Science Department
Columbia University
New York, NY

Tony Jebara
Computer Science Department
Columbia University
New York, NY

Abstract

We describe a generative model for graph edges under specific degree distributions which admits an exact and efficient inference method for recovering the most likely structure. This binary graph structure is obtained by reformulating the inference problem as a generalization of the polynomial-time combinatorial optimization problem known as b-matching, which recovers a degree-constrained maximum weight subgraph from an original graph. After this mapping, the most likely graph structure can be found in cubic time with respect to the number of nodes using max-flow methods. Furthermore, in some instances, the combinatorial optimization problem can be solved exactly in cubic time by loopy belief propagation and max-product updates. Empirical results show the method's ability to recover binary graph structure with appropriate degree distributions from partial or noisy information.

1 Introduction

An important task in graph analysis is estimating graph structure given only partial information about nodes and edges. This article considers finding subgraphs of an original (possibly fully connected) graph, subject to information about edges in terms of their weights as well as degree distribution information for each node. In general, a graph G = {V, E}, where the cardinality of the node set is |V| = n, can contain a large number of subgraphs (graphs that can be obtained from the original by performing edge deletion). In fact, the number of subgraphs is 2^|E|, and since |E| can be up to n(n−1)/2, this is a large space of possible subgraphs over which to search or perform probabilistic inference.
Working with a probability distribution over such a large set of possibilities is not only computationally hard but also may be misleading, since some graph structures are known to be unlikely a priori. This article proposes a particular distribution over graphs that uses factorization assumptions and incorporates prior distributions over node degrees. Surprisingly, this distribution still allows the efficient recovery of the most likely subgraph from the set of all possible graphs. The maximum a posteriori (MAP) estimate under this distribution is shown to be equivalent to the combinatorial problems known as b-matching and maximum weight degree-constrained subgraph estimation, both of which have polynomial-time algorithms. In fact, the formulation introduced in this article is a strict generalization of b-matching and maximum weight degree-constrained subgraph optimization, in which all nodes must have some prespecified degree. In our formulation, each node can be given its own degree distribution; b-matching is recovered by using delta functions for the degree distributions. Similarly, if we use uniform degree distributions, we obtain bd-matching [2]. However, in the most general setting, any set of log-concave degree distributions can be handled within our framework.
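The special cases above can be made concrete with a small sketch. The following Python snippet is our illustration (all function names are hypothetical, not from the paper): each family of degree priors is written as a log-potential ψ(k), and a helper checks the log-concavity condition the framework requires, treating −∞ values as hard constraints.

```python
import math

def psi_delta(b):
    # b-matching: the degree must equal b exactly (delta-function prior).
    return lambda k: 0.0 if k == b else -math.inf

def psi_uniform(lo, hi):
    # bd-matching: any degree in [lo, hi] is equally likely.
    return lambda k: 0.0 if lo <= k <= hi else -math.inf

def psi_exponential(t):
    # Exponential degree prior: psi(k) = -t*k (used by the thresholding
    # baseline in the experiments section).
    return lambda k: -t * k

def is_log_concave(psi, max_k):
    """Check that psi has contiguous finite support and non-increasing
    first differences on that support (the paper's concavity condition)."""
    vals = [psi(k) for k in range(max_k + 1)]
    support = [k for k, v in enumerate(vals) if v > -math.inf]
    if not support or support != list(range(support[0], support[-1] + 1)):
        return False
    diffs = [vals[k] - vals[k - 1] for k in range(support[0] + 1, support[-1] + 1)]
    return all(d2 <= d1 for d1, d2 in zip(diffs, diffs[1:]))
```

All three prior families pass the check, while a convex potential such as ψ(k) = k² does not.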
Previous work on denoising edge observations used a distribution over edges similar to ours, but the authors of [6] use loopy belief propagation to obtain approximate marginals and perform approximate inference. This article shows that, in some settings, MAP estimation over subgraphs under degree priors can be solved exactly in polynomial time.

2 Generative Model for Edges

We begin by writing a distribution over all possible subgraphs which involves terms that factorize across (a) edges (to encode independent edge weights) and (b) degree distribution terms that tie edges together, producing dependencies between edges. The probability of any candidate edge set Ê ⊆ E can be expressed as

ln Pr(Ê | G) = Σ_{e ∈ Ê} φ(e) + Σ_{v_i ∈ V} ψ_i(deg(v_i, Ê)) − ln Z.    (1)

The edge potentials φ can also be represented by a symmetric potential matrix W, where W_ij is the potential of edge (v_i, v_j). The function deg(v_i, Ê) returns the number of edges in Ê that are adjacent to node v_i. Thus, this probability puts different local preferences on the edges via edge weights but also enforces more global structural knowledge about the likelihood of a subgraph by imposing degree distributions.

Unfortunately, due to the large number of edges implicated in each degree distribution term ψ_i, the probability model above has large tree-width. Therefore, exact inference and naïve MAP estimation procedures (for instance, using the junction tree algorithm) can scale exponentially with n. Fortunately, we will be able to show that concavity assumptions on ψ_i can lead to efficient polynomial-time MAP estimation.

2.1 Encoding as a b-matching

If we also enforce concavity of the ψ_i functions in Equation 1, the above probability can be maximized by solving a b-matching. Formally, we define concavity through first and second differences:

δψ_i(k) = ψ_i(k) − ψ_i(k−1),
δ²ψ_i(k) = δψ_i(k) − δψ_i(k−1) = ψ_i(k) − ψ_i(k−1) − (ψ_i(k−1) − ψ_i(k−2)) ≤ 0.
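The distribution in Equation 1 can be evaluated directly on small graphs. The sketch below is our illustration, not the paper's algorithm: it computes the unnormalized log-probability of a candidate edge set and finds the exact MAP subgraph by brute-force enumeration over all 2^|E| subsets, precisely the blow-up the reduction in this section avoids.

```python
import itertools

def log_prob_unnormalized(edges, W, psis):
    """Unnormalized version of Equation 1: sum of edge potentials W[i][j]
    plus per-node degree potentials psis[i](deg), with ln Z omitted."""
    score = sum(W[i][j] for (i, j) in edges)
    n = len(W)
    deg = [0] * n
    for i, j in edges:
        deg[i] += 1
        deg[j] += 1
    return score + sum(psis[i](deg[i]) for i in range(n))

def map_subgraph_brute_force(W, psis):
    """Exact MAP by enumerating every subgraph -- exponential time, so this
    is only a correctness reference for tiny graphs."""
    n = len(W)
    all_edges = [(i, j) for i in range(n) for j in range(i + 1, n)]
    return max(
        (tuple(sub) for r in range(len(all_edges) + 1)
         for sub in itertools.combinations(all_edges, r)),
        key=lambda sub: log_prob_unnormalized(sub, W, psis))
```

For example, on a 3-node graph with a concave prior ψ(k) = −3(k−1)² favoring degree one, the brute-force MAP trades off edge weights against degree preferences exactly as the model prescribes.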
If the degree potentials conform to these concavity constraints, we can exactly mimic the probability function Pr(Ê | G), which scores subgraphs Ê of G with soft degree priors, with an equivalent probability function Pr(Ê′ | G′). This equivalent formulation involves a larger graph G′ and a larger subgraph Ê′. The key simplification is that the larger subgraph must satisfy hard degree constraints for each node (or priors that are delta functions on the degrees), as opposed to a soft distribution over allowable degrees.

Our construction proceeds as follows. First create a new graph G′, which contains a copy of the original graph G as well as additional dummy nodes denoted D. We will use these dummy nodes to mimic the role of the soft degree potential functions ψ_1, ..., ψ_n. For each node v_i in our original set V, we introduce a set of dummy nodes: one dummy node for each edge in E that is adjacent to v_i. In other words, for each node v_i, we add dummy nodes d_{i,1}, ..., d_{i,N_i}, where N_i = deg(v_i, E) is the size of the neighborhood of node v_i. Each of the dummy nodes d_{i,1}, ..., d_{i,N_i} is connected to v_i in the graph G′. We now have a new graph G′ = {V′, E′}, defined as follows:

D = {d_{1,1}, ..., d_{1,N_1}, ..., d_{n,1}, ..., d_{n,N_n}},
V′ = V ∪ D,
E′ = E ∪ {(v_i, d_{i,j}) | 1 ≤ j ≤ N_i, 1 ≤ i ≤ n}.

We next specify the weights of the edges in G′. First, set the weight of each edge e copied from E to its original potential, φ(e). Next, we set the edge weights between the original nodes and the dummy nodes. The weight between original node v_i and dummy node d_{i,j} is

w(v_i, d_{i,j}) = ψ_i(j−1) − ψ_i(j).    (2)
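The dummy-edge weights of Equation 2 are straightforward to tabulate. The sketch below is our illustration, using a made-up concave quadratic ψ; it also exhibits the monotonicity property that the construction relies on.

```python
def dummy_edge_weights(psi, N_i):
    # Equation 2: w(v_i, d_{i,j}) = psi(j-1) - psi(j), for j = 1..N_i.
    return [psi(j - 1) - psi(j) for j in range(1, N_i + 1)]

# Example with the (assumed) concave log-prior psi(k) = -k^2 / 2.
# Concavity of psi makes these weights non-decreasing in j.
weights = dummy_edge_weights(lambda k: -0.5 * k * k, 5)
```

Because ψ is concave, its first differences shrink, so the negated differences in Equation 2 grow with j.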
While the ψ_i functions are defined at ψ_i(0), there is no dummy node d_{i,0} associated with that setting (ψ_i(0) is only used when defining the weight of d_{i,1}). Note that, by construction, the weights w(v_i, d_{i,j}) are monotonically non-decreasing with respect to the index j, due to the concavity of the ψ_i functions:

ψ_i(j) − ψ_i(j−1) ≤ ψ_i(j−1) − ψ_i(j−2)
⇒ −w(v_i, d_{i,j}) ≤ −w(v_i, d_{i,j−1})
⇒ w(v_i, d_{i,j}) ≥ w(v_i, d_{i,j−1}).    (3)

We mimic the probability function Pr(Ê | G) in Equation 1 over edges in G with a probability function on G′ called Pr(Ê′ | G′). However, the probability function on G′ enforces hard degree constraints on the hypothesized edges Ê′. Specifically, each original node v_i must have exactly N_i neighbors (including any dummy nodes to which it connects), while the dummy nodes D in G′ have no degree constraints whatsoever. It is known that such probability functions Pr(Ê′ | G′) with exact degree constraints can be maximized using combinatorial methods [2] as well as belief propagation [1, 3, 7]. The proposed approach recovers the most likely subgraph Ê′ = argmax_{Ê′} Pr(Ê′ | G′) as follows:

Ê′ = argmax_{Ê′ ⊆ E′} Σ_{e ∈ Ê′ ∩ E} φ(e) + Σ_{(v_i, d_{i,j}) ∈ Ê′} w(v_i, d_{i,j})    (4)
subject to deg(v_i, Ê′) = N_i for all v_i ∈ V.

In other words, we are free to choose any graph structure in the original graph, but we must exactly meet the degree constraints by selecting dummy edges maximally.

Theorem 1. The total edge weight of the degree-constrained subgraph Ê′ = argmax_{Ê′} log Pr(Ê′ | G′) of graph G′ differs from log Pr(Ê′ ∩ E | G) by a fixed additive constant.

Proof. Consider the edges Ê′ ∩ E. These are the estimated connectivity Ê after we remove the dummy edges from Ê′. Since we set the weights of the original edges to the φ potentials, the total weight of these edges is exactly the first term in (1), the local edge weights. What remains is to confirm that the ψ degree potentials agree with the weights of the remaining edges Ê′ \ (Ê′ ∩ E) between original nodes and dummy nodes.
Recall that our degree constraints require each original node in G′ to have degree N_i. By construction, each node v_i has 2N_i available edges from which to choose: N_i edges from the original graph and N_i edges to dummy nodes. Moreover, if v_i selects k original edges, it maximally selects N_i − k dummy edges. Since the dummy edge weights are constructed to be non-decreasing in j, the maximum-weight N_i − k dummy edges are those to the last N_i − k dummy nodes, i.e., dummy nodes d_{i,k+1} through d_{i,N_i}. The proof is complete if we can show the following:

Σ_{j=k+1}^{N_i} w(v_i, d_{i,j}) − Σ_{j=k′+1}^{N_i} w(v_i, d_{i,j}) ?= ψ_i(k) − ψ_i(k′).

Substituting the definition of w(v_i, d_{i,j}) from Equation 2, each summation telescopes:

Σ_{j=k+1}^{N_i} w(v_i, d_{i,j}) = Σ_{j=k+1}^{N_i} (ψ_i(j−1) − ψ_i(j)) = ψ_i(k) − ψ_i(N_i),

and similarly Σ_{j=k′+1}^{N_i} w(v_i, d_{i,j}) = ψ_i(k′) − ψ_i(N_i). Subtracting the two cancels the ψ_i(N_i) terms and leaves ψ_i(k) − ψ_i(k′), establishing the equivalence.

This means the log-probability and the weight of the new graph change by exactly the same amount as we try different subgraphs of G. Therefore, for any degree-constrained subgraph Ê′, the quantities log Pr(Ê′ | G′) and log Pr(Ê′ ∩ E | G) differ only by a constant.
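The telescoping step in the proof can be verified numerically. The sketch below is ours, using arbitrary concave ψ functions; it confirms that the dummy edges selected when k original edges are chosen contribute ψ_i(k) up to the constant −ψ_i(N_i).

```python
def check_telescoping(psi, N_i):
    """Verify sum_{j=k+1}^{N_i} w(v_i, d_{i,j}) = psi(k) - psi(N_i) for all k,
    where w(v_i, d_{i,j}) = psi(j-1) - psi(j) is the Equation 2 weight."""
    w = lambda j: psi(j - 1) - psi(j)
    return all(
        abs(sum(w(j) for j in range(k + 1, N_i + 1)) - (psi(k) - psi(N_i))) < 1e-9
        for k in range(N_i + 1))
```

The identity holds for any ψ, concave or not; concavity is only needed so that the maximal dummy-edge selection is the last N_i − k dummy nodes.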
Figure 1: Example of mapping a degree-dependent problem to a hard-constrained b-matching. Left: original weight matrix and row/column degree distributions. Middle: weight matrix of the expanded graph (V′, D, E′), whose solution is now constrained to have a fixed number of neighbors per node. Right: the resulting b-matching, where the upper-left quadrant can be pulled out as the solution to the original problem.

In practice, we find the maximum weight degree-constrained subgraph maximizing Pr(Ê′ | G′) using classical maximum flow algorithms [2], which require O(n|E′|) computation time. In the special case of bipartite graphs, we can use a belief propagation algorithm [1, 3, 7], which is significantly faster. Furthermore, since the dummy nodes have no degree constraints, we only need to instantiate max_i(N_i) dummy nodes and reuse them for each v_i. The process described in this section is illustrated in Figure 1.

3 Experiments

3.1 Graph Reconstruction

In this section, we describe a simple experiment that demonstrates one possible usage of our formulation. We consider the problem in which we are given information about nodes and an approximate degree distribution of the hidden graph. One natural approach to this task is to compute a kernel between the nodes, such that K_ij represents the affinity between nodes v_i and v_j, and threshold the kernel to predict whether edges exist. Using this method, the problem can be expressed as the maximization

max_Ê Σ_{(i,j) ∈ Ê} (K_ij − t).

In other words, we pick a threshold t, compare each kernel entry independently to the threshold, and predict an edge when the entry is greater than t. In this setting, we can think of the edge weights φ as being set to K_ij − t, and the ψ degree functions as uniform (such that the degree of a node does not influence the prediction).
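The thresholding baseline can be written in a few lines. This sketch is our illustration: it makes explicit that each edge decision is independent, which is why including exactly the pairs with K_ij − t > 0 maximizes the objective above.

```python
def threshold_edges(K, t):
    """Predict an edge wherever the kernel entry exceeds the threshold t.
    Since each term (K[i][j] - t) enters the objective independently,
    keeping exactly the positive terms maximizes the total, so thresholding
    is the MAP estimate under uniform degree potentials."""
    n = len(K)
    return {(i, j) for i in range(n) for j in range(i + 1, n) if K[i][j] > t}
```

Sweeping t trades off precision against the number of predicted edges, as explored in the experiments.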
An equivalent formulation of this simple thresholding method is

max_Ê Σ_{(i,j) ∈ Ê} K_ij − t Σ_i deg(v_i, Ê).

This can (surprisingly) be interpreted as having local edge weights φ((i, j)) = K_ij and degree functions ψ_i(k) = −tk which, recalling that the objective is the log-probability, corresponds to an exponential distribution over node degrees.

We conduct our toy experiments by inducing features for the nodes of a real graph by spectral embedding. We perform an eigenvalue decomposition of the adjacency matrix and treat the projections onto the first ten eigenvectors as the features of the nodes. We then compute a linear kernel between all nodes, which essentially means our kernel is a compressed representation of the original adjacency matrix.

A drawback of kernel thresholding, which becomes more obvious once we recognize its equivalence to an exponential degree distribution, is that fitting the distribution well means many nodes will have zero neighbors. Since our formulation allows any log-concave distribution, we can set the probability of a degree of zero to a smaller value, which should encourage our method to prefer that all nodes are connected.

Figure 2: Results of the reconstruction experiment on the dolphin social network. Left: spectral embedding of the true graph and histogram of true degrees. Middle: best simple threshold graph and the resulting degree histogram. Right: best concave-constrained graph and histogram. Note that all nodes have at least one edge. We show the maximum-likelihood exponential curve derived using λ = n / Σ_i deg(v_i), but since we sweep over scaling factors, we essentially used a range of exponential parameters.

For both methods we sweep across thresholds or, equivalently for the exponential baseline, scalings of the ψ functions. In general, a high scaling resulted in prediction of too few edges, and a low threshold resulted in prediction of too many edges, with a moderate setting providing the lowest 0-1 error. The minor correction (penalizing isolation) to the exponential degree distribution of the threshold model improves reconstruction accuracy, as displayed in Figures 2 and 3. The results in Figure 2 were obtained on a graph of social interaction [5] between dolphins in a community living off Doubtful Sound, New Zealand, and the results in Figure 3 were obtained on a graph of co-appearance between the characters in the novel Les Misérables [4]. By enforcing that all nodes are connected, the Hamming distance between the original adjacency matrix and the predicted adjacency matrix is reduced.

Figure 3: Results of the reconstruction experiment on the Les Misérables character co-appearance network. Left: spectral embedding of the true graph and histogram of true degrees. Middle: best simple threshold graph and the resulting degree histogram. Right: best concave-constrained graph and histogram. Note that all nodes have at least one edge.

These experiments show only a minimal usage of what our formulation can model, because we still use the same exponential distribution for all nodes, when the ψ degree functions can be of any non-parametric log-concave shape. Applications of our method should therefore take advantage of different degree distributions for different nodes. For example, if nodes are categorized, different categories of nodes may have different degree statistics.
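The feature construction used in the experiments can be sketched as follows. This is our reconstruction: the scaling of the projections by eigenvalues is our assumption, since the text only specifies projections onto the first ten eigenvectors followed by a linear kernel.

```python
import numpy as np

def spectral_kernel(A, d=10):
    """Embed nodes using the top-d eigenpairs (by eigenvalue magnitude) of the
    symmetric adjacency matrix A, then form a linear kernel between the node
    features -- a compressed representation of the original adjacency matrix."""
    vals, vecs = np.linalg.eigh(A)          # eigendecomposition of symmetric A
    order = np.argsort(-np.abs(vals))[:d]   # indices of the top-d eigenvalues
    X = vecs[:, order] * vals[order]        # node features: scaled projections
    return X @ X.T                          # linear kernel between all nodes
```

With d equal to the number of nodes, this kernel equals A², so truncating to ten eigenpairs gives a low-rank compression of that matrix; the kernel entries are then thresholded or fed into the degree-constrained solver as edge potentials.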
4 Discussion

We have provided a method to find the most likely graph under a distribution that uses edge weight information as well as degree-dependent distributions. The exact MAP estimate is recovered in polynomial time by showing that the problem is equivalent to b-matching, or maximum weight degree-constrained subgraph estimation. These can be solved efficiently and exactly using maximum flow and belief propagation methods. Our method generalizes b-matching, bd-matching, and simple thresholding. A limitation of the approach is that the degree distributions that can be modeled in this way must be log-concave. This prevents the model from working with the degree distributions of power-law and scale-free networks, which is the subject of future work. However, in practice, log-concave degree distributions are still useful for a variety of real graph problems.

References

[1] M. Bayati, D. Shah, and M. Sharma. Maximum weight matching via max-product belief propagation. In Proc. of the IEEE International Symposium on Information Theory, 2005.

[2] C. Fremuth-Paeger and D. Jungnickel. Balanced network flows I: A unifying framework for design and analysis of matching algorithms. Networks, 33(1), 1999.

[3] B. Huang and T. Jebara. Loopy belief propagation for bipartite maximum weight b-matching. In Artificial Intelligence and Statistics (AISTATS), 2007.

[4] D. E. Knuth. The Stanford GraphBase: A platform for combinatorial algorithms. In SODA '93: Proceedings of the Fourth Annual ACM-SIAM Symposium on Discrete Algorithms, Philadelphia, PA, USA. Society for Industrial and Applied Mathematics.

[5] D. Lusseau, K. Schneider, O. J. Boisseau, P. Haase, E. Slooten, and S. M. Dawson. The bottlenose dolphin community of Doubtful Sound features a large proportion of long-lasting associations. Behavioral Ecology and Sociobiology, 2003.

[6] Q. D. Morris and B. J. Frey. Denoising and untangling graphs using degree priors. In S. Thrun, L. Saul, and B. Schölkopf, editors, Advances in Neural Information Processing Systems 16. MIT Press, Cambridge, MA, 2003.

[7] S. Sanghavi, D. Malioutov, and A. Willsky. Linear programming analysis of loopy belief propagation for weighted matching. In J. C. Platt, D. Koller, Y. Singer, and S. Roweis, editors, Advances in Neural Information Processing Systems 20. MIT Press, Cambridge, MA, 2007.
The Characteristic Polynomial
Physics 116A Winter 2011 The Characteristic Polynomial 1 Coefficients of the characteristic polynomial Consider the eigenvalue problem for an n n matrix A, A v = λ v, v 0 (1) The solution to this problem
COUNTING INDEPENDENT SETS IN SOME CLASSES OF (ALMOST) REGULAR GRAPHS
COUNTING INDEPENDENT SETS IN SOME CLASSES OF (ALMOST) REGULAR GRAPHS Alexander Burstein Department of Mathematics Howard University Washington, DC 259, USA [email protected] Sergey Kitaev Mathematics
Neural Voting Machines
massachusetts institute of technology computer science and artificial intelligence laboratory Neural Voting Machines Whitman Richards & H. Sebastian Seung AI Memo 2004-029 December 2004 2004 massachusetts
Distributed Computing over Communication Networks: Maximal Independent Set
Distributed Computing over Communication Networks: Maximal Independent Set What is a MIS? MIS An independent set (IS) of an undirected graph is a subset U of nodes such that no two nodes in U are adjacent.
Using Generalized Forecasts for Online Currency Conversion
Using Generalized Forecasts for Online Currency Conversion Kazuo Iwama and Kouki Yonezawa School of Informatics Kyoto University Kyoto 606-8501, Japan {iwama,yonezawa}@kuis.kyoto-u.ac.jp Abstract. El-Yaniv
Towards better accuracy for Spam predictions
Towards better accuracy for Spam predictions Chengyan Zhao Department of Computer Science University of Toronto Toronto, Ontario, Canada M5S 2E4 [email protected] Abstract Spam identification is crucial
Moving Least Squares Approximation
Chapter 7 Moving Least Squares Approimation An alternative to radial basis function interpolation and approimation is the so-called moving least squares method. As we will see below, in this method the
Support Vector Machines Explained
March 1, 2009 Support Vector Machines Explained Tristan Fletcher www.cs.ucl.ac.uk/staff/t.fletcher/ Introduction This document has been written in an attempt to make the Support Vector Machines (SVM),
ARTICLE IN PRESS. European Journal of Operational Research xxx (2004) xxx xxx. Discrete Optimization. Nan Kong, Andrew J.
A factor 1 European Journal of Operational Research xxx (00) xxx xxx Discrete Optimization approximation algorithm for two-stage stochastic matching problems Nan Kong, Andrew J. Schaefer * Department of
Introduction to Support Vector Machines. Colin Campbell, Bristol University
Introduction to Support Vector Machines Colin Campbell, Bristol University 1 Outline of talk. Part 1. An Introduction to SVMs 1.1. SVMs for binary classification. 1.2. Soft margins and multi-class classification.
Modélisation et résolutions numérique et symbolique
Modélisation et résolutions numérique et symbolique via les logiciels Maple et Matlab Jeremy Berthomieu Mohab Safey El Din Stef Graillat [email protected] Outline Previous course: partial review of what
Social Media Mining. Network Measures
Klout Measures and Metrics 22 Why Do We Need Measures? Who are the central figures (influential individuals) in the network? What interaction patterns are common in friends? Who are the like-minded users
Probabilistic Models for Big Data. Alex Davies and Roger Frigola University of Cambridge 13th February 2014
Probabilistic Models for Big Data Alex Davies and Roger Frigola University of Cambridge 13th February 2014 The State of Big Data Why probabilistic models for Big Data? 1. If you don t have to worry about
5.1 Bipartite Matching
CS787: Advanced Algorithms Lecture 5: Applications of Network Flow In the last lecture, we looked at the problem of finding the maximum flow in a graph, and how it can be efficiently solved using the Ford-Fulkerson
Discussion on the paper Hypotheses testing by convex optimization by A. Goldenschluger, A. Juditsky and A. Nemirovski.
Discussion on the paper Hypotheses testing by convex optimization by A. Goldenschluger, A. Juditsky and A. Nemirovski. Fabienne Comte, Celine Duval, Valentine Genon-Catalot To cite this version: Fabienne
Accurate and robust image superresolution by neural processing of local image representations
Accurate and robust image superresolution by neural processing of local image representations Carlos Miravet 1,2 and Francisco B. Rodríguez 1 1 Grupo de Neurocomputación Biológica (GNB), Escuela Politécnica
In this section, we will consider techniques for solving problems of this type.
Constrained optimisation roblems in economics typically involve maximising some quantity, such as utility or profit, subject to a constraint for example income. We shall therefore need techniques for solving
FUZZY CLUSTERING ANALYSIS OF DATA MINING: APPLICATION TO AN ACCIDENT MINING SYSTEM
International Journal of Innovative Computing, Information and Control ICIC International c 0 ISSN 34-48 Volume 8, Number 8, August 0 pp. 4 FUZZY CLUSTERING ANALYSIS OF DATA MINING: APPLICATION TO AN ACCIDENT
Equivalence Concepts for Social Networks
Equivalence Concepts for Social Networks Tom A.B. Snijders University of Oxford March 26, 2009 c Tom A.B. Snijders (University of Oxford) Equivalences in networks March 26, 2009 1 / 40 Outline Structural
24. The Branch and Bound Method
24. The Branch and Bound Method It has serious practical consequences if it is known that a combinatorial problem is NP-complete. Then one can conclude according to the present state of science that no
Definition 11.1. Given a graph G on n vertices, we define the following quantities:
Lecture 11 The Lovász ϑ Function 11.1 Perfect graphs We begin with some background on perfect graphs. graphs. First, we define some quantities on Definition 11.1. Given a graph G on n vertices, we define
Mining Social-Network Graphs
342 Chapter 10 Mining Social-Network Graphs There is much information to be gained by analyzing the large-scale data that is derived from social networks. The best-known example of a social network is
DETERMINANTS IN THE KRONECKER PRODUCT OF MATRICES: THE INCIDENCE MATRIX OF A COMPLETE GRAPH
DETERMINANTS IN THE KRONECKER PRODUCT OF MATRICES: THE INCIDENCE MATRIX OF A COMPLETE GRAPH CHRISTOPHER RH HANUSA AND THOMAS ZASLAVSKY Abstract We investigate the least common multiple of all subdeterminants,
Discrete Optimization
Discrete Optimization [Chen, Batson, Dang: Applied integer Programming] Chapter 3 and 4.1-4.3 by Johan Högdahl and Victoria Svedberg Seminar 2, 2015-03-31 Todays presentation Chapter 3 Transforms using
Fairness in Routing and Load Balancing
Fairness in Routing and Load Balancing Jon Kleinberg Yuval Rabani Éva Tardos Abstract We consider the issue of network routing subject to explicit fairness conditions. The optimization of fairness criteria
Functional Optimization Models for Active Queue Management
Functional Optimization Models for Active Queue Management Yixin Chen Department of Computer Science and Engineering Washington University in St Louis 1 Brookings Drive St Louis, MO 63130, USA [email protected]
CORRELATED TO THE SOUTH CAROLINA COLLEGE AND CAREER-READY FOUNDATIONS IN ALGEBRA
We Can Early Learning Curriculum PreK Grades 8 12 INSIDE ALGEBRA, GRADES 8 12 CORRELATED TO THE SOUTH CAROLINA COLLEGE AND CAREER-READY FOUNDATIONS IN ALGEBRA April 2016 www.voyagersopris.com Mathematical
Exponential Random Graph Models for Social Network Analysis. Danny Wyatt 590AI March 6, 2009
Exponential Random Graph Models for Social Network Analysis Danny Wyatt 590AI March 6, 2009 Traditional Social Network Analysis Covered by Eytan Traditional SNA uses descriptive statistics Path lengths
OPRE 6201 : 2. Simplex Method
OPRE 6201 : 2. Simplex Method 1 The Graphical Method: An Example Consider the following linear program: Max 4x 1 +3x 2 Subject to: 2x 1 +3x 2 6 (1) 3x 1 +2x 2 3 (2) 2x 2 5 (3) 2x 1 +x 2 4 (4) x 1, x 2
Fast Sequential Summation Algorithms Using Augmented Data Structures
Fast Sequential Summation Algorithms Using Augmented Data Structures Vadim Stadnik [email protected] Abstract This paper provides an introduction to the design of augmented data structures that offer
