DATA ANALYSIS II. Matrix Algorithms

Similarity Matrix Given a dataset D = {x_i}, i = 1,...,n, consisting of n points in R^d, let A denote the n×n symmetric similarity matrix between the points, given as A = [a_ij], where A(i,j) = a_ij denotes the similarity or affinity between points x_i and x_j. We require the similarity to be symmetric and non-negative, that is, a_ij = a_ji and a_ij ≥ 0, respectively.
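To make the definition concrete, here is a minimal NumPy sketch that builds such a similarity matrix from a point set. The Gaussian (RBF) affinity and the bandwidth sigma are assumptions for illustration; any symmetric, non-negative affinity satisfies the definition above.

```python
import numpy as np

def similarity_matrix(X, sigma=1.0):
    """n x n symmetric, non-negative similarity matrix for the rows of X.
    Assumes a Gaussian (RBF) affinity a_ij = exp(-||x_i - x_j||^2 / (2 sigma^2));
    this kernel choice is illustrative, not part of the definition."""
    # Squared Euclidean distances between all pairs of points.
    sq_dists = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    A = np.exp(-sq_dists / (2 * sigma ** 2))
    np.fill_diagonal(A, 0.0)  # drop self-similarity so A reads as a graph with no self-loops
    return A

X = np.random.default_rng(0).normal(size=(5, 2))  # 5 points in R^2
A = similarity_matrix(X)
assert np.allclose(A, A.T) and np.all(A >= 0)     # symmetric, non-negative
```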

Weighted Adjacency Matrix The matrix A may be considered to be a weighted adjacency matrix of the weighted (undirected) graph G = (V,E), where each vertex is a point and each edge joins a pair of points with non-zero similarity, that is, V = {x_i | i = 1,...,n} and E = {(x_i, x_j) | a_ij > 0}, with edge weight a_ij.

Degree Matrix For a vertex x_i, let d_i denote the degree of the vertex, defined as d_i = Σ_{j=1}^n a_ij. We define the degree matrix D of graph G as the n×n diagonal matrix D = diag(d_1, d_2,...,d_n), that is, D(i,i) = d_i and D(i,j) = 0 for i ≠ j.

Normalized Adjacency Matrix The normalized adjacency matrix is obtained by dividing each row of the adjacency matrix by the degree of the corresponding node. Given the weighted adjacency matrix A for a graph G, its normalized adjacency matrix is defined as M = D^{-1} A, so that m_ij = a_ij / d_i.
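A short sketch of the degree and normalized adjacency constructions, using a small assumed 4-vertex graph (the matrix below is illustrative, not the example from the original slides):

```python
import numpy as np

# Assumed 4-vertex graph with 0/1 edge weights.
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 1],
              [1, 1, 0, 0],
              [0, 1, 0, 0]], dtype=float)

d = A.sum(axis=1)           # degrees d_i = sum_j a_ij
D = np.diag(d)              # degree matrix
M = np.linalg.inv(D) @ A    # normalized adjacency M = D^{-1} A
# Equivalent, and cheaper: divide each row of A by its degree.
assert np.allclose(M, A / d[:, None])
print(M.sum(axis=1))        # every row of M sums to 1
```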

Eigenvalues Because A is assumed to have non-negative elements, each element of M is also non-negative: m_ij = a_ij / d_i ≥ 0. Consider the sum of the i-th row in M; we have Σ_j m_ij = Σ_j a_ij / d_i = d_i / d_i = 1. Thus, each row in M sums to 1. This implies that 1 is an eigenvalue of M. In fact, λ_1 = 1 is the largest eigenvalue of M, and the other eigenvalues satisfy |λ_i| ≤ 1. If G is connected, then the eigenvector corresponding to λ_1 is u_1 = (1/√n)(1,1,...,1)^T = (1/√n) 1.
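These spectral facts are easy to check numerically; a sketch continuing the assumed 4-vertex example (M is not symmetric, so the general eigensolver is used):

```python
import numpy as np

A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 1],
              [1, 1, 0, 0],
              [0, 1, 0, 0]], dtype=float)
M = A / A.sum(axis=1)[:, None]       # normalized adjacency M = D^{-1} A

eigvals, eigvecs = np.linalg.eig(M)
order = np.argsort(-eigvals.real)    # sort eigenvalues in decreasing order
lam = eigvals[order].real
print(lam[0])                        # 1.0: the largest eigenvalue
assert np.all(np.abs(lam) <= 1 + 1e-10)
u1 = eigvecs[:, order[0]].real       # leading eigenvector
print(u1)                            # proportional to (1,1,...,1), up to sign
```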

Example (graph) [Figure: an example weighted graph.]

Adjacency and Degree Matrices [Figure: the adjacency and degree matrices of the example graph.]

Graph Laplacian Matrix The Laplacian matrix of a graph is defined as L = D − A, the degree matrix minus the adjacency matrix. L is a symmetric, positive semidefinite matrix.
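Both claims can be confirmed directly; a sketch with the same assumed matrix:

```python
import numpy as np

A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 1],
              [1, 1, 0, 0],
              [0, 1, 0, 0]], dtype=float)
L = np.diag(A.sum(axis=1)) - A   # Laplacian L = D - A

assert np.allclose(L, L.T)        # symmetric
eigvals = np.linalg.eigvalsh(L)   # real eigenvalues, ascending order
assert eigvals[0] > -1e-10        # all non-negative: L is positive semidefinite
print(L.sum(axis=1))              # each row sums to 0, so L @ 1 = 0
```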

Properties L has n real, non-negative eigenvalues, which can be arranged in decreasing order as follows: λ_1 ≥ λ_2 ≥ ··· ≥ λ_n ≥ 0. Because each row of L sums to zero, the first column (and the first row) is a linear combination of the remaining columns (rows). That is, if L_i denotes the i-th column of L, then L_1 + L_2 + ··· + L_n = 0. This implies that the rank of L is at most n − 1, and the smallest eigenvalue is λ_n = 0, with the corresponding eigenvector given as u_n = (1/√n)(1,1,...,1)^T = (1/√n) 1, provided the graph is connected. If the graph is disconnected, then the number of eigenvalues equal to zero equals the number of connected components in the graph.
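The component-counting property can be demonstrated with an assumed disconnected graph, here a triangle plus a separate edge (two components):

```python
import numpy as np

# Triangle on vertices {0,1,2} plus the single edge {3,4}.
A = np.array([[0, 1, 1, 0, 0],
              [1, 0, 1, 0, 0],
              [1, 1, 0, 0, 0],
              [0, 0, 0, 0, 1],
              [0, 0, 0, 1, 0]], dtype=float)
L = np.diag(A.sum(axis=1)) - A
eigvals = np.linalg.eigvalsh(L)
print(np.sum(np.abs(eigvals) < 1e-10))  # 2: one zero eigenvalue per component
```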

Eigenvector Centrality A natural extension of simple degree centrality. We can think of degree centrality as awarding one centrality point for every network neighbor a vertex has. But not all neighbors are equivalent: a vertex's importance in a network is increased by having connections to other vertices that are themselves important.

Important Neighbors Let us make some initial guess about the centrality x_i of each vertex i (e.g. x_i = 1 for all i). We then define an improved estimate x_i' as the sum of the centralities of i's neighbors: x_i' = Σ_j A_ij x_j, where A_ij is an element of the adjacency matrix.

Matrix Representation We can also write this expression in matrix notation as x' = Ax, where x is the vector with elements x_i. Repeating this process to make better and better estimates, after t steps we have a vector of centralities x(t) given by x(t) = A^t x(0).

Eigenvectors Now let us write x(0) as a linear combination of the eigenvectors v_i of the adjacency matrix, x(0) = Σ_i c_i v_i, for some appropriate choice of constants c_i.

Then x(t) = A^t Σ_i c_i v_i = Σ_i c_i κ_i^t v_i = κ_1^t Σ_i c_i (κ_i / κ_1)^t v_i, where the κ_i are the eigenvalues of A and κ_1 is the largest of them. Since |κ_i / κ_1| < 1 for all i ≠ 1, every term in the sum other than the first decays exponentially as t → ∞, and hence in the limit x(t) → c_1 κ_1^t v_1.

In other words, the limiting vector of centralities is simply proportional to the leading eigenvector of the adjacency matrix. Equivalently, we could say that the centrality x satisfies Ax = κ_1 x, so that the centrality x_i of vertex i is proportional to the sum of the centralities of i's neighbors: x_i = κ_1^{-1} Σ_j A_ij x_j.
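A sketch that checks this fixed-point property with a dense eigensolver, again on the assumed 4-vertex example (A is symmetric, so eigh applies):

```python
import numpy as np

A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 1],
              [1, 1, 0, 0],
              [0, 1, 0, 0]], dtype=float)

eigvals, eigvecs = np.linalg.eigh(A)     # eigenvalues in ascending order
kappa1 = eigvals[-1]                     # largest eigenvalue
x = eigvecs[:, -1]
x = x * np.sign(x.sum())                 # fix the sign so all entries are >= 0
assert np.allclose(x, (A @ x) / kappa1)  # x_i = kappa1^{-1} * sum_j A_ij x_j
print(x / x.sum() * len(x))              # normalized so centralities sum to n
```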

Remarks The eigenvector centralities of all vertices are non-negative. To see this, consider what happens if the initial vector x(0) happens to have only non-negative elements. Since all elements of the adjacency matrix are also non-negative, multiplication by A can never introduce any negative elements to the vector, so x(t) must have all elements non-negative.

Normalization We care only about which vertices have high or low centrality, not about absolute values. We can therefore normalize the centralities by, for instance, requiring that they sum to n (which ensures that the average centrality stays constant as the network gets larger).

Largest Eigenvalue? Eigenvector centrality is an example of a quantity that can be calculated by a computer in a number of different ways, but not all of them are equally efficient. One way to calculate it would be to use a standard linear algebra method to compute the complete set of eigenvectors of the adjacency matrix, and then discard all of them except the one corresponding to the largest eigenvalue. But computing the whole spectrum only to keep a single eigenvector is wasteful; can we do better?

Power Method If we start with essentially any initial vector x(0) and multiply it repeatedly by the adjacency matrix A, then x(t) = A^t x(0) will converge to the required leading eigenvector of A as t → ∞. No faster general method is known for calculating the leading eigenvector of a matrix.
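A minimal sketch of the power method as described, with the renormalization discussed on the next slide; the starting vector, tolerance, and iteration cap are assumptions:

```python
import numpy as np

def power_method(A, tol=1e-9, max_iter=10_000):
    """Leading eigenvector of A by repeated multiplication.
    Starts from the all-ones vector (positive, hence not orthogonal
    to the leading eigenvector) and renormalizes every step."""
    x = np.ones(A.shape[0])
    for _ in range(max_iter):
        x_new = A @ x
        x_new /= np.linalg.norm(x_new)   # periodic renormalization
        if np.linalg.norm(x_new - x) < tol:
            return x_new
        x = x_new
    return x

A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 1],
              [1, 1, 0, 0],
              [0, 1, 0, 0]], dtype=float)
x = power_method(A)
print(x / x.sum() * len(x))  # eigenvector centralities, summing to n
```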

Problems If we choose all elements of our initial vector to be positive, we are guaranteed that the vector cannot be orthogonal to the leading eigenvector. We must also periodically renormalize the vector by dividing all the elements by the same value, which we are allowed to do since an eigenvector divided throughout by a constant is still an eigenvector. How long do we need to keep multiplying by the adjacency matrix before the result converges to the leading eigenvector? One simple way to gauge convergence is to perform the calculation in parallel for two different initial vectors and watch to see when they reach the same value, within some prescribed tolerance.
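A sketch of that convergence gauge: run the iteration from two different positive start vectors (the second one assumed random) and stop when the two agree within tolerance:

```python
import numpy as np

def power_method_paired(A, tol=1e-9, max_iter=10_000):
    """Power iteration with the two-start-vector convergence test."""
    x = np.ones(A.shape[0])                                 # positive start #1
    y = np.random.default_rng(1).random(A.shape[0]) + 0.1   # positive start #2
    for t in range(max_iter):
        x = A @ x
        x /= np.linalg.norm(x)
        y = A @ y
        y /= np.linalg.norm(y)
        if np.linalg.norm(x - y) < tol:  # both runs agree: converged
            return x, t
    return x, max_iter

A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 1],
              [1, 1, 0, 0],
              [0, 1, 0, 0]], dtype=float)
x, steps = power_method_paired(A)
print(steps, x)
```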

Sources

Zaki, M. J., & Meira Jr., W. (2014). Data Mining and Analysis: Fundamental Concepts and Algorithms. Cambridge University Press. pp. 397–401.

Newman, M. (2010). Networks: An Introduction. Oxford University Press. pp. 169–172, 345–353.