Extracting correlation structure from large random matrices
Slide 1: Extracting correlation structure from large random matrices
Alfred Hero, University of Michigan - Ann Arbor, Feb. 17
Slides 2-3: Outline
1. Background
2. Graphical models
3. Screening for hubs in graphical model
4. Conclusion
Slide 4: Random measurement matrix and assumptions

$$\mathbf{X} = \begin{bmatrix} x_{11} & \cdots & x_{1p} \\ \vdots & & \vdots \\ x_{n1} & \cdots & x_{np} \end{bmatrix} = \begin{bmatrix} X^{(1)} \\ \vdots \\ X^{(n)} \end{bmatrix} = [\mathbf{X}_1, \ldots, \mathbf{X}_p]$$

Each row $X^{(i)}$ of $\mathbf{X}$ is an independent realization of the random vector $X = [X_1, \ldots, X_p]$. For this talk we assume:
- $X$ has a multivariate Gaussian distribution (not necessary)
- $X$ has a non-singular covariance matrix $\Sigma$ (necessary)
- Either the covariance matrix or the inverse covariance matrix is sparse (necessary)

A question of interest (Q1): are there variables $X_k$ in $X$ that are highly correlated with many other variables? This question is surprisingly difficult to answer for small $n$, large $p$.
Slide 5: Example: spammer temporal patterns (p = 10,000, n = 30)
Source: Xu, Kliger and Hero, Next Wave, 2010.
Figure: highly correlated spammers and the spammer correlation graph.
Slide 6: Correlation analysis of multiple asset classes (p = 25, n1 = 80, n2 = 60)
Source: "What is behind the fall of cross-assets correlation?" J-J Ohana, 30 March 2011, Riskelia's blog.
- Left: average correlation 0.42; percent of strong relations 33%
- Right: average correlation 0.3; percent of strong relations 20%
Question: which asset classes remain connected in Q4-10 and Q1-11?
Slide 7: Example: acute respiratory infection gene correlation network (p = 20,000, n = 8)
Slide 8: Discoveries of variables with high sample correlation
- The number of discoveries exhibits a phase transition phenomenon.
- The phenomenon gets worse as p/n increases.
Slide 9: Previous work
- Regularized $\ell_2$ or $\ell_F$ covariance estimation:
  - Banded covariance model: Bickel-Levina (2008)
  - Sparse eigendecomposition model: Johnstone-Lu (2007)
  - Stein shrinkage estimator: Ledoit-Wolf (2005), Chen-Wiesel-Eldar-Hero (2010)
- Gaussian graphical model selection:
  - $\ell_1$-regularized GGM: Meinshausen-Bühlmann (2006), Wiesel-Eldar-Hero (2010)
  - Bayesian estimation: Rajaratnam-Massam-Carvalho (2008)
- Independence testing:
  - Sphericity test for multivariate Gaussian: Wilks (1935)
  - Maximal correlation test: Moran (1980), Eagleson (1983), Jiang (2004), Zhou (2007), Cai and Jiang (2011)
- Our work (Hero, Rajaratnam 2011a, 2011b): fixed n, large p; unrestricted sparsity structure; partial correlation; hubs of correlation.
Slide 10: Covariance and correlation
- Covariance of $X_i, X_j$: $\sigma_{ij} = E[(X_i - E[X_i])(X_j - E[X_j])]$
- Correlation of $X_i, X_j$: $\rho_{ij} = \sigma_{ij}/\sqrt{\sigma_{ii}\sigma_{jj}}$
- Covariance matrix: $\Sigma = ((\sigma_{ij}))_{i,j=1}^p = E[(X - E[X])^T (X - E[X])]$
- Correlation matrix: $\Gamma = ((\rho_{ij}))_{i,j=1}^p = \mathrm{diag}(\Sigma)^{-1/2}\,\Sigma\,\mathrm{diag}(\Sigma)^{-1/2}$
Fundamental fact: $|\rho_{ij}| \le 1$, and $|\rho_{ij}| = 1$ iff $X_i = aX_j + b$ with $\mathrm{sign}(a) = \mathrm{sign}(\rho_{ij})$.
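As an illustration (not from the slides), a minimal numpy sketch of the sample versions of these estimates; the function name and array shapes are my own choices:

```python
import numpy as np

def sample_cov_corr(X):
    """Return (Sigma_hat, Gamma_hat) for a data matrix X of shape (n, p),
    rows = samples, columns = variables."""
    Xc = X - X.mean(axis=0)                # center each column
    Sigma = Xc.T @ Xc / (X.shape[0] - 1)   # unbiased sample covariance
    d = np.sqrt(np.diag(Sigma))
    Gamma = Sigma / np.outer(d, d)         # Gamma = D^{-1/2} Sigma D^{-1/2}
    return Sigma, Gamma
```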
Slide 11: Correlation graph or network
A correlation network is an undirected graph $G$ with vertices $V = \{X_1, \ldots, X_p\}$ and edges $E = \{e_{ij} : |\rho_{ij}| > \eta\}$; i.e., an edge $e_{ij}$ exists between $X_i$ and $X_j$ if the correlation magnitude $|\rho_{ij}|$ exceeds a threshold $\eta \in [0, 1]$.
Equivalent question (Q1): for large $\eta$, are there highly connected nodes (hubs) in $G$?
Slide 12: A thresholded correlation matrix and correlation graph (p = 100)
- Correlation screening: find nodes that are connected.
- Hub screening: find nodes of degree at least $\delta$ (sketched below).
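A hedged sketch of both screening rules, assuming a precomputed correlation matrix `Gamma` as in the previous snippet; `screen_hubs` is a hypothetical helper name:

```python
import numpy as np

def screen_hubs(Gamma, eta, delta=1):
    """Return indices of variables whose correlation magnitude with at
    least `delta` other variables exceeds the threshold `eta`.
    delta=1 is correlation screening; delta>1 is hub screening."""
    A = (np.abs(Gamma) > eta) & ~np.eye(Gamma.shape[0], dtype=bool)
    degrees = A.sum(axis=1)                # degree of each node in the graph
    return np.flatnonzero(degrees >= delta)
```

Usage: `hubs = screen_hubs(Gamma, eta=0.9, delta=5)` returns the candidate hub indices.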
Slide 13: Outline (1 Background, 2 Graphical models, 3 Screening for hubs in graphical model, 4 Conclusion)
Slide 14: Sparse multivariate dependency models
Two types of sparse correlation models:
- Sparse correlation graphical models: most correlations are zero, few marginal dependencies. Examples: m-dependent processes, moving average (MA) processes.
- Sparse inverse-correlation graphical models: most inverse covariance entries are zero, few conditional dependencies. Examples: Markov random fields, autoregressive (AR) processes, global latent variables.
Sometimes the correlation matrix and its inverse are both sparse; sometimes only one of them is sparse.
Slide 15: Gaussian graphical models (GGM) (Lauritzen 1996)
Multivariate Gaussian model:
$$p(x) = |K|^{1/2} (2\pi)^{-p/2} \exp\Big(-\tfrac{1}{2} \sum_{i,j=1}^p x_i x_j [K]_{ij}\Big)$$
where $K = [\mathrm{cov}(X)]^{-1}$ is the $p \times p$ precision matrix. $G$ has an edge $e_{ij}$ iff $[K]_{ij} \neq 0$.
The adjacency matrix $A$ of $G$ is obtained by hard-thresholding $K$ entrywise:
$$A = h(K), \qquad h(u) = \tfrac{1}{2}\big(\mathrm{sgn}(|u| - \rho) + 1\big)$$
where $\rho$ is an arbitrary positive threshold.
Slide 16: Partial correlation representation of GGM
An equivalent representation for $A$ is $A = h(\Gamma)$, where $\Gamma$ is the partial correlation matrix
$$\Gamma = [\mathrm{diag}(K)]^{-1/2}\, K\, [\mathrm{diag}(K)]^{-1/2}$$
Properties: $|[\Gamma]_{ij}| \le 1$ and $[\Gamma]_{ij} = 0 \iff [K]_{ij} = 0$.
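The normalization and thresholding above are short to state in code. A sketch (function names mine; note the slide's $\Gamma$ keeps the sign of $K$, whereas the conventional partial correlation negates the off-diagonal entries, which does not change the zero pattern):

```python
import numpy as np

def partial_correlation(K):
    """Gamma = diag(K)^{-1/2} K diag(K)^{-1/2}, the slide's normalization."""
    d = 1.0 / np.sqrt(np.diag(K))      # K positive definite => diag(K) > 0
    return K * np.outer(d, d)

def adjacency(M, rho):
    """Entrywise hard threshold h: A_ij = 1 iff |M_ij| > rho, diagonal zeroed."""
    A = (np.abs(M) > rho).astype(int)
    np.fill_diagonal(A, 0)
    return A
```

Since the normalization only rescales entries, `adjacency(K, rho)` and `adjacency(partial_correlation(K), rho)` identify the same edge set for suitably matched thresholds.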
Slide 17: Block diagonal Gaussian graphical model
Figure: left, partial correlation matrix $A$; right, associated graphical model.
Slide 18: Two coupled block Gaussian graphical model (figure)
Slide 19: Multiscale Gaussian graphical model (figure)
Slide 20: Banded Gaussian graphical model (figure)
Slide 21: Outline (1 Background, 2 Graphical models, 3 Screening for hubs in graphical model, 4 Conclusion)
Slide 22: Screening for hubs in G
Figure: star components (hubs of degree d = 1, ..., 5, ...).
Single treatment: count the number of hub nodes in $G$:
$$N_d = \sum_{i=1}^p I(d_i \ge d)$$
Different treatments: count the number of hub-node coincidences in $G_a$ and $G_b$:
$$N_d^{ab} = \sum_{i=1}^p I(d_i^a \ge d)\, I(d_i^b \ge d)$$
Slide 23: Screening hubs in G from random samples
Problem: find hubs in $G$ given $n$ i.i.d. samples $\{X^{(j)}\}_{j=1}^n$.
Solution: threshold the sample partial correlation matrix
$$\hat{P} = [\mathrm{diag}(\hat{R}^{-1})]^{-1/2}\, \hat{R}^{-1}\, [\mathrm{diag}(\hat{R}^{-1})]^{-1/2}$$
where $\hat{R}$ is the sample correlation matrix
$$\hat{R} = \Bigg(\!\Bigg(\frac{\widehat{\mathrm{cov}}(X_i, X_j)}{\sqrt{\widehat{\mathrm{var}}(X_i)\,\widehat{\mathrm{var}}(X_j)}}\Bigg)\!\Bigg)_{i,j=1}^p = [\mathrm{diag}(\widehat{\mathrm{cov}}(X))]^{-1/2}\, \widehat{\mathrm{cov}}(X)\, [\mathrm{diag}(\widehat{\mathrm{cov}}(X))]^{-1/2}$$
Slide 24: Issues
Difficulties:
- When $n < p$ the sample covariance matrix $\widehat{\mathrm{cov}}(X)$ is not invertible.
- False matches can occur at any threshold level $\rho \in [0, 1)$.
- The number of false matches increases abruptly in $p$.
Proposed solutions for $n < p$ (the first is sketched below):
- We define a rank-deficient version of partial correlation.
- We derive finite-sample p-values for the number of false matches.
- We derive expressions for phase transition thresholds.
The theory applies to both correlation graphs and concentration graphs.
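One natural reading of the rank-deficient partial correlation, consistent with the $Y$ formula on the next slide, replaces the matrix inverse by the Moore-Penrose pseudo-inverse. A sketch under that assumption, not necessarily the talk's exact construction:

```python
import numpy as np

def pseudo_partial_correlation(R):
    """P = diag(R^+)^{-1/2} R^+ diag(R^+)^{-1/2}, with the Moore-Penrose
    pseudo-inverse R^+ standing in for R^{-1} when n < p."""
    Rp = np.linalg.pinv(R)           # equals R^{-1} whenever R is invertible
    d = 1.0 / np.sqrt(np.diag(Rp))   # R^+ is PSD, so diag(R^+) >= 0
    return Rp * np.outer(d, d)
```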
Slide 25: Z-scores
Z-scores associated with $X_i$: with $\hat\mu_i = n^{-1}\sum_{l=1}^n X_i^{(l)}$ and $\hat\sigma_i^2 = (n-1)^{-1}\sum_{l=1}^n (X_i^{(l)} - \hat\mu_i)^2$, form $\hat\sigma_i^{-1}(X_i^{(l)} - \hat\mu_i)$, $l = 1, \ldots, n$.
Define the matrix of projected Z-scores $U = [U_1, \ldots, U_p]$, with $U_i \in S^{n-2} \subset \mathbb{R}^{n-1}$.
Sample correlation matrix representation: $\hat{R} = U^T U$, $\hat r_{ij} = U_i^T U_j$.
Sample partial correlation representation: $\hat{P} = Y^T Y$, with
$$Y = [UU^T]^{-1} U D^{-1/2}, \qquad D = \mathrm{diag}\big(U^T [UU^T]^{-2} U\big)$$
Slide 26: Z-scores lie on the sphere $S^{n-2}$
Correlation is related to the distance between Z-scores:
$$\|U_i - U_j\| = \sqrt{2(1 - \hat r_{ij})}$$
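A sketch of one standard construction of the projected Z-scores, assuming the projection is onto an orthonormal basis of the subspace orthogonal to the all-ones vector (function name mine):

```python
import numpy as np
from scipy.linalg import null_space

def z_scores(X):
    """Columns of the returned U lie on S^{n-2} in R^{n-1}, and
    U.T @ U is the sample correlation matrix of X (shape (n, p))."""
    n, p = X.shape
    Xc = X - X.mean(axis=0)                # center each column
    Xc = Xc / np.linalg.norm(Xc, axis=0)   # scale columns to unit norm
    H = null_space(np.ones((1, n)))        # (n, n-1) basis orthogonal to 1
    return H.T @ Xc

# Checks of the slide's identities (any X with no constant column):
# U = z_scores(X)
# np.allclose(U.T @ U, np.corrcoef(X, rowvar=False))   # R = U^T U
# r01 = U[:, 0] @ U[:, 1]
# np.isclose(np.linalg.norm(U[:, 0] - U[:, 1]), np.sqrt(2 * (1 - r01)))
```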
Slide 27: Example: Z-scores for a diagonal Gaussian (figure)
Slide 28: Example: Z-scores for an ARMA(2,2) Gaussian (figure)
Slide 29: Correlation/concentration hub discovery
Hub discoveries: define the number of vertices having degree $d_i \ge \delta$:
$$N_{\delta,\rho} = \sum_{i=1}^p \phi_{\delta,i}, \qquad \phi_{\delta,i} = \begin{cases} 1, & \mathrm{card}\{j : j \neq i,\ |Z_i^T Z_j| \ge \rho\} \ge \delta \\ 0, & \text{otherwise} \end{cases}$$
$$Z_i = \begin{cases} U_i, & \text{correlation} \\ Y_i, & \text{partial correlation} \end{cases}$$
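Given the matrix of Z-score columns, $N_{\delta,\rho}$ is a few lines of numpy; a sketch with a hypothetical name, assuming the columns of `Z` are unit vectors as on the previous slides:

```python
import numpy as np

def num_discoveries(Z, rho, delta):
    """N_{delta,rho}: number of columns Z_i with at least `delta` other
    columns Z_j satisfying |Z_i^T Z_j| >= rho."""
    G = np.abs(Z.T @ Z) >= rho   # thresholded (partial) correlation graph
    np.fill_diagonal(G, False)
    return int(np.sum(G.sum(axis=1) >= delta))
```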
Slide 30: Asymptotic discovery rate
Assume the rows of $\mathbf{X}$ are i.i.d. with bounded elliptically contoured density and a sparse graphical model.
Poisson limit (Hero, Rajaratnam, 2011). Theorem: for large $p$,
$$P(N_{\delta,\rho} > 0) \to \begin{cases} 1 - \exp(-\lambda_{\delta,\rho}/2), & \delta = 1 \\ 1 - \exp(-\lambda_{\delta,\rho}), & \delta > 1 \end{cases}$$
$$\lambda_{\delta,\rho} = p \binom{p-1}{\delta} \big(P_0(\rho, n)\big)^{\delta}, \qquad P_0(\rho, n) = \frac{2}{B\big((n-2)/2,\, 1/2\big)} \int_\rho^1 (1 - u^2)^{\frac{n-4}{2}}\, du$$
where $B$ is the beta function.
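A sketch that evaluates $P_0$ and $\lambda_{\delta,\rho}$ numerically with scipy; function names are mine, and the code simply transcribes the theorem's formulas:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import beta, comb

def P0(rho, n):
    """Spherical-cap probability P0(rho, n) from the theorem."""
    integrand = lambda u: (1.0 - u**2) ** ((n - 4) / 2.0)
    return 2.0 / beta((n - 2) / 2.0, 0.5) * quad(integrand, rho, 1.0)[0]

def lam(rho, n, p, delta):
    """lambda_{delta,rho} = p * C(p-1, delta) * P0(rho, n)^delta."""
    return p * comb(p - 1, delta) * P0(rho, n) ** delta

def prob_discovery(rho, n, p, delta):
    """Poisson approximation to P(N_{delta,rho} > 0)."""
    l = lam(rho, n, p, delta)
    return 1.0 - np.exp(-l / 2.0) if delta == 1 else 1.0 - np.exp(-l)
```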
Slide 31: Sparse covariance
The ellipticity and sparsity assumptions guarantee a universal Poisson rate:
$$E[N_{\delta,\rho}] = \lambda_{\delta,\rho}\,\big(1 + O\big((q/p)^2\big)\big)$$
Figure: left, row k-sparse covariance; right, block k-sparse covariance (k = 10, p = 100).
For the non-elliptical and non-sparse case, the Poisson limit holds with $E[N_{\delta,\rho}] = \lambda_{\delta,\rho}\, J(f_U)$, a correction depending on the density of the Z-scores.
Slide 32: False discovery probability heatmaps ($\delta = 1$)
False discovery probability: $P(N_{\delta,\rho} > 0) \approx 1 - \exp(-\lambda_{\delta,\rho})$.
Figure: heatmaps of this probability as a function of $n$ and $\rho$, for $p = 10$ and for a larger $p$.
Slide 33: Mean discovery rate ($\delta = 1$)
Figure: mean discovery rate versus threshold, with the critical threshold $\rho_c$ marked for each sample size $n$.
Critical threshold: $\rho_c = \max\{\rho : dE[N_{\delta,\rho}]/d\rho = -1\}$, with closed-form approximation
$$\rho_c = \sqrt{1 - c_{\delta,n}\,(p-1)^{-2/(n-4)}}$$
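A sketch that locates $\rho_c$ numerically by solving $d\lambda_{\delta,\rho}/d\rho = -1$, using $E[N_{\delta,\rho}] \approx \lambda_{\delta,\rho}$ from the previous slide; it assumes $n > 4$ and $p$ large enough that the derivative actually crosses $-1$ (names mine):

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq
from scipy.special import beta, comb

def critical_threshold(n, p, delta=1):
    """Largest rho with d lambda_{delta,rho} / d rho = -1 (the knee of
    the mean discovery curve). Requires n > 4."""
    b = beta((n - 2) / 2.0, 0.5)
    P0 = lambda r: 2.0 / b * quad(lambda u: (1 - u**2) ** ((n - 4) / 2.0), r, 1.0)[0]
    dP0 = lambda r: -2.0 / b * (1 - r**2) ** ((n - 4) / 2.0)   # d P0 / d rho
    dlam = lambda r: p * comb(p - 1, delta) * delta * P0(r) ** (delta - 1) * dP0(r)
    # For delta = 1 and n > 4, |dlam| decreases monotonically from very
    # large (near rho = 0) to 0 (near rho = 1), so there is one crossing.
    return brentq(lambda r: dlam(r) + 1.0, 1e-6, 1.0 - 1e-9)
```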
Slide 34: Phase transitions as a function of $\delta$ and $p$ (figure)
Slide 35: Poisson convergence rates
Assume $\rho, n, p$ are such that $p\,(p-1)^{\delta}\,(1 - \rho^2)^{\delta(n-2)/2} = O(1)$. Then
$$\big|P(N_{\delta,\rho} = 0) - e^{-\lambda_{\delta,\rho}}\big| \le O\Big(\max\big\{p^{-1/\delta},\ p^{-1/(n-2)},\ \Delta_{p,n,k,\delta}\big\}\Big)$$
where $\Delta_{p,n,k,\delta}$ is the dependency coefficient between the $\delta$-nearest-neighbors of $Z_i$ and its $p - k$ furthest neighbors.
Slide 36: Where does the Poisson limit come from?
Specialize to the case $\delta = 1$. Define:
- $\phi_{ij}$: indicator function of an edge between nodes $i$ and $j$
- $N_e = \sum_{j>i} \phi_{ij}$: the total number of edges
- $N = \sum_{i=1}^p \max_{j: j \neq i} \phi_{ij}$: the total number of connected nodes
Key properties:
- $N$ is an even integer (in the sparse limit edges rarely share a vertex, so $N \approx 2N_e$)
- $\{N = 0\} \iff \{N_e = 0\}$
- $N_e$ converges to a Poisson random variable $N^*$ with rate $\Lambda = E[N_e]$:
$$\lim_{p \to \infty,\ \rho \to 1} P(N_e = k) = \frac{\Lambda^k e^{-\Lambda}}{k!}, \qquad k = 0, 1, \ldots$$
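A Monte Carlo sanity check of this heuristic under the null model (diagonal covariance), not from the talk: if the Poisson limit holds, the mean and variance of $N_e$ should roughly agree.

```python
import numpy as np

def edge_counts(n, p, rho, trials=200, seed=0):
    """Count N_e under the null (i.i.d. N(0, I) rows) across Monte Carlo
    trials; a Poisson limit predicts mean(N_e) ~ var(N_e)."""
    rng = np.random.default_rng(seed)
    iu = np.triu_indices(p, k=1)
    counts = []
    for _ in range(trials):
        X = rng.standard_normal((n, p))
        R = np.corrcoef(X, rowvar=False)
        counts.append(int((np.abs(R[iu]) >= rho).sum()))
    counts = np.asarray(counts)
    return counts.mean(), counts.var()
```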
Slide 37: Validation: correlation screening with spike-in
Table: achievable limits in FPR ($\alpha$) for TPR = 0.8 ($\beta$), as a function of $n$, minimum detectable threshold, and correlation threshold ($\rho_1 \backslash \rho$). To obtain the entries $\rho_1 \backslash \rho$, a Poisson approximation determined $\rho = \rho(\alpha)$ and a Fisher-Z Gaussian approximation determined $\rho_1 = \rho_1(\beta)$. Here $p = 1000$ on a Gaussian sample having diagonal covariance with a spiked-in correlated pair.
Slide 38: Validation: correlation screening with spike-in
Figure: comparison between predicted (diamonds) and actual (numbers) operating points $(\alpha, \beta)$, using the Poisson approximation to the false positive rate ($\alpha$) and the Fisher approximation to the false negative rate ($\beta$). Each number is located at an operating point determined by the sample size $n$, ranging over $n = 10, 15, 20, 25, 30, 35$; the numbers are color coded by the target value of $\beta$.
Slide 39: Hub screening p-value computation algorithm
- Step 1: Compute the critical phase transition threshold $\rho_{c,1}$ for discovery of connected vertices.
- Step 2: Generate the partial correlation graph with threshold $\rho > \rho_{c,1}$.
- Step 3: Compute p-values for each vertex of degree $\delta = k$ found:
$$pv_k(i) = P(N_{k,\rho(i,k)} > 0) = 1 - \exp(-\lambda_{k,\rho(i,k)})$$
where $\rho(i,k)$ is the sample (partial) correlation between $X_i$ and its $k$-th nearest neighbor.
- Step 4: Render the p-value trajectories as a waterfall plot.
A sketch of Steps 2-3 follows.
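A sketch of Steps 2-3 under the conventions above; `hub_pvalues` is a hypothetical name, the columns of `Z` are assumed to be projected Z-scores on $S^{n-2}$, and the $\lambda$ formula is transcribed inline from Slide 30:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import beta, comb

def hub_pvalues(Z, rho_star, k):
    """Threshold |Z_i^T Z_j| at rho_star, then assign each vertex of
    degree >= k the p-value 1 - exp(-lambda_{k, rho(i,k)}), where
    rho(i,k) is the correlation with its k-th nearest neighbor."""
    nm1, p = Z.shape                 # Z is (n-1) x p
    n = nm1 + 1
    b = beta((n - 2) / 2.0, 0.5)
    lam = lambda r: p * comb(p - 1, k) * (
        2.0 / b * quad(lambda u: (1 - u**2) ** ((n - 4) / 2.0), r, 1.0)[0]) ** k
    C = np.abs(Z.T @ Z)
    np.fill_diagonal(C, 0.0)
    pv = {}
    for i in range(p):
        rho_ik = np.sort(C[i])[-k]   # correlation with the k-th NN
        if rho_ik >= rho_star:       # vertex i has degree >= k in the graph
            pv[i] = 1.0 - np.exp(-lam(rho_ik))
    return pv
```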
Slide 40: Example: NKI gene expression dataset
- Netherlands Cancer Institute (NKI) early-stage breast cancer.
- p = 24,481 gene probes on the Affymetrix HU133 GeneChip; 295 samples (subjects).
- Peng et al. used 266 of these samples to perform covariance selection.
- They preprocessed (Cox regression) to reduce the number of variables to 1,217 genes, then applied sparse partial correlation estimation (SPACE).
- Here we apply hub screening directly to all 24,481 gene probes.
- Theory predicts the phase transition threshold $\rho_{c,1}$.
Slide 41: Mean discovery rate validation for the sham NKI dataset
Table: fidelity between the predicted mean number of false positives ($E[N_{\delta,\rho}]$) and the observed number ($N_{\delta,\rho}$) for vertices of observed degree $d_i \ge \delta$, $\delta = 1, \ldots, 5$, in the sham NKI experiment shown in the figure on the next slide.
Slide 42: Waterfall plot of p-values for the sham NKI dataset
Figure: waterfall plot of log p-values for concentration hub screening of a sham version of the NKI dataset.
Slide 43: Waterfall plot of p-values for the actual NKI dataset, with selected discoveries shown (figure)
Slide 44: Waterfall plot of p-values of the NKI dataset, with the discoveries of Peng et al. shown (figure)
Slide 45: Outline (1 Background, 2 Graphical models, 3 Screening for hubs in graphical model, 4 Conclusion)
Slide 46: Final remarks
- Data-driven discovery of correlation graphs is fraught with danger in the sample-starved regime: number of variables $p \gg n$ = number of samples.
- The mean number of discoveries exhibits a sharp phase transition, and a critical phase transition threshold exists.
- Poisson-type limits hold for the number of discoveries.
- Study of theoretical performance limits is essential.
References:
- Hero and Rajaratnam, "Large scale correlation screening," JASA 2012 and arXiv.
- Hero and Rajaratnam, "Hub discovery in partial correlation graphical models," arXiv.