Dynamic Topic Models




David M. Blei, Computer Science Department, Princeton University, Princeton, NJ 08544, USA (BLEI@CS.PRINCETON.EDU)
John D. Lafferty, School of Computer Science, Carnegie Mellon University, Pittsburgh, PA 15213, USA (LAFFERTY@CS.CMU.EDU)

Appearing in Proceedings of the 23rd International Conference on Machine Learning, Pittsburgh, PA, 2006. Copyright 2006 by the author(s)/owner(s).

Abstract

A family of probabilistic time series models is developed to analyze the time evolution of topics in large document collections. The approach is to use state space models on the natural parameters of the multinomial distributions that represent the topics. Variational approximations based on Kalman filters and nonparametric wavelet regression are developed to carry out approximate posterior inference over the latent topics. In addition to giving quantitative, predictive models of a sequential corpus, dynamic topic models provide a qualitative window into the contents of a large document collection. The models are demonstrated by analyzing the OCR'ed archives of the journal Science from 1880 through 2000.

1. Introduction

Managing the explosion of electronic document archives requires new tools for automatically organizing, searching, indexing, and browsing large collections. Recent research in machine learning and statistics has developed new techniques for finding patterns of words in document collections using hierarchical probabilistic models (Blei et al., 2003; McCallum et al., 2004; Rosen-Zvi et al., 2004; Griffiths and Steyvers, 2004; Buntine and Jakulin, 2004; Blei and Lafferty, 2006). These models are called topic models because the discovered patterns often reflect the underlying topics which combined to form the documents. Such hierarchical probabilistic models are easily generalized to other kinds of data; for example, topic models have been used to analyze images (Fei-Fei and Perona, 2005; Sivic et al., 2005), biological data (Pritchard et al., 2000), and survey data (Erosheva, 2002).

In an exchangeable topic model, the words of each document are assumed to be independently drawn from a mixture of multinomials. The mixing proportions are randomly drawn for each document; the mixture components, or topics, are shared by all documents. Thus, each document reflects the components with different proportions. These models are a powerful method of dimensionality reduction for large collections of unstructured documents. Moreover, posterior inference at the document level is useful for information retrieval, classification, and topic-directed browsing.

Treating words exchangeably is a simplification that is consistent with the goal of identifying the semantic themes within each document. For many collections of interest, however, the implicit assumption of exchangeable documents is inappropriate. Document collections such as scholarly journals, email, news articles, and search query logs all reflect evolving content. For example, the Science article "The Brain of Professor Laborde" may be on the same scientific path as the article "Reshaping the Cortical Motor Map by Unmasking Latent Intracortical Connections," but the study of neuroscience looked much different in 1903 than it did in 1991. The themes in a document collection evolve over time, and it is of interest to explicitly model the dynamics of the underlying topics.

In this paper, we develop a dynamic topic model which captures the evolution of topics in a sequentially organized corpus of documents.
We demonstrate its applicability by analyzing over 100 years of OCR'ed articles from the journal Science, which was founded in 1880 by Thomas Edison and has been published through the present. Under this model, articles are grouped by year, and each year's articles arise from a set of topics that have evolved from the last year's topics.

In the subsequent sections, we extend classical state space models to specify a statistical model of topic evolution. We then develop efficient approximate posterior inference techniques for determining the evolving topics from a sequential collection of documents. Finally, we present qualitative results that demonstrate how dynamic topic models allow the exploration of a large document collection in new ways, and quantitative results that demonstrate greater predictive accuracy when compared with static topic models.

2. Dynamic Topic Models

While traditional time series modeling has focused on continuous data, topic models are designed for categorical data. Our approach is to use state space models on the natural parameter space of the underlying topic multinomials, as well as on the natural parameters for the logistic normal distributions used for modeling the document-specific topic proportions.

First, we review the underlying statistical assumptions of a static topic model, such as latent Dirichlet allocation (LDA) (Blei et al., 2003). Let \beta_{1:K} be K topics, each of which is a distribution over a fixed vocabulary. In a static topic model, each document is assumed drawn from the following generative process:

1. Choose topic proportions \theta from a distribution over the (K-1)-simplex, such as a Dirichlet.
2. For each word:
   (a) Choose a topic assignment Z \sim \mathrm{Mult}(\theta).
   (b) Choose a word W \sim \mathrm{Mult}(\beta_z).

This process implicitly assumes that the documents are drawn exchangeably from the same set of topics. For many collections, however, the order of the documents reflects an evolving set of topics. In a dynamic topic model, we suppose that the data is divided by time slice, for example by year. We model the documents of each slice with a K-component topic model, where the topics associated with slice t evolve from the topics associated with slice t-1.

For a K-component model with V terms, let \beta_{t,k} denote the V-vector of natural parameters for topic k in slice t. The usual representation of a multinomial distribution is by its mean parameterization. If we denote the mean parameter of a V-dimensional multinomial by \pi, the ith component of the natural parameter is given by the mapping \beta_i = \log(\pi_i / \pi_V). In typical language modeling applications, Dirichlet distributions are used to model uncertainty about the distributions over words. However, the Dirichlet is not amenable to sequential modeling. Instead, we chain the natural parameters of each topic \beta_{t,k} in a state space model that evolves with Gaussian noise; the simplest version of such a model is

    \beta_{t,k} \mid \beta_{t-1,k} \sim \mathcal{N}(\beta_{t-1,k}, \sigma^2 I).    (1)

Our approach is thus to model sequences of compositional random variables by chaining Gaussian distributions in a dynamic model and mapping the emitted values to the simplex. This is an extension of the logistic normal distribution (Aitchison, 1982) to time-series simplex data (West and Harrison, 1997).

Figure 1. Graphical representation of a dynamic topic model for three time slices. Each topic's natural parameters \beta_{t,k} evolve over time, together with the mean parameters \alpha_t of the logistic normal distribution for the topic proportions.

In LDA, the document-specific topic proportions \theta are drawn from a Dirichlet distribution. In the dynamic topic model, we use a logistic normal with mean \alpha to express uncertainty over proportions. The sequential structure between models is again captured with a simple dynamic model

    \alpha_t \mid \alpha_{t-1} \sim \mathcal{N}(\alpha_{t-1}, \delta^2 I).    (2)

For simplicity, we do not model the dynamics of topic correlation, as was done for static models by Blei and Lafferty (2006).

By chaining together topics and topic proportion distributions, we have sequentially tied a collection of topic models. The generative process for slice t of a sequential corpus is thus as follows:

1. Draw topics \beta_t \mid \beta_{t-1} \sim \mathcal{N}(\beta_{t-1}, \sigma^2 I).
2. Draw \alpha_t \mid \alpha_{t-1} \sim \mathcal{N}(\alpha_{t-1}, \delta^2 I).
3. For each document:
   (a) Draw \eta \sim \mathcal{N}(\alpha_t, a^2 I).
   (b) For each word:
      i. Draw Z \sim \mathrm{Mult}(\pi(\eta)).
      ii. Draw W_{t,d,n} \sim \mathrm{Mult}(\pi(\beta_{t,z})).

Note that \pi maps the multinomial natural parameters to the mean parameters, \pi(\beta_{k,t})_w = \exp(\beta_{k,t,w}) / \sum_{w} \exp(\beta_{k,t,w}). The graphical model for this generative process is shown in Figure 1. When the horizontal arrows are removed, breaking the time dynamics, the graphical model reduces to a set of independent topic models. With time dynamics, the kth topic at slice t has smoothly evolved from the kth topic at slice t-1.

For clarity of presentation, we now focus on a model with K dynamic topics evolving as in (1), and where the topic proportion model is fixed at a Dirichlet. The technical issues associated with modeling the topic proportions in a time series as in (2) are essentially the same as those for chaining the topics together.
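To make the slice-level generative process concrete, the following is a minimal NumPy sketch of drawing one time slice under the model, with the softmax mapping \pi playing the role just described. The dimensions, variances, and per-slice corpus sizes are illustrative assumptions, not values used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

V, K = 1000, 20                    # vocabulary size and number of topics (illustrative)
sigma, delta, a = 0.05, 0.05, 1.0  # chain and document-level standard deviations (assumed)

def pi(beta):
    """Map natural parameters to the simplex: pi_w = exp(beta_w) / sum_w' exp(beta_w')."""
    e = np.exp(beta - beta.max())
    return e / e.sum()

def draw_slice(beta_prev, alpha_prev, n_docs=5, n_words=50):
    """Draw topics, topic-proportion means, and documents for one time slice."""
    # 1. Topics evolve with Gaussian noise: beta_t,k | beta_{t-1},k ~ N(beta_{t-1},k, sigma^2 I)
    beta_t = beta_prev + sigma * rng.standard_normal((K, V))
    # 2. Topic-proportion mean evolves: alpha_t | alpha_{t-1} ~ N(alpha_{t-1}, delta^2 I)
    alpha_t = alpha_prev + delta * rng.standard_normal(K)
    docs = []
    for _ in range(n_docs):
        # 3(a). Document-specific logistic-normal draw: eta ~ N(alpha_t, a^2 I)
        eta = alpha_t + a * rng.standard_normal(K)
        theta = pi(eta)                          # topic proportions on the simplex
        words = []
        for _ in range(n_words):
            z = rng.choice(K, p=theta)           # 3(b)i. topic assignment
            w = rng.choice(V, p=pi(beta_t[z]))   # 3(b)ii. word drawn from topic z
            words.append(w)
        docs.append(words)
    return beta_t, alpha_t, docs

beta0 = 0.01 * rng.standard_normal((K, V))
alpha0 = np.zeros(K)
beta1, alpha1, docs = draw_slice(beta0, alpha0)
```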

3. Approximate Inference

Working with time series over the natural parameters enables the use of Gaussian models for the time dynamics; however, due to the nonconjugacy of the Gaussian and multinomial models, posterior inference is intractable. In this section, we present a variational method for approximate posterior inference. We use variational methods as deterministic alternatives to stochastic simulation, in order to handle the large data sets typical of text analysis. While Gibbs sampling has been effectively used for static topic models (Griffiths and Steyvers, 2004), nonconjugacy makes sampling methods more difficult for this dynamic model.

The idea behind variational methods is to optimize the free parameters of a distribution over the latent variables so that the distribution is close in Kullback-Leibler (KL) divergence to the true posterior; this distribution can then be used as a substitute for the true posterior. In the dynamic topic model, the latent variables are the topics \beta_{t,k}, mixture proportions \theta_{t,d}, and topic indicators z_{t,d,n}. The variational distribution reflects the group structure of the latent variables. There are variational parameters for each topic's sequence of multinomial parameters, and variational parameters for each of the document-level latent variables. The approximate variational posterior is

    \prod_{k=1}^{K} q(\beta_{k,1}, \ldots, \beta_{k,T} \mid \hat{\beta}_{k,1}, \ldots, \hat{\beta}_{k,T}) \times \prod_{t=1}^{T} \prod_{d=1}^{D_t} \Big( q(\theta_{t,d} \mid \gamma_{t,d}) \prod_{n=1}^{N_{t,d}} q(z_{t,d,n} \mid \phi_{t,d,n}) \Big).    (3)

In the commonly used mean-field approximation, each latent variable is considered independently of the others. In the variational distribution of {\beta_{k,1}, \ldots, \beta_{k,T}}, however, we retain the sequential structure of the topic by positing a dynamic model with Gaussian variational observations {\hat{\beta}_{k,1}, \ldots, \hat{\beta}_{k,T}}. These parameters are fit to minimize the KL divergence between the resulting posterior, which is Gaussian, and the true posterior, which is not Gaussian. (A similar technique for Gaussian processes is described in Snelson and Ghahramani, 2006.)

Figure 2. A graphical representation of the variational approximation for the time series topic model of Figure 1. The variational parameters \hat{\beta} and \hat{\alpha} are thought of as the outputs of a Kalman filter, or as observed data in a nonparametric regression setting.

The variational distribution of the document-level latent variables follows the same form as in Blei et al. (2003). Each proportion vector \theta_{t,d} is endowed with a free Dirichlet parameter \gamma_{t,d}, each topic indicator z_{t,d,n} is endowed with a free multinomial parameter \phi_{t,d,n}, and optimization proceeds by coordinate ascent. The updates for the document-level variational parameters have a closed form; we use the conjugate gradient method to optimize the topic-level variational observations. The resulting variational approximation for the natural topic parameters {\beta_{k,1}, \ldots, \beta_{k,T}} incorporates the time dynamics; we describe one approximation based on a Kalman filter, and a second based on wavelet regression.
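The closed-form document-level updates are the same as in Blei et al. (2003). The sketch below runs one document's coordinate ascent; the array Elog_beta stands in for whatever approximation of the expected log word probabilities under q is used (an assumption of this sketch, not the paper's exact quantity), and the function and variable names are mine.

```python
import numpy as np
from scipy.special import digamma

def document_updates(word_ids, Elog_beta, alpha, n_iters=50):
    """Closed-form coordinate ascent for one document's variational parameters.

    word_ids  : (N,) token indices for the document
    Elog_beta : (K, V) expected log word probabilities under q (however approximated)
    alpha     : (K,) Dirichlet prior on topic proportions
    """
    K = Elog_beta.shape[0]
    N = len(word_ids)
    gamma = alpha + N / K                  # common initialization
    phi = np.full((N, K), 1.0 / K)
    for _ in range(n_iters):
        # phi_{n,k} proportional to exp( digamma(gamma_k) + E[log beta_{k, w_n}] )
        log_phi = digamma(gamma)[None, :] + Elog_beta[:, word_ids].T
        log_phi -= log_phi.max(axis=1, keepdims=True)
        phi = np.exp(log_phi)
        phi /= phi.sum(axis=1, keepdims=True)
        # gamma_k = alpha_k + sum_n phi_{n,k}
        gamma = alpha + phi.sum(axis=0)
    return gamma, phi
```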
3.1. Variational Kalman Filtering

The view of the variational parameters as outputs is based on the symmetry properties of the Gaussian density, f_{\mu,\Sigma}(x) = f_{x,\Sigma}(\mu), which enables the use of the standard forward-backward calculations for linear state space models. The graphical model and its variational approximation are shown in Figure 2. Here the triangles denote variational parameters; they can be thought of as hypothetical outputs of the Kalman filter, to facilitate calculation.

To explain the main idea behind this technique in a simpler setting, consider the model where unigram models \beta_t (in the natural parameterization) evolve over time. In this model there are no topics and thus no mixing parameters. The calculations are simpler versions of those we need for the more general latent variable models, but exhibit the essential features. Our state space model is

    \beta_t \mid \beta_{t-1} \sim \mathcal{N}(\beta_{t-1}, \sigma^2 I)
    w_{t,n} \mid \beta_t \sim \mathrm{Mult}(\pi(\beta_t))

and we form the variational state space model where

    \hat{\beta}_t \mid \beta_t \sim \mathcal{N}(\beta_t, \hat{\nu}_t^2 I).

The variational parameters are \hat{\beta}_t and \hat{\nu}_t. Using standard Kalman filter calculations (Kalman, 1960), the forward mean and variance of the variational posterior are given by

    m_t \equiv E(\beta_t \mid \hat{\beta}_{1:t}) = \frac{\hat{\nu}_t^2}{V_{t-1} + \sigma^2 + \hat{\nu}_t^2} m_{t-1} + \Big(1 - \frac{\hat{\nu}_t^2}{V_{t-1} + \sigma^2 + \hat{\nu}_t^2}\Big) \hat{\beta}_t
    V_t \equiv E((\beta_t - m_t)^2 \mid \hat{\beta}_{1:t}) = \frac{\hat{\nu}_t^2}{V_{t-1} + \sigma^2 + \hat{\nu}_t^2} (V_{t-1} + \sigma^2)

with initial conditions specified by fixed m_0 and V_0. The backward recursion then calculates the marginal mean and variance of \beta_t given \hat{\beta}_{1:T} as

    \tilde{m}_{t-1} \equiv E(\beta_{t-1} \mid \hat{\beta}_{1:T}) = \frac{\sigma^2}{V_{t-1} + \sigma^2} m_{t-1} + \Big(1 - \frac{\sigma^2}{V_{t-1} + \sigma^2}\Big) \tilde{m}_t
    \tilde{V}_{t-1} \equiv E((\beta_{t-1} - \tilde{m}_{t-1})^2 \mid \hat{\beta}_{1:T}) = V_{t-1} + \Big(\frac{V_{t-1}}{V_{t-1} + \sigma^2}\Big)^2 (\tilde{V}_t - V_{t-1} - \sigma^2)

with initial conditions \tilde{m}_T = m_T and \tilde{V}_T = V_T. We approximate the posterior p(\beta_{1:T} \mid d_{1:T}) using the state space posterior q(\beta_{1:T} \mid \hat{\beta}_{1:T}). From Jensen's inequality, the log likelihood is bounded from below as

    \log p(d_{1:T}) \geq \int q(\beta_{1:T} \mid \hat{\beta}_{1:T}) \log \frac{p(\beta_{1:T}) \, p(d_{1:T} \mid \beta_{1:T})}{q(\beta_{1:T} \mid \hat{\beta}_{1:T})} \, d\beta_{1:T}    (4)
    = E_q \log p(\beta_{1:T}) + \sum_t E_q \log p(d_t \mid \beta_t) + H(q).    (5)

Details of optimizing this bound are given in an appendix.
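The forward and backward recursions above are simple enough to state directly in code. This sketch runs them for a single coordinate of \beta, treating each vocabulary entry independently given the variational observations \hat{\beta}_t and variances \hat{\nu}_t^2; the initial conditions m_0 and V_0 are assumed inputs, and it is an illustration of the equations rather than the authors' implementation.

```python
import numpy as np

def variational_kalman(beta_hat, nu_hat2, sigma2, m0=0.0, V0=1.0):
    """Forward (filter) and backward (smoother) passes for one coordinate of beta.

    beta_hat : (T,) variational observations beta_hat_t
    nu_hat2  : (T,) variational observation variances nu_hat_t^2
    sigma2   : transition variance sigma^2
    Returns the smoothed means m_tilde_1..T and variances V_tilde_1..T.
    """
    T = len(beta_hat)
    m = np.empty(T + 1)
    V = np.empty(T + 1)
    m[0], V[0] = m0, V0
    # Forward recursion:
    #   m_t = g m_{t-1} + (1 - g) beta_hat_t,  V_t = g (V_{t-1} + sigma^2),
    #   with g = nu_t^2 / (V_{t-1} + sigma^2 + nu_t^2)
    for t in range(1, T + 1):
        g = nu_hat2[t - 1] / (V[t - 1] + sigma2 + nu_hat2[t - 1])
        m[t] = g * m[t - 1] + (1.0 - g) * beta_hat[t - 1]
        V[t] = g * (V[t - 1] + sigma2)
    # Backward recursion, initialized at m_tilde_T = m_T, V_tilde_T = V_T:
    #   m_tilde_{t-1} = w m_{t-1} + (1 - w) m_tilde_t,  with w = sigma^2 / (V_{t-1} + sigma^2)
    #   V_tilde_{t-1} = V_{t-1} + (V_{t-1} / (V_{t-1} + sigma^2))^2 (V_tilde_t - V_{t-1} - sigma^2)
    m_s, V_s = m.copy(), V.copy()
    for t in range(T, 0, -1):
        w = sigma2 / (V[t - 1] + sigma2)
        m_s[t - 1] = w * m[t - 1] + (1.0 - w) * m_s[t]
        V_s[t - 1] = V[t - 1] + (V[t - 1] / (V[t - 1] + sigma2)) ** 2 * (V_s[t] - V[t - 1] - sigma2)
    return m_s[1:], V_s[1:]
```

In the full model the same recursions are applied to every topic and vocabulary coordinate, with the variational observations themselves fit by conjugate gradient as described in the appendix.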

3.2. Variational Wavelet Regression

The variational Kalman filter can be replaced with variational wavelet regression; for a readable introduction to standard wavelet methods, see Wasserman (2006). We rescale time so it is between 0 and 1. For 128 years of Science we take n = 2^J and J = 7. To be consistent with our earlier notation, we assume that

    \hat{\beta}_t = \tilde{m}_t + \hat{\nu} \epsilon_t

where \epsilon_t \sim \mathcal{N}(0, 1). Our variational wavelet regression algorithm estimates {\hat{\beta}_t}, which we view as observed data, just as in the Kalman filter method, as well as the noise level \hat{\nu}.

For concreteness, we illustrate the technique using the Haar wavelet basis; Daubechies wavelets are used in our actual examples. The model is then

    \hat{\beta}_t = \alpha \phi(x_t) + \sum_{j=0}^{J-1} \sum_{k=0}^{2^j - 1} D_{jk} \psi_{jk}(x_t)

where x_t = t/n, \phi(x) = 1 for 0 \leq x \leq 1,

    \psi(x) = 1 if 0 \leq x \leq 1/2, and -1 if 1/2 < x \leq 1,

and \psi_{jk}(x) = 2^{j/2} \psi(2^j x - k). Our variational estimate for the posterior mean becomes

    \tilde{m}_t = \hat{\alpha} \phi(x_t) + \sum_{j=0}^{J-1} \sum_{k=0}^{2^j - 1} \hat{D}_{jk} \psi_{jk}(x_t)

where \hat{\alpha} = n^{-1} \sum_{t=1}^{n} \hat{\beta}_t, and the \hat{D}_{jk} are obtained by thresholding the coefficients

    Z_{jk} = \frac{1}{n} \sum_{t=1}^{n} \hat{\beta}_t \psi_{jk}(x_t).

To estimate \hat{\beta}_t we use gradient ascent, as for the Kalman filter approximation, requiring the derivatives \partial \tilde{m}_t / \partial \hat{\beta}_s. If soft thresholding is used, then we have that

    \frac{\partial \tilde{m}_t}{\partial \hat{\beta}_s} = \frac{\partial \hat{\alpha}}{\partial \hat{\beta}_s} \phi(x_t) + \sum_{j=0}^{J-1} \sum_{k=0}^{2^j - 1} \frac{\partial \hat{D}_{jk}}{\partial \hat{\beta}_s} \psi_{jk}(x_t)

with \partial \hat{\alpha} / \partial \hat{\beta}_s = n^{-1} and

    \partial \hat{D}_{jk} / \partial \hat{\beta}_s = n^{-1} \psi_{jk}(x_s) if |Z_{jk}| > \lambda, and 0 otherwise.

Note also that |Z_{jk}| > \lambda if and only if \hat{D}_{jk} \neq 0. These derivatives can be computed using off-the-shelf software for the wavelet transform in any of the standard wavelet bases.
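A small sketch of the Haar-basis version described above, applied to one sequence of variational observations \hat{\beta}_t: compute the scaling and detail coefficients, soft-threshold the details at an assumed level \lambda, and reconstruct \tilde{m}_t. Daubechies wavelets, as used in the paper's experiments, would require a wavelet library in place of the explicit Haar functions written out here.

```python
import numpy as np

def haar_psi(x):
    """Haar mother wavelet: 1 on [0, 1/2], -1 on (1/2, 1], 0 elsewhere."""
    return np.where((x >= 0) & (x <= 0.5), 1.0,
                    np.where((x > 0.5) & (x <= 1.0), -1.0, 0.0))

def wavelet_smooth(beta_hat, lam):
    """Estimate m_tilde_t from beta_hat_t by soft-thresholded Haar wavelet regression.

    beta_hat : (n,) array with n = 2**J (e.g. J = 7 for 128 years)
    lam      : soft-threshold level lambda (assumed to be chosen by the caller)
    """
    n = len(beta_hat)
    J = int(np.log2(n))
    x = np.arange(n) / n                       # rescaled time x_t = t / n in [0, 1)
    alpha_hat = beta_hat.mean()                # scaling coefficient: (1/n) sum_t beta_hat_t
    m_tilde = np.full(n, alpha_hat)            # alpha_hat * phi(x_t), with phi = 1 on [0, 1]
    for j in range(J):
        for k in range(2 ** j):
            psi_jk = 2 ** (j / 2) * haar_psi(2 ** j * x - k)
            Z_jk = (beta_hat * psi_jk).mean()  # empirical detail coefficient Z_jk
            # soft thresholding: shrink toward zero, drop small coefficients entirely
            D_jk = np.sign(Z_jk) * max(abs(Z_jk) - lam, 0.0)
            m_tilde += D_jk * psi_jk
    return m_tilde
```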

Sample results of running this and the Kalman variational algorithm to approximate a unigram model are given in Figure 3. Both variational approximations smooth out the local fluctuations in the unigram counts, while preserving the sharp peaks that may indicate a significant change of content in the journal.

Figure 3. Comparison of the Kalman filter (top) and wavelet regression (bottom) variational approximations to a unigram model, for the words "Darwin," "Einstein," and "moon." The variational approximations (red and blue curves) smooth out the local fluctuations in the unigram counts (gray curves) of the words shown, while preserving the sharp peaks that may indicate a significant change of content in the journal. The wavelet regression is able to superresolve the double spikes in the occurrence of Einstein in the 1920s. The spike in the occurrence of Darwin near 1910 may be associated with the centennial of Darwin's birth in 1809.

While the fit is similar to that obtained using standard wavelet regression to the normalized counts, the estimates are obtained by minimizing the KL divergence as in standard variational approximations. In the dynamic topic model of Section 2, the algorithms are essentially the same as those described above. However, rather than fitting the observations from true observed counts, we fit them from expected counts under the document-level variational distributions in (3).

4. Analysis of Science

We analyzed a subset of 30,000 articles from Science, 250 from each of the 120 years between 1881 and 1999. Our data were collected by JSTOR (www.jstor.org), a not-for-profit organization that maintains an online scholarly archive obtained by running an optical character recognition (OCR) engine over the original printed journals. JSTOR indexes the resulting text and provides online access to the scanned images of the original content through keyword search.

Our corpus is made up of approximately 7.5 million words. We pruned the vocabulary by stemming each term to its root, removing function terms, and removing terms that occurred fewer than 25 times. The total vocabulary size is 15,955. To explore the corpus and its themes, we estimated a 20-component dynamic topic model. Posterior inference took approximately 4 hours on a 1.5GHz PowerPC Macintosh laptop. Two of the resulting topics are illustrated in Figure 4, showing the top several words from those topics in each decade, according to the posterior mean number of occurrences as estimated using the Kalman filter variational approximation. Also shown are example articles which exhibit those topics through the decades. As illustrated, the model captures different scientific themes, and can be used to inspect trends of word usage within them.

To validate the dynamic topic model quantitatively, we consider the task of predicting the next year of Science given all the articles from the previous years. We compare the predictive power of three 20-topic models: the dynamic topic model estimated from all of the previous years, a static topic model estimated from all of the previous years, and a static topic model estimated from the single previous year. All the models are estimated to the same convergence criterion. The topic model estimated from all the previous data and dynamic topic model are initialized at the same point.

The dynamic topic model performs well; it always assigns higher likelihood to the next year's articles than the other two models (Figure 5). It is interesting that the predictive power of each of the models declines over the years.
We can tentatively attribute this to an increase in the rate of specialization in scientific language.
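As a rough illustration of the vocabulary pruning described at the start of this section, the sketch below stems terms, drops a small stop-word list, and removes terms occurring fewer than 25 times. The PorterStemmer from NLTK and the tiny stop list are stand-ins chosen for the example, not the tools used for the paper.

```python
from collections import Counter
from nltk.stem import PorterStemmer

def build_vocabulary(documents, min_count=25,
                     stopwords=frozenset({"the", "of", "and", "a", "in"})):
    """Stem terms, drop function words, and keep terms occurring at least min_count times.

    documents : iterable of token lists (already lowercased and tokenized)
    """
    stemmer = PorterStemmer()
    counts = Counter()
    for tokens in documents:
        for tok in tokens:
            if tok in stopwords:
                continue
            counts[stemmer.stem(tok)] += 1
    return {term for term, c in counts.items() if c >= min_count}
```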

Figure 4. Examples from the posterior analysis of a 20-topic dynamic model estimated from the Science corpus. For two topics, we illustrate: (a) the top ten words from the inferred posterior distribution at ten-year lags; (b) the posterior estimate of the frequency as a function of year of several words from the same two topics; (c) example articles throughout the collection which exhibit these topics. Note that the plots are scaled to give an idea of the shape of the trajectory of the words' posterior probability (i.e., comparisons across words are not meaningful).

5. Discussion

We have developed sequential topic models for discrete data by using Gaussian time series on the natural parameters of the multinomial topics and logistic normal topic proportion models. We derived variational inference algorithms that exploit existing techniques for sequential data; we demonstrated a novel use of Kalman filters and wavelet regression as variational approximations. Dynamic topic models can give a more accurate predictive model, and also offer new ways of browsing large, unstructured document collections.

There are many ways that the work described here can be extended. One direction is to use more sophisticated state space models. We have demonstrated the use of a simple Gaussian model, but it would be natural to include a drift term in a more sophisticated autoregressive model to explicitly capture the rise and fall in popularity of a topic, or in the use of specific terms. Another variant would allow for heteroscedastic time series.

Perhaps the most promising extension to the methods presented here is to incorporate a model of how new topics in the collection appear or disappear over time, rather than assuming a fixed number of topics. One possibility is to use a simple Galton-Watson or birth-death process for the topic population. While the analysis of birth-death or branching processes often centers on extinction probabilities, here a goal would be to find documents that may be responsible for spawning new themes in a collection.

Figure 5. This figure illustrates the performance of using dynamic topic models and static topic models for prediction. For each year between 1900 and 2000 (at 5-year increments), we estimated three models on the articles through that year. We then computed the variational bound on the negative log likelihood of next year's articles under the resulting model (lower numbers are better; the plot shows negative log likelihood, on a log scale, against year). DTM is the dynamic topic model; LDA-prev is a static topic model estimated on just the previous year's articles; LDA-all is a static topic model estimated on all the previous articles.

Acknowledgments

This research was supported in part by NSF grants IIS-0312814 and IIS-0427206, the DARPA CALO project, and a grant from Google.

References

Aitchison, J. (1982). The statistical analysis of compositional data. Journal of the Royal Statistical Society, Series B, 44(2):139-177.

Blei, D., Ng, A., and Jordan, M. (2003). Latent Dirichlet allocation. Journal of Machine Learning Research, 3:993-1022.

Blei, D. M. and Lafferty, J. D. (2006). Correlated topic models. In Weiss, Y., Schölkopf, B., and Platt, J., editors, Advances in Neural Information Processing Systems 18. MIT Press, Cambridge, MA.

Buntine, W. and Jakulin, A. (2004). Applying discrete PCA in data analysis. In Proceedings of the 20th Conference on Uncertainty in Artificial Intelligence, pages 59-66. AUAI Press.

Erosheva, E. (2002). Grade of membership and latent structure models with application to disability survey data. PhD thesis, Carnegie Mellon University, Department of Statistics.

Fei-Fei, L. and Perona, P. (2005). A Bayesian hierarchical model for learning natural scene categories. IEEE Computer Vision and Pattern Recognition.

Griffiths, T. and Steyvers, M. (2004). Finding scientific topics. Proceedings of the National Academy of Sciences, 101:5228-5235.

Kalman, R. (1960). A new approach to linear filtering and prediction problems. Transactions of the ASME: Journal of Basic Engineering, 82:35-45.

McCallum, A., Corrada-Emmanuel, A., and Wang, X. (2004). The author-recipient-topic model for topic and role discovery in social networks: Experiments with Enron and academic email. Technical report, University of Massachusetts, Amherst.

Pritchard, J., Stephens, M., and Donnelly, P. (2000). Inference of population structure using multilocus genotype data. Genetics, 155:945-959.

Rosen-Zvi, M., Griffiths, T., Steyvers, M., and Smyth, P. (2004). The author-topic model for authors and documents. In Proceedings of the 20th Conference on Uncertainty in Artificial Intelligence, pages 487-494. AUAI Press.

Sivic, J., Russell, B., Efros, A., Zisserman, A., and Freeman, W. (2005). Discovering objects and their location in images. In International Conference on Computer Vision (ICCV 2005).

Snelson, E. and Ghahramani, Z. (2006). Sparse Gaussian processes using pseudo-inputs. In Weiss, Y., Schölkopf, B., and Platt, J., editors, Advances in Neural Information Processing Systems 18. MIT Press, Cambridge, MA.

Wasserman, L. (2006). All of Nonparametric Statistics. Springer.

West, M. and Harrison, J. (1997). Bayesian Forecasting and Dynamic Models. Springer.

A. Derivation of Variational Algorithm

In this appendix we give some details of the variational algorithm outlined in Section 3.1, which calculates a distribution q(\beta_{1:T} \mid \hat{\beta}_{1:T}) to maximize the lower bound on \log p(d_{1:T}).

The first term of the right-hand side of (5) is

    E_q \log p(\beta_{1:T}) = -\frac{VT}{2} (\log \sigma^2 + \log 2\pi) - \frac{1}{2\sigma^2} \sum_{t=1}^{T} E_q (\beta_t - \beta_{t-1})^T (\beta_t - \beta_{t-1})
    = -\frac{VT}{2} (\log \sigma^2 + \log 2\pi) - \frac{1}{2\sigma^2} \sum_{t=1}^{T} \| \tilde{m}_t - \tilde{m}_{t-1} \|^2 - \frac{1}{\sigma^2} \sum_{t=1}^{T} \mathrm{Tr}(\tilde{V}_t) + \frac{1}{2\sigma^2} \big( \mathrm{Tr}(\tilde{V}_0) - \mathrm{Tr}(\tilde{V}_T) \big)

using the Gaussian quadratic form identity

    E_{m,V} (x - \mu)^T \Sigma^{-1} (x - \mu) = (m - \mu)^T \Sigma^{-1} (m - \mu) + \mathrm{Tr}(\Sigma^{-1} V).

The second term of (5) is

    \sum_{t} E_q \log p(d_t \mid \beta_t) = \sum_{t} \Big( \sum_{w} n_{t,w} E_q \beta_{t,w} - n_t E_q \log \sum_{w} \exp(\beta_{t,w}) \Big)
    \geq \sum_{t} \Big( \sum_{w} n_{t,w} \tilde{m}_{t,w} - n_t \hat{\zeta}_t^{-1} \sum_{w} \exp(\tilde{m}_{t,w} + \tilde{V}_{t,w}/2) + n_t - n_t \log \hat{\zeta}_t \Big)

where n_{t,w} denotes the count of word w in slice t and n_t = \sum_w n_{t,w}, introducing additional variational parameters \hat{\zeta}_{1:T}. The third term of (5) is the entropy

    H(q) = \sum_{t} \Big( \frac{1}{2} \sum_{w} \log \tilde{V}_{t,w} + \frac{V}{2} \log 2\pi \Big) = \frac{1}{2} \sum_{t} \sum_{w} \log \tilde{V}_{t,w} + \frac{TV}{2} \log 2\pi.

To maximize the lower bound as a function of the variational parameters we use a conjugate gradient algorithm. First, we maximize with respect to \hat{\zeta}; the derivative is

    \frac{\partial \ell}{\partial \hat{\zeta}_t} = \frac{n_t}{\hat{\zeta}_t^2} \sum_{w} \exp(\tilde{m}_{t,w} + \tilde{V}_{t,w}/2) - \frac{n_t}{\hat{\zeta}_t}.

Setting this to zero and solving for \hat{\zeta}_t gives

    \hat{\zeta}_t = \sum_{w} \exp(\tilde{m}_{t,w} + \tilde{V}_{t,w}/2).

Next, we maximize with respect to \hat{\beta}_{s,w}:

    \frac{\partial \ell(\hat{\beta}, \hat{\nu})}{\partial \hat{\beta}_{s,w}} = -\frac{1}{\sigma^2} \sum_{t=1}^{T} (\tilde{m}_{t,w} - \tilde{m}_{t-1,w}) \Big( \frac{\partial \tilde{m}_{t,w}}{\partial \hat{\beta}_{s,w}} - \frac{\partial \tilde{m}_{t-1,w}}{\partial \hat{\beta}_{s,w}} \Big) + \sum_{t=1}^{T} \Big( n_{t,w} - n_t \hat{\zeta}_t^{-1} \exp(\tilde{m}_{t,w} + \tilde{V}_{t,w}/2) \Big) \frac{\partial \tilde{m}_{t,w}}{\partial \hat{\beta}_{s,w}}.

The forward-backward equations for \tilde{m}_t can be used to derive a recurrence for \partial \tilde{m}_t / \partial \hat{\beta}_s. The forward recurrence is

    \frac{\partial m_t}{\partial \hat{\beta}_s} = \frac{\hat{\nu}_t^2}{V_{t-1} + \sigma^2 + \hat{\nu}_t^2} \frac{\partial m_{t-1}}{\partial \hat{\beta}_s} + \Big( 1 - \frac{\hat{\nu}_t^2}{V_{t-1} + \sigma^2 + \hat{\nu}_t^2} \Big) \delta_{s,t}

with the initial condition \partial m_0 / \partial \hat{\beta}_s = 0. The backward recurrence is then

    \frac{\partial \tilde{m}_{t-1}}{\partial \hat{\beta}_s} = \frac{\sigma^2}{V_{t-1} + \sigma^2} \frac{\partial m_{t-1}}{\partial \hat{\beta}_s} + \Big( 1 - \frac{\sigma^2}{V_{t-1} + \sigma^2} \Big) \frac{\partial \tilde{m}_t}{\partial \hat{\beta}_s}

with the initial condition \partial \tilde{m}_T / \partial \hat{\beta}_s = \partial m_T / \partial \hat{\beta}_s.
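The closed-form \hat{\zeta} update is the simplest piece of this coordinate ascent and is easy to check numerically. A minimal sketch, assuming m_tilde and V_tilde hold the T x V smoothed means and variances from Section 3.1:

```python
import numpy as np

def update_zeta(m_tilde, V_tilde):
    """zeta_hat_t = sum_w exp(m_tilde_{t,w} + V_tilde_{t,w} / 2), one value per time slice."""
    return np.exp(m_tilde + V_tilde / 2.0).sum(axis=1)
```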