Mean Field Theory for Sigmoid Belief Networks

Journal of Artificial Intelligence Research 4 (1996). Submitted 11/95; published 3/96.

Mean Field Theory for Sigmoid Belief Networks

Lawrence K. Saul, Tommi Jaakkola, Michael I. Jordan
Center for Biological and Computational Learning
Massachusetts Institute of Technology
79 Amherst Street, Cambridge, MA

Abstract

We develop a mean field theory for sigmoid belief networks based on ideas from statistical mechanics. Our mean field theory provides a tractable approximation to the true probability distribution in these networks; it also yields a lower bound on the likelihood of evidence. We demonstrate the utility of this framework on a benchmark problem in statistical pattern recognition: the classification of handwritten digits.

1. Introduction

Bayesian belief networks (Pearl, 1988; Lauritzen & Spiegelhalter, 1988) provide a rich graphical representation of probabilistic models. The nodes in these networks represent random variables, while the links represent causal influences. These associations endow directed acyclic graphs (DAGs) with a precise probabilistic semantics. The ease of interpretation afforded by this semantics explains the growing appeal of belief networks, now widely used as models of planning, reasoning, and uncertainty.

Inference and learning in belief networks are possible insofar as one can efficiently compute (or approximate) the likelihood of observed patterns of evidence (Buntine, 1994; Russell, Binder, Koller, & Kanazawa, 1995). There exist provably efficient algorithms for computing likelihoods in belief networks with tree or chain-like architectures. In practice, these algorithms also tend to perform well on more general sparse networks. However, for networks in which nodes have many parents, the exact algorithms are too slow (Jensen, Kong, & Kjærulff, 1995). Indeed, in large networks with dense or layered connectivity, exact methods are intractable, as they require summing over an exponentially large number of hidden states. One approach to dealing with such networks has been to use Gibbs sampling (Pearl, 1988), a stochastic simulation methodology with roots in statistical mechanics (Geman & Geman, 1984).

Our approach in this paper relies on a different tool from statistical mechanics: mean field theory (Parisi, 1988). The mean field approximation is well known for probabilistic models that can be represented as undirected graphs, so-called Markov networks. For example, in Boltzmann machines (Ackley, Hinton, & Sejnowski, 1985), mean field learning rules have been shown to yield tremendous savings in time and computation over sampling-based methods (Peterson & Anderson, 1987). The main motivation for this work was to extend the mean field approximation for undirected graphical models to their directed counterparts. Since belief networks can be transformed to Markov networks, and mean field theories for Markov networks are well known, it is natural to ask why a new framework is required at all. The reason is that probabilistic models which have compact representations as DAGs may have unwieldy representations as undirected graphs. As we shall see, avoiding this complexity and working directly on DAGs requires an extension of existing methods.

In this paper we focus on sigmoid belief networks (Neal, 1992), for which the resulting mean field theory is most straightforward. These are networks of binary random variables whose local conditional distributions are based on log-linear models.

© 1996 AI Access Foundation and Morgan Kaufmann Publishers. All rights reserved.

We develop a mean field approximation for these networks and use it to compute a lower bound on the likelihood of evidence. Our method applies to arbitrary partial instantiations of the variables in these networks and makes no restrictions on the network topology. Note that once a lower bound is available, a learning procedure can maximize the lower bound; this is useful when the true likelihood itself cannot be computed efficiently. A similar approximation for models of continuous random variables is discussed by Jaakkola et al. (1995).

The idea of bounding the likelihood in sigmoid belief networks was introduced in a related architecture known as the Helmholtz machine (Hinton, Dayan, Frey, & Neal, 1995). A fundamental advance of this work was to establish a framework for approximation that is especially conducive to learning the parameters of layered belief networks. The close connection between this idea and the mean field approximation from statistical mechanics, however, was not developed. In this paper we hope not only to elucidate this connection, but also to convey a sense of which approximations are likely to generate useful lower bounds while, at the same time, remaining analytically tractable. We develop here what is perhaps the simplest such approximation for belief networks, noting that more sophisticated methods (Jaakkola & Jordan, 1996a; Saul & Jordan, 1995) are also available. It should be emphasized that approximations of some form are required to handle the multilayer neural networks used in statistical pattern recognition. For these networks, exact algorithms are hopelessly intractable; moreover, Gibbs sampling methods are impractically slow.

The organization of this paper is as follows. Section 2 introduces the problems of inference and learning in sigmoid belief networks. Section 3 contains the main contribution of the paper: a tractable mean field theory. Here we present the mean field approximation for sigmoid belief networks and derive a lower bound on the likelihood of instantiated patterns of evidence. Section 4 looks at a mean field algorithm for learning the parameters of sigmoid belief networks. For this algorithm, we give results on a benchmark problem in pattern recognition: the classification of handwritten digits. Finally, section 5 presents our conclusions, as well as future issues for research.

2. Sigmoid Belief Networks

The great virtue of belief networks is that they clearly exhibit the conditional dependencies of the underlying probability model. Consider a belief network defined over binary random variables $S = (S_1, S_2, \ldots, S_N)$. We denote the parents of $S_i$ by $\mathrm{pa}(S_i) \subseteq \{S_1, S_2, \ldots, S_{i-1}\}$; this is the smallest set of nodes for which

$$P(S_i \mid S_1, S_2, \ldots, S_{i-1}) = P(S_i \mid \mathrm{pa}(S_i)). \qquad (1)$$

In sigmoid belief networks (Neal, 1992), the conditional distributions attached to each node are based on log-linear models. In particular, the probability that the $i$th node is activated is given by

$$P(S_i = 1 \mid \mathrm{pa}(S_i)) = \sigma\Big(\sum_j J_{ij} S_j + h_i\Big), \qquad (2)$$

where $J_{ij}$ and $h_i$ are the weights and biases in the network, and

$$\sigma(z) = \frac{1}{1 + e^{-z}} \qquad (3)$$

is the sigmoid function shown in Figure 1. In sigmoid belief networks, we have $J_{ij} = 0$ for $S_j \notin \mathrm{pa}(S_i)$; moreover, $J_{ij} = 0$ for $j \ge i$, since the network's structure is that of a directed acyclic graph.

The sigmoid function in eq. (2) provides a compact parametrization of the conditional probability distributions^1 used to propagate beliefs. In particular, $P(S_i \mid \mathrm{pa}(S_i))$ depends on $\mathrm{pa}(S_i)$ only through a sum of weighted inputs, where the weights may be viewed as the parameters in a logistic regression (McCullagh & Nelder, 1983).

1. The relation to noisy-OR models is discussed in Appendix A.
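To make the parametrization concrete, here is a minimal Python sketch (ours, not from the paper; numpy and the helper names are assumptions) that draws a joint sample by ancestral sampling, applying eq. (2) at each node in topological order:

```python
import numpy as np

def sigmoid(z):
    # Eq. (3): maps a unit's summed input to an activation probability.
    return 1.0 / (1.0 + np.exp(-z))

def sample_network(J, h, rng):
    """Ancestral sampling of S ~ P(S) via eq. (2).

    J is assumed strictly lower triangular (J[i, j] = 0 for j >= i), so
    every unit's parents precede it and are already sampled when needed.
    """
    N = len(h)
    S = np.zeros(N)
    for i in range(N):
        p_on = sigmoid(J[i] @ S + h[i])   # unsampled units contribute 0
        S[i] = float(rng.random() < p_on)
    return S

rng = np.random.default_rng(0)
J = np.tril(rng.uniform(-1.0, 1.0, (6, 6)), k=-1)   # a random 6-unit DAG
h = rng.uniform(-1.0, 1.0, 6)
print(sample_network(J, h, rng))
```

The strictly lower triangular weight matrix encodes the DAG constraint $J_{ij} = 0$ for $j \ge i$ directly.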

Figure 1: Sigmoid function $\sigma(z) = [1 + e^{-z}]^{-1}$. If $z_i$ is the sum of weighted inputs to node $S_i$, then $P(S_i = 1 \mid z_i) = \sigma(z_i)$ is the conditional probability that node $S_i$ is activated.

The conditional probability distribution for $S_i$ may be summarized as:

$$P(S_i \mid \mathrm{pa}(S_i)) = \frac{\exp\big[\big(\sum_j J_{ij} S_j + h_i\big) S_i\big]}{1 + \exp\big[\sum_j J_{ij} S_j + h_i\big]}. \qquad (4)$$

Note that substituting $S_i = 1$ in eq. (4) recovers the result in eq. (2). Combining eqs. (1) and (4), we may write the joint probability distribution over the variables in the network as:

$$P(S) = \prod_i P(S_i \mid \mathrm{pa}(S_i)) \qquad (5)$$

$$= \prod_i \frac{\exp\big[\big(\sum_j J_{ij} S_j + h_i\big) S_i\big]}{1 + \exp\big[\sum_j J_{ij} S_j + h_i\big]}. \qquad (6)$$

The denominator in eq. (6) ensures that the probability distribution is normalized to unity.

We now turn to the problem of inference in sigmoid belief networks. Absorbing evidence divides the units in the belief network into two types, visible and hidden. The visible units (or "evidence nodes") are those for which we have instantiated values; the hidden units are those for which we do not. When there is no possible ambiguity, we will use $H$ and $V$ to denote the subsets of hidden and visible units. Using Bayes' rule, inference is done under the conditional distribution

$$P(H|V) = \frac{P(H,V)}{P(V)}, \qquad (7)$$

where

$$P(V) = \sum_H P(H,V) \qquad (8)$$

is the likelihood of the evidence $V$. In principle, the likelihood may be computed by summing over all $2^{|H|}$ configurations of the hidden units. Unfortunately, this calculation is intractable in large, densely connected networks. This intractability presents a major obstacle to learning parameters for these networks, as nearly all procedures for statistical estimation require frequent estimates of the likelihood. The calculations for exact probabilistic inference are beset by the same difficulties.
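For small networks, eq. (8) can be evaluated by brute force; the sketch below (ours, not from the paper) makes the $2^{|H|}$ cost explicit:

```python
import itertools
import numpy as np

def log_joint(S, J, h):
    # ln of eq. (6): each unit i contributes z_i*S_i - ln(1 + e^{z_i}).
    z = J @ S + h
    return float(np.sum(z * S - np.logaddexp(0.0, z)))

def log_likelihood(V, visible, hidden, J, h):
    # Eq. (8): ln P(V), summing P(H, V) over all 2^|H| hidden states.
    S = np.zeros(len(h))
    S[visible] = V
    log_terms = []
    for bits in itertools.product([0.0, 1.0], repeat=len(hidden)):
        S[hidden] = bits
        log_terms.append(log_joint(S, J, h))
    return float(np.logaddexp.reduce(log_terms))
```

Each additional hidden unit doubles the running time, which is precisely the intractability that motivates the approximation developed next.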

Unable to compute $P(V)$ or work directly with $P(H|V)$, we will resort to an approximation from statistical physics known as mean field theory.

3. Mean Field Theory

The mean field approximation appears under a multitude of guises in the physics literature; indeed, it is "almost as old as statistical mechanics" (Itzykson & Drouffe, 1991). Let us briefly explain how it acquired its name and why it is so ubiquitous. In the physical models described by Markov networks, the variables $S_i$ represent localized magnetic moments (e.g., at the sites of a crystal lattice), and the sums $\sum_j J_{ij} S_j + h_i$ represent local magnetic fields. Roughly speaking, in certain cases a central limit theorem may be applied to these sums, and a useful approximation is to ignore the fluctuations in these fields and replace them by their mean value: hence the name, "mean field" theory. In some models, this is an excellent approximation; in others, a poor one. Because of its simplicity, however, it is widely used as a first step in understanding many types of physical phenomena.

Though this explains the philological origins of mean field theory, there are in fact many ways to derive what amounts to the same approximation (Parisi, 1988). In this paper we present the formulation most appropriate for inference and learning in graphical models. In particular, we view mean field theory as a principled method for approximating an intractable graphical model by a tractable one. This is done via a variational principle that chooses the parameters of the tractable model to minimize an entropic measure of error. The basic framework of mean field theory remains the same for directed graphs, though we have found it necessary to introduce extra mean field parameters in addition to the usual ones. As in Markov networks, one finds a set of nonlinear equations for the mean field parameters that can be solved by iteration. In practice, we have found this iteration to converge fairly quickly and to scale well to large networks.

Let us now return to the problem posed at the end of the last section. There we found that for many belief networks, it was intractable to decompose the joint distribution as $P(S) = P(H|V)P(V)$, where $P(V)$ is the likelihood of the evidence $V$. For the purposes of probabilistic modeling, mean field theory has two main virtues. First, it provides a tractable approximation, $Q(H|V) \approx P(H|V)$, to the conditional distributions required for inference. Second, it provides a lower bound on the likelihoods required for learning.

Let us first consider the origin of the lower bound. Clearly, for any approximating distribution $Q(H|V)$, we have the equality:

$$\ln P(V) = \ln \sum_H P(H,V) \qquad (9)$$

$$= \ln \sum_H Q(H|V)\left[\frac{P(H,V)}{Q(H|V)}\right]. \qquad (10)$$

To obtain a lower bound, we now apply Jensen's inequality (Cover & Thomas, 1991), pushing the logarithm through the sum over hidden states and into the expectation:

$$\ln P(V) \ \ge\ \sum_H Q(H|V)\,\ln\left[\frac{P(H,V)}{Q(H|V)}\right]. \qquad (11)$$

It is straightforward to verify that the difference between the left and right hand sides of eq. (11) is the Kullback-Leibler divergence (Cover & Thomas, 1991):

$$\mathrm{KL}(Q\,\|\,P) = \sum_H Q(H|V)\,\ln\left[\frac{Q(H|V)}{P(H|V)}\right]. \qquad (12)$$

Thus, the better the approximation to $P(H|V)$, the tighter the bound on $\ln P(V)$.
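The bound of eq. (11) can be checked by brute force on a tiny network. The sketch below (ours; it anticipates the factorized form of eq. (15) and reuses `log_joint` from the previous snippet) evaluates the right hand side for a $Q$ in which hidden unit `hidden[k]` is an independent Bernoulli variable with mean `mu[k]` in (0, 1):

```python
import itertools
import numpy as np

def jensen_bound(mu, V, visible, hidden, J, h):
    # Right hand side of eq. (11), evaluated by enumeration.
    S = np.zeros(len(h))
    S[visible] = V
    bound = 0.0
    for bits in itertools.product([0.0, 1.0], repeat=len(hidden)):
        S[hidden] = bits
        q = np.prod([m if b else 1.0 - m for m, b in zip(mu, bits)])
        bound += q * (log_joint(S, J, h) - np.log(q))
    return bound
```

Subtracting this value from the exact `log_likelihood(...)` gives the divergence of eq. (12), which is never negative.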

Anticipating the connection to statistical mechanics, we will refer to $Q(H|V)$ as the mean field distribution. It is natural to divide the calculation of the bound into two components, both of which are particular averages over this approximating distribution. These components are the mean field entropy and energy; the overall bound is given by their difference:

$$\ln P(V) \ \ge\ \left(-\sum_H Q(H|V)\ln Q(H|V)\right) - \left(-\sum_H Q(H|V)\ln P(H,V)\right). \qquad (13)$$

Both terms have physical interpretations. The first measures the amount of uncertainty in the mean field distribution and follows the standard definition of entropy. The second measures the average value^2 of $-\ln P(H,V)$; the name "energy" arises from interpreting the probability distributions in belief networks as Boltzmann distributions^3 at unit temperature. In this case, the energy of each network configuration is given (up to a constant) by minus the logarithm of its probability under the Boltzmann distribution. In sigmoid belief networks, the energy has the form

$$-\ln P(H,V) = -\sum_{ij} J_{ij} S_i S_j - \sum_i h_i S_i + \sum_i \ln\left[1 + \exp\Big(\sum_j J_{ij} S_j + h_i\Big)\right], \qquad (14)$$

as follows from eq. (6). The first two terms in this equation are familiar from Markov networks with pairwise interactions (Hertz, Krogh, & Palmer, 1991); the last term is peculiar to sigmoid belief networks. Note that the overall energy is neither a linear function of the weights nor a polynomial function of the units. This is the price we pay in sigmoid belief networks for identifying $P(H|V)$ as a Boltzmann distribution and the likelihood $P(V)$ as its partition function. Note that this identification was made implicitly in the form of eqs. (7) and (8).

The bound in eq. (11) is valid for any probability distribution $Q(H|V)$. To make use of it, however, we must choose a distribution that enables us to evaluate the right hand side of eq. (11). Consider the factorized distribution

$$Q(H|V) = \prod_{i \in H} \mu_i^{S_i}\,(1 - \mu_i)^{1 - S_i}, \qquad (15)$$

in which the binary hidden units $\{S_i\}_{i \in H}$ appear as independent Bernoulli variables with adjustable means $\mu_i$. A mean field approximation is obtained by substituting the factorized distribution, eq. (15), for the true Boltzmann distribution, eq. (7). It may seem that this approximation replaces the rich probabilistic dependencies in $P(H|V)$ by an impoverished assumption of complete factorizability. Though this is true to some degree, the reader should keep in mind that the values we choose for $\{\mu_i\}_{i \in H}$ (and hence the statistics of the hidden units) will depend on the evidence $V$.

The best approximation of the form, eq. (15), is found by choosing the mean values $\{\mu_i\}_{i \in H}$ that minimize the Kullback-Leibler divergence, $\mathrm{KL}(Q\,\|\,P)$. This is equivalent to minimizing the gap between the true log-likelihood, $\ln P(V)$, and the lower bound obtained from mean field theory.

2. A similar average is performed in the E-step of an EM algorithm (Dempster, Laird, & Rubin, 1977); the difference here is that the average is performed over the mean field distribution, $Q(H|V)$, rather than the true posterior, $P(H|V)$. For a related discussion, see Neal & Hinton (1993).

3. Our terminology is as follows. Let $S$ denote the degrees of freedom in a statistical mechanical system. The energy of the system, $E(S)$, is a real-valued function of these degrees of freedom, and the Boltzmann distribution $P(S) = e^{-\beta E(S)} / \sum_S e^{-\beta E(S)}$ defines a probability distribution over the possible configurations of $S$. The parameter $\beta$ is the inverse temperature; it serves to calibrate the energy scale and will be fixed to unity in our discussion of belief networks. Finally, the sum in the denominator, known as the partition function, ensures that the Boltzmann distribution is normalized to unity.
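In code, the energy of eq. (14) is a one-liner, and sampling the factorized distribution of eq. (15) is equally direct (a sketch under the same assumptions as the earlier snippets):

```python
import numpy as np

def energy(S, J, h):
    # Eq. (14): -ln P(H, V); equals -log_joint(S, J, h) from the earlier
    # sketch. The ln(1 + e^z) terms have no pairwise Markov analogue.
    z = J @ S + h
    return float(-(S @ (J @ S)) - h @ S + np.sum(np.logaddexp(0.0, z)))

def sample_Q(mu, rng):
    # Eq. (15): hidden units are independent Bernoulli with means mu.
    return (rng.random(len(mu)) < mu).astype(float)
```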

The mean field bound on the log-likelihood may be calculated by substituting eq. (15) into the right hand side of eq. (11). The result of this calculation is

$$\ln P(V) \ \ge\ \sum_{ij} J_{ij}\mu_i\mu_j + \sum_i h_i\mu_i - \sum_i \left\langle \ln\left[1 + e^{\sum_j J_{ij}S_j + h_i}\right]\right\rangle$$
$$\qquad\qquad - \sum_i \left[\mu_i\ln\mu_i + (1 - \mu_i)\ln(1 - \mu_i)\right], \qquad (16)$$

where $\langle\cdot\rangle$ indicates an expectation value over the mean field distribution, eq. (15). The terms in the first line of eq. (16) represent the mean field energy, derived from eq. (14); those in the second represent the mean field entropy. In a slight abuse of notation, we have defined mean values $\mu_i$ for the visible units; these of course are set to the instantiated values $\mu_i \in \{0,1\}$.

Note that to compute the average energy in the mean field approximation, we must find the expected value $\langle\ln[1 + e^{z_i}]\rangle$, where $z_i = \sum_j J_{ij}S_j + h_i$ is the sum of weighted inputs to the $i$th unit in the belief network. Unfortunately, even under the mean field assumption that the hidden units are uncorrelated, this average does not have a simple closed form. This term does not arise in the mean field theory for Markov networks with pairwise interactions; again, it is peculiar to sigmoid belief networks. In principle, the average may be performed by enumerating the possible states of $\mathrm{pa}(S_i)$. The result of this calculation, however, would be an extremely unwieldy function of the parameters in the belief network. This reflects the fact that in general, the sigmoid belief network defined by the weights $J_{ij}$ has an equivalent Markov network with $N$th order interactions, not pairwise ones. To avoid this complexity, we must develop a mean field theory that works directly on DAGs.

How we handle the expected value $\langle\ln[1 + e^{z_i}]\rangle$ is what distinguishes our mean field theory from previous ones. Unable to compute this term exactly, we resort to another bound. Note that for any random variable $z$ and any real number $\xi$, we have the equality:

$$\langle\ln[1 + e^z]\rangle = \left\langle\ln\left[e^{\xi z}\,e^{-\xi z}(1 + e^z)\right]\right\rangle \qquad (17)$$

$$= \xi\langle z\rangle + \left\langle\ln\left[e^{-\xi z} + e^{(1-\xi)z}\right]\right\rangle. \qquad (18)$$

We can upper bound the right hand side by applying Jensen's inequality in the opposite direction as before, pulling the logarithm outside the expectation:

$$\langle\ln[1 + e^z]\rangle \ \le\ \xi\langle z\rangle + \ln\left\langle e^{-\xi z} + e^{(1-\xi)z}\right\rangle. \qquad (19)$$

Setting $\xi = 0$ in eq. (19) gives the standard bound $\langle\ln(1 + e^z)\rangle \le \ln\langle 1 + e^z\rangle$. A tighter bound (Seung, 1995) can be obtained, however, by allowing non-zero values of $\xi$. This is illustrated in Figure 2 for the special case where $z$ is a Gaussian distributed random variable with zero mean and unit variance. The bound in eq. (19) has two useful properties, which we state here without proof: (i) the right hand side is a convex function of $\xi$; (ii) the value of $\xi$ which minimizes this function occurs in the interval $\xi \in [0,1]$. Thus, provided it is possible to evaluate eq. (19) for different values of $\xi$, the tightest bound of this form can be found by a simple one-dimensional minimization.

The above bound can be put to immediate use by attaching an extra mean field parameter $\xi_i$ to each unit in the belief network. We can then upper bound the intractable terms in the mean field energy by

$$\left\langle\ln\left[1 + e^{\sum_j J_{ij}S_j + h_i}\right]\right\rangle \ \le\ \xi_i\Big(\sum_j J_{ij}\mu_j + h_i\Big) + \ln\left\langle e^{-\xi_i z_i} + e^{(1-\xi_i)z_i}\right\rangle, \qquad (20)$$

where $z_i = \sum_j J_{ij}S_j + h_i$.
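The numbers quoted in Figure 2 are easy to reproduce. For $z \sim N(0,1)$ we have $\langle z\rangle = 0$ and $\langle e^{az}\rangle = e^{a^2/2}$, so the right hand side of eq. (19) is $\ln\big[e^{\xi^2/2} + e^{(1-\xi)^2/2}\big]$. A short sketch (ours, assuming scipy is available):

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import minimize_scalar

# Exact <ln(1 + e^z)> under a standard Gaussian, by quadrature.
exact, _ = quad(lambda z: np.logaddexp(0.0, z)
                * np.exp(-z**2 / 2) / np.sqrt(2 * np.pi), -30.0, 30.0)

# Eq. (19) specialized to z ~ N(0, 1): <z> = 0, <e^{az}> = e^{a^2/2}.
bound = lambda xi: np.logaddexp(xi**2 / 2, (1 - xi)**2 / 2)
res = minimize_scalar(bound, bounds=(0.0, 1.0), method='bounded')
print(exact, bound(0.0), res.x, res.fun)
```

This prints roughly 0.806 (the exact average), 0.974 (the standard $\xi = 0$ bound), and a minimum of about 0.818 at $\xi = 1/2$, matching Figure 2.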

Figure 2: Bound in eq. (19) for the case where $z$ is normally distributed with zero mean and unit variance. In this case, the exact result is $\langle\ln(1 + e^z)\rangle = 0.806$; the bound gives $\min_\xi\,\ln\big[e^{\frac{1}{2}\xi^2} + e^{\frac{1}{2}(1-\xi)^2}\big] = 0.818$. The standard bound from Jensen's inequality occurs at $\xi = 0$ and gives $0.974$.

The expectations inside the logarithm can be evaluated exactly for the factorial distribution, eq. (15); for example,

$$\langle e^{-\xi_i z_i}\rangle = e^{-\xi_i h_i}\prod_j\left[1 - \mu_j + \mu_j\,e^{-\xi_i J_{ij}}\right]. \qquad (21)$$

A similar result holds for $\langle e^{(1-\xi_i)z_i}\rangle$. Though these averages are tractable, we will tend not to write them out in what follows. The reader, however, should keep in mind that these averages do not present any difficulty; they are simply averages over products of independent random variables, as opposed to sums.

Assembling the terms in eqs. (16) and (20) gives a lower bound $\ln P(V) \ge \mathcal{L}_V$ on the log-likelihood of the evidence $V$, where

$$\mathcal{L}_V = \sum_{ij} J_{ij}\mu_i\mu_j + \sum_i h_i\mu_i - \sum_i\left\{\xi_i\Big(\sum_j J_{ij}\mu_j + h_i\Big) + \ln\left\langle e^{-\xi_i z_i} + e^{(1-\xi_i)z_i}\right\rangle\right\}$$
$$\qquad\qquad - \sum_i\left[\mu_i\ln\mu_i + (1 - \mu_i)\ln(1 - \mu_i)\right]. \qquad (22)$$

So far we have not specified the parameters $\{\mu_i\}_{i\in H}$ and $\{\xi_i\}$; in particular, the bound in eq. (22) is valid for any choice of parameters. We naturally seek the values that maximize the right hand side of eq. (22). Suppose we fix the mean values $\{\mu_i\}_{i\in H}$ and ask for the parameters $\{\xi_i\}$ that yield the tightest possible bound. Note that the right hand side of eq. (22) does not couple terms with $\xi_i$ that belong to different units in the network. The minimization over $\{\xi_i\}$ therefore reduces to $N$ independent minimizations over the interval $[0,1]$. These can be done by any number of standard methods (Press, Flannery, Teukolsky, & Vetterling, 1986).

To choose the means, we set the gradients of the bound with respect to $\{\mu_i\}_{i\in H}$ equal to zero. To this end, let us define the intermediate matrix

$$K_{ij} \equiv \frac{\partial}{\partial\mu_j}\left\{-\ln\left\langle e^{-\xi_i z_i} + e^{(1-\xi_i)z_i}\right\rangle\right\}, \qquad (23)$$

where $z_i$ is the weighted sum of inputs to the $i$th unit.
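Both expectations in eq. (20) reduce to products of the form in eq. (21), so the one-dimensional minimizations over the $\xi_i$ described above are cheap to carry out. A sketch (ours; `mu` holds the means of all $N$ units, with visible units clamped to their instantiated 0/1 values, and scipy is assumed):

```python
import numpy as np
from scipy.optimize import minimize_scalar

def log_avg_term(xi, i, mu, J, h):
    """ln <e^{-xi z_i} + e^{(1-xi) z_i}> under the factorized Q.

    By eq. (21), ln <e^{a z_i}> = a h_i + sum_j ln(1 - mu_j + mu_j e^{a J_ij});
    the two averages are combined through logaddexp for stability.
    """
    def log_avg_exp(a):
        return a * h[i] + np.sum(np.log(1.0 - mu + mu * np.exp(a * J[i])))
    return np.logaddexp(log_avg_exp(-xi), log_avg_exp(1.0 - xi))

def best_xi(i, mu, J, h):
    # The bound is convex in xi with its minimum in [0, 1], so a bounded
    # one-dimensional search finds the tightest value of eq. (20).
    res = minimize_scalar(lambda xi: xi * (J[i] @ mu + h[i])
                          + log_avg_term(xi, i, mu, J, h),
                          bounds=(0.0, 1.0), method='bounded')
    return res.x
```

Units $j$ with $J_{ij} = 0$ contribute $\ln 1 = 0$ to each product, so only the parents of $S_i$ matter.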

Figure 3: The Markov blanket of unit $S_i$ includes its parents and children, as well as the other parents of its children.

Note that $K_{ij}$ is zero unless $S_j$ is a parent of $S_i$; in other words, it has the same connectivity as the weight matrix $J_{ij}$. Within the mean field approximation, $K_{ij}$ measures the parental influence of $S_j$ on $S_i$ given the instantiated evidence $V$. The degree of correlation (positive or negative) is measured relative to the other parents of $S_i$. The matrix elements of $K$ may be evaluated by expanding the expectations as in eq. (21); a full derivation is given in Appendix B. Setting the gradients $\partial\mathcal{L}_V/\partial\mu_i$ equal to zero gives the final mean field equation:

$$\mu_i = \sigma\Big(h_i + \sum_j\left[J_{ij}\mu_j + J_{ji}(\mu_j - \xi_j) + K_{ji}\right]\Big), \qquad (24)$$

where $\sigma(\cdot)$ is the sigmoid function. The argument of the sigmoid function may be viewed as an effective input to the $i$th unit in the belief network. This effective input is composed of terms from the unit's Markov blanket (Pearl, 1988), shown in Figure 3; in particular, these terms take into account the unit's internal bias, the values of its parents and children, and, through the matrix $K_{ji}$, the values of its children's other parents. In solving these equations by iteration, the values of the instantiated units are propagated throughout the entire network. An analogous propagation of information occurs in exact algorithms (Lauritzen & Spiegelhalter, 1988) to compute likelihoods in belief networks.

While the factorized approximation to the true posterior is not exact, the mean field equations set the parameters $\{\mu_i\}_{i\in H}$ to values which make the approximation as accurate as possible. This in turn translates into the tightest mean field bound on the log-likelihood. The overall procedure for bounding the log-likelihood thus consists of two alternating steps: (i) update $\{\xi_i\}$ for fixed $\{\mu_i\}$; (ii) update $\{\mu_i\}_{i\in H}$ for fixed $\{\xi_i\}$. The first step involves $N$ independent minimizations over the interval $[0,1]$; the second is done by iterating the mean field equations. In practice, the steps are repeated until the mean field bound on the log-likelihood converges^4 to a desired degree of accuracy.

The quality of the bound depends on two approximations: the complete factorizability of the mean field distribution, eq. (15), and the logarithm bound, eq. (19). How reliable are these approximations in belief networks? To study this question, we performed numerical experiments on the three layer belief network shown in Figure 4. The advantage of working with such a small network (2x4x6) is that true likelihoods can be computed by exact enumeration. We considered the particular event that all the units in the bottom layer were instantiated to zero. For this event, we compared the mean field bound on the likelihood to its true value, obtained by enumerating the states in the top two layers.

4. It can be shown that asynchronous updates of the mean field parameters lead to monotonic increases in the lower bound (just as in the case of Markov networks).
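A schematic of the two-step alternation is sketched below. This is our illustration, not the paper's implementation: the derivatives $K_{ji}$ are taken here by finite differences for brevity, whereas the paper derives closed-form expressions in Appendix B, and the helpers `sigmoid`, `log_avg_term`, and `best_xi` from earlier snippets are assumed:

```python
def mean_field_iterate(mu, xi, hidden, J, h, sweeps=20, eps=1e-5):
    """Alternate (i) 1-D minimizations over xi, eq. (20), and
    (ii) fixed-point updates of the mean field equation, eq. (24)."""
    N = len(h)
    for _ in range(sweeps):
        for i in range(N):
            xi[i] = best_xi(i, mu, J, h)
        for i in hidden:
            # K[j, i] = d/d mu_i of -ln<...> for child j, by a central
            # difference (Appendix B of the paper gives closed forms).
            K_sum = 0.0
            for j in range(N):
                if J[j, i] == 0.0:
                    continue
                up, dn = mu.copy(), mu.copy()
                up[i] += eps
                dn[i] -= eps
                K_sum -= (log_avg_term(xi[j], j, up, J, h)
                          - log_avg_term(xi[j], j, dn, J, h)) / (2 * eps)
            # Effective input from the Markov blanket, eq. (24).
            field = h[i] + J[i] @ mu + J[:, i] @ (mu - xi) + K_sum
            mu[i] = sigmoid(field)
    return mu, xi
```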

Figure 4: Three layer belief network (2x4x6) with top-down propagation of beliefs. To model the images of handwritten digits in section 4, we used 8x24x64 networks where units in the bottom layer encoded pixel values in 8x8 bitmaps.

Figure 5: Histograms of relative error in log-likelihood over randomly generated three layer networks. At left: the relative error from the mean field approximation; at right: the relative error if all states in the bottom layer are assumed to occur with equal probability. The log-likelihood was computed for the event that all the nodes in the bottom layer were instantiated to zero.

This was done for random networks whose weights and biases were uniformly distributed between -1 and 1. Figure 5 (left) shows the histogram of the relative error in log-likelihood, computed as $\mathcal{L}_V/\ln P(V) - 1$; for these networks, the mean relative error is 1.6%. Figure 5 (right) shows the histogram that results from assuming that all states in the bottom layer occur with equal probability; in this case the relative error was computed as $\ln(2^{-6})/\ln P(V) - 1$. For this "uniform" approximation, the root mean square relative error is 22.6%. The large discrepancy between these results suggests that mean field theory can provide a useful lower bound on the likelihood in certain belief networks. Of course, what ultimately matters is the behavior of mean field theory in networks that solve meaningful problems. This is the subject of the next section.

4. Learning

One attractive use of sigmoid belief networks is to perform density estimation in high dimensional input spaces. This is a problem in parameter estimation: given a set of patterns over particular units in the belief network, find the set of weights $J_{ij}$ and biases $h_i$ that assign high probability to these patterns. Clearly, the ability to compute likelihoods lies at the crux of any algorithm for learning the parameters in belief networks.

Figure 6: Relationship between the true log-likelihood and its lower bound during learning. One possibility (at left) is that both increase together. The other is that the true log-likelihood decreases, closing the gap between itself and the bound. The latter can be viewed as a form of regularization.

Mean field algorithms provide a strategy for discovering appropriate values of $J_{ij}$ and $h_i$ without resort to Gibbs sampling. Consider, for instance, the following procedure. For each pattern in the training set, solve the mean field equations for $\{\mu_i, \xi_i\}$ and compute the associated bound on the log-likelihood, $\mathcal{L}_V$. Next, adapt the weights in the belief network by gradient ascent^5 in the mean field bound,

$$\Delta J_{ij} = \eta\,\frac{\partial\mathcal{L}_V}{\partial J_{ij}}, \qquad (25)$$

$$\Delta h_i = \eta\,\frac{\partial\mathcal{L}_V}{\partial h_i}, \qquad (26)$$

where $\eta$ is a suitably chosen learning rate. Finally, cycle through the patterns in the training set, maximizing their likelihoods^6 for a fixed number of iterations or until one detects the onset of overfitting (e.g., by cross-validation).

The above procedure uses a lower bound on the log-likelihood as a cost function for training belief networks (Hinton, Dayan, Frey, & Neal, 1995). The fact that we have a lower bound on the log-likelihood, rather than an upper bound, is of course crucial to the success of this learning algorithm. Adjusting the weights to maximize this lower bound can affect the true log-likelihood in two ways (see Figure 6). Either the true log-likelihood increases, moving in the same direction as the bound, or the true log-likelihood decreases, closing the gap between these two quantities. For the purposes of maximum likelihood estimation, the first outcome is clearly desirable; the second, though less desirable, can also be viewed in a positive light. In this case, the mean field approximation is acting as a regularizer, steering the network toward simple, factorial solutions even at the expense of lower likelihood estimates.

We tested this algorithm by building a maximum-likelihood classifier for images of handwritten digits. The data consisted of examples of handwritten digits [0-9] compiled by the U.S. Postal Service Office of Advanced Technology. The examples were preprocessed to produce 8x8 binary images, as shown in Figure 7. For each digit, we divided the available data into a training set with 700 examples and a test set with 400 examples. We then trained a three layer network^7 (see Figure 4) on each digit, sweeping through each training set five times with learning rate $\eta = 0.05$.

5. Expressions for the gradients of $\mathcal{L}_V$ are given in Appendix B.

6. Of course, one can also incorporate prior distributions over the weights and biases and maximize an approximation to the log posterior probability of the training set.

7. There are many possible architectures that could be chosen for the purpose of density estimation; we used layered networks to permit a comparison with previous benchmarks on this data set.
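Putting the pieces together, one training sweep might look like the following sketch (ours; finite-difference gradients of eq. (22) stand in for the analytic expressions of Appendix B, and `log_avg_term` and `mean_field_iterate` from earlier snippets are assumed):

```python
import numpy as np

def bound_LV(mu, xi, J, h):
    # Eq. (22): the mean field lower bound L_V on ln P(V).
    with np.errstate(divide='ignore', invalid='ignore'):
        ent_terms = mu * np.log(mu) + (1.0 - mu) * np.log(1.0 - mu)
    val = mu @ (J @ mu) + h @ mu - np.nansum(ent_terms)  # 0 ln 0 = 0
    for i in range(len(h)):
        val -= xi[i] * (J[i] @ mu + h[i]) + log_avg_term(xi[i], i, mu, J, h)
    return float(val)

def train_sweep(patterns, visible, hidden, J, h, eta=0.05, eps=1e-5):
    # One pass of eqs. (25)-(26) over the training set.
    for V in patterns:
        mu = np.full(len(h), 0.5)
        mu[visible] = V
        xi = np.full(len(h), 0.5)
        mu, xi = mean_field_iterate(mu, xi, hidden, J, h)
        # Central-difference gradient ascent on L_V for each parameter.
        params = [(h, (i,)) for i in range(len(h))]
        params += [(J, (i, j)) for i in range(len(h)) for j in range(i)]
        for arr, idx in params:
            arr[idx] += eps
            up = bound_LV(mu, xi, J, h)
            arr[idx] -= 2 * eps
            dn = bound_LV(mu, xi, J, h)
            arr[idx] += eps                      # restore
            arr[idx] += eta * (up - dn) / (2 * eps)
```

In practice one would use the analytic gradients; the finite-difference version is only meant to show the shape of the procedure.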

Figure 7: Binary images of handwritten digits: two and five.

Table 1: Confusion matrix for digit classification. The entry in the $i$th row and $j$th column counts the number of times that digit $i$ was classified as digit $j$.

The networks had 8 units in the top layer, 24 units in the middle layer, and 64 units in the bottom layer, making them far too large to be treated with exact methods. After training, we classified the digits in each test set by the network that assigned them the highest likelihood. Table 1 shows the confusion matrix, in which the $ij$th entry counts the number of times digit $i$ was classified as digit $j$. There were 184 errors in classification (out of a possible 4000), yielding an overall error rate of 4.6%. Table 2 gives the performance of various other algorithms on the same partition of this data set.

Table 3 shows the average log-likelihood score of each network on the digits in its test set. (Note that these scores are actually lower bounds.) These scores are normalized so that a network with zero weights and biases (i.e., one in which all 8x8 patterns are equally likely) would receive a score of -1. As expected, digits with relatively simple constructions (e.g., zeros, ones, and sevens) are more easily modeled than the rest. Both measures of performance, error rate and log-likelihood score, are competitive with previously published results (Hinton, Dayan, Frey, & Neal, 1995) on this data set. The success of the algorithm affirms both the strategy of maximizing a lower bound and the utility of the mean field approximation. Though similar results can be obtained via Gibbs sampling, this seems to require considerably more computation than methods based on maximizing a lower bound (Frey, Dayan, & Hinton, 1995).

Table 2: Classification error rates for the data set of handwritten digits. The first three were reported by Hinton et al. (1995).

    algorithm             classification error
    nearest neighbor      6.7%
    back-propagation      5.6%
    wake-sleep            4.8%
    mean field            4.6%

Table 3 (columns: digit, log-likelihood score): Normalized log-likelihood score for each network on the digits in its test set. To obtain the raw score, multiply by 400 x 64 x ln 2. The last row ("all") shows the score averaged across all digits.

5. Discussion

Endowing networks with probabilistic semantics provides a unified framework for incorporating prior knowledge, handling missing data, and performing inference under uncertainty. Probabilistic calculations, however, can quickly become intractable, so it is important to develop techniques that approximate probability distributions in a flexible manner. This is especially true for networks with multilayer architectures and large numbers of hidden units. Exact algorithms and Gibbs sampling methods are not generally practical for such networks; approximations are required.

In this paper we have developed a mean field approximation for sigmoid belief networks. As a computational tool, our mean field theory has two main virtues: first, it provides a tractable approximation to the conditional distributions required for inference; second, it provides a lower bound on the likelihoods required for learning.

The problem of computing exact likelihoods in belief networks is NP-hard (Cooper, 1990); the same is true for approximating likelihoods to within a guaranteed degree of accuracy (Dagum & Luby, 1993). It follows that one cannot establish universal guarantees for the accuracy of the mean field approximation. For certain networks, clearly, the mean field approximation is bound to fail: it cannot capture logical constraints or strong correlations between fluctuating units. Our preliminary results, however, suggest that these worst-case results do not apply to all belief networks. It is worth noting, moreover, that all the above qualifications apply to Markov networks, and that in this domain, mean field methods are already well established.