Gaussian Processes for Regression: A Quick Introduction


M. Ebden, August 2008
Comments to markebden@eng.ox.ac.uk

1 MOTIVATION

Figure 1 illustrates a typical example of a prediction problem: given some noisy observations of a dependent variable at certain values of the independent variable x, what is our best estimate of the dependent variable at a new value, x*?

If we expect the underlying function f(x) to be linear, and can make some assumptions about the input data, we might use a least-squares method to fit a straight line (linear regression). Moreover, if we suspect f(x) may also be quadratic, cubic, or even nonpolynomial, we can use the principles of model selection to choose among the various possibilities.

Gaussian process regression (GPR) is an even finer approach than this. Rather than claiming f(x) relates to some specific models (e.g. f(x) = mx + c), a Gaussian process can represent f(x) obliquely, but rigorously, by letting the data 'speak' more clearly for themselves. GPR is still a form of supervised learning, but the training data are harnessed in a subtler way.

As such, GPR is a less 'parametric' tool. However, it's not completely free-form, and if we're unwilling to make even basic assumptions about f(x), then more general techniques should be considered, including those underpinned by the principle of maximum entropy; Chapter 6 of Sivia and Skilling (2006) offers an introduction.

Figure 1: Given six noisy data points (error bars are indicated with vertical lines), we are interested in estimating a seventh, at x*.

2 DEFINITION OF A GAUSSIAN PROCESS

Gaussian processes (GPs) extend multivariate Gaussian distributions to infinite dimensionality. Formally, a Gaussian process generates data located throughout some domain such that any finite subset of the range follows a multivariate Gaussian distribution. Now, the n observations in an arbitrary data set, y = {y₁, …, yₙ}, can always be imagined as a single point sampled from some multivariate (n-variate) Gaussian distribution, after enough thought. Hence, working backwards, this data set can be partnered with a GP. Thus, GPs are as universal as they are simple.

Very often, it's assumed that the mean of this partner GP is zero everywhere. What relates one observation to another in such cases is just the covariance function, k(x, x′). A popular choice is the 'squared exponential',

    k(x, x′) = σ_f² exp[ −(x − x′)² / (2 l²) ],        (1)

where the maximum allowable covariance is defined as σ_f²; this should be high for functions which cover a broad range on the y axis. If x ≈ x′, then k(x, x′) approaches this maximum, meaning f(x) is nearly perfectly correlated with f(x′). This is good: for our function to look smooth, neighbours must be alike. Now if x is distant from x′, we have instead k(x, x′) ≈ 0, i.e. the two points cannot 'see' each other. So, for example, during interpolation at new x values, distant observations will have negligible effect. How much effect this separation has will depend on the length parameter l, so there is much flexibility built into (1).

Not quite enough flexibility though: the data are often noisy as well, from measurement errors and so on. Each observation y can be thought of as related to an underlying function f(x) through a Gaussian noise model,

    y = f(x) + N(0, σ_n²),        (2)

something which should look familiar to those who've done regression before. Regression is the search for f(x). Purely for simplicity of exposition in the next page, we take the novel approach of folding the noise into k(x, x′), by writing

    k(x, x′) = σ_f² exp[ −(x − x′)² / (2 l²) ] + σ_n² δ(x, x′),        (3)

where δ(x, x′) is the Kronecker delta function. (When most people use Gaussian processes, they keep σ_n separate from k(x, x′). However, our redefinition of k(x, x′) is equally suitable for working with problems of the sort posed in Figure 1.) So, given n observations y, our objective is to predict y*, not the 'actual' f(x*); their expected values are identical according to (2), but their variances differ owing to the observational noise process. (E.g. in Figure 1, the expected value of y*, and of f(x*), is the dot at x*.)

To prepare for GPR, we calculate the covariance function, (3), among all possible combinations of these points, summarizing our findings in three matrices:

    K  = | k(x₁, x₁)  k(x₁, x₂)  ⋯  k(x₁, xₙ) |
         | k(x₂, x₁)  k(x₂, x₂)  ⋯  k(x₂, xₙ) |
         |     ⋮           ⋮      ⋱      ⋮     |
         | k(xₙ, x₁)  k(xₙ, x₂)  ⋯  k(xₙ, xₙ) |        (4)

    K* = [ k(x*, x₁)  k(x*, x₂)  ⋯  k(x*, xₙ) ],    K** = k(x*, x*).        (5)

Confirm for yourself that the diagonal elements of K are σ_f² + σ_n², and that its extreme off-diagonal elements tend to zero when x spans a large enough domain.
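As a concrete sketch of (3)-(5) (an illustration, not the author's code; the inputs and hyperparameter values below are assumptions, not the data of Figure 1):

```python
import numpy as np

def k(x1, x2, sigma_f=1.0, ell=1.0, sigma_n=0.1):
    """Squared-exponential covariance with the noise folded in, equation (3)."""
    return (sigma_f**2 * np.exp(-(x1 - x2)**2 / (2 * ell**2))
            + sigma_n**2 * (x1 == x2))          # Kronecker delta term

# Illustrative inputs (NOT the data of Figure 1) and a test input x*
x = np.array([-1.5, -1.0, -0.5, 0.0, 0.5])
xs = 0.2

K = k(x[:, None], x[None, :])   # n x n matrix of equation (4)
Ks = k(xs, x)                   # the row vector K* of equation (5)
Kss = k(xs, xs)                 # the scalar K**

# As the text notes, every diagonal element of K is sigma_f^2 + sigma_n^2
```

Note that the delta term here tests equality of input values, which suffices for distinct training inputs.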

3 HOW TO REGRESS USING GAUSSIAN PROCESSES

Since the key assumption in GP modelling is that our data can be represented as a sample from a multivariate Gaussian distribution, we have that

    [ y  ]        (     [ K    K*ᵀ ] )
    [ y* ]   ~  N( 0,   [ K*   K** ] ),        (6)

where ᵀ indicates matrix transposition. We are of course interested in the conditional probability p(y*|y): given the data, how likely is a certain prediction for y*? As explained more slowly in the Appendix, the probability follows a Gaussian distribution:

    y* | y  ~  N( K* K⁻¹ y,  K** − K* K⁻¹ K*ᵀ ).        (7)

Our best estimate for y* is the mean of this distribution,

    ȳ* = K* K⁻¹ y,        (8)

and the uncertainty in our estimate is captured in its variance,

    var(y*) = K** − K* K⁻¹ K*ᵀ.        (9)

We're now ready to tackle the data in Figure 1. There are n = 6 observations y, and we know σ_n from the error bars. With judicious choices of σ_f and l (more on this later), we have enough to calculate a covariance matrix using (4); from (5) we also have K** and K*. Equations (8) and (9) then yield ȳ* and var(y*). Figure 1 shows the result as a data point with a question mark underneath, representing the estimation of the dependent variable at the new input x*.

We can repeat the above procedure for various other points spread over some portion of the x axis, as shown in Figure 2. (In fact, equivalently, we could avoid the repetition by performing the above procedure once with suitably larger K* and K** matrices. In this case, since there are 1,000 test points spread over the x axis, K** would be of size 1,000 × 1,000.) Rather than plotting simple error bars, we've decided to plot ȳ* ± 1.96√var(y*), giving a 95% confidence interval.
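Equations (8) and (9) reduce to a couple of linear solves. A self-contained sketch (the observations and hyperparameters here are illustrative assumptions, not the values behind Figure 1):

```python
import numpy as np

def k(a, b, sigma_f=1.0, ell=1.0, sigma_n=0.3):
    """Equation (3): squared exponential plus folded-in noise."""
    return (sigma_f**2 * np.exp(-(a - b)**2 / (2 * ell**2))
            + sigma_n**2 * (a == b))

# Illustrative noisy observations (assumed for this sketch)
x = np.array([-1.5, -1.0, -0.75, -0.4, -0.25, 0.0])
y = np.array([-1.6, -1.1, -0.4, 0.1, 0.4, 0.8])
xs = 0.2                                       # the new input x*

K = k(x[:, None], x[None, :])                  # equation (4)
Ks = k(xs, x)                                  # equation (5)
Kss = k(xs, xs)

y_star = Ks @ np.linalg.solve(K, y)            # mean, equation (8)
var_star = Kss - Ks @ np.linalg.solve(K, Ks)   # variance, equation (9)
```

Solving the linear system rather than forming K⁻¹ explicitly is the numerically preferred route.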

Figure 2: The solid line indicates an estimation of y* for 1,000 values of x*. Pointwise 95% confidence intervals are shaded.

4 GPR IN THE REAL WORLD

The reliability of our regression is dependent on how well we select the covariance function. Clearly, if its parameters (call them θ = {l, σ_f, σ_n}) are not chosen sensibly, the result is nonsense. Our maximum a posteriori estimate of θ occurs when p(θ|x, y) is at its greatest. Bayes' theorem tells us that, assuming we have little prior knowledge about what θ should be, this corresponds to maximizing log p(y|x, θ), given by

    log p(y|x, θ) = −½ yᵀ K⁻¹ y − ½ log|K| − (n/2) log 2π.        (10)

Simply run your favourite multivariate optimization algorithm (e.g. conjugate gradients, Nelder–Mead simplex, etc.) on this equation and you've found a pretty good choice for θ.

It's only 'pretty good' because, of course, Thomas Bayes is rolling in his grave. Why commend just one answer for θ, when you can integrate everything over the many different possible choices for θ? Chapter 5 of Rasmussen and Williams (2006) presents the equations necessary in this case.

Finally, if you feel you've grasped the toy problem in Figure 2, the next two examples handle more complicated cases. Figure 3(a), in addition to a long-term downward trend, has some fluctuations, so we might use a more sophisticated covariance function:

    k(x, x′) = σ_f1² exp[ −(x − x′)² / (2 l₁²) ] + σ_f2² exp[ −(x − x′)² / (2 l₂²) ] + σ_n² δ(x, x′).        (11)

The first term takes into account the small vicissitudes of the dependent variable, and the second term has a longer length parameter (l₂ > 6 l₁) to represent its long-term trend. Covariance functions can be grown in this way ad infinitum, to suit the complexity of your particular data.

Figure 3: Estimation of f(x) (solid line) for a function with (a) short-term and long-term dynamics, and (b) long-term dynamics and a periodic element. Observations are shown as crosses.

The function looks as if it might contain a periodic element, but it's difficult to be sure. Let's consider another function, which we're told has a periodic element. The solid line in Figure 3(b) was regressed with the following covariance function:

    k(x, x′) = σ_f² exp[ −(x − x′)² / (2 l²) ] + exp( −2 sin²[ ν π (x − x′) ] ) + σ_n² δ(x, x′).        (12)

The first term represents the hill-like trend over the long term, and the second term gives periodicity with frequency ν. This is the first time we've encountered a case where x and x′ can be distant and yet still 'see' each other (that is, k(x, x′) does not approach zero for large separations).

What if the dependent variable has other dynamics which, a priori, you expect to appear? There's no limit to how complicated k(x, x′) can be, provided K is positive definite. Chapter 4 of Rasmussen and Williams (2006) offers a good outline of the range of covariance functions you should keep in your toolkit.

'Hang on a minute,' you ask, 'isn't choosing a covariance function from a toolkit a lot like choosing a model type, such as linear versus cubic, which we discussed at the outset?' Well, there are indeed similarities. In fact, there is no way to perform regression without imposing at least a modicum of structure on the data set; such is the nature of generative modelling. However, it's worth repeating that Gaussian processes do allow the data to speak very clearly. For example, there exists excellent theoretical justification for the use of (1) in many settings (Rasmussen and Williams (2006), Section 4.3). You will still want to investigate carefully which covariance functions are appropriate for your data set. Essentially, choosing among alternative functions is a way of reflecting various forms of prior knowledge about the physical process under investigation.
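The hyperparameter search described above can be sketched as follows, assuming NumPy and SciPy are available; the data, the initial guess, and the use of Nelder-Mead are all illustrative assumptions:

```python
import numpy as np
from scipy.optimize import minimize

# Illustrative data (assumed, not the observations of Figure 1)
x = np.array([-1.5, -1.0, -0.75, -0.4, -0.25, 0.0])
y = np.array([-1.6, -1.1, -0.4, 0.1, 0.4, 0.8])

def neg_log_marginal(theta):
    """Negative of equation (10); theta holds (log l, log sigma_f, log sigma_n)."""
    ell, sigma_f, sigma_n = np.exp(theta)          # log-space keeps them positive
    K = (sigma_f**2 * np.exp(-(x[:, None] - x[None, :])**2 / (2 * ell**2))
         + (sigma_n**2 + 1e-9) * np.eye(len(x)))   # tiny jitter for safety
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    return (0.5 * y @ alpha                        # (1/2) y^T K^-1 y
            + np.sum(np.log(np.diag(L)))           # (1/2) log|K|
            + 0.5 * len(x) * np.log(2 * np.pi))

theta0 = np.log([1.0, 1.0, 0.3])
res = minimize(neg_log_marginal, theta0, method="Nelder-Mead")
ell_hat, sigma_f_hat, sigma_n_hat = np.exp(res.x)
```

The Cholesky factorization gives both the solve and the log-determinant cheaply and stably.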

5 DISCUSSION

We've presented a brief outline of the mathematics of GPR, but practical implementation of the above ideas requires the solution of a few algorithmic hurdles, as opposed to those of data analysis. If you aren't a good computer programmer, then the code for Figures 1 and 2 is at ftp://ftp.robots.ox.ac.uk/pub/outgoing/mebden/misc/gptut.zip, and more general code can be found at http://www.gaussianprocess.org/gpml.

We've merely scratched the surface of a powerful technique (MacKay, 1998). First, although the focus has been on one-dimensional inputs, it's simple to accept those of higher dimension. Whereas x would then change from a scalar to a vector, k(x, x′) would remain a scalar, and so the maths overall would be virtually unchanged. Second, the zero vector representing the mean of the multivariate Gaussian distribution in (6) can be replaced with functions of x. Third, in addition to their use in regression, GPs are applicable to integration, global optimization, mixture-of-experts models, unsupervised learning models, and more; see Chapter 9 of Rasmussen and Williams (2006). The next tutorial will focus on their use in classification.

REFERENCES

MacKay, D. (1998). In C.M. Bishop (Ed.), Neural Networks and Machine Learning (NATO ASI Series, Series F, Computer and Systems Sciences, Vol. 168, pp. 133-166). Dordrecht: Kluwer Academic Press.

Rasmussen, C. and C. Williams (2006). Gaussian Processes for Machine Learning. MIT Press.

Sivia, D. and J. Skilling (2006). Data Analysis: A Bayesian Tutorial (second ed.). Oxford Science Publications.

APPENDIX

Imagine a data sample z taken from some multivariate Gaussian distribution with zero mean and a covariance given by matrix Σ. Now decompose z arbitrarily into two consecutive subvectors x and y; in other words, writing z ~ N(0, Σ) would be the same as writing

    [ x ]        (     [ A    C ] )
    [ y ]   ~  N( 0,   [ Cᵀ   B ] ),        (13)

where A, B, and C are the corresponding bits and pieces that make up Σ.

Interestingly, the conditional distribution of y given x is itself Gaussian-distributed. If the covariance matrix Σ were diagonal, or even block diagonal, then knowing x wouldn't tell us anything about y: specifically, p(y|x) = N(0, B). On the other hand, if C were nonzero, then some matrix algebra leads us to

    y | x  ~  N( Cᵀ A⁻¹ x,  B − Cᵀ A⁻¹ C ).        (14)

The mean, Cᵀ A⁻¹ x, is known as the 'matrix of regression coefficients', and the variance, B − Cᵀ A⁻¹ C, is the 'Schur complement of A in Σ'.

In summary, if we know some of z, we can use that to inform our estimate of what the rest of z might be, thanks to the revealing off-diagonal elements of Σ.
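The conditioning result (14) is easy to sanity-check numerically. A small sketch with an arbitrary positive-definite Σ (all numbers made up for illustration):

```python
import numpy as np

# An arbitrary positive-definite covariance, partitioned as in (13):
# a 2-element x-part and a 1-element y-part
M = np.array([[2.0, 0.6, 0.3],
              [0.1, 1.5, 0.4],
              [0.2, 0.3, 1.0]])
Sigma = M @ M.T                          # M M^T is positive definite

A = Sigma[:2, :2]                        # blocks of (13)
C = Sigma[:2, 2:]
B = Sigma[2:, 2:]

x_obs = np.array([0.5, -1.0])            # an assumed observation of x

mean_y = C.T @ np.linalg.solve(A, x_obs)    # matrix of regression coefficients
var_y = B - C.T @ np.linalg.solve(A, C)     # Schur complement of A, as in (14)
```

Because C is nonzero here, the conditional variance comes out strictly smaller than the marginal variance B: observing x genuinely informs y.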

Gaussian Processes for Classification: A Quick Introduction

M. Ebden, August 2008
Prerequisite reading: Gaussian Processes for Regression

1 OVERVIEW

As mentioned in the previous document, GPs can be applied to problems other than regression. For example, if the output of a GP is squashed onto the range [0, 1], it can represent the probability of a data point belonging to one of, say, two types. And voilà, we can ascertain classifications. This is the subject of the current document.

The big difference between GPR and GPC is how the output data, y, are linked to the underlying function outputs, f. They are no longer connected simply via a noise process, as in (2) in the previous document, but are instead now discrete: say y = +1 precisely for one class and y = −1 for the other. In principle, we could try fitting a GP that produces an output of approximately +1 for some values of x and approximately −1 for others, simulating this discretization. Instead, we interpose the GP between the data and a squashing function; then, classification of a new data point x* involves two steps instead of one:

1. Evaluate a 'latent' function f which models qualitatively how the likelihood of one class versus the other changes over the x axis. This is the GP.
2. Squash the output of this latent function onto [0, 1] using any sigmoidal function, prob(y = +1) = π(f).

Writing these two steps schematically,

    data (x, y)  --GP-->  latent function f(x)  --sigmoid-->  class probability π(f).

The next section will walk you through more slowly how such a classifier operates. Section 3 explains how to train the classifier, so perhaps we're presenting things in reverse order! Section 4 handles classification when there are more than two classes.

Before we get started, a quick note on π. Although other forms will do, here we will prescribe it to be the cumulative Gaussian distribution, Φ(f). This S-shaped function satisfies our needs, mapping high f into π ≈ 1, and low f into π ≈ 0.

A second quick note, revisiting (6) and (7) in the first document: confirm for yourself that, if there were no noise (σ_n = 0), the two equations could be rewritten as

    [ f  ]        (     [ K    K*ᵀ ] )
    [ f* ]   ~  N( 0,   [ K*   K** ] )        (1)

and

    f* | f  ~  N( K* K⁻¹ f,  K** − K* K⁻¹ K*ᵀ ).        (2)
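The squashing step can be sketched in a few lines; Φ below is the cumulative Gaussian, built from the standard error function (the latent values are placeholders):

```python
import math

def Phi(f):
    """Cumulative Gaussian: the squashing function chosen in the text."""
    return 0.5 * (1.0 + math.erf(f / math.sqrt(2.0)))

# High latent values map near 1, low values near 0, as required
probs = [Phi(f) for f in (-3.0, 0.0, 3.0)]
```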

2 USING THE CLASSIFIER

Suppose we've trained a classifier from input data x and their corresponding expert-labelled output data y. And suppose that in the process we formed some GP outputs f corresponding to these data, which have some uncertainty but mean values given by f̂. We're now ready to input a new data point x* in the left side of our schematic, in order to determine at the other end the probability π* of its class membership.

In the first step, finding the probability p(f*|x, y, x*) is similar to GPR, i.e. we adapt (2):

    f* | x, y, x*  ~  N( K* K⁻¹ f̂,  K** − K* (K′)⁻¹ K*ᵀ ).        (3)

(K′ will be explained soon, but for now consider it to be very similar to K.) In the second step, we squash f* to find the probability of class membership, π* = Φ(f*). The expected value is

    π̄* = ∫ Φ(f*) p(f*|x, y, x*) df*.        (4)

This is the integral of a cumulative Gaussian times a Gaussian, which can be solved analytically. By Section 3.9 of Rasmussen and Williams (2006), the solution is

    π̄* = Φ( f̄* / √(1 + var(f*)) ).        (5)

An example is depicted in Figure 1.

Figure 1: (a) Toy classification dataset, where circles and crosses indicate the class membership of the input (training) data, at locations x. y is +1 for one class and −1 for the other, but for illustrative purposes we pretend y is 0 instead of −1 in this figure. The solid line is the (mean) probability π̄* = prob(y* = +1), i.e. the answer to our problem after successfully performing GPC. (b) The corresponding distribution of the latent function f(x), not constrained to lie between 0 and 1.

3 TRAINING THE GP IN THE CLASSIFIER

Our objective now is to find f̂ and K′, so that we know everything about the GP producing (3), the first step of the classifier. (The second step of the classifier does not require training, as it's a fixed sigmoidal function.) Among the many GPs which could be partnered with our data set, naturally we'd like to compare their usefulness quantitatively. Considering the outputs f of a certain GP, how likely they are to be appropriate for the training data can be decomposed using Bayes' theorem:

    p(f|x, y) = p(y|f) p(f|x) / p(y|x).        (6)

Let's focus on the two factors in the numerator. Assuming the data set is i.i.d.,

    p(y|f) = ∏_{i=1}^{n} p(yᵢ|fᵢ).        (7)

Dropping the subscripts in the product, p(y|f) is informed by our sigmoid function, Φ(f). Specifically, p(y = +1|f) is Φ(f) by definition, and to complete the picture, p(y = −1|f) = 1 − Φ(f). A terse way of combining these two cases is to write p(y|f) = Φ(y f).

The second factor in the numerator is p(f|x). This is related to the output of the first step of our schematic drawing, but first we're interested in the value of f which maximizes the posterior probability p(f|x, y). This occurs when the derivative of (6) with respect to f is zero, or equivalently and more simply, when the derivative of its logarithm is zero. Doing this, and using the same logic that produced (10) in the previous document, we find that

    f̂ = K ∇ log p(y|f̂)        (8)

is the best f for our problem. Unfortunately, f̂ appears on both sides of the equation, so we make an initial guess (zero is fine) and go through a few iterations. The answer to (8) can be used directly in (3), so we've found one of the two quantities we seek therein.

The variance of f is given by the negative second derivative of the logarithm of (6), which turns out to be (K⁻¹ + W)⁻¹, with W = −∇∇ log p(y|f̂). Making a 'Laplace approximation', we pretend p(f|x, y) is Gaussian distributed, i.e.

    f | x, y  ~  N( f̂, (K⁻¹ + W)⁻¹ ).        (9)

(This assumption is occasionally inaccurate, so if it yields poor classifications, better ways of characterizing the uncertainty in f should be considered, for example via expectation propagation.)
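The iteration for f̂ in (8) can be sketched as follows. This uses the standard Newton update for the Laplace approximation (equivalent to iterating (8) with a sensible step), with the probit derivatives ∇ log p(y|f) and W written out for Φ(y f); the dataset and kernel settings are illustrative assumptions, and SciPy supplies Φ:

```python
import numpy as np
from scipy.stats import norm

# Illustrative 1-D, two-class training set (assumed, not the figure's data)
x = np.array([-2.0, -1.5, -1.0, 1.0, 1.5, 2.0])
y = np.array([-1.0, -1.0, -1.0, 1.0, 1.0, 1.0])

# Squared-exponential K (sigma_f = l = 1) with jitter; no noise term in GPC
K = np.exp(-(x[:, None] - x[None, :])**2 / 2) + 1e-6 * np.eye(len(x))
K_inv = np.linalg.inv(K)

f = np.zeros(len(x))          # initial guess of zero, as the text suggests
for _ in range(20):
    z = y * f
    # Probit likelihood: grad = d/df log Phi(y f), W = -d^2/df^2 log Phi(y f)
    grad = y * norm.pdf(f) / norm.cdf(z)
    W = np.diag((norm.pdf(f) / norm.cdf(z))**2 + z * norm.pdf(f) / norm.cdf(z))
    f = np.linalg.solve(K_inv + W, W @ f + grad)   # Newton update

f_hat = f
# At convergence, f_hat satisfies equation (8): f_hat = K grad(f_hat)
residual = np.max(np.abs(f_hat - K @ (y * norm.pdf(f_hat) / norm.cdf(y * f_hat))))
```

Because the probit likelihood is log-concave, this Newton iteration converges quickly from the zero start.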

Now for a subtle point. The fact that f can vary means that using (2) directly is inappropriate: in particular, its mean is correct, but its variance no longer tells the whole story. This is why we use the adapted version, (3), with K′ instead of K. Since the varying quantity in (2), f, is being multiplied by K* K⁻¹, we add K* K⁻¹ cov(f) K⁻¹ K*ᵀ to the variance in (2). Simplification leads to (3), in which K′ = K + W⁻¹. With the GP now completely specified, we're ready to use the classifier as described in the previous section.

GPC in the Real World

As with GPR, the reliability of our classification is dependent on how well we select the covariance function in the GP in our first step. The parameters are now θ = {l, σ_f}, one fewer than before because σ_n = 0. However, as usual, θ is optimized by maximizing p(y|x, θ), or (omitting θ on the right-hand side of the equation)

    p(y|x) = ∫ p(y|f) p(f|x) df.        (10)

This can be simplified, using a Laplace approximation, to yield

    log p(y|x) = −½ f̂ᵀ K⁻¹ f̂ + log p(y|f̂) − ½ log|K| − ½ log|K⁻¹ + W|.        (11)

This is the equation to run your favourite optimizer on, as performed in GPR.

4 MULTI-CLASS GPC

We've described binary classification, where the number of possible classes, C, is just two. In the case of C > 2 classes, one approach is to fit an f for each class. In the first of the two steps of classification, our GP values are concatenated as

    f = ( f_1^1, …, f_n^1, f_1^2, …, f_n^2, …, f_1^C, …, f_n^C )ᵀ.        (12)

Let y be a vector of the same length as f which, for each i = 1, …, n, contains 1 for the class which is the label and 0 for the other C − 1 entries. Let K grow to being block diagonal in the matrices K^1, …, K^C, one per class. So the first change we see for C > 2 is a lengthening of the GP. Section 3.5 of Rasmussen and Williams (2006) offers hints on keeping the computations manageable.

The second change is that the (merely one-dimensional) cumulative Gaussian distribution is no longer sufficient to describe the squashing function in our classifier; instead we use the softmax function. For the i-th data point,

    p(y_i^c | f_i) = exp(f_i^c) / Σ_{c′} exp(f_i^{c′}),        (13)

where f_i is a nonconsecutive subset of f, viz. f_i = ( f_i^1, f_i^2, …, f_i^C )ᵀ. We can summarize our results with π̂, a vector of the same length and layout as f, collecting the probabilities π̂_i^c.

Now that we've presented the two big changes needed to go from binary- to multi-class GPC, we continue as before. Setting to zero the derivative of the logarithm of the components in (6), we replace (8) with

    f̂ = K ( y − π̂ ).        (14)

The corresponding variance is (K⁻¹ + W)⁻¹ as before, but now W = diag(π̂) − Π Πᵀ, where Π is a Cn × n matrix obtained by stacking vertically the diagonal matrices diag(π̂^c), if π̂^c is the subvector of π̂ pertaining to class c.
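The multi-class bookkeeping (the stacked f of (12), the softmax of (13), and W = diag(π̂) − ΠΠᵀ) can be sketched with random placeholder latent values:

```python
import numpy as np

n, C = 4, 3                                  # data points and classes (placeholders)
rng = np.random.default_rng(0)
f = rng.normal(size=(C, n))                  # latent values f_i^c, one row per class

# Softmax across classes for each data point, equation (13)
e = np.exp(f - f.max(axis=0))                # subtract the max for stability
pi = e / e.sum(axis=0)

# Pi stacks the diagonal matrices diag(pi^c) vertically: shape (C n, n)
Pi = np.vstack([np.diag(pi[c]) for c in range(C)])

# W as given in the text; pi.ravel() matches the class-major stacking of (12)
W = np.diag(pi.ravel()) - Pi @ Pi.T
```

Note that W constructed this way is symmetric positive semi-definite, as the Laplace approximation requires.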

With these quantities estimated, we have enough to generalize (3) to

    f*^c | x, y, x*  ~  N( K*^c ( y^c − π̂^c ),  K**^c − K*^c (K′^c)⁻¹ (K*^c)ᵀ ),        (15)

where K*^c, K**^c, and K′^c represent the class-relevant information only. Finally, (11) is replaced with

    log p(y|x) = −½ f̂ᵀ K⁻¹ f̂ + yᵀ f̂ − Σ_{i=1}^{n} log( Σ_{c=1}^{C} exp f̂_i^c ) − ½ log|K| − ½ log|K⁻¹ + W|.        (16)

We won't present an example of multi-class GPC, but hopefully you get the idea.

5 DISCUSSION

As with GPR, classification can be extended to accept x values with multiple dimensions, while keeping most of the mathematics unchanged. Other possible extensions include using the expectation propagation method in lieu of the Laplace approximation as mentioned previously, putting confidence intervals on the classification probabilities, calculating the derivatives of (16) to aid the optimizer, or using the variational Gaussian process classifiers described in MacKay (1998), to name but four extensions.

Second, we repeat the Bayesian call from the previous document to integrate over a range of possible covariance function parameters. This should be done regardless of how much prior knowledge is available; see for example Chapter 5 of Sivia and Skilling (2006) on how to choose priors in the most opaque situations.

Third, we've again spared you a few practical algorithmic details; computer code is available at http://www.gaussianprocess.org/gpml, with examples.

ACKNOWLEDGMENTS

Thanks are due to Prof. Stephen Roberts and members of his Pattern Analysis Research Group, as well as the ALADDIN project (www.aladdinproject.org).

REFERENCES

MacKay, D. (1998). In C.M. Bishop (Ed.), Neural Networks and Machine Learning (NATO ASI Series, Series F, Computer and Systems Sciences, Vol. 168, pp. 133-166). Dordrecht: Kluwer Academic Press.

Rasmussen, C. and C. Williams (2006). Gaussian Processes for Machine Learning. MIT Press.

Sivia, D. and J. Skilling (2006). Data Analysis: A Bayesian Tutorial (second ed.). Oxford Science Publications.