Analysis of Functional MRI Time-Series
K.J. Friston, P. Jezzard, and R. Turner


Human Brain Mapping 1 (1994)

Analysis of Functional MRI Time-Series

K.J. Friston, P. Jezzard, and R. Turner

The Neurosciences Institute, La Jolla, California (K.J.F.), and Laboratory of Cardiac Energetics, NHLBI, NIH, Bethesda, Maryland (P.J., R.T.)

Abstract: A method for detecting significant and regionally specific correlations between sensory input and the brain's physiological response, as measured with functional magnetic resonance imaging (MRI), is presented in this paper. The method involves testing for correlations between sensory input and the hemodynamic response after convolving the sensory input with an estimate of the hemodynamic response function. This estimate is obtained without reference to any assumed input. To lend the approach statistical validity, it is brought into the framework of statistical parametric mapping by using a measure of cross-correlations between sensory input and hemodynamic response that is valid in the presence of intrinsic autocorrelations. These autocorrelations are necessarily present, due to the hemodynamic response function or temporal point spread function. © 1994 Wiley-Liss, Inc.

Key words: functional MRI, time-series, statistical parametric mapping, significance, visual, cross-correlations, autocorrelations

INTRODUCTION

This article is about the analysis of functional magnetic resonance imaging (MRI) data obtained during activation studies of the human brain. The problem addressed is how to identify regionally specific and significant correlations between a time-dependent sensorimotor or cognitive parameter and measured changes in neurophysiology [Bandettini et al., 1993; Friston et al., 1993a]. This problem has a number of well-established solutions in positron emission tomography (PET) functional imaging, in which activation studies are almost universally analyzed using some form of statistical parametric mapping [Friston et al., 1991]. However, some special issues need to be considered when dealing with MRI data.

Received for publication September 9, 1993; revision accepted January 18, 1994.

Address reprint requests to K.J. Friston at his present address: MRC Cyclotron Unit, Hammersmith Hospital, DuCane Road, London W12 0HS, England.

Statistical parametric maps (SPMs) are images whose voxel values are distributed, under the null hypothesis, according to some known probability density function. The null hypothesis is usually that there are no significant activations or correlations between sensorimotor parameters and central physiology. SPMs are treated as smooth multidimensional statistical processes and thresholded such that the probability of acquiring an activated region by chance (over the entire SPM) is suitably small (e.g., 0.05). The methods for determining the appropriate threshold have only recently been developed [Friston et al., 1991; Worsley et al.]. Two fundamental aspects of MRI data bear directly on detecting significant correlations: (1) the hemodynamic response to sensory input (evoked changes in neuronal activity) is transient, delayed, and dispersed in time; and (2) because the sampling interval of some MRI techniques is typically much shorter than the time-constants of hemodynamic changes, the resulting time-series can show substantial autocorrelation.

Transient aspects of the hemodynamic response

The transient nature of the hemodynamic response is usually attributed to a physiological uncoupling of regional cerebral perfusion and oxygen metabolism [Fox et al., 1988]. The result is a time-dependent (but uncharacterized) change in the relative amounts of venous oxy- and deoxyhemoglobin. Due to the differential magnetic susceptibility of oxy- and deoxyhemoglobin, a transient change in intravoxel dephasing is observed. This change subtends the measured signal [Kwong et al., 1992; Ogawa et al., 1992; Bandettini et al., 1992]. The physiological mechanisms that mediate between neuronal activity and physiology, at the level of perfusion and cerebral metabolism, are known to have time-constants in the millisecond (ms) to seconds (s) range. In vivo optical imaging of microcirculatory events in the visual cortex of monkeys suggests the following sequence of activity-dependent changes: 200-400 ms after the onset of neuronal activity, highly localized oxygen delivery occurs, followed 300-400 ms later by an increase in blood volume. After 1,000 ms, a substantial rise in oxyhemoglobin is seen [Frostig et al., 1990]. MRI data concur with this latter observation, suggesting a rise in relative oxyhemoglobin that is maximal after 4-10 s [Bandettini, 1993]. The delay and dispersion associated with the hemodynamic response to the onset of neuronal activity are, therefore, substantial when compared with the sampling interval typical of fast [e.g., echo-planar imaging (EPI)] MRI time-series (100 ms-5 s). This is important because the correlations between evoked changes in neuronal activity and measured hemodynamics will be displaced and dispersed (smeared) in time. Simply correlating a sensory parameter (reflecting input at a neuronal level) with the hemodynamic response will miss significant cross-correlations that are distributed in time according to the delay and dispersion of the hemodynamic response function. For example, if the delay rendered a sinusoidal sensory input and the hemodynamic response out of phase by π/2, the correlation would be zero. The hemodynamic response function can be thought of as a temporal point spread function that not only smooths sensory input but also applies a shift in time. In other words, these functions describe the physiological response to a point (delta function or impulse) input, if one were able to present such a stimulus. Clearly, to assess the true "correlation" between a sensory parameter and hemodynamic response, the sensory parameter must first be subject to the same delay and dispersion as that mediating between neuronal activity and hemodynamics. More formally, the correlation of interest is between the MRI time-series and the sensory input convolved with the hemodynamic response function. This is the first observation on which the proposed approach is predicated.

Stationary aspects of the hemodynamic response

Due to the dispersive nature of the response function, the hemodynamics at any voxel will be inherently smooth or autocorrelated in time. This autocorrelation will be evident even in the absence of evoked changes in neuronal activity. One can think of hemodynamics as the result of convolving a neuronal process with an effective hemodynamic response function, where the neuronal process comprises evoked transients and intrinsic activity.
In mathematical terms, this can be modeled as:

x(t) = Λ{ε(v) + z(t)}    (1)

where x(t) is the observed signal at a particular time (t), Λ{·} is a convolution operator modeling the effect of the response function, ε(v) is the evoked neuronal transient as a function of time from stimulus onset (v), and z(t) is an uncorrelated neuronal process representing fast intrinsic neuronal dynamics. The reason for characterizing the intrinsic component in this way comes from the observation that autocorrelations in neuronal dynamics are most substantial in the 10-100-ms range [Aertsen and Preissl, 1991; Nelson et al., 1992] and are, therefore, very fast compared with hemodynamics. Clearly, if neural dynamics showed slower autocorrelations, the effective hemodynamic response would embody some elements attributable to these slow neuronal changes. This model highlights the fact that observed autocorrelations have at least two components: an evoked component due to the convolved neuronal transient that is phase-locked to the stimulus or task onset, and intrinsic autocorrelations that result from intrinsic neuronal activity. To the extent that the above model is true, the autocorrelations and response function are directly related. In particular, when no evoked changes in neuronal activity occur (or any components phase-locked to stimulus onset have been removed), the intrinsic autocorrelations can be used to determine the response function. This relationship (between intrinsic autocorrelations and the effective response function) furnishes a way of estimating the response function

directly from the physiological data that avoids any assumptions about the input or evoked neuronal activity. This is the second tenet on which the proposed approach is based and is important because it ensures that no bias exists when testing these assumptions post hoc. Finally, temporal smoothness or autocorrelations are important from the point of view of detecting significant correlations between the input and the observed response. Even if no neuronal response is evoked, the presence of intrinsic autocorrelations will increase the probability of a spurious and high correlation coefficient. This follows from the fact that the effective degrees of freedom associated with a measured correlation will be smaller than the number of time points or scans. In simple terms, for two time-series of fixed length, the probability of obtaining a high correlation coefficient, by chance, increases with smoothness. When assessing the significance of cross-correlations between two time-series, it is therefore necessary to account explicitly for the autocorrelations expected under the null hypothesis (in this case, the intrinsic autocorrelations). This is the third tenet of our approach. The stationarity of the intrinsic autocorrelations (and implicitly the response function) referred to above is in time. Stationarity in time does not imply stationarity in space. In other words, it is possible for the response function and the autocorrelative behavior of hemodynamics to vary from region to region, or voxel to voxel. The relationship between intrinsic autocorrelations (both spatial and temporal) and the effective degrees of freedom is a general one and affects the analysis of all functional imaging data, using any form of statistical parametric mapping. In what follows, this theme occurs twice: in deriving a statistical quotient, which tests for significant temporal cross-correlations in the presence of intrinsic autocorrelations, and in thresholding the resulting statistical parametric maps, which are spatially autocorrelated. We present below the theoretical aspects of assessing the significance of correlations between sensory (cognitive or motor) parameters and hemodynamic responses measured with MRI, an application to real data, and an evaluation of the hemodynamic response function used in the preceding two subsections.

THEORY

The objective of the described approach is to produce a statistical parametric map that can be thresholded to identify significant correlations between a given input parameter and regional hemodynamics. This requires a statistical parameter that tests the significance of the above correlation, and a technique for thresholding the SPM that renders the probability of identifying a significant region by chance suitably small (e.g., 0.05). The latter techniques have already been established in the context of functional imaging, using the theory of stationary Gaussian processes with known or measurable autocorrelation [Friston et al., 1991]. The problem, therefore, reduces to defining a statistic that tests for the presence of significant cross-correlations in the presence of intrinsic autocorrelations. The following relies on the theory of stationary stochastic processes and, in particular, their representation in the frequency domain.
The interested reader will find an excellent review of the important standard results (a number of which are used below) in Cox and Miller [1980]. Covariance refers to the average or expectation of the product of two processes x and y (E{x(t)·y(t)}), after normalization to zero mean. Correlations refer to the same thing but when the processes are normalized to unit variance. Correlations can be thought of as normalized covariances. The cross-correlation ρxy(τ) and cross-covariance γxy(τ) functions represent the magnitude of correlation or covariance as a function of the temporal displacement τ (lag) between the processes x(t) and y(t) (E{x(t)·y(t + τ)}). The autocorrelation and autocovariance functions, ρxx(τ) and γxx(τ), are simply the correlations or covariances of a process with itself at some time τ later (E{x(t)·x(t + τ)}). Clearly γxx(0) is the variance of x(t).

Hemodynamic response function

In what follows, the expressions are in continuous time and assume that real fMRI time-series are reasonably approximated by continuous space-time processes. Let the observed MRI time-series, at any point in the brain, be modeled by a function of time x(t), and let a time-dependent sensorimotor or cognitive parameter of interest be similarly modeled by c(t). We will refer to this parameter as the contrast. In analyses of (co-)variance, the contrast is used to test for a specific profile of time-dependent changes. Both here and in general, the contrast has zero mean and unit variance. The effects of delay and dispersion of the hemodynamic response are modeled by a convolution operator Λ{·}, e.g.:

Λ{c(t)} = ∫ c(t − τ)·h(τ) dτ    (2)

The function h(τ) is the (impulse) response function and has an equivalent representation in frequency space (ω = 2πf, where f is frequency) called the transfer function H(ω):

H(ω) = ∫ h(τ)·e^(−iωτ) dτ

The transfer function is simply the Fourier transform of the response function h(τ). The particular form of h(τ) reflects our assumptions (or knowledge) about the effective hemodynamic response function. It is a function of delay τ. The average or expected delay is simply the expectation associated with h(τ). Similarly, the dispersion is the variance of h(τ). In the following, we assume a Poisson form for h(τ):

h(τ) = λ^τ·e^(−λ)/τ!    (3)

The Poisson distribution is a parsimonious choice because its mean and variance are equal and are defined by a single parameter λ. Keep in mind that the values of τ in Equation 3 are real positive integers. In what follows, τ has units of seconds. (If the repeat time is not an integer-valued number of seconds, then Equation 3 could be replaced by a gamma distribution with appropriate moments.) It is, of course, possible to use forms of h(τ) that do not assume a relationship between delay and dispersion (see Discussion). We will validate the choice of a Poisson form in a subsequent section. To account for the effects of delay and dispersion on the cross-correlations between x(t) and c(t), we apply Λ{·} to c(t) and examine the correlation between x(t) and Λ{c(t)}. The contrast c(t) convolved with the hemodynamic response function h(τ) represents a new contrast, which has been corrected for delay and dispersive effects, and will be referred to as y(t). The (power) spectral density of y(t), gy(ω), is simply related to the spectral density of c(t), gc(ω), and the transfer function H(ω) [Cox and Miller, 1980]:

gy(ω) = |H(ω)|²·gc(ω)    (4)

Testing for significant cross-correlations in the presence of intrinsic autocorrelations

In this subsection, we observe that γxy(0), the covariance of interest, has approximately a Gaussian distribution. The variance of this distribution (over many realizations at one voxel, or many voxels in one realization) is the same as the variance of γxy(τ) over time τ. The variance of γxy(τ) is its spectral density, integrated over frequencies. Because γxy(τ) is effectively the contrast, convolved with the response function, convolved with the MRI time-series, its spectral density, under the null hypothesis, is simply the three-way product of the spectral densities of c(t), h(τ), and x(t). The spectral density of x(t), under the null hypothesis of no evoked neuronal activity, is specified by the intrinsic autocorrelations. Given that these spectral densities are either known or can be estimated, the distribution of γxy(τ), under the null hypothesis, can be determined. These arguments are now presented more formally. The cross-covariance between the MRI time-series x(t) and the convolved contrast y(t) will itself be a stationary stochastic process with zero mean. Furthermore, because γxy(τ) is obtained by convolving x(t) with y(t), γxy(τ) represents a weighted sum of x(t) and, by the central limit theorem, will tend to a Gaussian distribution [this assumption is only strictly true when c(t) has non-zero values over an interval that is large compared with the width of γxx(τ)]. The variance of γxy(τ) is:

V{γxy(τ)} = ∫ gxy(ω) dω    (5)

where gxy(ω) is the cross-spectral density of the processes x(t) and y(t) [Cox and Miller, 1980]. gxy(ω) can be interpreted as the second-order probabilistic characteristic (variance) of the orthogonal process sxy(ω), in the frequency domain, that is equivalent to γxy(τ) in the time domain, where (cf. Parseval's theorem):

gxy(ω) = E{sxy(ω)·sxy(ω)*},  with  sxy(ω) = sx(ω)·sy(ω)    (6)

and where * denotes the complex conjugate and sx(ω) and sy(ω) are the spectral representations of the processes x(t) and y(t). Let [u] = u·u*. Now if sx(ω) and sy(ω) are independent (equivalently, if x(t) and y(t) are independent):

gxy(ω) = E{[sx(ω)]}·E{[sy(ω)]} = gx(ω)·gy(ω)    (7)

[cf. Theorem 6F in Grimmett and Welsh, 1986]. Equation 7 expresses the fact that the cross-spectral density is the product of the densities of the two underlying processes, given that they are independent (i.e., no systematic phase relationship exists). Combining Equations 5 and 7:

V{γxy(τ)} = ∫ gx(ω)·gy(ω) dω    (8)

Equation 8 expresses the variance of the cross-covariance function as an integral (sum) over all frequencies. The variance at each frequency, under the null hypothesis, is the product of the variances (spectral densities) of the underlying processes. The stationarity of γxy(τ) implies that the variance of γxy(τ) is not a function of time and, therefore, V{γxy(0)} (over multiple realizations) = V{γxy(τ)} (over time). So, the quotient:

ζ(τ) = γxy(τ)/√V{γxy(τ)}    (9)

will, under the null hypothesis, have (roughly) a Gaussian distribution of unit variance and zero mean. This is also called the z-score. A statistical parametric map of ζ(τ) is referred to as an SPM{z}. ζ(0) can be thought of as the correlation between x(t) and y(t) times the square root of the effective degrees of freedom (v). To compute v, one needs to estimate gy(ω) and gx(ω). The estimation of gy(ω) is straightforward. However, the estimation of gx(ω) due to intrinsic autocorrelations [gx(ω) under the null hypothesis] requires a little more thought.

Estimating gy(ω), gx(ω), and the hemodynamic response function h(τ)

Clearly, if c(t) and h(τ) were sufficiently well behaved, gy(ω) could be determined analytically as in Equation 4 [using gc(ω) and H(ω)]. However, in practice it is considerably easier simply to compute the spectral density of y(t) [after convolving c(t) with h(τ)] using fast Fourier transforms, and thereby estimate gy(ω) directly. The estimation of gx(ω), the spectral density of the MRI time-series under the null hypothesis, is more problematic, given the short and noisy time-series that are typically available, and given that the null hypothesis is (hopefully) seldom true, so that the autocorrelations (and spectral densities) of x(t) will contain both evoked and intrinsic components. To obtain an estimate of gx(ω) under the null hypothesis, it is necessary to discount evoked contributions. This is simply achieved by removing components that are phase-locked to the stimulus onset. In our work, this is done by analyzing the time-series from one stimulus-cycle minus the time-series from another, namely, the phase-shifted differences. A related device, called the shift predictor, is found in the analysis of separable neuronal spike-trains recorded with multiunit electrodes. The shift predictor is used to remove the confounding effect of stimulus-locked firing when assessing the cross-correlations between two units. In the present case, we wish to remove confounding stimulus-locked effects on estimated autocorrelations. In the following, xp(t) will refer to the difference between (p) phase-shifted MRI time-series. From Equations 1 and 2,

xp(t) = x(t) − x(t + p·q)    (10)

where q is the stimulus or task period (or cycle length), and p is an integer greater than 0. Because z(t) is modeled as an uncorrelated stochastic process (innovation), the spectral and autocorrelative properties of xp(t) correspond to those of x(t) in the absence of evoked transients. Equivalently, under the null hypothesis, ρxx(τ) = ρxpxp(τ) and gx(ω)/var{x(t)} = gxp(ω)/var{xp(t)}. Although it is possible to estimate gxp(ω) by subjecting xp(t) to Fourier transformation, we use the following expedient method. If one assumes that the intrinsic autocorrelations ρxpxp(τ) can be modeled by a Gaussian autocorrelation function of the form:

ρxpxp(τ) = exp(−τ²/4σt²)    (11)

then:

gx(ω) ∝ exp(−σt²·ω²)

[Cox and Miller, 1980]. The constants of proportionality implicit in Equation 11 cancel in Equation 9 and do not affect v. The parameter σt reflects the intrinsic smoothness of x(t) and is estimated in a computationally efficient way using the equality:

σt = {var{xp(t)}/[2·var{dxp/dt}]}^(1/2)    (12)

This relationship follows from Equation 11 and the standard result V{dx/dt} = −γxx″(0) [Cox and Miller, 1980]. Refer to Friston et al. [1991] for a more thorough discussion. Whatever phase-shifting strategy is used, xp(t) will always be shorter than x(t). From a practical point of view, this means that the low-frequency components of gx(ω) cannot be estimated directly. Modeling gx(ω) with a Gaussian (or any other) distribution provides for an estimate of these low-frequency components.
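The chain from Equations 2 to 12 is straightforward to implement numerically. The following is a minimal sketch in Python/NumPy (the analyses reported in this paper were performed in Matlab) that assembles the pieces for a single voxel: a Poisson response function, the convolved contrast y(t), FFT-based spectral densities, the null variance of Equation 8, and the statistic of Equation 9. The function names (poisson_hrf, zeta0), the choice to work in units of scans, and the estimation of σt directly from x(t) rather than from phase-shifted differences are simplifications of this sketch, not part of the published method.

```python
import numpy as np

def poisson_hrf(lam, n_taps):
    # Poisson-form response function h(tau) of Equation 3, tau = 0, 1, 2, ...
    # (here tau is in scans, so lam must also be expressed in scans)
    h = np.empty(n_taps)
    h[0] = np.exp(-lam)
    for k in range(1, n_taps):
        h[k] = h[k - 1] * lam / k          # recurrence for the Poisson pmf
    return h

def zeta0(x, c, lam):
    """zeta(0) of Equation 9 for one voxel time-series x and a zero-mean,
    unit-variance contrast c (both sampled once per scan)."""
    n = len(x)
    x = np.asarray(x, dtype=float) - np.mean(x)

    # y(t): contrast corrected for delay and dispersion (Equation 2)
    y = np.convolve(c, poisson_hrf(lam, n))[:n]
    y = (y - y.mean()) / y.std()

    # cross-covariance of interest at zero lag
    gamma_xy0 = np.dot(x, y) / n

    # g_y(w): spectral density of the convolved contrast, via the FFT
    Sy = np.abs(np.fft.fft(y)) ** 2 / n

    # g_x(w) under the null hypothesis: Gaussian form fixed by the temporal
    # smoothness of Equation 12 (estimated here from x itself, for brevity;
    # the paper estimates it from phase-shifted differences)
    sigma_t = np.sqrt(np.var(x) / (2.0 * np.var(np.diff(x))))
    w = 2.0 * np.pi * np.fft.fftfreq(n)
    Sx = np.exp(-(sigma_t * w) ** 2)
    Sx *= n * np.var(x) / Sx.sum()          # scale so mean spectral power = var(x)

    # variance of gamma_xy(0) under the null: product of densities (Equation 8)
    var_gamma = np.sum(Sx * Sy) / n ** 2

    return gamma_xy0 / np.sqrt(var_gamma)   # approximately N(0, 1) under the null
```

Applied in a loop over voxels, the returned values would populate an SPM{z} of the kind described above.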

A process with autocovariance given by Equation 11 is obtained by convolving a random uncorrelated process with a Gaussian point spread function of parameter σt. If one were to assume (see Equation 1) that the MRI time-series were the result of convolving an unknown but uncorrelated underlying neuronal process with the temporal point spread function h(τ), then the estimate of σt also specifies λ (the delay or dispersion). This follows from the fact that the Gaussian and Poisson functions are very similar for large values of λ (they are identical in the limit), where (with σt expressed in seconds):

λ = σt²    (13)

The empirically derived value of σt, based on the data set described below, was 0.92 scan. This corresponds to a delay and dispersion (λ) of 7.69 s, which falls comfortably in the reported ranges [Bandettini, 1993]. There is no direct information about delay (as opposed to dispersion) in σt. The reason that our hemodynamic response function can be entirely specified by σt depends on the Poisson assumption implicit in Equation 3. To summarize this section: A single parameter σt can be estimated from the ratio of the variances of the time-series and its derivative. This parameter (temporal smoothness) not only provides an efficient and complete estimate of the spectral density of the time-series (assuming the density is Gaussian), but it can also be used to fix the parameter of the response function (assuming the function is Poisson). This single parameter is all that one needs to estimate in calculating the effective degrees of freedom (v) for the correlation of interest.

Thresholding the SPM{z}

The final stage of analysis involves identifying a threshold that protects against experiment-wise false positives. The key thing to note is that, due to the spatial autocorrelations in the SPM{z}, the number of voxels expected to be above a threshold (u) is far greater than the number of regions (or maxima) above threshold. Because we are concerned with activated regions (connected subsets of the total excursion set {ζ(0) ≥ u}) and not voxels, a Bonferroni correction is not appropriate (a Bonferroni correction is based on the expected number of voxels). Current approaches [Friston et al., 1991; Worsley et al.] use the fact that the relationship between the probability of getting at least one region above u (P) and the expected number of regions (or maxima) E{m} tends to equality as u gets very large. The expected number of maxima depends on the threshold (u), the smoothness (σs), and the size of the SPM (S). For an SPM{z} of D dimensions:

P ≈ E{m} = S·(2π)^(−(D+1)/2)·(2σs²)^(−D/2)·u^(D−1)·exp(−u²/2)    (14)

[Hasofer, 1978; Adler and Hasofer, 1981]. σs is the spatial smoothness of the SPM and is estimated according to Equation 12, but using partial derivatives in space, over rows and columns. The appropriate threshold is chosen such that P = 0.05. Refer to Friston et al. [1991] for a derivation of an equivalent equation (directly in terms of regions) in the context of functional imaging. A related approach based on the Euler characteristic has also been devised [Worsley et al.]. It is, of course, acceptable to present SPMs thresholded without this correction for multiple nonindependent comparisons. In this case, the SPM is referred to as descriptive, and an uncorrected one-tailed threshold of P = 0.001 is usually recommended (i.e., 3.09 for an SPM{z}). Thresholding and interpreting SPMs comprise a branch of spatial statistics that is receiving a lot of attention. The thresholding described in this paper represents a simple and established approach, but is not necessarily the most powerful. Refer to Friston et al. [in press] for a more current analysis. The theory presented above used processes in continuous time and relied on a number of approximations (usually exact in the limit). To assess the robustness of the expressions presented here, we applied the results to a simulated MRI data set.

Validation using simulated data

The parameters of the simulated data were chosen to correspond to the real MRI data analyzed in the next section. Sixty images of 32 × 32 voxels were created by convolving an uncorrelated random number field with a three-dimensional Gaussian kernel of parameter 1.4 voxels within the images and 0.9 over time. The data were phase-shifted and subtracted, assuming a stimulus-cycle period of 20 scans. In the example presented here, and in those below, the differences between all possible pairs of stimulus-cycles were analyzed and the results averaged over all pairs. According to Equation 12, the estimate of temporal smoothness was 0.95 scan, which compares well with the expected value of 0.9. Assuming a repeat time of 3 s, these simulated data would have a temporal smoothness (σt) of 2.71 s. This degree of smoothness would be produced by a temporal point spread function of Poisson parameter 7.37 s.
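The logic of this simulation can be reproduced, at the level of a single time-series, with a few lines of NumPy. The sketch below is illustrative only: the kernel width, series length, and 3-s repeat time mirror the values quoted above, but the variable names and the exact construction of the smooth null series are assumptions of the sketch rather than the published procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

# Null time-series: white noise convolved with a Gaussian point spread function
# of parameter sigma_true (in scans), analogous to the simulated data above.
n_scans, sigma_true, TR = 60, 0.9, 3.0
lags = np.arange(-10, 11)
kernel = np.exp(-lags**2 / (2.0 * sigma_true**2))
kernel /= np.sqrt(np.sum(kernel**2))               # unit output variance for white input

noise = rng.standard_normal(n_scans + 20)
x = np.convolve(noise, kernel, mode="same")[10:10 + n_scans]
x -= x.mean()

# Equation 12: smoothness from the variances of the series and of its first derivative
sigma_hat = np.sqrt(x.var() / (2.0 * np.var(np.diff(x))))

# Equation 13: the Poisson parameter (in seconds) is taken numerically equal to
# the square of the temporal smoothness expressed in seconds
lam = (sigma_hat * TR) ** 2

print(f"smoothness: {sigma_hat:.2f} scans ({sigma_hat * TR:.2f} s); lambda: {lam:.1f} s")
```

For a single realization the estimate fluctuates around the true kernel width, mirroring the small overestimate (0.95 vs. 0.9 scan) reported above.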

ζ(τ) was computed, according to Equation 9, for all voxels, using a square-wave contrast c(t) with a period of 20 scans (10 scans on and 10 scans off) and a Poisson response function with λ = 7.37 s. The resulting distribution of ζ(τ) was determined by pooling across voxels and lag (τ). This simulated distribution was compared with the theoretically predicted Gaussian distribution. The results of this simulation are shown in Figure 1. The response function is shown (top left) with the resulting contrast before [c(t)] and after [y(t)] convolution with h(τ) (top right). The agreement between the empirical and theoretical distributions of ζ(τ) is clearly evident (lower left). The corresponding SPM{z} of ζ(0) is shown in the lower right. The time-series from the voxel with the highest value of ζ(0) in this realization is displayed with the contrasts (dotted line, top right). Such high values (and the apparent following of the contrast) can easily occur by chance and should not be overinterpreted in real data. No value exceeded the corrected threshold (broken vertical line, bottom left), which was 3.79 according to Equation 14 and the post hoc estimate of spatial smoothness (Equation 12). This estimated spatial smoothness (averaged over both dimensions) was 1.45 voxels and compares favorably with the expected value of 1.4 voxels.

ANALYSIS OF MRI TIME-SERIES

Data acquisition

The data used to illustrate the method were a time-series of 64 gradient-echo EPI single coronal slices (5 mm thick, 64 × 64 voxels) through the calcarine sulcus and extrastriate areas. Images were obtained every 3 s from a single subject using a 4.0-T whole-body system fitted with a small (27-cm diameter) z-gradient coil (TE 25 ms, acquisition time 41 ms). Photic stimulation (at 16 Hz) was provided by goggles fitted with light-emitting diodes. The stimulation was off for the first ten scans (30 s), on for the second ten, off for the third, and so on. Images were reconstructed without phase correction. The data were interpolated from 64 × 64 voxels to 128 × 128 voxels. Each voxel thus represented 1.25 × 1.25 × 5 mm of cerebral tissue.

Data preprocessing

Image manipulations and data analysis were performed in Matlab (MathWorks Inc., Sherborn, MA). The first four scans were removed to avoid magnetic saturation effects. The scans were corrected for (slight) subject movement using nonlinear minimization and a computationally efficient cubic interpolation algorithm [Keys, 1981]. Images were translated and rotated to minimize the sum of squares between each of the 64 images and their average (both scaled to the same mean intensity) using the Levenberg-Marquardt method [More, 1977]. Only the 36 × 60 voxel subpartitions (of the original images) containing the brain were subject to further analysis. All the time-series, for each voxel, were normalized to a mean of zero and convolved, in the time domain, with a Gaussian filter (full width at half maximum = 1.5 voxels) to suppress thermal noise.

Computing ζ(τ)

The real data were treated in an identical fashion to the simulated data. The temporal smoothness (σt) was estimated using Equation 12 to be 0.92 scan, as mentioned above. This estimate was the mean over all intracerebral voxels and corresponded to a value of 7.69 s for λ, the parameter of our assumed Poisson response function h(τ). To demonstrate that modeling intrinsic autocorrelations with a Gaussian form is reasonable, the empirical intrinsic autocorrelation functions and the corresponding spectral density gx(ω) are shown in Figure 2. These functions were obtained by averaging estimates over the differences between all pairs of 20-scan stimulation periods (phase-shifted differences). The data were windowed with a Hanning function before being subject to Fourier transformation. The solid lines in Figure 2 correspond to the autocorrelations (and spectral density) that would be obtained by convolving an uncorrelated innovation with a Poisson function of parameter 7.69 s. The determination of the appropriate Poisson parameter does not require computation of either the autocorrelation function or the spectral density, but uses the variance of the first derivatives as in Equation 12. These data are presented only to validate the Gaussian approximation for gx(ω). The contrast c(t) used in this example was, again, a square-wave that was positive during photic stimulation and negative otherwise. c(t) had zero mean and unit variance. The measured correlation between y(t) and the MRI time-series x(t) was computed for all voxels and scaled by √v according to Equation 9 to produce ζ(0). √v was 4.58, corresponding to 21 effective degrees of freedom. In other words, one accrues a degree of freedom every 2.85 scans. The reason v is the same for all voxels is that we assumed that the response function and intrinsic autocorrelations (and spectral density) were the same everywhere. Clearly, one does not have to assume this (see Discussion).
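The check illustrated in Figure 2 can be sketched as follows for a single voxel. The function below (null_spectrum_check is an illustrative name, not from the paper) forms the phase-shifted differences between stimulus cycles, estimates their Hanning-windowed spectral density, and compares it with the Gaussian form implied by Equations 11 and 12; the handling of windowing and normalization here is a simplification of the published procedure.

```python
import numpy as np

def null_spectrum_check(x, cycle_len):
    """Phase-shifted differences between stimulus cycles of one voxel's series x,
    their Hanning-windowed spectral density, and the Gaussian form implied by
    Equations 11 and 12 (both returned normalized to a peak of one)."""
    n_cycles = len(x) // cycle_len
    cycles = x[:n_cycles * cycle_len].reshape(n_cycles, cycle_len)

    # differences between all pairs of stimulus cycles
    diffs = [cycles[i] - cycles[j]
             for i in range(n_cycles) for j in range(i + 1, n_cycles)]
    diffs = [d - d.mean() for d in diffs]

    # empirical spectral density, averaged over all pairs
    window = np.hanning(cycle_len)
    g_emp = np.mean([np.abs(np.fft.rfft(d * window)) ** 2 for d in diffs], axis=0)

    # assumed Gaussian form, fixed by the mean temporal smoothness of the differences
    sigma_t = np.mean([np.sqrt(d.var() / (2.0 * np.var(np.diff(d)))) for d in diffs])
    w = 2.0 * np.pi * np.fft.rfftfreq(cycle_len)
    g_model = np.exp(-(sigma_t * w) ** 2)

    return g_emp / g_emp.max(), g_model / g_model.max()
```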

Figure 1. Analysis of a simulated data set. Top left: The assumed hemodynamic response function h(τ), which has a Poisson distribution with parameter 7.37 s. Top right: The original square-wave contrast [broken line, c(t)], the contrast convolved with the assumed response function [solid line, y(t)], and the response of the simulated voxel data x(t) that best correlated with the convolved contrast (dotted line). Bottom left: The distribution (over voxels and time) of ζ(τ) calculated according to Equation 9 (dotted line). This distribution is indistinguishable from that predicted theoretically (solid line). The broken vertical line is the threshold that any value would have to reach to be considered significant at P = 0.05 according to Equation 14. Bottom right: The corresponding SPM{z}, a statistical parametric map of ζ(0). The gray scale is arbitrary and has been scaled to the maximum and minimum of the SPM.

The results of this analysis are given in Figure 3. The hemodynamic response function, based on the empirically determined "smoothness" of the time-series, is displayed to the left of the contrast before and after convolution. The dotted line in the top right graph is the response of the voxel showing the greatest correlation. The distribution of ζ(0) (bottom left) is dramatically skewed, with almost all voxels showing a marked positive correlation with the convolved contrast. The corresponding SPM{z} is displayed in the bottom right.

Thresholding the SPM

The spatial smoothness (σs) was computed according to Equation 12 over all the rows and columns of the SPM{z}.

Figure 2. Actual and assumed intrinsic autocorrelations and spectral densities. Empirical estimates of the intrinsic autocorrelation function and spectral density compared with those based on temporal smoothness. Left: The autocorrelation function of the difference between the MRI time-series from one stimulus cycle and another (phase-shifted differences), averaged over all voxels and all pairs of cycles (broken line). The autocorrelation function obtained by convolving an uncorrelated innovation with a Poisson function based on the measured temporal smoothness is also shown (solid line). Right: The corresponding empirical (dots) and assumed (solid line) spectral densities of the phase-shifted differences.

The mean of these estimates was 1.46 voxels, or 1.81 mm. With a search volume (S) of 2,160 voxels and a P value of 0.05, the threshold was computed according to Equation 14. Figure 4 shows the results of this thresholding. The picture on the left illustrates that the number of voxels above the threshold far exceeds the number of regions. The reason for this is, of course, spatial smoothness in the underlying process. This discrepancy between the expected number of suprathreshold voxels and the expected number of regions in the excursion set means that a Bonferroni correction based on voxel expectation is not appropriate. This follows from the fact that one requires the expected number of regions to be 0.05. The striate and extrastriate regions constituting the excursion set can all be considered significant in a strict statistical sense. They include V1 bilaterally and (on the right) V2 dorsal to the calcarine fissure. Extrastriate regions in the fusiform, lingual, and dorsolateral cortices are also evident.

A null analysis

To demonstrate the specificity of the approach, we repeated an identical analysis but changed the period of the contrast c(t) from 20 scans to 16 scans. This simple manipulation eliminated all the significant voxels. The results of this analysis are shown in Figure 5, using the same format as in Figures 1 and 3.

VALIDATION OF THE HEMODYNAMIC RESPONSE FUNCTION

Validation in the time domain

Thus far, it has been assumed that the hemodynamic response function h(τ) is reasonably correct, and we have proceeded to assess the ability of the contrast c(t) to explain the observed changes. One can take the opposite approach and estimate h(τ) by assuming c(t) is correct. In previous sections, we have assumed a Poisson form for h(τ) and have used the empirically determined temporal smoothness to specify an appropriate parameter. In this section, we present a post hoc validation of h(τ) by computing it directly, using the contrast c(t) as the input function and the brain's response as the output function. Although this could be done at each voxel, a more robust estimate is obtained if some global response is used. This global or distributed output function was the response of the first spatial mode (principal component or eigenimage) based on the between-voxel covariances.

Figure 3. Analysis of a real data set. As for Figure 1, but using real data. In this case, the distribution (lower left) is for ζ(0) (dotted line) and is dramatically skewed with respect to the distribution predicted under the null hypothesis (solid line).

The first spatial mode or eigenimage is defined by the first eigenvector of the covariance matrix of the MRI time-series. If x corresponds to a matrix with 60 rows (one for each scan) and 2,160 columns (one for each voxel) of the time-series for each voxel, then the eigenvector solution e is simply computed using singular value decomposition [Golub and Van Loan, 1991] such that:

xᵀx·e = e·R    (15)

where ᵀ denotes transposition and R is a diagonal matrix of eigenvalues. The column of e corresponding to the largest eigenvalue is the first spatial mode or eigenimage, e1. This mode constitutes a distributed brain system with high intracovariance or functional connectivity. Refer to Friston et al. [1993b] for a complete discussion. The response of e1 is given by:

r = x·e1    (16)

where r is a column vector of length 60. Using least-squares deconvolution, one can directly estimate the response function that best transforms the input c(t) into the output r. The results of this deconvolution are shown in Figure 6, where the memory of the convolution has been restricted to ten scans.
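Equations 15 and 16 and the least-squares deconvolution can be sketched compactly with an SVD, as below. The data-matrix layout (scans × voxels), the zero-padded lag design matrix, and the function name first_mode_hrf are assumptions of this illustration; the published analysis was performed in Matlab and may have used a different deconvolution scheme.

```python
import numpy as np

def first_mode_hrf(X, c, memory=10):
    """X: scans-by-voxels data matrix; c: the assumed input (contrast);
    memory: number of scans retained in the deconvolved response function."""
    X = X - X.mean(axis=0)

    # Equation 15: X'X e = e R, solved here through the SVD X = U S V'
    U, S, Vt = np.linalg.svd(X, full_matrices=False)
    e1 = Vt[0]                         # first eigenimage (loadings over voxels)

    # Equation 16: the time course of the first mode
    r = X @ e1

    # least-squares deconvolution: h minimizing ||r - C h||, where the columns
    # of C are lagged (zero-padded) copies of the input c
    n = len(c)
    C = np.column_stack([np.concatenate([np.zeros(k), c[:n - k]])
                         for k in range(memory)])
    h, *_ = np.linalg.lstsq(C, r, rcond=None)
    return e1, r, h
```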

Figure 4. Thresholding the SPM{z}. Left: The SPM{z} from Figure 3 is presented as a surface rendering, with the corrected threshold (corrected for multiple nonindependent comparisons according to Equation 14). Right: The resulting excursion set is highlighted and includes V1 and several extrastriate regions according to the atlas of Talairach and Tournoux [1988].

The first eigenimage or mode (top left) reveals high loadings in the same areas identified by the more conventional (hypothesis-led) analysis of the previous sections. The corresponding response is shown with the contrast (top right). The results of the least-squares deconvolution (dots) agree remarkably well with the hemodynamic response function based only on the temporal autocorrelations (broken line). The predicted and actual responses of the first mode are compared in the lower right. The predicted time courses are simply the contrast convolved with h(τ) (broken line) and with the empirically determined response function (solid line). It is apparent that the empirical response function has a slightly more protracted tail, with some hint of biphasic structure. One might conjecture that this feature reflects macrovascular effects as remote regions are drained.

Validation in the frequency domain

The hemodynamic response function behaves essentially as a low-pass filter in the detection of neuronal firing [Bandettini, 1993]. In other words, higher-frequency inputs are more severely attenuated than low-frequency inputs. The dependency of output variance on input frequency is captured by the squared transfer function |H(ω)|², seen in Figure 7 (left). This example is for a Poisson parameter of 5 s and demonstrates the downwards modulation of output variance with increasing frequency of a periodic input. For example, the variance of a periodic input at 0.1 cycles/s (a stimulus cycle length of 10 s) is attenuated to about 20 percent. The hemodynamic response function can provide quite a powerful prediction of the brain's capacity to follow a periodic stimulus presented at increasing frequencies. Bandettini et al. have examined this phenomenon using fast, self-paced, sequential finger-thumb opposition at different on-off switching frequencies, ranging from period lengths of 50 s (e.g., 25 s of finger opposition and 25 s at rest) to 2 s. The data reported in Bandettini [1993] are in terms of relative activation or amplitude of response. These data are in excellent agreement with theoretical predictions based on a Poisson hemodynamic response function (Fig. 7, right). The theoretical values were derived by simply convolving a square-wave contrast of the appropriate frequency with a Poisson response function (parameter 5 s) and taking the maximum amplitude difference. The Poisson parameter used here (5 s) is smaller than that calculated on the basis of our photic stimulation data and may reflect a number of differences, including differences in the neuronal response to varying task conditions, differences due to relative microvascular and macrovascular contributions to the hemodynamic response (secondary to different acquisition parameters), or differences in motor vs. visual cortex hemodynamic response.
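The low-pass behaviour described here is easy to reproduce for an assumed Poisson response function. The sketch below computes a squared transfer function and the predicted relative activation as a function of on-off cycle period (sampling fixed at 1 s for simplicity); the specific periods and helper names are illustrative, and the printed numbers should be read as qualitative predictions rather than a re-derivation of the values reported in Bandettini [1993].

```python
import numpy as np

def poisson_hrf(lam, n_taps):
    # Poisson response function, one tap per second
    h = np.empty(n_taps)
    h[0] = np.exp(-lam)
    for k in range(1, n_taps):
        h[k] = h[k - 1] * lam / k
    return h

def squared_transfer(f, lam=5.0, n_taps=64):
    # |H(w)|**2 at frequency f (cycles/s) for a Poisson response function
    taps = np.arange(n_taps)
    H = np.sum(poisson_hrf(lam, n_taps) * np.exp(-2j * np.pi * f * taps))
    return np.abs(H) ** 2

def relative_activation(period_s, lam=5.0, n=600):
    # peak-to-peak response to an on-off square wave of the given period (s),
    # sampled at 1 s, after convolution with the Poisson response function
    t = np.arange(n)
    c = np.where((t % period_s) < period_s / 2.0, 1.0, -1.0)
    y = np.convolve(c, poisson_hrf(lam, n_taps=64))[:n]
    steady = y[n // 2:]                 # discard the initial transient
    return steady.max() - steady.min()

baseline = relative_activation(200.0)   # effectively unattenuated cycle
for period in (50, 40, 30, 20, 10, 5, 2):
    print(period, round(relative_activation(period) / baseline, 2))
print("variance attenuation at 0.1 cycles/s:", round(squared_transfer(0.1), 2))
```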

Figure 5. Null analysis of real data. Exactly as for Figure 3, but with a change to the contrast (the on-off cycle period has been reduced from 20 scans to 16). The impact on the distribution of ζ(0) (lower left) is clearly evident.

Validation in terms of phase shifts

A simple and powerful way to estimate the delay associated with the hemodynamic response is to compute the phase shift of the largest Fourier component of the input sequence at each voxel. The results of this analysis are shown in Figure 8 as a distribution of phase differences between the contrast c(t) and the signal x(t) over all intracranial voxels. The distribution is broad, but centered on the delay estimated on the basis of temporal smoothness alone. Anecdotally, it appears that the longer delays (greater phase differences) are characteristic of regions that would be more subject to macrovascular effects (see next section).

Effects of changing λ

In this final section, we present a brief analysis of the effects of changing λ. To assess the sensitivity of detecting significant correlations to changes in the parameter λ, we repeated the analysis based on ζ(0) while systematically increasing λ from 0 to 16 s in 0.1-s steps.

Figure 6. Estimating the response function directly. Top left: The first spatial mode or eigenimage of the real data. The gray scale is arbitrary and has been scaled to the image maximum. Darker means a higher loading. Only positive loadings are shown. Note the correspondence between this profile and the significantly activated regions in Figure 3. Top right: The assumed input c(t) (solid line) and the response of the first mode or eigenimage (dots). Lower left: The response function based only on the intrinsic temporal smoothness (a function of the variance of the first derivative) of the MRI data (broken line) and the response function obtained by least-squares deconvolution (dots), based on an assumed input c(t) and the first mode's response. Lower right: The actual response (dots) and the response predicted using the Poisson response function (dotted line) and the least-squares deconvolution estimate (broken line).

We used the exceedence proportion as an index of the overall or omnibus sensitivity to significant correlations. The exceedence proportion is the proportion of voxels in the SPM{z} that are above threshold. This exceedence proportion has been used for some years as a rough guide to the omnibus significance of SPMs [Friston et al., 1991] and has recently been the subject of some rigorous statistical analysis [Worsley and Vandal, submitted]. The results of this analysis are presented in Figure 9 (top). The first thing to note is that the greatest exceedence proportion is obtained for values of λ in the range 6 to 8 s. The value used in the previous section (7.69 s) is marginally suboptimal (the best value was 6.6 s).

Figure 7. Transfer functions. Left: The modulation transfer function expressed as the square of the frequency representation of a hemodynamic response function. Right: Relative activation expressed as a function of on-off cycle length. The data are copied from the graphical data in Bandettini [1993], and the solid line represents that predicted with a hemodynamic response function of Poisson form and parameter 5 s.

One explanation for this was found in a closer analysis of these results. If the optimal value of λ is computed for each voxel, one observes a systematic and spatially ordered change as one moves from striate cortex to extrastriate cortex, with the extrastriate cortices preferring a shorter delay and dispersion. In other words, some regions of occipital cortex respond in a more acute and phasic way to the same sensory input than do others. Figure 9 (lower right) shows these voxel-specific optimal values of λ for all voxels that showed a reasonable correlation with the convolved contrast (noncorrected threshold of P = 0.001, lower left). This result is not surprising, given the reported evidence for phase shifts in response profiles (of several seconds) in MRI data [Bandettini, 1993]. What is surprising here is that the extrastriate regions appear to respond before primary visual cortex does. Obviously, many potential neuronal and hemodynamic explanations exist for this (e.g., slice-reentrant large draining vessels). However, one interesting possibility is that V1 responses are modulated by reentrant connections [Edelman, 1978] from extrastriate regions. This particular aspect of interregional dynamics will be addressed in a subsequent paper. In conclusion, based on the exceedence proportion, the approach appears robust as long as λ is within a second or so of the real delay and dispersion.

DISCUSSION

We have described what we consider to be a reasonable characterization of MRI time-series in terms of regionally specific correlations between a sensorimotor or cognitive input and the hemodynamic response. This approach is based on the correlations between the MRI signal and the input sequence convolved with an estimate of the hemodynamic response function. The hemodynamic response function (in this paper) was assumed to be Poisson and was determined using the observed autocorrelations in the physiological data (having removed stimulus-locked components). To give this approach statistical validity, it was brought into the framework of statistical parametric mapping by using a statistical parameter that tests for significant cross-correlations (between sensory input and physiological response) in the presence of intrinsic autocorrelations, and by using established thresholding techniques to render the probability of finding a significant region, by chance, suitably small (0.05).

Figure 8. Phase differences and delay. The distribution of phase differences between the contrast and the response at each voxel. The phase difference was computed for the largest Fourier component of the contrast (input). The vertical broken line is the delay estimated on the basis of temporal smoothness.

In particular, the proposed statistic ζ(τ) has a Gaussian distribution under the null hypothesis, and the resulting map of ζ(0) constitutes an SPM{z}. ζ(0) is simply the correlation between the convolved sensory input and the hemodynamic response, scaled by a factor that can be thought of as the square root of the effective degrees of freedom v. v is a function of the distribution and overlap of the two processes in frequency space. One important aspect of this work is that the hemodynamic response function, or temporal point spread function, is estimated without reference to any assumed input (it is estimated using intrinsic autocorrelations in the physiological data only). Any estimation based on the sensory input would clearly result in a rather circular exercise, in that the technique would be biased towards the input function used. A number of alternative approaches to the problem of detecting significant cross-correlations exist, some of which we have evaluated. One obvious approach is to use the maximum cross-correlation between c(t) and x(t) over some reasonable interval, max γcx(τ). This is simply achieved by computing γcx(τ) and thresholding in space and time according to Equation 14.

Figure 9. Effects of changing the response function. Top: The exceedence proportion (number of voxels above threshold P = 0.001) as a function of λ. Lower left: The SPM{z} thresholded at a descriptive (noncorrected) level of P = 0.001 (one-tailed); in this instance, the voxel values are the largest ζ(0) obtained over a range of λ. Lower right: The optimal value of λ for all voxels surviving a P = 0.001 threshold (voxels highlighted in the lower left). The optimal value of λ is defined as that which provides the greatest ζ(0). The gray scale is arbitrary, with small values of λ (faster and more acute responses) being brighter. Note that V1 (centromedial dark areas) appears to be subject to the greatest delay and dispersion.

For slice data, the SPM{γcx(τ)} represents a three-dimensional process. For volume MRI data, the corresponding SPM would be four-dimensional. We have tried this approach and found it to be less sensitive and robust than the one we have presented. Another test for significant cross-correlations would be to integrate γcx(τ) over some reasonable interval or, more generally, take a weighted sum over an interval. In fact, the method described in this paper is exactly equivalent to taking a weighted sum of γcx(τ), where the weighting is defined by the hemodynamic response function h(τ). This can be seen by noting that the weighted sum of the cross-covariance function is obtained by the three-way convolution [c(t) ⊗ x(t)] ⊗ h(τ). By the commutativity of the convolution operator (⊗), this is equal to [c(t) ⊗ h(τ)] ⊗ x(t). The latter expression represents the cross-correlation between the convolved contrast and x(t), which is precisely what has been used. The choice of a Poisson form for the hemodynamic response function means that delay and dispersion are totally confounded. We have examined the effects of varying the assumed delay and dispersion separately, using the exceedence proportion as a measure of omnibus sensitivity.

In these analyses, we used the gamma distribution, which has two parameters and allows the independent manipulation of mean and variance. We did not report the results of this analysis because they did not contribute any significant insight. In general, the exceedence proportion showed exactly the same dependence on delay as for the Poisson form. The dependence on dispersion was much less marked. We could have used a Gaussian response function and set the delay equal to the dispersion (for large values of λ, this and the Poisson function are the same). The aesthetic advantage of the Poisson function is that it only exists for positive lag (τ). Obviously, there are many variants on the details we have presented here. The central tenets of the approach are as follows. Regionally specific associations are assessed between neuronal activity and a stimulus or task parameter in terms of correlations. The appropriate correlation is between the observed hemodynamic response and the hypothesized input, which has been convolved with a response function. The response function is specified without reference to an assumed input. Care is taken to ensure that, in the absence of any response, intrinsic temporal and spatial autocorrelations do not subtend spurious correlations (i.e., the experiment-wise false-positive rate is maintained at a suitably low level). Within these constraints, a number of variants deserve mention. First, in the analysis above, it was assumed that the response function and intrinsic autocorrelations were the same for all voxels. An identical analysis performed in parallel for each voxel would allow for regional variation in the response function. Our analyses (see last section) suggest that these regional variations can be substantial. This extension would certainly increase sensitivity, but would require long time-series to ensure robust estimates of the intrinsic spectral density gx(ω). A second variant would involve estimating the response function on the basis of independent data; for example, a second study, or dividing the same study into two sections and using the SVD or PCA approach and least-squares deconvolution (Haxby J, personal communication). Although we used temporal smoothness to estimate both the intrinsic spectral density of the MRI signal and the response function, it is important to note that the first application is mandatory; the second is not. Any response function will, under the null hypothesis, render the distribution of ζ(0) Gaussian. Despite the fact that sensitivity to real change will be optimized when the dispersion or variance of the response function "matches" the intrinsic autocorrelations, no statistical requirement exists for this matching to be implemented. In other words, the response function is a personal choice. Complicated biphasic response functions are attractive in the sense that they are capable of modeling the "undershoot" effect. However, the data did not suggest that this extension was justified in the examples above. One particular constraint imposed by a Poisson form is that delay and dispersion are yoked. This may be inappropriate for some data, and this constraint should not be considered a cornerstone of the proposed method. One particular fallacy of linking the delay and dispersion is revealed when considering the effect of temporal smoothing of the data to increase signal to noise. Here, the underlying response function will be subject to this smoothing, and its effective dispersion will increase. This would be correctly detected and modeled by post hoc estimates of smoothness. However, the "correct" delay will not have changed. In some instances, it may be best to fix the response function to some appropriate delay and allow its variance to be dictated by σt (e.g., using the gamma distribution). In the method presented, the impact of thermal noise was suppressed by temporal smoothing at a data preprocessing stage. With longer time-series, it may be possible to adopt a more formal approach by segregating the data into an uncorrelated white (or Rayleigh) noise process and an autocorrelated physiological process. Many linear devices exist for this sort of analysis (e.g., autoregressive and moving average models). However, they are usually not considered in the context of very short time-series.

Correlations or categories?

The principle of correlating sensorial or behavioral parameters with cerebral excitation has a long history in neuroscience, from the days of Goltz and Ferrier [e.g., Ferrier, 1875; Goltz, 1881; Phillips et al., 1984] to modern-day approaches that use exogenous stimulation (e.g., magneto-stimulation) or endogenous activity (PET and MRI). Correlations in microelectrode recording and functional imaging data have received special attention recently, as they are a possible index of functional and effective connectivity [Gerstein and Perkel, 1969; Friston et al., 1993b]. Although a fundamental equivalence exists between testing for specific time-dependent changes with contrasts in analyses of (co-)variance of PET data and correlating a time-dependent parameter with

18 + Friston et al. + MR data (refer to Friston et al. [1993a] for a concrete illustration of this point), it is important to acknowledge the differences: MR data represent time-series that lend themselves directly to signal processing. For example, in this paper, using a few standard results from the theory of stochastic processes, one can solve the problem of detecting cross-correlations in the presence of autocorrelations and avoid what could have been a discursive and dense statistical analysis. This paper has focused exclusively on correlations for detecting significant activations. An alternative, found in the recent functional MR literature, is to categorize the MR time-series into activation and baseline scans and express the mean difference in some form. Researchers have had profound reservations about this approach related to the observation that the residual MR signal around the mean for each stimulus-cycle is not normally distributed [Baker J, personal communication]. The analysis presented in this paper concurs with these reservations. Although the residual variability about the convolved contrast y(t) is normally distributed, the probability that it will be similarly distributed about a horizontal line running from stimulus onset to offset is low. The MR time-series reflects a true dynamic response to an input, characterized by its response function. To pretend that this response following stimulus onset constitutes a single category is not really tenable. The proper way to categorize these time-series conceptually is in terms of offset from stimulus onset. For example, the set of first scans after stimulus onset is a different category from the set of second scans following stimulus onset. The residuals about the means of these categories will be normally distributed. The convolved contrast implicitly assumes the latter form of categorization because only these scans have the same values of y(t). mplicit in phase-shifting (as a tool to estimate intrinsic autocorrelations) is the requirement that stimulus cycles are continuously repeated. A fundamental point to be made here is: f implementation of intrasubject signal averaging in its broadest sense is the goal, then multiple presentations of the same stimulus-cycle are obligatory (the more the better). This places significant constraints on experimental design. The key theme, again, is that the object that is averaged is a complete stimulus-cycle, not repeated scans within a cycle. A failure to recognize this will sacrifice all the rich nonlinear information about an evoked neuronal and hemodynamic transient for the apparent ease of simple categorical comparisons. ronically, this non- linearity invalidates a parametric approach to averaging scans from the same stimulus-cycle (see above). n the context of MR data, correlational approaches offer many advantages over alternative approaches. One key issue is that the correlation between sensory input and hemodynamics will not be sensitive to any artifact that is orthogonal (independent) to the input. These artifacts include movement artifacts and lowfrequency aliasing artifacts due to an interaction between the repeat time and cardiac, respiratory, and heart rate variability cycles. The proposed approach has the following interesting feature regarding movement artifacts. 
The proposed approach has the following interesting feature regarding movement artifacts. Because the effect of movement on the signal is not subject to delay or dispersion (whereas the effect of evoked neuronal activity is), convolving the contrast with a hemodynamic response function renders the correlations insensitive to movement associated with changing stimuli or tasks.

In conclusion, we hope to have presented some reasonable solutions to a relatively simple problem: how to identify regionally specific correlations between sensory, cognitive, or behavioral parameters and cerebral physiology measured with functional MRI.

ACKNOWLEDGMENTS

We wish to thank Jack Belliveau, John Baker, Jim Haxby, and Nick Lange for helpful discussions about these issues. The work by K.J.F. was performed as part of the Institute Fellows in Theoretical Neurobiology research program at The Neurosciences Institute, which is supported by the Neurosciences Research Foundation. The Foundation receives major support for this research from the J.D. and C.T. MacArthur Foundation and the Lucille P. Markey Charitable Trust. K.J.F. was a W.M. Keck Foundation Fellow.

REFERENCES

Adler RJ, Hasofer AM (1981): The Geometry of Random Fields. New York: John Wiley & Sons.
Aertsen A, Preissl H (1991): Dynamics of activity and connectivity in physiological neuronal networks. In: Schuster HG (ed): Non Linear Dynamics and Neuronal Networks. New York: VCH Publishers.
Bandettini PA (1993): MRI studies of brain activation: Temporal characteristics. In: Functional MRI of the Brain. Berkeley, California: Society of Magnetic Resonance in Medicine.
Bandettini PA, Wong EC, Hinks RS, Tikofsky RS, Hyde JS (1992): Time course EPI of human brain function during task activation. Magn Reson Med 25:
Bandettini PA, Jesmanowicz A, Wong EC, Hyde JS (1993): Processing strategies for time course data sets in functional MRI of the human brain. Magn Reson Med 30:

Cox DR, Miller HD (198): The Theory of Stochastic Processes. New York: Chapman and Hall.
Edelman GM (1978): Group selection and phasic reentrant signaling: A theory of higher brain function. In: Edelman GM, Mountcastle VB (eds): The Mindful Brain. Cambridge: MIT Press.
Ferrier D (1875): Experiments on the brain of monkeys. Proc R Soc Lond 23:
Fox PT, Raichle ME, Mintun MA, Dence C (1988): Nonoxidative glucose consumption during focal physiologic neural activity. Science 241:462.
Friston KJ, Frith CD, Liddle PF, Dolan RJ, Lammertsma AA, Frackowiak RSJ (1990): The relationship between global and local changes in PET scans. J Cereb Blood Flow Metab 10:
Friston KJ, Frith CD, Liddle PF, Frackowiak RSJ (1991): Comparing functional (PET) images: The assessment of significant change. J Cereb Blood Flow Metab.
Friston KJ, Jezzard P, Frackowiak RSJ, Turner R (1993a): Characterizing focal and distributed physiological changes with MRI and PET. In: Functional MRI of the Brain-Syllabus. Berkeley, California: Society of Magnetic Resonance in Medicine.
Friston KJ, Frith CD, Liddle PF, Frackowiak RSJ (1993b): Functional connectivity: The principal component analysis of large (PET) data sets. J Cereb Blood Flow Metab 13:5-14.
Friston KJ, Worsley KJ, Frackowiak RSJ, Mazziotta JC, Evans AC (in press): Assessing the significance of focal activations using their spatial extent. Human Brain Mapping.
Frostig RD, Lieke EE, Ts'o DY, Grinvald A (1990): Cortical functional architecture and local coupling between neuronal activity and the microcirculation revealed by in vivo high resolution optical imaging of intrinsic signals. Proc Natl Acad Sci USA.
Gerstein GL, Perkel DH (1969): Simultaneously recorded trains of action potentials: Analysis and functional interpretation. Science 164:
Goltz F (1881): In: MacCormac W (ed): Transactions of the 7th International Medical Congress, Vol. 1. London: Kolkmann.
Golub GH, Van Loan CF (1991): Matrix Computations, 2nd ed. Baltimore: The Johns Hopkins University Press.
Grimmett G, Welsh D (1986): Probability: An Introduction. Oxford: Clarendon Press.
Hasofer AM (1978): Upcrossings of random fields. Suppl Adv Appl Prob 10:14-21.
Keys RG (1981): Cubic convolution interpolation for digital image processing. IEEE Trans Acoust Speech Signal Process 29:
Kwong KK, Belliveau JW, Chesler DA, Goldberg IE, Weisskoff RM, Poncelet BP, Kennedy DN, Hoppel BE, Cohen MS, Turner R, Cheng HM, Brady TJ, Rosen BR (1992): Dynamic magnetic resonance imaging of human brain activity during primary sensory stimulation. Proc Natl Acad Sci USA 89:
More JJ (1977): The Levenberg-Marquardt algorithm: Implementation and theory. In: Watson GA (ed): Numerical Analysis. Lecture Notes in Mathematics 630. Berlin: Springer-Verlag.
Nelson J, Salin PA, Munk MHJ, Arzi M, Bullier J (1992): Spatial and temporal coherence in cortico-cortical connections: A cross-correlation study in areas 17 and 18 in the cat. Visual Neurosci 9:
Ogawa S, Tank DW, Menon R, Ellermann JM, Kim SG, Merkle H, Ugurbil K (1992): Intrinsic signal changes accompanying sensory stimulation: Functional brain mapping with magnetic resonance imaging. Proc Natl Acad Sci USA 89:
Phillips CG, Zeki S, Barlow HB (1984): Localization of function in the cerebral cortex: Past, present and future. Brain 107:
Talairach P, Tournoux J (1988): A Stereotactic Coplanar Atlas of the Human Brain. Stuttgart: Thieme.
Worsley KJ, Vandal (submitted): Tests for changes in random fields with applications to medical images.
Worsley KJ, Evans AC, Marrett S, Neelin P (1992): A three-dimensional statistical analysis for rCBF activation studies in human brain. J Cereb Blood Flow Metab 12:
