The interpretation of diagnostic tests



Statistical Methods in Medical Research 1999; 8: 113-134

David E Shapiro, Center for Biostatistics in AIDS Research, Harvard School of Public Health, Boston, Massachusetts, USA

Laboratory diagnostic tests are central in the practice of modern medicine. Common uses include screening a specific population for evidence of disease and confirming or ruling out a tentative diagnosis in an individual patient. The interpretation of a diagnostic test result depends on both the ability of the test to distinguish diseased from nondiseased subjects and the particular characteristics of the patient and setting in which the test is being used. This article reviews statistical methodology for assessing laboratory diagnostic test accuracy and interpreting individual test results, with an emphasis on diagnostic tests that yield a continuous measurement. The article begins with a summary of basic concepts and terminology, then briefly discusses study design and reviews methods for assessing the accuracy of a single diagnostic test, comparing the accuracy of two or more diagnostic tests, and interpreting individual test results.

1 Introduction

Laboratory diagnostic tests are central in the practice of modern medicine. Common uses include screening a specific population for evidence of disease and confirming or ruling out a tentative diagnosis in an individual patient. 1 The interpretation of a diagnostic test result depends on both the ability of the test to distinguish diseased from nondiseased subjects and the particular characteristics of the patient and setting in which the test is being used. 2 This article reviews statistical methodology for assessing laboratory diagnostic test accuracy and interpreting individual test results. For convenience, a diagnostic test will be called continuous, dichotomous, or ordinal according to whether the test yields a continuous measurement (e.g. blood pressure), a dichotomous result (e.g. HIV-positive or HIV-negative), or an ordinal outcome (e.g. confidence rating for presence of disease: definitely, probably, possibly, probably not, definitely not). The main focus here is on continuous diagnostic tests, since the majority of laboratory diagnostic tests are of this type 1 and since much new methodology for such tests has appeared recently in the statistical literature. Although dichotomous diagnostic tests also are common in laboratory medicine, they will only be discussed briefly here, because much of the methodology involves analysis of binomial proportions and 2x2 tables, which is likely to be more familiar to the reader. Many of the methods proposed for continuous tests borrow heavily from the vast literature on ordinal diagnostic tests, which are common in the radiological imaging field; methodology for ordinal tests has been reviewed in depth elsewhere, 3,4 including a recent issue of Statistical Methods in Medical Research (volume 7, number 4, December 1998), and will only be discussed here when pertinent to continuous tests.

This article is organized as follows: Section 2 summarizes some basic concepts and terminology. Section 3 discusses study design and methods for assessing the accuracy of a single diagnostic test. Section 4 reviews methods for comparing the accuracy of two or more diagnostic tests. Section 5 describes methods for interpreting individual test results. Finally, Section 6 contains some concluding remarks.

Address for correspondence: DE Shapiro, Center for Biostatistics in AIDS Research, 651 Huntington Avenue, Boston, MA 02115-6017, USA. E-mail: shapiro@sdac.harvard.edu

© Arnold 1999 0962-2802(99)SM180RA

2 Basic concepts and terminology

The interpretation of a diagnostic test depends on two factors: (1) the intrinsic ability of the diagnostic test to distinguish between diseased patients and nondiseased subjects (discriminatory accuracy); and (2) the particular characteristics of the individual and setting in which the test is applied. Each of these factors will be discussed in turn.

2.1 Discriminatory accuracy

The discriminatory accuracy of a diagnostic test is commonly assessed by measuring how well it correctly identifies n1 subjects known to be diseased (D+) and n0 subjects known to be nondiseased (D-). If the diagnostic test gives a dichotomous result for each subject, e.g. positive (T+) or negative (T-), the data can be summarized in a 2x2 table of test result versus true disease status (Figure 1). In the medical literature, discriminatory accuracy is commonly measured by the conditional probabilities of correctly classifying diseased patients [P(T+|D+) = TP/n1 in Figure 1, called the sensitivity or true positive rate (TPR)] and nondiseased subjects [P(T-|D-) = TN/n0, called the specificity or true negative rate (TNR)]; equivalently, the probabilities of type I error [P(T+|D-) = FP/n0 = 1 - specificity, called the false positive rate (FPR)] and type II error [P(T-|D+) = FN/n1 = 1 - sensitivity, called the false negative rate (FNR)] may also be used.

Figure 1. The 2x2 table of frequencies summarizing the results of a dichotomous diagnostic accuracy study. TP represents the number of true positives (diseased patients who test positive), FP the false positives, TN the true negatives, and FN the false negatives.
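To make the definitions concrete, here is a minimal Python sketch computing these indices from a 2x2 table; the counts are invented for illustration only:

```python
# Accuracy indices from the 2x2 table of Figure 1 (hypothetical counts).
TP, FN = 90, 10    # n1 = 100 diseased subjects
FP, TN = 20, 180   # n0 = 200 nondiseased subjects

n1, n0 = TP + FN, FP + TN
sensitivity = TP / n1   # P(T+|D+), true positive rate
specificity = TN / n0   # P(T-|D-), true negative rate
fpr = FP / n0           # type I error, 1 - specificity
fnr = FN / n1           # type II error, 1 - sensitivity

print(f"sensitivity={sensitivity:.2f} specificity={specificity:.2f} "
      f"FPR={fpr:.2f} FNR={fnr:.2f}")
```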

Many other indices of accuracy have been proposed for dichotomous diagnostic tests, including several measures of association in the 2x2 table familiar to biostatisticians, such as the odds ratio and kappa statistic. 5 One commonly-used accuracy index, the overall percentage of correct test results Pc = (TP + TN)/N, deserves comment here because it can be misleading. 2 Unlike sensitivity and specificity, Pc depends on the prevalence of disease in the study sample (n1/N); in fact, Pc is actually a prevalence-weighted average of sensitivity and specificity. Consequently, when the prevalence is high or low, Pc can be high even when the specificity or sensitivity, respectively, is quite low. For example, in screening pregnant women for HIV infection in the USA, where the prevalence in many areas is < 1%, an HIV test that classified all women as HIV-negative, and therefore identified none of the HIV-positive women, would have Pc > 99% even though its sensitivity would be 0%.

Many laboratory diagnostic tests give a quantitative result X rather than a dichotomous result. The degree of overlap in the distributions of test results for diseased [f(x|D+)] and nondiseased subjects [f(x|D-)] determines the ability of the test to distinguish diseased from nondiseased. In screening situations, where the test is used to separate patients into two groups (those requiring further investigation and/or intervention and those who do not) it is common to define a threshold or decision limit γ and classify the patient as diseased if X > γ and nondiseased if X < γ (Figure 2). 1 Each decision limit yields a different 2x2 table of dichotomized test result versus true disease status; thus, sensitivity and specificity can be estimated for each decision limit. However, as γ decreases, sensitivity increases but specificity decreases; as γ increases, specificity increases at the expense of sensitivity.

Figure 2. Hypothetical distributions of diagnostic test results X for diseased and nondiseased subjects. The vertical line at X = γ indicates the decision limit for a positive test. The shaded area to the right of γ is the FPR; the shaded area to the left of γ is the FNR.
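The prevalence-weighted-average identity, Pc = prevalence x sensitivity + (1 - prevalence) x specificity, is easy to verify numerically; the sketch below uses illustrative numbers to reproduce the screening scenario just described:

```python
# Pc as a prevalence-weighted average of sensitivity and specificity.
# A test that calls everyone negative (sensitivity 0, specificity 1)
# still attains Pc > 99% when prevalence is below 1%.
def overall_accuracy(sensitivity, specificity, prevalence):
    return prevalence * sensitivity + (1 - prevalence) * specificity

print(overall_accuracy(0.0, 1.0, 0.008))  # 0.992: high Pc, useless test
print(overall_accuracy(0.9, 0.9, 0.008))  # 0.900: lower Pc, far better test
```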

Thus, there is a trade-off between sensitivity and specificity as the decision limit varies, which must be accounted for in assessing discriminatory accuracy. A useful graphical summary of discriminatory accuracy is a plot of sensitivity versus either specificity or FPR as the decision limit varies, which is called the receiver operating characteristic (ROC) curve (Figure 3). The upper left corner of the graph represents perfect discrimination (TPR = 1, FPR = 0), while the diagonal line where TPR = FPR represents discrimination no better than chance. The ROC curve of a diagnostic test is invariant with respect to any monotone transformation of the test measurement scale for both diseased and nondiseased subjects; in other words, the ROC curve does not depend on the scale of measurement for the test, which makes it useful for comparing diagnostic tests of different scales, including tests for which the decision limit scale is latent rather than quantitative, such as medical imaging techniques. Since it is often useful to have a single number that summarizes a test's accuracy, various summary indices based on the ROC curve have been proposed; the most popular such summaries are the area under the ROC curve (AUC), which represents both the average sensitivity over all values of FPR and the probability that test results for a randomly-selected diseased subject and nondiseased subject will be ordered correctly in magnitude; the partial AUC, which is the AUC over a range of FPR that is relevant to a particular setting; and the sensitivity at a fixed FPR.

2.2 Context-specific interpretation

The assessment of discriminatory accuracy focuses on the performance of the diagnostic test when disease status is known; however, when interpreting the diagnostic test result for an individual patient, typically the true disease status is unknown and it is of interest to use the test result to estimate the posterior probability of disease.

Figure 3. The ROC curve corresponding to the test result distributions shown in Figure 2. The point labelled γ on the curve corresponds to the decision limit shown in Figure 2.
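The correspondence between Figures 2 and 3 can be traced numerically: sweeping the decision limit γ through a pair of hypothetical normal test-result distributions generates the (FPR, TPR) pairs of the ROC curve. All parameter values below are invented for illustration:

```python
import numpy as np
from scipy.stats import norm

mu0, sigma0 = 0.0, 1.0    # nondiseased X ~ N(mu0, sigma0)
mu1, sigma1 = 1.5, 1.0    # diseased Y ~ N(mu1, sigma1)

gammas = np.linspace(-4.0, 6.0, 201)   # candidate decision limits
fpr = norm.sf(gammas, mu0, sigma0)     # P(X > gamma) = 1 - specificity
tpr = norm.sf(gammas, mu1, sigma1)     # P(Y > gamma) = sensitivity

# e.g. the operating point at gamma = 1.0 (cf. the point in Figure 3)
i = np.argmin(np.abs(gammas - 1.0))
print(f"gamma=1.0: FPR={fpr[i]:.3f}, TPR={tpr[i]:.3f}")
```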

For interpretation of a dichotomous test result, the quantities of interest are the posterior probability of disease given a positive test result [P(D+|T+) = TP/m1 in Figure 1], called the positive predictive value (PPV), and the posterior probability of no disease given a negative test result [P(D-|T-) = TN/m0], called the negative predictive value (NPV). Using Bayes theorem, one can calculate the PPV and NPV based on the sensitivity and specificity of the diagnostic test and the prior probability of disease (prevalence); if probabilities are converted to odds [odds = probability/(1 - probability)], this calculation has a particularly simple form: posterior odds of disease = prior odds of disease x likelihood ratio, where the likelihood ratio is P(T+|D+)/P(T+|D-) = sensitivity/(1 - specificity).

For tests that yield a quantitative result X, one could determine the PPV and NPV as in the dichotomous case by dichotomizing the test result at a fixed decision limit γ. However, this method has two limitations: it discards information about the likelihood of disease, since it ignores the distance of X from γ, i.e. it yields the same estimate of PPV or NPV whether X just exceeds γ or is well beyond γ; also, this method requires selecting a decision limit that is optimal in some sense for the particular setting. An alternative approach that avoids these difficulties is to calculate the likelihood ratio L(X) = f(X|D+)/f(X|D-) corresponding to the actual test result, and use it to convert the patient's prior probability of disease into a posterior probability of disease P(D+|X). It is important to remember that PPV, NPV, and posterior probability of disease are specific to a particular patient or setting, and cannot necessarily be generalized to a different population or setting, because these measures depend strongly on the prevalence of disease. For example, an HIV screening test that is 82% sensitive and > 98% specific would have a PPV of 88% if the prevalence were 10%, as in a high risk setting, but a PPV of < 3% if the prevalence were only 0.044%, as might be the case in screening the general population. 1

3 Assessing the accuracy of a single diagnostic test

This section begins with a brief overview of design considerations for studies of discriminatory accuracy. It then reviews methods for continuous diagnostic tests, including estimation of the ROC curve, summary indices, and incorporating covariates, and concludes with a brief discussion of methods for dichotomous diagnostic tests. Context-specific assessment of diagnostic test performance will be discussed in Section 5.

3.1 Study design

Study design considerations for diagnostic test accuracy studies have received much attention in the literature, but have been reviewed elsewhere, 2,4,6 so only a brief summary of the major issues will be given here. Boyd 6 reviews common study designs for diagnostic test accuracy studies, including prospective and retrospective designs. Results of such studies are subject to several potential biases; 4,7,8 some of these biases can be avoided by careful study design, and some can be corrected to some degree in the analysis. Begg 4 identifies the most prominent and important biases as those concerned with issues relating to the reference test, or 'gold standard', used to determine true disease status.
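A minimal sketch of the odds form of Bayes theorem follows; the sensitivity and specificity values are chosen to be consistent with the HIV example above (0.988 is one value compatible with the stated '> 98% specific'), not taken from any real assay:

```python
# Posterior probability of disease from sensitivity, specificity and
# prevalence: posterior odds = prior odds x likelihood ratio.
def ppv(sensitivity, specificity, prevalence):
    prior_odds = prevalence / (1 - prevalence)
    lr_positive = sensitivity / (1 - specificity)
    post_odds = prior_odds * lr_positive
    return post_odds / (1 + post_odds)

print(f"{ppv(0.82, 0.988, 0.10):.2f}")     # ~0.88 in a high-risk setting
print(f"{ppv(0.82, 0.988, 0.00044):.3f}")  # ~0.03 in general screening
```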

Verification bias in the reported sensitivity and specificity estimates occurs if the selection of patients to receive the reference test is influenced by the result of the diagnostic test being studied. A prospective study design in which all subjects in a defined cohort receive the reference test is preferred, but selective verification may be unavoidable for some diseases, such as when the reference test is invasive; methods for correction of verification bias exist and have been reviewed recently by Zhou. 9 Another important source of bias is an imperfect reference test; many bias-correction methods have been proposed. 2,4,10-12 The interpretation of the reference test and diagnostic test results should be independent and blinded. 8 Failed or uninterpretable test results should be included in the analysis, because such results may affect the test's usefulness. 4,8 Selection of appropriate samples of diseased and nondiseased subjects is critical, since sensitivity and specificity can vary according to the 'case-mix' or disease spectrum in the study cohort. 2,4,6,13 Adequate sample size is also critical; methods for determining sample size for diagnostic accuracy studies have been reviewed recently by Obuchowski. 13

Many diagnostic accuracy studies in the medical literature have serious methodologic flaws. A recent review 14 of diagnostic test studies published in four prominent general medical journals between 1978 and 1993 found that the percentage of studies that fulfilled criteria for each of seven methodological standards ranged from 8% (reported test indexes for clinical subgroups) to 46% (avoided verification bias). The proportions of studies meeting individual standards and meeting multiple standards increased over time during the review period, but in the most recent interval, 1990-93, only one methodological standard was fulfilled by more than 50% of the studies, and only 44% of the studies met at least three of the standards.

3.2 ROC curve estimation

Let X and Y denote the continuous diagnostic test result for nondiseased and diseased populations with cumulative distribution functions F and G, respectively. Writing F̄ = 1 - F and Ḡ = 1 - G for the corresponding survivor functions, the sensitivity at decision limit t is Ḡ(t) and the FPR is F̄(t). The ROC curve of the diagnostic test is a plot of (F̄(t), Ḡ(t)) for all possible values of t, or, equivalently, a plot of (p, q), where p = FPR ranges over (0, 1) and q = TPR = Ḡ(F̄⁻¹(p)). Suppose diagnostic test results x1, ..., xn0 are available for n0 nondiseased subjects and y1, ..., yn1 for n1 diseased subjects. Many approaches have been proposed for estimating the ROC curve, including parametric, semiparametric, and nonparametric methods; each category will be reviewed in turn.

Parametric approaches

One parametric approach is to assume that F and G follow a parametric family of distributions, e.g. normal or lognormal, and fit these distributions to the observed (possibly transformed) test results. 1,15 A major advantage of this distributional approach is that it yields a smooth ROC curve that is completely specified by a small number of parameters, which form the basis for statistical inference; hypothesis tests, variance estimates, and confidence intervals for the fully parametric binormal ROC curve and its parameters are well-developed.

The parametric method, however, is quite sensitive to departures from the distributional assumptions. 15 An alternative parametric approach is to specify the functional form of the ROC curve, e.g. hyperbolic 16 or weighted exponential. 17 This approach is closely related to the distributional approach, in that specifying the underlying test result distributions determines the form of the ROC curve; e.g. the symmetric 'bilogistic' ROC curve obtained by assuming that F and G follow logistic distributions with equal scale parameters is hyperbolic in form. 5 Note that in fitting an ROC curve directly in ROC space, the fitting algorithm needs to take into account the errors in both FPR and TPR; e.g. simple least squares minimizing vertical deviations would be inappropriate. 2

Semi-parametric approaches

Metz et al. 18 propose a semi-parametric algorithm, LABROC4, that groups the ordered continuous test measurements into runs of diseased and nondiseased subjects and then fits a binormal ROC curve using the Dorfman-Alf maximum likelihood algorithm for ordinal data. 19 The LABROC4 approach assumes that the underlying distributions of the grouped nondiseased and diseased test measurements can be transformed to normal distributions by an unspecified monotone transformation of the test measurement axis. The resulting binormal ROC curve is of the form TPR = Φ(a + b Φ⁻¹(FPR)), where Φ is the standard normal cumulative distribution function, a is the scaled difference in means and b the ratio of standard deviations of the latent normal distributions of diseased and nondiseased. Large-sample normal-theory standard errors and confidence intervals for the ROC curve parameters are provided.

The assumption of latent normal distributions for grouped data in the LABROC4 method is less strict than the assumption of explicit distributional form for the continuous measurements in the fully parametric approach. However, the LABROC4 method does still assume that a single monotone transformation exists that would make both latent distributions normal simultaneously; this assumption can be checked graphically by assessing linearity of the raw data on normal probability axes (Φ⁻¹(FPR) versus Φ⁻¹(TPR)). Hanley 20 demonstrated that by transformation one can make many highly non-normal pairs of distributions resemble the binormal model; however, Zou et al. 21 constructed a simulated dataset in which G was bimodal but F was unimodal (as might occur if the diagnostic test identified only one of two possible manifestations of a disease), and found that the LABROC4 ROC curve departed substantially from the ROC curves obtained by nonparametric methods (described next). Hsieh and Turnbull 22 proposed a generalized least squares procedure for fitting the binormal model to grouped continuous data and compared it to the Dorfman-Alf algorithm; 19 they also described a minimum-distance estimator of the binormal ROC curve that does not require grouping the data.
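The binormal form can be illustrated directly when the raw measurements are assumed normal: plugging sample moments into a (the scaled difference in means) and b (the ratio of standard deviations) gives a smooth fitted curve. This is a sketch of the simple distributional approach, not a reimplementation of LABROC4 or its maximum likelihood fitting, and the data are simulated:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
x = rng.normal(0.0, 1.0, 100)   # nondiseased test results
y = rng.normal(1.5, 2.0, 80)    # diseased test results

# Moment estimates of the binormal parameters
a = (y.mean() - x.mean()) / y.std(ddof=1)   # scaled difference in means
b = x.std(ddof=1) / y.std(ddof=1)           # ratio of standard deviations

p = np.linspace(0.001, 0.999, 99)       # FPR grid
roc = norm.cdf(a + b * norm.ppf(p))     # TPR = Phi(a + b * Phi^-1(FPR))
print(f"a={a:.2f}, b={b:.2f}, "
      f"TPR at FPR=0.10: {roc[np.argmin(abs(p - 0.10))]:.3f}")
```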

Nonparametric approaches

The simplest nonparametric approach involves estimating F and G by the empirical distribution functions F̂(t) = #{xi < t}/n0 and Ĝ(t) = #{yi < t}/n1 for the nondiseased and diseased subjects, respectively. This approach is free of distributional assumptions in that it depends only on the ranks of the observations in the combined sample, 23 but the resulting empirical ROC curve is a series of horizontal and vertical steps (in the absence of ties), which can be quite jagged. 2 Hsieh and Turnbull 22 derive asymptotic properties of this estimator. If the data contain ties between test measurements of nondiseased and diseased subjects, corresponding sections of the empirical ROC curve are undefined; 2 Le 24 proposes a modified estimator of the empirical ROC curve based on mid-ranks that overcomes this difficulty and shows that it converges uniformly to the true ROC curve.

An alternative nonparametric approach is to fit a smoothed ROC curve using kernel density estimation of F and G: 21,25

F̂(t) = (1/n0) Σi K((t - xi)/h)

where K is the cumulative integral of the kernel k, a mean-0 density function, and h is the 'bandwidth' which controls the amount of smoothing; Ĝ(t) is defined similarly based on y1, ..., yn1. Zou et al. 21 suggested using a biweight kernel k(t) = (15/16)(1 - t²)² for t in (-1, 1), with bandwidth h = 0.9 min(SD, IQR/1.34) n^(-1/5), where SD and IQR are the sample standard deviation and interquartile range, respectively; separate bandwidths hn0 and hn1 are determined for F̂ and Ĝ. This bandwidth is optimal for histograms that are roughly bell-shaped, so that a preliminary transformation of the test measurement scale might be required. Lloyd 25 suggested using the standard normal kernel with bandwidths hn1/hn0 = (n0/n1)^(1/3); this choice of bandwidth minimizes mean square error (MSE) if the overall smoothness of the two densities is similar. Lloyd also suggests using the bootstrap to reduce bias in the smooth ROC curve. Advantages of the kernel method are that it closely follows the details of the original data and is free of parametric assumptions, but disadvantages are that it is unreliable at the ends of the ROC curve, requires some ad hoc experimentation with choice of bandwidth, and does not work well when substantial parts of F or G are near zero.

Simultaneous confidence bands for the entire ROC curve

For the binormal model, Ma and Hall 26 give simultaneous confidence bands for the entire ROC curve by constructing Working-Hotelling confidence bands in probit coordinates, where the binormal ROC curve is linear, and transforming them back to ROC coordinates; the method generalizes to other location-scale parametric ROC models. Campbell 23 constructs fixed-width confidence bands for the entire ROC curve based on the bootstrap; Li et al. 27 provide an alternative bootstrap simultaneous confidence band for the entire ROC curve based on a vertical shift quantile comparison function.
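Before turning to summary indices, here is a sketch of the kernel-smoothed estimate with the biweight kernel and the Zou et al. bandwidth described above. It is illustrative only, on simulated data, and omits the end-of-curve corrections and bandwidth experimentation the text warns about:

```python
import numpy as np

def biweight_cdf(u):
    """Integral of the biweight kernel k(t) = (15/16)(1 - t^2)^2 on (-1, 1)."""
    u = np.clip(u, -1.0, 1.0)
    return 0.5 + (15 / 16) * (u - 2 * u**3 / 3 + u**5 / 5)

def kernel_survivor(t, data):
    """Smoothed survivor estimate 1 - F_hat(t), with the bandwidth
    h = 0.9 * min(SD, IQR/1.34) * n^(-1/5) computed per group."""
    sd = data.std(ddof=1)
    iqr = np.subtract(*np.percentile(data, [75, 25]))
    h = 0.9 * min(sd, iqr / 1.34) * len(data) ** (-0.2)
    return 1.0 - biweight_cdf((t - data[:, None]) / h).mean(axis=0)

rng = np.random.default_rng(1)
x = rng.normal(0.0, 1.0, 150)   # nondiseased
y = rng.normal(1.2, 1.5, 100)   # diseased

t = np.linspace(-4, 7, 300)     # grid of decision limits
fpr = kernel_survivor(t, x)     # smoothed 1 - F(t)
tpr = kernel_survivor(t, y)     # smoothed 1 - G(t); (fpr, tpr) is the curve
```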

3.3 Estimation of summary accuracy indices

The ROC curve provides a graphical summary of discriminatory accuracy, but often a one-number summary index of discriminatory accuracy is desired. Many such summary indices have been proposed; the choice of an appropriate index depends on the application. The AUC is the most popular global index for the ordinal diagnostic tests common in radiology, where the test measurement scale is usually latent rather than an observed continuous variable; however, the AUC has several limitations that may make it less useful for continuous diagnostic tests: 28 when two ROC curves cross, the two tests can have similar AUC even though one test has higher sensitivity for certain specificities while the other test has better sensitivity for other specificities; the AUC includes regions of ROC space that would not be of practical interest (e.g. very high FPR, or very low TPR); and the AUC is actually the probability that the test results of a random pair of subjects, one diseased and one nondiseased, are ranked correctly (i.e. P(X < Y)), but in practice the test is not given to pairs of diseased and nondiseased subjects. Despite these limitations, since the AUC continues to be the focus of much methodologic work and may still be useful in certain applications, this section begins by reviewing methods for AUC estimation and inference, then discusses alternative summary indices.

AUC

For normally distributed continuous test results, the AUC is

AUC = Φ[(µ1 - µ0) / √(σ0² + σ1²)]

where µ0, σ0² and µ1, σ1² are the means and variances of the test results for nondiseased and diseased subjects, so the AUC can be estimated directly by substituting sample means and standard deviations; 29 Wieand et al. obtain a variance estimator via the delta method. 29 The semiparametric LABROC algorithm estimates the AUC based on the binormal ROC curve parameters, with large-sample standard errors and confidence intervals. 18 An unbiased nonparametric estimate of P(X < Y) is the trapezoidal area under the empirical ROC curve, which equals the Mann-Whitney form of the two-sample Wilcoxon rank-sum statistic. 30,31 Several variance estimators for this nonparametric AUC estimator have been proposed; 31-34 Hanley and Hajian-Tilaki 35 recommend either using the variance based on the theory of U-statistics 33 or jackknifing. 34

Coffin and Sukhatme 36 show that if the diagnostic test results are subject to measurement error, the nonparametric area estimator is biased downward, i.e. underestimates the true population area. They show that resampling methods such as the bootstrap and jackknife are not appropriate for correction of biases caused by nonsampling errors, and derive a bias-correction factor based on kernel-density estimation. Monte Carlo simulations indicated that for binormal and bigamma ROC models, with normal and non-normal measurement errors, the bias-corrected estimator had smaller bias and comparable MSE to the uncorrected estimator. However, they caution that although measurement errors also reduce the power of hypothesis tests based on the uncorrected nonparametric area estimate, it is unclear whether use of the bias-corrected estimator would result in increased power due to its larger variance.

Hajian-Tilaki et al. 37 assessed the bias of parametric and nonparametric AUC estimates for continuous test results generated by Monte Carlo simulation from binormal and nonbinormal models. The biases in both the parametric and nonparametric AUC estimates were found to be very small with both binormal and nonbinormal data. However, for nonbinormal data, standard error estimates by both parametric and nonparametric estimates exceeded the true standard errors.

Hajian-Tilaki et al. did not evaluate the effect of the small biases and inflated standard errors on the power and size of hypothesis tests; however, in another simulation study in which the binormal model was applied to simulated ordinal rating data from the bilogistic model, Walsh 38 observed small biases in area estimates and inflated standard errors that together substantially altered the size and power of statistical tests comparing independent areas, suggesting that model misspecification can lead to altered test size and power loss.

Zou et al. 21 obtain the area under their smooth kernel-density-based nonparametric ROC curve by numerical integration. Since the smoothing does not affect the true SE of AUC, to first order, they use the U-statistic variance for the area under the smooth ROC, and calculate a large-sample normal theory confidence interval after applying a log(1 - AUC) transformation to reduce skewness. They also provide a variance estimate and transformation-based confidence interval for the numerically-integrated partial area under the smooth nonparametric ROC curve. Lloyd 25 shows that for the smooth ROC curve based on standard-normal kernel density estimation, there is no advantage in terms of variance to determining the area under the curve by numerical integration rather than via the Mann-Whitney estimate.

Whether the AUC is estimated parametrically from the binormal model, or nonparametrically using the Mann-Whitney statistic, confidence intervals are usually obtained by relying on asymptotic normality. Using Monte Carlo simulation, Obuchowski and Lieber 39 evaluated the coverage of the asymptotic confidence intervals and alternative ones based on the bootstrap and Student's t-distribution for a single AUC and for the difference between two AUC. They generated both continuous and ordinal data with sample sizes between 10 and 70 for both diseased and nondiseased subjects and AUC of moderate (0.8) and high (0.95) accuracy. They found that for the difference in AUC, asymptotic methods provided adequate coverage even with very small sample sizes (20 per group). In contrast, for a single AUC, the asymptotic methods do not provide adequate coverage for small samples, and for highly accurate diagnostic tests, quite large sample sizes (> 200 patients) are required for the asymptotic methods to be adequate. Instead, they recommend using one of three bootstrap methods, depending on the estimation approach (parametric versus nonparametric) and AUC (moderate versus high).
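The following sketch contrasts the parametric plug-in AUC with the Mann-Whitney estimate, and attaches a simple within-group percentile bootstrap interval to the latter (one of several possible bootstrap variants, not specifically one of the three recommended by Obuchowski and Lieber); the data are simulated:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(3)
x = rng.normal(0.0, 1.0, 60)    # nondiseased
y = rng.normal(1.5, 1.0, 40)    # diseased

# Parametric plug-in estimate: Phi((mu1 - mu0)/sqrt(sigma0^2 + sigma1^2))
auc_binormal = norm.cdf((y.mean() - x.mean())
                        / np.hypot(x.std(ddof=1), y.std(ddof=1)))

def mann_whitney_auc(x, y):
    """Trapezoidal area under the empirical ROC curve: the proportion
    of (nondiseased, diseased) pairs ordered correctly, ties count 1/2."""
    diff = y[:, None] - x[None, :]
    return (diff > 0).mean() + 0.5 * (diff == 0).mean()

auc_np = mann_whitney_auc(x, y)

# Percentile bootstrap interval, resampling within each group
boot = [mann_whitney_auc(rng.choice(x, x.size), rng.choice(y, y.size))
        for _ in range(2000)]
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"binormal {auc_binormal:.3f}, Mann-Whitney {auc_np:.3f}, "
      f"95% bootstrap CI ({lo:.3f}, {hi:.3f})")
```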

Partial AUC

If only a particular range of specificity or sensitivity values is relevant, the partial AUC may be a more appropriate accuracy index than the AUC. McClish 40 calculates the partial AUC or average sensitivity over a fixed range of FPR by direct numerical integration, with variances based on the binormal model; Thompson and Zucchini 41 provide an alternative method based on integrating the bivariate normal probability density function. For settings in which high sensitivity is relevant, Jiang et al. 42 extend McClish's approach to obtain the average specificity over a range of high sensitivity.

Sensitivity at fixed specificity

Greenhouse and Mantel 43 provide normal-theory and nonparametric tests of the hypothesis that a diagnostic test has at least a specified sensitivity (e.g. 90%) with FPR no higher than a specified value (e.g. 5%); however, their procedure does not provide an explicit decision limit that has the desired sensitivity and specificity. Schafer 44 discusses two methods for estimating an explicit decision limit with pre-specified specificity and/or sensitivity, one based on tolerance limits and the other an extension of the Greenhouse-Mantel method that applies when both sensitivity and specificity are pre-specified; he demonstrates that the extended Greenhouse-Mantel method can yield substantial gains in efficiency compared to the tolerance-limit method.

For the binormal model, McNeil and Hanley 34 give pointwise confidence intervals for sensitivity at fixed FPR by transforming to probit coordinates (Φ⁻¹(FPR), Φ⁻¹(TPR)), constructing large sample confidence intervals, and transforming back to ROC coordinates; the simultaneous Working-Hotelling bands of Ma and Hall 26 also have a pointwise interpretation (with a different confidence coefficient) as 'vertical' confidence bands for TPR at fixed FPR, or 'horizontal' bands for FPR at fixed TPR. Schafer 45 gives a nonparametric large-sample pointwise confidence interval for sensitivity at the estimated decision limit needed to obtain a pre-specified specificity, by inverting the Greenhouse-Mantel 43 test statistic. Zou et al. 21 suggested constructing a large-sample normal-theory confidence interval based on the kernel-estimated ROC curve in logit coordinates (logit(FPR), logit(TPR)), then transforming back to ROC coordinates; Lloyd 25 suggested the analogous procedure based on a probit transformation.

A difficulty with confidence intervals for sensitivity at fixed FPR is that the decision limit t on the test measurement scale that yields this FPR can only be estimated. An alternative is to construct a joint confidence region for the point (FPR, TPR) on the ROC curve corresponding to a given decision limit t. Hilgers 46 constructs a point estimator for t and a joint confidence rectangle with confidence level 1 - 2α based on separate distribution-free tolerance intervals for F and G. Unlike the asymptotic methods, Hilgers' method is valid for any sample size; on the other hand, Schafer 45 found that substantial gains in efficiency (i.e. narrower confidence intervals) can be obtained with the Greenhouse-Mantel method versus Hilgers' method when sample sizes are adequate for the Greenhouse-Mantel method to be valid. Zou et al. 21 suggest constructing joint confidence rectangles with coverage 1 - 2α by forming large-sample confidence intervals for the kernel-based logit(p̂) and logit(q̂) and transforming them back to ROC coordinates. Campbell 23 uses Kolmogorov-Smirnov confidence bands for F and G to provide equal-sized joint confidence rectangles for each observed point on the ROC curve that have simultaneous coverage 1 - 2α for the true points (FPR, TPR).

Other indices

Hsieh and Turnbull 47 suggest using Youden's index, J = max over t of [sensitivity(t) + specificity(t) - 1], as a measure of diagnostic accuracy. They propose two nonparametric estimation methods for determining Youden's index and the corresponding 'optimal' decision limit, based on estimating F and G with the empirical distribution functions and Gaussian kernel distribution estimates, respectively. However, this approach requires assumptions that limit its applicability: Youden's index essentially dichotomizes the continuous diagnostic test results at the decision limit where the costs of a false positive and false negative are equal, 5 and it assumes F and G have monotone likelihood ratio, which often does not hold in practice (e.g. pairs of normal and logistic distributions have monotone likelihood ratio only when the scale parameters of the diseased and nondiseased distributions are equal).
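A sketch of the empirical-distribution-function version of the Youden index estimate follows (the kernel-based variant of Hsieh and Turnbull is omitted); the data are simulated:

```python
import numpy as np

def youden(x, y):
    """Empirical Youden index J = max_t [sens(t) + spec(t) - 1] and the
    corresponding decision limit, scanning the pooled observed values."""
    limits = np.unique(np.concatenate([x, y]))
    sens = (y[:, None] > limits).mean(axis=0)    # P(Y > t)
    spec = (x[:, None] <= limits).mean(axis=0)   # P(X <= t)
    j = sens + spec - 1.0
    best = np.argmax(j)
    return j[best], limits[best]

rng = np.random.default_rng(4)
x = rng.normal(0.0, 1.0, 100)   # nondiseased
y = rng.normal(1.5, 1.0, 100)   # diseased
J, t_opt = youden(x, y)
print(f"J = {J:.3f} at decision limit t = {t_opt:.2f}")
```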

Two other indices, the projected length of the ROC curve and the area swept out by the ROC curve, have been proposed recently as alternatives to the AUC for continuous diagnostic tests, 48 but their statistical properties have not been worked out.

3.4 Incorporating covariate information

External factors can affect the performance of diagnostic tests by influencing the distributions of test measurements for diseased and/or nondiseased subjects. General approaches to incorporating covariate information into ROC analysis include stratification and modelling.

Stratification

Sukhatme and Beam 49 use stratification of the diseased or nondiseased (or both) to assess the performance of a diagnostic marker against several types of diseased or nondiseased subjects. They develop methods for estimating and comparing stratum-specific empirical ROC curves and associated nonparametric areas, and for efficient estimation of the population ROC curve and associated area, when the diseased or nondiseased (or both) are stratified. One limitation, pointed out by Le, 24 is that this approach can only be used with one binary or categorical covariate; it is less efficient with a continuous covariate and very impractical when several covariates need to be simultaneously included.

Modelling

An alternative approach is to pool data across strata and use a regression modelling approach to adjust for confounding effects. Pepe 50 describes three general approaches to regression analysis of ROC curves with continuous test results: (1) modelling covariate effects on the test result, then calculating the induced covariate effect on the ROC curve; (2) modelling covariate effects on summary measures of accuracy, such as the area under the curve; and (3) directly modelling covariate effects on the ROC curve.

Several methods of the first type have been proposed. Pepe 50 models the continuous test result as belonging to a location-scale family of unspecified form, with mean function depending on both disease status and covariates (including covariates specific to diseased or nondiseased), and variance depending only on disease status; the mean and variance parameters can be estimated by quasi-likelihood methods. In situations where it is reasonable to group the continuous test results into ordered categories, one could apply methods based on the proportional odds model 51 or the continuation ratio model, 52 although it is not clear whether either method can handle covariates that are specific to the diseased or nondiseased subjects; the continuation ratio-based approach has several advantages over the proportional odds-based approach, in that it belongs to the class of generalized linear models, and yields ROC curves that are concave but not necessarily symmetric, and fitted probabilities that are always between 0 and 1.

The second approach is appropriate for settings in which the experimental design permits calculation of an ROC curve and its summary measure θ̂k (e.g. AUC or partial area) at each observed covariate level k = 1, ..., K. Pepe 50 suggests using standard linear regression techniques to model the expected value of h(θ̂k) as a linear function of the covariates at level k, where h is a monotone transformation that gives a linear predictor with unrestricted range (-∞, ∞); e.g. since the range of AUC is restricted to (0, 1), the probit or logit transformation could be used to remove this restriction.

Disadvantages of this approach are that it cannot incorporate continuous covariates or covariates that are specific to diseased subjects (e.g. disease severity), and it requires sufficient data at each covariate level (or combination of levels) to estimate an ROC curve with adequate precision.

The third approach differs from the first approach in that it models the relationship between test results in the diseased and nondiseased populations, rather than the test result distributions themselves. Pepe 50 suggests modelling the ROC curve as ROC(t) = g{γ(t), βx}, where g is a known bivariate function with range (0, 1), increasing in t, and γ(t) is a parametric function on the domain (0, 1). A necessary preliminary step is to estimate the test result distribution given relevant covariates in the population of nondiseased subjects, using parametric or nonparametric methods. Regression parameters are fitted using an estimating equations approach. Choosing g to be a survivor function and γ to be a linear function of g⁻¹ yields an ROC curve of the same form as one generated by assuming a location-scale model for the test results (Method 1 above). However, modelling the ROC curve directly gives much greater flexibility than modelling the test results in that it allows a much broader range of ROC curve models, it can allow the effect of covariates to vary with FPR (interaction), the distributions of test results in the diseased and nondiseased do not need to be from the same family, and the method can model and compare ROC curves for tests with completely different types of test results; on the other hand, currently the method is more difficult to implement, requires special programming, and lacks adequate model-checking procedures and knowledge of large and small sample properties.

Le 24 proposed using the Cox proportional hazards regression model with the midrank-based empirical ROC curve to evaluate the effect of covariates on sensitivity in studies where certain covariates, such as disease severity, are available only for the diseased subjects and not the nondiseased subjects; the method can also be used to model specificity when certain covariates are only available for nondiseased subjects. The method can identify important covariates, but the regression coefficients do not have an interpretation similar to relative risk in the survival analysis setting, and the method does rely on proportional hazards and linearity assumptions, which may be strong but testable.
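The first of these regression approaches (modelling covariate effects on the test results and reading off the induced ROC curve) can be sketched under strong simplifying assumptions: normal errors with common variance and no covariate-by-disease interaction. Pepe's quasi-likelihood location-scale formulation is far more general; variable names and coefficients below are invented for illustration:

```python
import numpy as np
import statsmodels.api as sm
from scipy.stats import norm

rng = np.random.default_rng(5)
n = 300
disease = rng.integers(0, 2, n)     # D+ = 1, D- = 0
age = rng.uniform(20, 80, n)        # hypothetical covariate
result = 0.02 * age + 1.2 * disease + rng.normal(0, 1, n)

# Approach 1: model covariate effects on the test result itself
X = sm.add_constant(np.column_stack([age, disease]))
fit = sm.OLS(result, X).fit()
beta_disease = fit.params[2]        # disease effect on the test result
sigma = np.sqrt(fit.scale)          # residual SD (common variance assumed)

# Induced covariate-adjusted binormal ROC: with equal variances,
# a = beta_disease / sigma and b = 1 at every covariate value.
p = np.linspace(0.01, 0.99, 99)
roc = norm.cdf(beta_disease / sigma + norm.ppf(p))
print(f"adjusted AUC = {norm.cdf(beta_disease / (np.sqrt(2) * sigma)):.3f}")
```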

3.5 Dichotomous diagnostic tests

Most laboratory tests give a continuous measurement but some tests give a truly dichotomous result. For a single study in which n0 nondiseased and n1 diseased subjects are given the diagnostic test, sensitivity and specificity can be estimated as described in Section 2, and separate binomial confidence intervals calculated. Coughlin et al. 53 suggested using a logistic regression model with true disease status as a covariate to model sensitivity and specificity as a function of other covariates. Leisenring et al. 54 propose a more general approach using separate marginal regression models for sensitivity and specificity with robust sandwich variance estimators; the method permits incorporation of different covariates for diseased and nondiseased subjects, and can accommodate clustered and unbalanced data.

The literature on a particular dichotomous diagnostic test often contains several accuracy studies, each of which reports different and possibly discrepant sensitivity and specificity estimates. Shapiro 55 reviews quantitative and graphical methods for meta-analysis of such collections of diagnostic accuracy studies.

4 Comparing the accuracy of diagnostic tests

This section reviews methods for comparing the discriminatory accuracy of two or more diagnostic tests; its organization mirrors that of Section 3. The choice of method depends on whether the study used a parallel-group or 'unpaired' design, in which each diagnostic test is applied to a different group of subjects, or a paired design, in which the diagnostic tests are applied to the same subjects; the paired design is more efficient, but requires analysis methods that properly handle the within-subject correlation in the data. Many of the methods for correlated data can also be used with unpaired data. Beam 56 recently reviewed methods for analysing correlated data in ROC studies, with an emphasis on clustered ordinal data; however, many of the conceptual considerations and the nonparametric methodological strategies in his paper are also applicable to continuous data.

4.1 Comparing ROC curves

Parametric and semiparametric ROC curves can be compared using hypothesis tests and confidence intervals for differences in the ROC curve parameters. Metz and Kronman 57 provided normal-theory methods for comparing unpaired binormal ROC curves. For paired data, Metz et al. 58 assume the latent test result distributions are jointly bivariate normal, and derive hypothesis tests for equality of the binormal ROC curve parameters; Metz et al. 59 extend this methodology to partially-paired designs, in which some but not all subjects were tested with both diagnostic tests. The method assumes that the paired and unpaired portions of the sample are statistically independent and are representative of the same population, and that failure to obtain data from both tests does not bias the case sample.

Three different nonparametric approaches to comparing ROC curves have been proposed that would be useful in cases where the ROC curves cross or where the AUC does not seem to adequately reflect the situation. Campbell 23 provides a test of the hypothesis of no difference between two correlated ROC curves based on the sup norm and studies the distribution of the test statistic using the bootstrap. Venkatraman and Begg 60 provide a distribution-free permutation test procedure for testing whether two ROC curves based on paired continuous test results are identical at all operating points; further work is needed to extend this method to unpaired data. Moise et al. 61 suggest a hypothesis test based on the integrated absolute difference between two correlated ROC curves (i.e. the area between the two ROC curves), with standard error estimated by bootstrapping.

Simulation studies conducted by each author found that each of the three methods had superior power to the usual AUC-based test when the ROC curves crossed, but the three methods have not been compared with each other.

4.2 Comparing summary accuracy indices

AUC and partial AUC

Wieand et al. 29 gave a parametric test for difference in AUC for two paired tests with normally-distributed test results; the asymptotic variance of the test statistic was obtained using the delta method. Hanley and McNeil 31 provided a nonparametric test of difference in trapezoidal AUC for the unpaired design. Hanley and McNeil 32 developed a semiparametric test for comparing nonparametric areas under correlated ROC curves that accounts for the correlations induced by the paired design assuming an underlying binormal model holds. DeLong et al. 33 developed a fully nonparametric approach to comparing correlated AUC, based on the theory of U-statistics, in which all of the covariance terms are estimated nonparametrically, leading to an asymptotically normal test statistic. Mossman 62 suggests using resampling methods such as the bootstrap and jackknife for hypothesis testing to compare independent or correlated AUCs (or other summary indices). Parametric normal-theory methods for comparing diagnostic tests with respect to partial AUC are also available. 40-42 Wieand et al. 29 gave a class of nonparametric statistics for comparing two correlated or independent ROC curves with respect to average sensitivity over a restricted range of specificity (partial area), with test procedures and confidence intervals based on asymptotic normality; this class includes the full AUC and the sensitivity at a common specificity as special cases.

Sensitivity at common specificity

Greenhouse and Mantel 43 give parametric and nonparametric hypothesis tests for comparing sensitivities of continuous tests at a prespecified common specificity; methods are given for both independent and paired data, and all methods correctly take into account the uncertainty in the estimated FPR, which adds an additional variance component to the uncertainty in the estimated sensitivity. Linnet 63 modifies their methods to produce a combined parametric/nonparametric procedure that is appropriate when the test measurements are approximately normally distributed for nondiseased subjects but non-normally distributed for diseased subjects. He also points out that in the medical literature it is common to use the simple binomial test for difference in proportions, rather than the correct Greenhouse-Mantel procedures, to compare diagnostic tests with respect to sensitivity at an estimated FPR of 2.5%, i.e. at the upper limit of each test's reference range (the central interval containing 95% of the test results of the nondiseased population, discussed elsewhere in this issue 64). Linnet shows that using the simple binomial test in this way, which ignores the uncertainty in the estimated FPR, can increase its type I error to about seven times the nominal value of 0.05. Li et al. 27 provide tests for equality of two independent ROC curves at a fixed FPR and over a range of FPR values based on a vertical shift quantile comparison function.
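The fully nonparametric U-statistic comparison of DeLong et al. described above can be sketched as follows for a paired design; this is an illustrative implementation on simulated data, not the authors' code:

```python
import numpy as np
from scipy.stats import norm

def placements(x, y):
    """Placement values underlying the U-statistic variance estimate."""
    psi = (y[:, None] > x[None, :]) + 0.5 * (y[:, None] == x[None, :])
    return psi.mean(axis=1), psi.mean(axis=0), psi.mean()  # V10, V01, AUC

def delong_paired(x1, y1, x2, y2):
    """Z-test for equality of two correlated Mann-Whitney AUCs."""
    v10_1, v01_1, a1 = placements(x1, y1)
    v10_2, v01_2, a2 = placements(x2, y2)
    n1, n0 = y1.size, x1.size
    s10 = np.cov(np.vstack([v10_1, v10_2]))   # diseased components
    s01 = np.cov(np.vstack([v01_1, v01_2]))   # nondiseased components
    var = ((s10[0, 0] + s10[1, 1] - 2 * s10[0, 1]) / n1
           + (s01[0, 0] + s01[1, 1] - 2 * s01[0, 1]) / n0)
    z = (a1 - a2) / np.sqrt(var)
    return a1, a2, z, 2 * norm.sf(abs(z))

rng = np.random.default_rng(6)
n0, n1 = 80, 60
x_base = rng.normal(0.0, 1.0, n0)     # nondiseased subjects
y_base = rng.normal(1.2, 1.0, n1)     # diseased subjects
# two correlated markers measured on the same subjects (paired design)
x1, x2 = x_base, 0.8 * x_base + rng.normal(0, 0.6, n0)
y1, y2 = y_base, 0.8 * y_base + rng.normal(0, 0.6, n1)

a1, a2, z, p = delong_paired(x1, y1, x2, y2)
print(f"AUC1={a1:.3f} AUC2={a2:.3f} z={z:.2f} p={p:.3f}")
```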

Beam and Wieand 65 provide a method for comparing one or more continuous diagnostic tests to one discrete diagnostic test. The comparison of areas or partial areas under the curve in this setting may be inappropriate, because even if the sensitivities of a continuous and a discrete test are comparable at all decision limits available to the discrete test, the concavity of the ROC curve of the continuous test ensures it will have larger area under the curve than the discrete test. They avoid this problem by comparing the tests only at one or more decision limits available to the discrete test, i.e. comparing sensitivity at a common specificity (or average sensitivity across a set of common specificities). Their method is related to the Greenhouse-Mantel method, but it differs in that the common specificity cannot be arbitrarily chosen but rather must be estimated due to the discreteness of the test; on the other hand, the Beam-Wieand method permits comparison of more than two diagnostic tests, provided only one is discrete, and it allows comparison of sensitivity at more than one specificity of interest.

4.3 Incorporating covariate information

Sukhatme and Beam 49 show how to compare two diagnostic tests when the diseased or nondiseased (or both) are stratified. Pepe 50 points out that diagnostic tests can be compared using regression approaches that model the ROC curve or a summary index (e.g. AUC) by including a test indicator as one of the covariates in the model; however, regression approaches that model the test results themselves can only be used for comparison if the results of the two tests can be modelled sensibly in the same regression model.

4.4 Dichotomous diagnostic tests

Comparison of sensitivity and specificity of two dichotomous diagnostic tests based on a paired design can be done using McNemar's test for paired binary data. 54 The marginal regression formulation of Leisenring et al. 54 described in Section 3 can be used to compare sensitivities and specificities of two or more diagnostic tests, even if all tests are not carried out on all subjects, and it can accommodate clustered data and adjust for covariates; this method corresponds to McNemar's test when two diagnostic tests are given to all subjects in a simple paired design. Lachenbruch and Lynch 66 generalize McNemar's test to assess equivalence of sensitivity and specificity of two paired dichotomous tests.
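For the simple paired dichotomous case, McNemar's test reduces to a binomial test on the discordant pairs; a minimal sketch with hypothetical counts (requires scipy >= 1.7 for binomtest):

```python
# McNemar's test comparing the sensitivities of two dichotomous tests
# applied to the same diseased subjects: only discordant pairs inform
# the comparison. b = test A positive only, c = test B positive only.
from scipy.stats import binomtest

b, c = 25, 11                        # hypothetical discordant counts
result = binomtest(b, b + c, 0.5)    # exact McNemar test
print(f"p = {result.pvalue:.4f}")
```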

5 Methods for context-specific interpretation

This section discusses three aspects of context-specific interpretation of diagnostic test results: selection of an optimal decision limit to define a positive test result, estimation of the likelihood ratio needed to convert an individual test result into a posterior probability of disease, and context-specific assessment of diagnostic test performance.

5.1 Selection of an optimal decision limit

In screening situations, it is often desired to use the diagnostic test to determine whether or not a subject needs further work-up or treatment. Selection of a decision limit at which to dichotomize a continuous diagnostic test result as positive (further work-up or treatment required) or negative (no further work-up required) is an optimization problem; the appropriate method depends on the specific objective and data available and, clearly, many different optimization criteria could be used. Two examples are described here.

One possible objective is to minimize the expected costs of misclassification of diseased and nondiseased subjects, in which case the Bayes minimum cost decision rule is optimal. 1 This decision rule classifies a subject as diseased or nondiseased according to whether or not the likelihood ratio f(X|D+)/f(X|D-) of the test result X exceeds a fixed decision limit

Q = [1 - P(D+)] R+ / [P(D+) R-]

where P(D+) is the prior probability of disease, and R- and R+ are the regrets (difference between cost of correct and incorrect decision) associated with classifying a subject as nondiseased and diseased, respectively. Strike 1 describes two methods for deriving the optimal decision limit X_Q: if the test result distributions for both diseased and nondiseased are assumed normally distributed, the decision limit X_Q can be calculated algebraically by solving the equality f(X|D+)/f(X|D-) = Q for X. Alternatively, the decision limit can be determined graphically from the ROC curve, using the fact that the slope of the ROC curve at any decision limit X equals the likelihood ratio at that decision limit; X_Q is the decision limit corresponding to the point where the line with slope Q is tangent to the ROC curve.

As an alternative to the cost-minimizing approach, Somoza and Mossman 67 suggest selecting a prevalence-specific decision limit that maximizes diagnostic information (in the information theoretic sense) in the particular clinical situation. This method is appropriate when the objective of testing is to reduce overall uncertainty about the patient's disease status, rather than to balance the error rates based on their respective costs, and in situations where misclassification costs are open to dispute or are difficult to estimate precisely.
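A sketch of the cost-minimizing rule above under normal test-result distributions: taking logs of f(X|D+)/f(X|D-) = Q gives a quadratic in X (linear when the variances are equal). Prevalence, regrets, and distribution parameters are illustrative, and Q follows the reconstruction given above:

```python
import numpy as np

mu0, s0 = 0.0, 1.0                        # nondiseased N(mu0, s0)
mu1, s1 = 1.5, 1.2                        # diseased N(mu1, s1)
prev, R_minus, R_plus = 0.10, 5.0, 1.0    # prevalence and regrets

Q = (1 - prev) * R_plus / (prev * R_minus)

# coefficients of a*X^2 + b*X + c = 0 from log f(X|D+) - log f(X|D-) = log Q
a = 1 / (2 * s0**2) - 1 / (2 * s1**2)
b = mu1 / s1**2 - mu0 / s0**2
c = (mu0**2 / (2 * s0**2) - mu1**2 / (2 * s1**2)
     + np.log(s0 / s1) - np.log(Q))
roots = np.roots([a, b, c]) if a != 0 else np.array([-c / b])
# the clinically relevant root typically lies between mu0 and mu1
print(f"Q = {Q:.2f}, candidate decision limits X_Q = {np.sort(roots)}")
```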

5.2 Estimation of the likelihood ratio

Several methods have been proposed for estimating the likelihood ratio L(X) corresponding to an individual test result X. If the ROC curve of the diagnostic test is available for a population similar to the population of interest, the likelihood ratio can be estimated graphically as the slope of the tangent to the ROC curve at the point corresponding to the test result X; note that this method requires knowledge of the actual test measurements corresponding to each point on the ROC curve, which may not be included in published papers. For the case where the test results for both diseased and nondiseased subjects are normally distributed, Strike 1 directly calculates the likelihood ratio corresponding to an individual test result and gives a normal-theory standard error and large-sample confidence interval for the posterior probability of disease. If one or both test result distributions are non-normal and cannot both be transformed to normality by a single transformation, he suggests using kernel density estimation to obtain separate smooth estimates of the nondiseased and diseased test result distributions, then calculating the likelihood ratio at the test result X as the ratio of the kernel density estimates f(X|D+) and f(X|D−); however, he does not give a variance estimate or confidence interval for the resulting estimated posterior probability of disease. Linnet 68 suggests a different nonparametric method based on estimating relative frequency polygons of the test result distributions.

Albert 69 suggests estimating the likelihood ratio L(X) using logistic regression of true disease status on the test result X. This approach assumes that the likelihood ratio can be well approximated by an exponential function; in particular, L(X) = exp[β0 + β1 X + log(nD−/nD+)], where β0 and β1 are the usual logistic regression coefficients and nD− and nD+ are the numbers of nondiseased and diseased subjects, respectively. This method can handle dichotomous or continuous diagnostic tests, and can easily incorporate covariate information; a sketch of the calculation is given below. Albert and Harris 70 and Simel et al. 71 discuss confidence intervals for likelihood ratios.
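The following minimal sketch illustrates Albert's approximation on simulated data, using the Logit model from the statsmodels package (a convenience assumption; any logistic-regression routine would do), and then converts the estimated likelihood ratio into a posterior probability of disease via Bayes theorem for an assumed prevalence.

```python
# A minimal sketch of Albert's logistic-regression estimate of the likelihood
# ratio L(X), on simulated data. Sample sizes and distributions are invented.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n_nondis, n_dis = 200, 100
x = np.concatenate([rng.normal(1.0, 1.0, n_nondis),    # nondiseased results
                    rng.normal(3.0, 1.0, n_dis)])      # diseased results
d = np.concatenate([np.zeros(n_nondis), np.ones(n_dis)])  # true disease status

# Logistic regression of disease status on the test result.
fit = sm.Logit(d, sm.add_constant(x)).fit(disp=0)
b0, b1 = fit.params

def likelihood_ratio(x_new):
    # L(X) = exp(b0 + b1*X + log(nD-/nD+)), correcting for the sample mix
    return np.exp(b0 + b1 * x_new + np.log(n_nondis / n_dis))

# Convert an individual result into a posterior probability of disease for a
# clinical setting with assumed prevalence 0.10, via Bayes theorem on odds.
prior_odds = 0.10 / 0.90
post_odds = prior_odds * likelihood_ratio(2.5)
print(f"L(2.5) = {likelihood_ratio(2.5):.2f}, "
      f"posterior P(D+) = {post_odds / (1 + post_odds):.3f}")
```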
5.3 Context-specific assessment of diagnostic test performance

The methods for assessing the discriminatory accuracy of a diagnostic test reviewed in Sections 3 and 4 only provide information about the ability of the test to distinguish between diseased and nondiseased patients; they do not directly indicate how well the test result for an individual patient predicts the probability of disease in that patient, which varies with the prevalence of disease. This section reviews methods for assessing context-specific diagnostic test performance. The results of all these methods are prevalence-dependent and may not be directly generalizable to other settings.

Linnet 72 suggests assessing diagnostic test performance based on the accuracy of the estimated posterior probabilities. He proposes using a strictly proper scoring rule, a function of the posterior probabilities whose expected value is optimized at the underlying true probabilities (the first sketch at the end of this section illustrates the idea). His approach is to estimate the posterior probability as a function of the diagnostic test result from a training set of observations, and then to assess the score on a new set of test observations. Bootstrap standard error estimation and bias correction allow the same subjects to serve as training and test sets, and two diagnostic tests can be compared using the bootstrap.

Somoza and Mossman 67 suggest comparing diagnostic tests by determining, for each test, the prevalence-specific decision limit that maximizes diagnostic information (i.e. maximizes the reduction in uncertainty about disease status), then comparing the tests at their optimal decision limits, which may occur at different FPR (the second sketch at the end of this section illustrates the calculation). This method may be more useful than comparing partial AUC or sensitivity at fixed specificity when the ROC curves cross; in that situation, comparison of partial areas or of TPR at fixed FPR may be difficult if the ranges of FPR where the tests perform optimally do not coincide. Many other related approaches have been proposed in the medical decision-making literature to incorporate the consequences (in terms of risks and benefits) of subsequent diagnostic decisions based on the test result into the evaluation of diagnostic test performance. A review of this literature is beyond the scope of this paper; a good starting place is a recent paper by Moons et al. 73 Bloch 74 provides two alternative methods for comparing two diagnostic tests in a paired design where the decision limit for each test and the costs of a false positive and a false negative are specified a priori: one method is based on comparing the risks of the two tests, and the other on comparing kappa coefficients.
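To illustrate the scoring-rule idea, the sketch below computes the Brier score, one familiar strictly proper scoring rule, together with a bootstrap standard error; the data are invented, and this is a simplified stand-in for Linnet's procedure rather than a reproduction of it.

```python
# A minimal sketch of assessing estimated posterior probabilities with a
# strictly proper scoring rule (the Brier score). Data are hypothetical.
import numpy as np

d_true = np.array([1, 0, 0, 1, 0, 1, 0, 0])                  # true disease status
p_hat = np.array([0.8, 0.2, 0.4, 0.7, 0.1, 0.6, 0.3, 0.2])   # estimated P(D+|X)

# Brier score: mean squared difference between estimated probability and
# outcome; lower is better, and its expectation is minimized at the truth.
brier = np.mean((p_hat - d_true) ** 2)
print(f"Brier score = {brier:.3f}")

# Bootstrap standard error for the score, mirroring the resampling idea.
rng = np.random.default_rng(0)
boots = [np.mean((p_hat[i] - d_true[i]) ** 2)
         for i in (rng.integers(0, len(d_true), len(d_true))
                   for _ in range(2000))]
print(f"bootstrap SE = {np.std(boots):.3f}")
```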

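Finally, a rough sketch of the information-theoretic criterion used by Somoza and Mossman: at a given decision limit, the dichotomized test result and true disease status form a 2x2 joint distribution, and the diagnostic information can be computed as the mutual information of that distribution at the assumed prevalence; scanning candidate limits locates the prevalence-specific optimum. The binormal parameters below are hypothetical.

```python
# A rough sketch of choosing a prevalence-specific decision limit by
# maximizing the mutual information between the dichotomized test result and
# disease status (in the spirit of Somoza and Mossman).
import numpy as np
from scipy.stats import norm

mu_d, mu_n, sd, prev = 3.0, 1.0, 1.0, 0.10   # hypothetical binormal setting

def mutual_information(t):
    sens = norm.sf(t, mu_d, sd)               # TPR at decision limit t
    spec = norm.cdf(t, mu_n, sd)              # TNR at decision limit t
    # Joint distribution of (disease status, test result) as a 2x2 table.
    p = np.array([[prev * sens, prev * (1 - sens)],
                  [(1 - prev) * (1 - spec), (1 - prev) * spec]])
    marg = np.outer(p.sum(axis=1), p.sum(axis=0))
    nz = p > 0
    return np.sum(p[nz] * np.log2(p[nz] / marg[nz]))   # information in bits

grid = np.linspace(-2, 6, 801)
best = grid[np.argmax([mutual_information(t) for t in grid])]
print(f"information-maximizing decision limit = {best:.2f}")
```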
6 Concluding remarks

The assessment and interpretation of laboratory diagnostic tests have received increasing attention in the statistical literature in recent years. Many new methods have been proposed, but the relative advantages and disadvantages of the different methods have not yet been assessed. More work is needed to identify the most promising methods and to incorporate them into widely available statistical software.

This paper has reviewed methods for assessing, comparing, and interpreting individual diagnostic tests. In actual medical practice, however, individual diagnostic test results are not interpreted in a vacuum but are combined by the physician with all other available information to arrive at a diagnosis. A large and growing literature exists on computer-based methods for diagnosis and prognosis, including both statistical methods, such as logistic regression and recursive partitioning, and nonstatistical methods, such as expert systems and neural networks. Strike 1 and Begg 4 give brief overviews of this area and include several references.

Despite the availability of methodology for interpreting diagnostic test results, relatively few physicians actually use formal methods such as sensitivity, specificity, likelihood ratios, or Bayes theorem when interpreting diagnostic test results. A recent telephone survey of 300 practicing physicians found that only 3% reported calculating the posterior probability of disease using Bayes theorem, 1% reported using ROC curves, and 1% reported using likelihood ratios when evaluating diagnostic test results. 75 On the other hand, 84% reported using a test's sensitivity and specificity when conducting diagnostic evaluations, primarily when interpreting test results; however, when asked how they used these indexes, 95% described an informal method that amounted to estimating the test's posterior error probabilities (i.e. the proportion of patients who test positive who are subsequently found to be nondiseased, and the proportion of patients who test negative who are subsequently found to be diseased) from patients in their own practice, rather than using published sensitivity and specificity data. Additional efforts to make formal diagnostic test interpretation methods more user-friendly and to explain their use, like the two recent papers by Jaeschke et al., 76,77 are clearly needed.

Acknowledgement
Supported in part by the National Institute of Allergy and Infectious Diseases, cooperative agreement number U01-AI411.

References
1 Strike PW. Measurement in laboratory medicine: a primer on control and interpretation. Oxford: Butterworth-Heinemann, 1995.
2 Zweig MH, Campbell G. Receiver-operating characteristic (ROC) plots: a fundamental evaluation tool in clinical medicine. Clinical Chemistry 1993; 39: 561–77.
3 Hanley JA. Receiver operating characteristic (ROC) methodology: the state of the art. Critical Reviews in Diagnostic Imaging 1989; 29: 307–35.
4 Begg CB. Advances in statistical methodology for diagnostic medicine in the 1980's. Statistics in Medicine 1991; 10: 1887–95.

5 Swets JA. Indices of discrimination or diagnostic accuracy: their ROCs and implied models. Psychological Bulletin 1986; 99: 100–17.
6 Boyd JC. Mathematical tools for demonstrating the clinical usefulness of biochemical markers. Scandinavian Journal of Clinical and Laboratory Investigation 1997; 57(suppl. 227): 46–63.
7 Begg CB. Biases in the assessment of diagnostic tests. Statistics in Medicine 1987; 6: 411–24.
8 Begg CB, McNeil BJ. Assessment of radiologic tests: control of bias and other design considerations. Radiology 1988; 167: 565–69.
9 Zhou XH. Correcting for verification bias in studies of a diagnostic test's accuracy. Statistical Methods in Medical Research 1998; 7: 337–53.
10 Hui SL, Zhou XH. Evaluation of diagnostic tests without gold standards. Statistical Methods in Medical Research 1998; 7: 354–70.
11 Torrance-Rynard VL, Walter SD. Effects of dependent errors in the assessment of diagnostic test performance. Statistics in Medicine 1997; 16: 2157–75.
12 Phelps CE, Hutson A. Estimating diagnostic test accuracy using a fuzzy gold standard. Medical Decision Making 1995; 15: 44–57.
13 Obuchowski NA. Sample size calculations in studies of test accuracy. Statistical Methods in Medical Research 1998; 7: 371–92.
14 Reid MC, Lachs MS, Feinstein AR. Use of methodological standards in diagnostic test research. Journal of the American Medical Association 1995; 274: 645–51.
15 Goddard MJ, Hinberg I. Receiver operating characteristic (ROC) curves and non-normal data: an empirical study. Statistics in Medicine 1990; 9: 325–37.
16 Egan JP. Signal detection theory and ROC analysis. New York: Academic Press, 1975.
17 England WL. An exponential model used for optimal threshold selection on ROC curves. Medical Decision Making 1988; 8: 120–31.
18 Metz CE, Herman BA, Shen J-H. Maximum likelihood estimation of receiver operating characteristic (ROC) curves from continuously-distributed data. Statistics in Medicine 1998; 17: 1033–53.
19 Dorfman DD, Alf E Jr. Maximum likelihood estimation of parameters of signal detection theory – a direct solution. Psychometrika 1968; 33: 117–24.
20 Hanley JA. The use of the 'binormal' model for parametric ROC analysis of quantitative diagnostic tests. Statistics in Medicine 1996; 15: 1575–85.
21 Zou KH, Hall W, Shapiro DE. Smooth nonparametric receiver operating characteristic (ROC) curves for continuous diagnostic tests. Statistics in Medicine 1997; 16: 2143–56.
22 Hsieh F, Turnbull BW. Nonparametric and semiparametric estimation of the receiver operating characteristic curve. Annals of Statistics 1996; 24: 25–40.
23 Campbell G. General methodology I. Advances in statistical methodology for the evaluation of diagnostic and laboratory tests. Statistics in Medicine 1994; 13: 499–508.
24 Le CT. Evaluation of confounding effects in ROC studies. Biometrics 1997; 53: 998–1007.
25 Lloyd CJ. Using smoothed receiver operating characteristic curves to summarize and compare diagnostic systems. Journal of the American Statistical Association 1998; 93: 1356–64.
26 Ma G, Hall WJ. Confidence bands for receiver operating characteristic curves. Medical Decision Making 1993; 13: 191–97.
27 Li G, Tiwari RC, Wells MT. Quantile comparison functions in two-sample problems, with application to comparisons of diagnostic markers. Journal of the American Statistical Association 1996; 91: 689–98.
28 Hilden J. The area under the ROC curve and its competitors. Medical Decision Making 1991; 11: 95–101.
29 Wieand S, Gail MH, James BR, James KL. A family of non-parametric statistics for comparing diagnostic markers with paired or unpaired data. Biometrika 1989; 76: 585–92.
30 Bamber D. The area above the ordinal dominance graph and the area below the receiver operating characteristic graph. Journal of Mathematical Psychology 1975; 12: 387–415.
31 Hanley JA, McNeil BJ. The meaning and use of the area under a ROC curve. Radiology 1982; 143: 27–36.
32 Hanley JA, McNeil BJ. A method of comparing the areas under receiver operating characteristic curves derived from the same cases. Radiology 1983; 148: 839–43.
33 DeLong ER, DeLong DM, Clarke-Pearson DL. Comparing the areas under two or more correlated receiver operating characteristic curves: a non-parametric approach. Biometrics 1988; 44: 837–45.
34 McNeil BJ, Hanley JA. Statistical approaches to the analysis of receiver operating characteristic (ROC) curves. Medical Decision Making 1984; 4: 137–50.
35 Hanley JA, Hajian-Tilaki KO. Sampling variability of non-parametric estimates of the area under receiver operating characteristic curves: an update. Academic Radiology 1997; 4: 49–58.
36 Coffin M, Sukhatme S. Receiver operating characteristic studies and measurement errors. Biometrics 1997; 53: 823–37.
37 Hajian-Tilaki KO, Hanley J, Joseph L, Collet J-P. A comparison of parametric and nonparametric approaches to ROC analysis of quantitative diagnostic tests. Medical Decision Making 1997; 17: 94–102.
38 Walsh SJ. Limitations to the robustness of binormal ROC curves: effects of model misspecification and location of decision thresholds on bias, precision, size and power. Statistics in Medicine 1997; 16: 669–79.
39 Obuchowski NA, Lieber M. Confidence intervals for the receiver operating characteristic area in studies with small samples. Academic Radiology 1998; 5: 561–71.
40 McClish DK. Analyzing a portion of the ROC curve. Medical Decision Making 1989; 9: 190–95.
41 Thompson ML, Zucchini W. On the statistical analysis of ROC curves. Statistics in Medicine 1989; 8: 1277–90.
42 Jiang Y, Metz CE, Nishikawa RM. A receiver operating characteristic partial area index for highly sensitive diagnostic tests. Radiology 1996; 201: 745–50.
43 Greenhouse SW, Mantel N. The evaluation of diagnostic tests. Biometrics 1950; 6: 399–412.
44 Schafer H. Constructing a cut-off point for a quantitative diagnostic test. Statistics in Medicine 1989; 8: 1381–91.
45 Schafer H. Efficient confidence bounds for ROC curves. Statistics in Medicine 1994; 13: 1551–61.
46 Hilgers R. Distribution-free confidence bounds for ROC curves. Methods of Information in Medicine 1991; 30: 96–101.
47 Hsieh F, Turnbull BW. Nonparametric methods for evaluating diagnostic tests. Statistica Sinica 1996; 6: 47–62.
48 Lee W-C, Hsiao CK. Alternative summary indices for the receiver operating characteristic curve. Epidemiology 1996; 7: 605–11.
49 Sukhatme S, Beam C. Stratification in nonparametric ROC studies. Biometrics 1994; 50: 149–63.
50 Pepe MS. Three approaches to regression analysis of receiver operating characteristic curves for continuous test results. Biometrics 1998; 54: 124–35.
51 Tosteson ANA, Begg CB. A general regression methodology for ROC curve estimation. Medical Decision Making 1988; 8: 204–15.
52 Smith PJ, Thompson TJ, Engelgau MM, Herman WH. A generalized linear model for analysing receiver operating characteristic curves. Statistics in Medicine 1996; 15: 323–33.
53 Coughlin SS, Trock B, Criqui MH, Pickle LW, Browner D, Tefft MC. The logistic modelling of sensitivity, specificity, and predictive value of a diagnostic test. Journal of Clinical Epidemiology 1992; 45: 1–7.
54 Leisenring W, Pepe MS, Longton G. A marginal regression modelling framework for evaluating medical diagnostic tests. Statistics in Medicine 1997; 16: 1263–81.
55 Shapiro DE. Issues in combining independent estimates of the sensitivity and specificity of a diagnostic test. Academic Radiology 1995; 2: S37–S47.
56 Beam CA. Analysis of clustered data in receiver operating characteristic studies. Statistical Methods in Medical Research 1998; 7: 324–36.
57 Metz CE, Kronman HB. Statistical significance tests for binormal ROC curves. Journal of Mathematical Psychology 1980; 22: 218–43.
58 Metz CE, Wang P-L, Kronman HB. A new approach for testing the significance of differences between ROC curves measured from correlated data. In Deconinck F ed. Information Processing in Medical Imaging: Proceedings of the Eighth Conference. The Hague: Martinus Nijhoff, 1984: 432–45.
59 Metz CE, Herman BA, Roe CA. Statistical comparison of two ROC-curve estimates obtained from partially-paired datasets. Medical Decision Making 1998; 18: 110–21.
60 Venkatraman E, Begg CB. A distribution-free procedure for comparing receiver operating characteristic curves from a paired experiment. Biometrika 1996; 83: 835–48.
61 Moise A, Clement B, Raissis M. A test for crossing receiver operating characteristic (ROC) curves. Communications in Statistics – Theory and Methods 1988; 17: 1985–2003.
62 Mossman D. Resampling techniques in the analysis of non-binormal ROC data. Medical Decision Making 1995; 15: 358–66.
63 Linnet K. Comparison of quantitative diagnostic tests: type I error, power and sample size. Statistics in Medicine 1987; 6: 147–58.

64 Bland JM, Altman DG. Measuring agreement in method comparison studies. Statistical Methods in Medical Research 1999; 8: 135–60.
65 Beam CA, Wieand S. A statistical method for the comparison of a discrete diagnostic test with several continuous diagnostic tests. Biometrics 1991; 47: 907–19.
66 Lachenbruch PA, Lynch CJ. Assessing screening tests: extensions of McNemar's test. Statistics in Medicine 1998; 17: 2207–17.
67 Somoza E, Mossman D. Comparing and optimizing diagnostic tests: an information theoretic approach. Medical Decision Making 1992; 12: 179–88.
68 Linnet K. A review on the methodology for assessing diagnostic tests. Clinical Chemistry 1988; 34: 1379–86.
69 Albert A. On the use and computation of likelihood ratios in clinical chemistry. Clinical Chemistry 1982; 28: 1113–19.
70 Albert A, Harris EK. Multivariate interpretation of clinical laboratory data. New York: Marcel Dekker, 1987.
71 Simel DL, Samsa GP, Matchar DB. Likelihood ratios with confidence: sample size estimation for diagnostic test studies. Journal of Clinical Epidemiology 1991; 44: 763–70.
72 Linnet K. Assessing diagnostic tests by a strictly proper scoring rule. Statistics in Medicine 1989; 8: 609–18.
73 Moons KG, Stijnen T, Michel BC, Buller HR, Es G-AV, Grobbee DE, Habbema DF. Application of treatment thresholds to diagnostic-test evaluation: an alternative to the comparison of areas under receiver operating characteristic curves. Medical Decision Making 1997; 17: 447–54.
74 Bloch DA. Comparing two diagnostic tests against the same 'gold standard' in the same sample. Biometrics 1997; 53: 73–85.
75 Reid MC, Lane DA, Feinstein AR. Academic calculations versus clinical judgments: practicing physicians' use of quantitative measures of test accuracy. The American Journal of Medicine 1998; 104: 374–80.
76 Jaeschke R, Guyatt GH, Sackett DL, the Evidence-Based Medicine Working Group. Users' guides to the medical literature. III. How to use an article about a diagnostic test. A. Are the results of the study valid? Journal of the American Medical Association 1994; 271: 389–91.
77 Jaeschke R, Guyatt GH, Sackett DL, the Evidence-Based Medicine Working Group. Users' guides to the medical literature. III. How to use an article about a diagnostic test. B. What are the results and will they help me in caring for my patients? Journal of the American Medical Association 1994; 271: 703–707.