Neuropsychological assessment and the Cattell- Horn-Carroll (CHC) cognitive abilities model




The University of Toledo Digital Repository
Theses and Dissertations
2008

Neuropsychological assessment and the Cattell-Horn-Carroll (CHC) cognitive abilities model
James B. Hoelzle, The University of Toledo

Recommended Citation: Hoelzle, James B., "Neuropsychological assessment and the Cattell-Horn-Carroll (CHC) cognitive abilities model" (2008). Theses and Dissertations. Paper 1192. http://utdr.utoledo.edu/theses-dissertations

This dissertation is brought to you for free and open access by The University of Toledo Digital Repository. It has been accepted for inclusion in Theses and Dissertations by an authorized administrator of The University of Toledo Digital Repository.

A Dissertation Entitled
Neuropsychological Assessment and the Cattell-Horn-Carroll (CHC) Cognitive Abilities Model
By James B. Hoelzle
Submitted as partial fulfillment of the requirements for the Doctor of Philosophy in Psychology
Advisor: Gregory J. Meyer, Ph.D.
Committee Members: Wesley A. Bullock, Ph.D.; Mary E. Haines, Ph.D.; Stephen D. Christman, Ph.D.; Joni L. Mihura, Ph.D.
College of Graduate Studies
The University of Toledo
August 2008

An Abstract of
Neuropsychological Assessment and the Cattell-Horn-Carroll (CHC) Cognitive Abilities Model
James B. Hoelzle
Submitted as partial fulfillment of the requirements for the Doctor of Philosophy in Psychology
The University of Toledo
August 2008

This study determined whether popular neuropsychological measures evaluate Cattell-Horn-Carroll (CHC) broad and narrow cognitive abilities. A thorough literature review was conducted to identify relevant datasets that would permit factor analyses of targeted instruments. Seventy-seven datasets were obtained and analyzed, or reanalyzed, to ensure methodological consistency across samples. Many factor solutions included dimensions that reflected broad CHC ability constructs, which suggests it is possible to integrate aspects of neuropsychological assessment and the CHC theory. Overall, the project is relevant to assessment practice because it connects neuropsychological tests with CHC theory, and thus facilitates accurate interpretation of performances across different measures. It ultimately brings clinical practice and cognitive theory closer together.

Acknowledgements

It is challenging to appropriately acknowledge and thank everyone who has supported me as I completed this project and my formal graduate training. First and foremost, I am especially grateful for the excellent mentoring provided by my academic advisor and friend, Dr. Gregory Meyer. Dr. Meyer is responsible for introducing me to psychological assessment, fostering my curiosity in research, and helping me to think through many important professional and personal decisions. His guidance has helped me grow as a clinician, and more importantly as a person. I have no doubt many of the opportunities I have are directly related to his willingness to spend time teaching and collaborating with me. It is an inadequate thank you, but I aspire to encourage and foster professional growth in students in the same ways that he mentored me.

I am also appreciative of my dissertation committee's efforts. Drs. Wesley Bullock, Stephen Christman, Mary Haines, and Joni Mihura each provided specific feedback regarding aspects of this project, and these contributions were instrumental in making the finished project more sophisticated. I am thankful for their expertise, encouraging words, and accessibility to discuss the project and a range of other professional issues.

A project of this magnitude could not have been completed without the assistance of other researchers. The following individuals were instrumental in providing relevant data or helpful comments during the completion of this project: Dan Han, Peter Hartmann, Bruce Hermann, David Kinsinger, Thomas Merten, Michael Nelson, Benjamin Pyykkonen, Cynthia Riccio, Noah Silverberg, Lisa Stanford, Mark Wilde, and

John Woodard. It was a pleasure to share ideas with these individuals, and hopefully in the future I can reciprocate the generosity they exhibited towards me.

Lastly, and most personally, I am very much thankful for the unwavering support from those closest to me. Jessica, I can only imagine how frustrating it must have been at times to watch me get worked up and bogged down with different aspects of this project. Thank you for doing all you could to keep me going. I could not have made it without your support and I am forever grateful for it. You have set the bar high and it will be challenging for me to return the favor, but I have every intention of fully supporting your efforts as you complete your training.

To my family, I have only recently realized how truly blessed I am to be part of such a loving family. It was remarkable and inspiring to watch over the past year as everyone did what they could to overcome an assortment of major to minor difficulties. Somehow you instilled in me the ability to think it will always work out as long as you keep at it. The completion of this project is tangible evidence of this. I am grateful for the unconditional support and encouragement you provide me each day. With all my heart, thank you.

Table of Contents

Abstract
Acknowledgements
Table of Contents
List of Tables
List of Figures

I. Introduction
    Range of Abilities Evaluated by Neuropsychological Assessment
    Intellectual Assessment
    Psychometric Theories of Intelligence
        Spearman's g-factor
        Bi-factor Models of Intelligence
        Multiple Intelligences
        Extended Gf-Gc Models
        Carroll's Three-Stratum Model of Intelligence
        An Integration of Carroll's Three-Stratum Model and Extended Gf-Gc Theory
        CHC Theory to Practice
        Alternative Models of Intelligence
    Defining the Gap Between Neuropsychological Assessment and the Cognitive Abilities Theory
    Question of Interest

II. Methodology
    General Methods
    Survey of Neuropsychological Measures
    Data Collection
    Reanalysis of Data: Exploratory Factor Analytic Methods
    Integration of Factors and CHC Theory
    Hierarchical Exploratory Factor Analysis
    Factorial Invariance

III. Results
    Test Classification by CHC Construct and Reliability
    Overview and Example of Factor Analyses
    Attention/Concentration Measures
        Trail Making Test
        Paced Auditory Serial Addition Test
        Stroop Test
    Executive Functioning Measures
        Wisconsin Card Sorting Test
        Category Test
        Controlled Oral Word Association Test
    Language Measures
        Boston Naming Test
        Aphasia Screening Test
        Multilingual Aphasia Examination

    Memory Measures
        Wechsler Memory Scales
        Rey Auditory Verbal Learning Test
        Wide Range Assessment of Memory and Learning
    Motor Function Measures
        Finger Tapping Test
        Grip Strength Test
        Purdue Pegboard Test
    Visual Perception Measures
        Hooper Visual Organization Test
        Rey-Osterrieth Complex Figure Test
        Judgment of Line Orientation Test
    Neuropsychological Batteries/Comprehensive Measures
        Halstead-Reitan Neuropsychological Battery
        NEPSY
        Luria-Nebraska Neuropsychological Battery
        Dementia Rating Scale
        Neurobehavioral Cognitive Status Examination/Cognistat
    Hierarchical Exploratory Factor Analysis
    Factorial Invariance

IV. Discussion
    Implications for Neuropsychological Assessment
    Implications for the CHC Model
    Factorial Invariance
    Methodological Limitations
    Conclusion

References
Appendix A. Pattern Matrices
Appendix B. Factor Structure Congruency Coefficients

List of Tables

Table 1. Targeted neuropsychological tests and batteries: Rank-order popularity and percent of respondents who use each test
Table 2. Intrarater reliability of CHC constructs
Table 3. Hypothesized relationships between tests and CHC broad/narrow ability constructs
Table 4. Rotated pattern matrix for Pontón et al. (2000)
Table 5. Targeted Attention/Concentration measures
Table 6. Targeted Executive Functioning measures
Table 7. Targeted Language measures
Table 8. Targeted Memory measures
Table 9. Targeted Motor Function measures
Table 10. Targeted Visual Perception measures
Table 11. Targeted Neuropsychological Batteries/Comprehensive measures
Table 12. Targeted samples for hierarchical exploratory factor analysis
Table 13. Documented relationships between tests and CHC broad ability constructs

List of Figures

Figure 1. The integrated Cattell-Horn-Carroll (CHC) cognitive abilities framework: Broad and narrow abilities
Figure 2. Pontón et al. (2000) parallel analysis
Figure 3. Pontón et al. (2000) results for the minimum average partial procedure

Chapter I
Introduction

Psychological assessment involves collecting a variety of test scores using different measures and methods, and interpreting the data in the context of an individual's unique history to answer specific referral questions (Meyer et al., 2001). Ideally, the outcome of an assessment is that the individual meaningfully increases their understanding of how they interact in the world. Family members, peers, and referring professionals also benefit for similar reasons: the assessment process should increase their understanding of the individual's tendencies or skills. While assessment occurs across different sub-domains of psychology and may include slightly different referral questions, the present investigation focuses on neuropsychological assessment. More specifically, it focuses on how neuropsychological instruments function in relation to theoretical models of cognitive ability. It is important to understand the goals of and procedures followed during a neuropsychological evaluation to appreciate the present project and its significance in clinical research and practice. Referral questions specific to neuropsychological

assessment include, but are not limited to, making diagnoses, evaluating the quality of improvement or decline in acute or deteriorating conditions (e.g., stroke or multiple sclerosis), planning care, determining treatment efficacy, and increasing scientific knowledge regarding brain-behavior relationships through research (Lezak, Howieson, & Loring, 2004). Proficient neuropsychologists are able to understand an individual's cognitive processes or possibly determine the location of a cerebral lesion based on data collected during an evaluation (Lezak, 2003).

Lezak (2003) stated that the behavior assessed in a neuropsychological evaluation has traditionally been viewed or organized using a three-dimensional schema. This structure includes explicitly focusing on an intellectual component and two nonintellectual components, motivation and executive functioning (i.e., the ability to initiate and carry out goal-directed activities). Lezak et al. (2004) further clarified these domains by stating that the three-dimensional organization of relevant behaviors includes the following: (1) cognition, or the information-handling aspects of behavior; (2) emotions relating to attention and memory; and, consistent with the earlier description, (3) executive functions.

A widely accepted delineation of general cognitive functioning includes receptive functions, memory and learning, thinking, and expressive functioning subcomponents (Lezak et al., 2004). Each of these subcomponents can be additionally differentiated, although investigators have reached divergent conclusions regarding what distinctions might be most appropriate. For example, Tulving (2002) stated that there are over 100 different named types of memory, whereas Lezak and colleagues identified just seven (verbal memory, visual memory, tactile memory, incidental memory, prospective memory, remote memory, and forgetting). Largely, these subcomponents have come to

be better understood by studying what distinct anatomical structures are involved in the more specific processes (e.g., memory functions; Nolte, 2002). Being cognizant of the relationship between particular cognitive functions and anatomical structures is imperative when interpreting data and answering referral questions (e.g., the implications of a cerebrovascular accident in the left frontal lobe).

Neuropsychological assessment generally seeks to understand cognitive defects and deficits (Lezak, 2003). Lezak stated the true value of a neuropsychological evaluation is in detecting subtle defects as opposed to those easily observed (e.g., Broca's aphasia). She defined cognitive deficits as measurable differences from an ideal or normal level of functioning. It is essential that clinicians consider the most appropriate comparison group when interpreting cognitive deficits (Strauss, Sherman, & Spreen, 2006). For example, consider the performance of an individual with mental retardation on a visuospatial task. The results could be interpreted in strikingly different manners depending on the reference group selected by the clinician. In relation to a normative sample, the performance might appear severely impaired, whereas when compared to a sample of individuals with similar general abilities, or to the individual's own level of ability, the performance might be viewed as above average or represent a specific cognitive strength. In short, neuropsychological assessment requires evaluating a range of cognitive skills and interpreting the collected data in reference to an appropriate comparison group to form conclusions regarding brain-behavior relationships.

Studies comprised of patients with similar conditions or injuries have illustrated consistent patterns of deficits that are beneficial in understanding the implications of a person's disorder or injury, diagnosing, and understanding cerebral functioning (Banich,

2004; Lezak et al., 2003). For example, Alzheimer's disease is characterized by severe global anterograde amnesia, retrograde amnesia, visuospatial processing difficulties, and motor difficulties (Banich, 2004). Banich noted several important ways Alzheimer's disease is differentiated from more general amnesia. First, individuals with Alzheimer's disease do not typically retain procedural knowledge, whereas this is typically maintained in amnestic individuals. Second, these individuals' extensive retrograde amnesia is inconsistent with most non-Korsakoff's amnesias. Lastly, immediate auditory attention problems (e.g., on Digit Span) are typically observed during the later stages of Alzheimer's disease, whereas this deficit is infrequently observed in amnestic patients. If a clinician were unable to recognize this pattern of difficulties as being specific to Alzheimer's disease, the individual might be misdiagnosed, which could have detrimental implications.

Range of Abilities Evaluated by Neuropsychological Assessment

While the current project broadly focuses on neuropsychological assessment, the more specific goal is to better understand how neuropsychological instruments evaluate cognitive abilities. Reitan and Wolfson (1993) stated that neuropsychological instrument development can be traced to two historical trends. The first is attributed to biopsychology and clinical psychology and views task performance on continuous distributions; this approach makes use of a standardized distribution of performance. The second approach, consistent with behavioral neurology, views performance in a more dichotomous manner, resulting in classification as either normal or abnormal.

Initially, instruments used during assessment were selected because they successfully differentiated between groups of individuals, not because they were

congruent with empirically derived theories of cognitive abilities (Strauss et al., 2006). For example, the Binet-Simon Scale (Binet & Simon, 1905) was intended to identify children with mental retardation, whereas the Army Alpha and Beta forms (Yerkes, 1919, 1921) guided military selection. Further, tasks included within the Halstead-Reitan Battery (Reitan & Wolfson, 1993) were differentially responded to by a combined group of normal-functioning individuals and patients hospitalized for difficulties unrelated to brain functioning compared to individuals with clearly documented brain damage (Reitan & Wolfson, 2004).

Not surprisingly, there is variability in the range of cognitive functions evaluated by neuropsychological measures and tasks. Some instruments evaluate one function (e.g., motor functions) while others provide data related to multiple functions (e.g., learning and memory). Illustrating the range of specific abilities evaluated by neuropsychological instruments, Strauss and colleagues' (2006) recent test compendium divided measures and tasks into the following nine categories: (1) general cognitive functioning/neuropsychological batteries/assessment of premorbid intelligence, (2) achievement, (3) executive functions, (4) attention, (5) memory, (6) language, (7) visual perception, (8) somatosensory function, olfactory function, and body orientation, and (9) motor function.

Lezak et al. (2004) described the range of abilities covered by neuropsychological tests in a slightly different manner that included eight domains. They differentiated between tasks and measures evaluating (1) perception, (2) memory, (3) verbal functions, and (4) neuropsychological batteries, which are similar to four of the categories identified by Strauss and colleagues. Diverging from Strauss et al.'s distinctions, Lezak et al. combined (5) orientation and attention, and (6) executive functions and motor

performance into single areas. Further, they identified two unique areas evaluated by measures and tasks: (7) construction and (8) concept formation/reasoning. Goldstein and Beers (2004) proposed an additional manner to conceptually differentiate between measures. They identified six broad areas: (1) comprehensive neuropsychological assessment, (2) language and communication, (3) memory and learning, (4) attention, (5) abstract reasoning and problem solving, and (6) sensory-perceptual and motor functions. The obvious lack of clear congruence between classifications has clinical implications. For example, it leads to questions regarding what areas should be evaluated during assessment.

Neuropsychological theories have been developed to account for the relationships between domains, which have unique corresponding biological components. For example, Luria's classical framework (1966, 1973, 1980) included three functional units. Das (2004) summarized the model by stating that the first unit is responsible for attention and arousal, the second unit processes material either simultaneously or successively, and the remaining unit is instrumental in evaluating and planning cognitive functions. In greater detail, Luria (1973) stated the first functional unit was responsible for selectively attending to relevant stimuli while inhibiting responses to, or the direction of attention toward, irrelevant stimuli. He identified this functioning as occurring predominantly in the brain stem and reticular activating system. The second unit is associated with the occipital-parietal lobes (simultaneous processing) and fronto-temporal lobes (successive processing) and is responsible for the manner in which we understand what we attend to. Simultaneous processing includes integrating or synthesizing stimuli into groups or

recognizing that individual components of various stimuli are interrelated¹; it requires a large number of neural events to occur simultaneously and cooperatively (Naglieri & Das, 2005). Successive functioning involves processing individual stimuli in a serial or successive order (Das, Naglieri, & Kirby, 1994). For example, successive coding is needed for writing. Notably, many cognitive activities involve both simultaneous and successive processing. For example, understanding the syntax of speech involves appreciating the serial relation of one word to the next, which requires successive processing, while the comprehension of the meaning of the sentence requires simultaneous processing. Luria (1980) clearly described the location and importance of the third functional unit as follows: "The frontal lobes synthesize the information about the outside worlds and are the means whereby the behavior of the organism is regulated in conformity with the effect produced by its actions" (p. 263).

¹ An example of a simultaneous processing task: "Draw a triangle above a square that is to the left of a circle under a cross."

The Reitan-Wolfson model (Reitan & Wolfson, 2004) is an alternative framework for understanding brain-based functions. It posits that external information first enters the brain through one of the five senses. Next, alertness, attention, and the ability to relate prior experiences to a current situation (i.e., immediate, intermediate, and remote memory) are required to register environmental input. The model makes a distinction between processing verbal information in the left hemisphere and visual-spatial information in the right hemisphere. The highest level of central processing includes concept formation, reasoning, and logical analysis. Finally, the model posits that the resulting decision from this more abstract cognitive processing is expressed as output in some manner, which has a

corresponding biological basis (e.g., speech and language functions correspond with the left cerebral hemisphere).

Intellectual Assessment

It is notable that there is a gap between neuropsychological measures and evolving conceptualizations of intelligence. That is, for as seemingly related as the instruments and concepts are, they have strikingly different historical backgrounds. Consistent with the goals of science, the construct of intelligence has become more complete over time. The progression is well documented (see, e.g., Brinkman, Decker, & Dean, 2005; Carroll, 1993; Flanagan, Genshaft, & Harrison, 1997; Flanagan & Harrison, 2005; Flanagan, McGrew, & Ortiz, 2000; Johnson & Bouchard, 2005a; Neisser et al., 1996), and much is gained by understanding general similarities and differences across historical approaches. Taylor (1994) suggested increasingly complex cognitive models can be classified into three distinct groups of theories: psychometric, information-processing, and cognitive modifiability frameworks.

Psychometric or structural approaches attempt to evaluate an individual's performance along dimensions that comprise the fundamental structure of intelligence. Flanagan et al. (2000) view this approach as being the most empirically supported. Further, they stated that psychometric theories have produced numerous efficient and practical instruments. One criticism against structural or psychometric concepts of intelligence is that these concepts are data driven, which may make them overly sensitive to arbitrary methodological decisions (e.g., decisions pertaining to factor retention or method of rotation; Taylor, 1994).
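The factor-retention decisions criticized as arbitrary here can be made more principled with empirical criteria such as Horn's parallel analysis, one of the procedures applied to the reanalyzed datasets later in this project (see Figure 2). As a rough illustration only (not the author's actual analysis code), a minimal sketch using NumPy might look like this; the function name and the simulated two-factor test battery are hypothetical:

```python
import numpy as np

def parallel_analysis(data, n_iter=100, percentile=95, seed=0):
    """Retain factors whose observed eigenvalues exceed the chosen
    percentile of eigenvalues from same-sized random normal data."""
    rng = np.random.default_rng(seed)
    n, p = data.shape
    # Eigenvalues of the observed correlation matrix, largest first
    obs = np.linalg.eigvalsh(np.corrcoef(data, rowvar=False))[::-1]
    # Reference eigenvalues from uncorrelated random data of the same shape
    rand = np.empty((n_iter, p))
    for i in range(n_iter):
        r = rng.standard_normal((n, p))
        rand[i] = np.linalg.eigvalsh(np.corrcoef(r, rowvar=False))[::-1]
    threshold = np.percentile(rand, percentile, axis=0)
    return int(np.sum(obs > threshold))

# Hypothetical demonstration: six measures driven by two latent abilities
rng = np.random.default_rng(42)
n = 300
f1, f2 = rng.standard_normal((2, n))
noise = rng.standard_normal((6, n))
scores = np.column_stack(
    [0.8 * f1 + 0.6 * noise[i] for i in range(3)]
    + [0.8 * f2 + 0.6 * noise[i] for i in range(3, 6)]
)
n_factors = parallel_analysis(scores)  # recovers the two simulated factors
```

Unlike the eigenvalue-greater-than-one rule, the retention decision here is calibrated against sampling error in a correlation matrix of the same size, which is one reason parallel analysis (together with the minimum average partial procedure) is generally preferred when reanalyzing published datasets.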

Information-processing models evaluate whether an individual can efficiently process everyday tasks or problems (Taylor, 1994). This theory views human intelligence as being similar to a computer information-processing system; thus, more intelligent individuals have greater abilities to efficiently process information through working memory. At first glance this approach appears analogous to various neuropsychological processing models (e.g., Luria's model [Luria, 1966, 1973, 1980]; the Reitan-Wolfson model [Reitan & Wolfson, 2004]). However, Taylor stated that information-processing theories strive to evaluate a range of physiological characteristics, such as average evoked potentials or nerve conduction velocity, which is somewhat inconsistent with neuropsychological processing models. It is challenging to incorporate this model with traditional neuropsychological assessment because physiological data are not typically collected. Thus, some view this model as having low clinical practicality or utility (Flanagan et al., 2000). On the other hand, Das et al. (1994) posit their Planning, Attention-Arousal, Simultaneous and Successive (PASS) cognitive processing model fits within the information-processing framework, even though no physiological data are directly collected.

Floyd (2005) maintains there is a distinction between information-processing and psychometric theories, although he believed the integration of these models would yield clinical promise. He suggested an optimal integration might include information-processing theories that strive to determine what micro-level cognitive processes underlie the specific abilities identified via psychometric approaches. Although it is not clearly stated as such, this ideal integration is very much consistent with the notion of neuropsychological assessment.

The remaining approach, the cognitive modifiability or dynamic approach, considers intelligence as pertaining to how individuals adapt to, or learn to function in, their environment. Similar to information-processing theories, dynamic theories are significantly limited because they lack practical, comprehensive measurement tools (Flanagan et al., 2000). At face value this approach is similar to how executive functioning is conceptualized in the neuropsychological literature. Inconsistent with the views of Flanagan et al., it could be argued that there are numerous measures and tasks that evaluate how individuals adapt and learn. An example is the Wisconsin Card Sorting Test (WCST; Heaton, Chelune, Talley, Kay, & Curtis, 1993), which requires an individual to shift problem-solving strategies, form and test hypotheses, and integrate feedback into thought processes. It is noteworthy that this task does not provide a comprehensive evaluation of all cognitive abilities, which raises the question of whether it would be possible to conduct a thorough assessment using a strictly dynamic approach.

Psychometric Theories of Intelligence

While information-processing theories and cognitive modifiability theories have the potential to provide rich, relevant clinical data, psychometric theories of intelligence are most applicable to clinical practice. The increased complexity of psychometric models is illustrated by considering how Spearman (1904, 1927) first presented a general or g-factor theory of intelligence and, nearly 100 years later, McGrew (1997, 2005) proposed an integrated Cattell-Horn-Carroll (CHC) model of cognitive abilities with three hierarchically organized levels: a general intellectual ability dimension (Stratum III level), 10 broad cognitive subdomains (Stratum II level), and 70 more specific abilities, which represent narrow cognitive subdomains (Stratum I level). This progression

coincides with a theoretical shift in attention from a singular intelligence to a more differentiated and clinically useful set of cognitive abilities, which are clearly more relevant to neuropsychological assessment. However, the distinction between intelligence and cognitive abilities is considered arbitrary by some researchers, as evidenced by some referring to the CHC framework as a model of intelligence (e.g., Flanagan et al., 2000) and others as a model of cognitive ability (e.g., Alfonso, Flanagan, & Radwan, 2005). The evolution of these psychometric models is well documented (e.g., Carroll, 1993; Flanagan et al., 1997; Flanagan et al., 2000) and will be elaborated upon. Spearman s g-factor Spearman (1904, 1927) was the first to consider an underlying dimension might comprise cognition. Spearman s g theory is typically understood as a single-factor model of intelligence, although this notion is somewhat misleading as the theory included a general factor, g, and smaller specific (s) factors, which he believed contributed to the performance of specific activities. Spearman posited g was common across various abilities and represented a fixed amount of mental energy whereas s was unique to particular activities. He believed the presence of g was an explanation for why intelligence tests were correlated with one another (Thorndike, 1997). Spearman s g- factor is an essence theory that stipulates all distinct intellectual abilities arise from one basic process (Horn & Noll, 1997). Carroll (1993) stated Spearman s work remains relevant today because it was one of the earliest attempts to develop a theory of individual differences in intelligence based on a set of correlations between tests. Alternative views of Spearman s theory consider it a two-factor theory of intelligence, with the general intellectual factor, g, representing one dimension and the set 11

of more specific factors, s, representing the other (e.g., Thorndike, 1997; Zachary, 1990). Spearman was steadfast in his belief that specific tests evaluated both g and s but differed in the proportion measured (Horn & Noll, 1997). The most notable applied measure that evaluated g was the original the Stanford-Binet Intelligence Scale (Terman, 1916), although later versions of this test included more complex and differentiated conceptualizations of cognitive ability (Terman & Merrill, 1937; Terman & Merrill, 1960; Terman & Merill, 1973; Thorndike, Hagen, & Sattler, 1986; Roid, 2003). Two examples of single-test measures that were reported to be pure measures of g are the Raven s (1938) matrices test and Porteus (1946) maze test, although these assumptions are not unconditionally accepted (e.g., Carroll, 1995; Horn & Noll, 1997). Spearman s g theory (1904, 1927), was not unreservedly accepted. For example, Thorndike, Lay, and Dean (1909) concluded, In general there is evidence of a complex set of bonds between the psychological equivalent of both what we call the formal side of thought and what we call its content, so that one is almost tempted to replace Spearman s statement by the equally extravagant one that there is nothing whatever common to all mental functions, or to any half of them (p.368). Thorndike (1920) later posited three types of intelligence existed, abstract intelligence, which was evaluated by intelligence tests of the time, social intelligence that included the ability to successfully work with others, and lastly mechanical intelligence, the ability to comprehend and use concrete and spatial concepts. The Wechsler scales are the most frequently used measures of intelligence and were derived from the belief that intelligence was a singular construct (Flanagan, et al., 2000; Rabin, Barr, & Burton, 2005). Although Wechsler rejected some aspects of 12

Spearman s g model (1904, 1927) he defined intelligence similarly as the aggregate or global capacity of the individual to act purposefully, to think rationally, and to deal effectively with his environment (Wechsler, 1939, p. 3). He viewed intelligence as being global because it characterized an individual s behavior as a whole, but maintained this capacity also included unique components that were interdependent (Zachery, 1990; Zhu, Weiss, Prifitera, & Coalson, 2004). Wechsler believed it was important to evaluate intelligence according to the two different ways it was expressed, verbally and nonverbally (i.e., in performance; Kamphaus, 1993; Reynolds & Kamphaus, 1990). He viewed a verbal-nonverbal dichotomy as representing the two ways that intelligence could be expressed, not as two independent intelligences (Flanagan et al., 2000; Tulsky, Saklofske, & Ricker, 2003; Zachary, 1990; Zhu et al., 2004). Wechsler was critical of the Binet scales because he believed they were too reliant on verbal abilities. This dissatisfaction served as an impetus to design a test that would include subtests measuring both verbal and nonverbal abilities (Thorndike, 1997). In general, Wechsler believed that the tasks used to measure intelligence were relatively unimportant, and were functionally equivalent, as long as they met psychometric criteria and were related to other variables believed to reflect intelligence (Kaufman & Lichtenberger, 2002). He focused less on absolute tasks and more on understanding an individual s relative position compared to a particular reference group. Given this focus, it is interesting that many of the tasks he included in the original measure remain in use today (Zhu et al., 2004). Although Wechsler s scales were not developed on the basis of empirically supported theories of intelligence (Flanagan, Ortiz, Alfonso, & Mascolo, 2002; Zhu et al., 13

2004), Zachary (1990) reported both factor analytic and criterion-related support for Wechsler s hierarchical model, which includes Full Scale IQ at the apex and verbal and performance scales as second-tier factors. The original Wechsler tests have recently undergone substantial revisions resulting in a more differentiated 4-dimensional test structure (i.e., Verbal Comprehension, Perceptual Organization, Working Memory, and Processing Speed indices) that is consistent with current cognitive theories (Zhu et al. 2004). Bi-factor Models of Intelligence Cattell (1941, 1943, 1957, 1971) proposed one of the most prominent 2-factor models of intelligence (Flanagan et al., 2000; Johnson & Bouchard, 2005a; Taylor, 1994). His original model proposed that intelligence was best conceptualized as a composite of two separate entities, fluid (Gf) and crystallized (Gc) intelligences. The former is an inherited ability developed by interaction with the environment leading to inductive and deductive reasoning. It reflects a capacity to solve problems where previously learned knowledge and skills are irrelevant (Johnson & Bouchard, 2005a). This capacity was thought to be influenced a great deal by biological and neurological factors (Flanagan et al., 2000). Gc is believed to represent the specialized or consolidated knowledge and skills relative to a specific culture. It is influenced by access to education, cultural information, and overall experiences. Initially, Gc was conceptualized as having a smaller degree of genetic influence than Gf, although this prediction has been repeatedly disconfirmed (e.g., see Horn, 1998). Several measures were developed specifically to evaluate fluid and crystallized intelligences. For example, the Comprehensive Ability Battery (CAB; Hakstain & 14

Cattell, 1975) operationalized the Gf-Gc model. The test consists of 20 brief primary ability tests that measure a broad range of abilities contributing to both intelligences. The Kaufman Adolescent and Adult Intelligence Test (KAIT; Kaufman & Kaufman, 1993) is a more recent measure that includes scales evaluating Gf and Gc intelligences. The likelihood that a true Gf-Gc dichotomy is adequate to describe all intellectual abilities has been widely questioned (e.g., Horn & Noll, 1997). Cattell expanded his bifactor model into a more general triadic theory in 1971 that stipulated intelligence was the interaction of capacities (i.e., individual limitations pertaining to physiology), provincial powers (i.e., localized sensory and motor abilities), and agencies (i.e., the ability to perform a specific cultural task). At face value this theory appears to have less in common with psychometrically derived intelligence theories and more in common with neuropsychological processing models. Nonetheless, it shows the evolution of his thought, his dichotomous conceptualization of intelligence transformed into something much more complex. Multiple Intelligences Bi-factor models of intelligence gradually evolved into more diverse multiple intelligences or sets of cognitive abilities. Flanagan et al. (2000) divided more complex cognitive theories into Incomplete and Complete models. Incomplete models are more differentiated models than single or bi-factor models of intelligence, although they fail to account for all cognitive abilities, as complete models do. For example, Flanagan et al. (2000) viewed Thurstone s (1938) attempt at mapping primary mental abilities as incomplete because it did not account for the total range of abilities that were later 15

observed, whereas Carroll s (1993) three stratum theory of cognitive abilities was viewed as constituting a complete model of cognitive abilities. Thurstone s (1938) work is notable because it was the first large study that investigated the structure of intelligence by using multiple-factor methods within a comprehensive battery of 57 tests (Carroll, 1993). Thurstone reached the conclusion that intelligence was multi-dimensional after administering a battery of tests to a large group of predominantly University of Chicago students. He initially concluded there were seven primary mental abilities, although he later included two additional abilities. The nine primary abilities identified were inductive reasoning (I), deductive reasoning (Rs), practical problem solving (R), verbal comprehension (V), associative short-term memory (Ma), spatial relations (S), perceptual speed (P), numerical facility (N), and word fluency (Fw). Initially, this work led to the development of the seven-factor Primary Abilities Battery (Thurstone & Thurstone, 1941), which was based on the belief that a pattern of scores would permit more accurate prediction of performance than an overall score. However, this hypothesis was later found to be inaccurate (Thorndike, 1997). Not surprisingly, there was much debate regarding the number of factors Thurstone (1938) retained. Spearman (1939) retained an extreme position that the data were indicative of a single factor while others were able to replicate and eventually extend the identification of factors (e.g., Carroll, 1993; Ekstrom, French, & Harman, 1979; Guilford, 1959, 1967; Hakstian & Cattell, 1974; Horn, 1972). For example, Guilford s (1959) model of intelligence (described in detail below) proposed 150 distinct factors and is a clearly more complex and differentiated model than the one Thurstone proposed. 16
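The factor-number debate above ultimately turns on how a battery's correlation matrix is decomposed. A minimal numerical sketch (all loadings and correlations are invented for illustration, not taken from any of the studies cited) shows how the same matrix can simultaneously suggest a dominant general dimension and multiple group factors:

```python
import numpy as np

# Hypothetical 6-test battery: tests 1-3 load .8 on one group factor,
# tests 4-6 load .8 on a second, and the two factors correlate .5.
loadings = np.zeros((6, 2))
loadings[:3, 0] = 0.8
loadings[3:, 1] = 0.8
phi = np.array([[1.0, 0.5],
                [0.5, 1.0]])          # factor intercorrelations

# Model-implied correlation matrix with unit diagonal (uniqueness added)
common = loadings @ phi @ loadings.T
R = common + np.diag(1.0 - np.diag(common))

eigvals = np.sort(np.linalg.eigvalsh(R))[::-1]
print(np.round(eigvals, 2))   # [3.24 1.32 0.36 0.36 0.36 0.36]

# A dominant first eigenvalue is consistent with a Spearman-like general
# factor, yet two eigenvalues exceed 1.0 (the Kaiser criterion), which
# supports retaining Thurstone-like group factors as well.
n_kaiser = int((eigvals > 1.0).sum())
```

The point of the sketch is that "one factor or many" is partly a modeling decision about the same data, which is why hierarchical solutions of the kind described below became attractive.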

As conclusive evidence of multiple intelligence dimensions accumulated, it was initially challenging to organize this information in a parsimonious way that would foster research and clinical utility. For example, Guilford (1967) used factor-analytic procedures similar to Thurstone's (1938) and proposed a structure-of-intellect (SOI) model that organized the concepts he believed intelligence required, but not specific cognitive abilities. Guilford viewed intelligence as being comprised of five mental operations (cognition, divergent production, memory, convergent production, and evaluation) that operated across four distinct contents (figural, symbolic, semantic, and behavioral), which in combination produced six unique products (units, classes, relations, systems, implications, and transformations). There is mixed appreciation for this complex model. On one hand, there is evidence the operational and content aspects of Guilford's SOI model may have relevance in modern-day theory (Horn & Noll, 1997), whereas others have questioned the logical validity and advantage of creating a taxonomic system with universal parameters that interact in all possible ways to create a structural model of intelligence (e.g., Carroll, 1993).

Extended Gf-Gc Models

The Gf-Gc model evolved into a more comprehensive, multi-structured theory of cognitive abilities as evidence accumulated that there were more than two components of intelligence, although curiously the Gf-Gc label remained. Flanagan et al. (2000) posited the gradual augmentation of this theory began with Horn's (1965) doctoral dissertation, which suggested the inclusion of four additional abilities: visual perception or processing (Gv), short-term memory (Short-Term Acquisition and Retrieval, SAR, or Gsm), long-term storage and retrieval (Tertiary Storage and Retrieval, TSR, or Glr), and speed of processing (Gs). In 1968 Horn added auditory processing ability (Ga) to his extended Gf-Gc theory while also refining the Gv, Gs, and Glr descriptions. After a relatively quiet period of nearly 20 years, in 1991 Horn added a factor representing an individual's reaction time and decision speed (Gt), which was later conceptualized as correct decision speed (CDS; Horn & Noll, 1997). Additionally, factors reflecting quantitative ability (Gq) and broad reading/writing ability (Grw) were added to extended Gf-Gc models based on empirical support (Horn, 1991; Woodcock, 1994).

Horn and Blankson (2005) described the current Gf-Gc framework as including the following eight prominent cognitive abilities: (1) acculturation knowledge (Gc), (2) fluid reasoning (Gf), (3) short-term acquisition and retrieval (SAR) or short-term memory (Gsm), (4) fluency of retrieval from long-term storage (TSR) or long-term memory (Glm), (5) processing speed (Gs), (6) visual processing (Gv), (7) auditory processing (Ga), and (8) quantitative knowledge (Gq). They noted that these broad abilities represent behavioral organizations that stem from neural structures and functions. Impressively, this framework was developed in response to five distinct kinds of evidence: (1) structural evidence, (2) developmental evidence, (3) neurocognitive evidence, (4) achievement evidence, and (5) heritability evidence (Horn & Blankson, 2005; Horn & Noll, 1997). Horn and Blankson stated that strong structural support for the extended Gf-Gc model has been obtained across gender, levels of education, ethnicity, nationality, language, and historical period, as similar differentiated abilities are consistently identified.

Empirical evidence makes clear that specific abilities change at varying rates as an individual grows older or experiences changes in neurological functioning (Horn & Noll, 1997). Specifically, a cluster of Gf, Gsm, and Gs abilities appears vulnerable to neurological, genetic, and aging effects, whereas the cluster of Gc, Glm, and Gq abilities is not related to aging effects and represents more expertise-based abilities. A third cluster, comprised of Gv and Ga, has been linked to sensory modalities, which also makes these abilities uniquely sensitive to aging processes and neurological functioning. In short, these findings suggest different abilities have unique developmental trajectories and neurological components, which supports a differentiation between them. Horn and Noll state that the best predictor of specific achievement (e.g., math) is a similar corresponding ability (e.g., Gq) rather than a more general ability (i.e., g). This finding represents achievement evidence that cognitive abilities are multidimensional and not a single entity. Heritability evidence for the extended Gf-Gc framework is not as overwhelming, as it does not conclusively support either single or multiple intelligences. This particular line of research developed in response to Cattell's (1941, 1957) belief that Gf and Gc differed in heritability. Specifically, the traditional thought was that Gf represented primarily genetic influences that are invested in Gc. Thus, these abilities were thought to be distinct from one another.

The extended Gf-Gc theory does not support one general factor (i.e., g) that subsumes all other factors. Horn and Blankson (2005) stated there is inadequate evidence to suggest a single factor satisfactorily accounts for the intercorrelations between all other abilities. Specifically, they believe it is troubling that one common factor does not reproduce across studies and that in some instances the more general factor is similar to a lower-level factor. For example, Gustafsson (1984) found a higher-order factor that was roughly equivalent to Gf.

The Cattell-Horn Gf-Gc cognitive abilities model served as a blueprint for the Woodcock-Johnson Psycho-Educational Battery Revised (WJ-R; Woodcock & Johnson, 1989a); thus the model has high clinical utility. McGrew (2005) stated the WJ-R was the first individually administered cognitive test evaluating the nine broad abilities specified by the theory. McGrew and Flanagan (1998) posited that the abilities measured by many intelligence and neuropsychological batteries are accounted for by the nine Gf-Gc abilities, despite the fact that many of these measures do not explicitly state doing so.

Carroll's Three-Stratum Model of Intelligence

Carroll (1993, 1997) viewed the Cattell-Horn (Horn, 1985, 1988) model as the most acceptable, comprehensive hierarchical model of cognitive abilities, although he recognized the valuable contributions of Spearman (1904), Thurstone (1938), and Vernon (1964, 1965). His major criticism of the Cattell-Horn model was that it did not accept a higher-level g factor as an explanation for correlations between Gf-Gc factors. Carroll was steadfast in his belief in an overarching intelligence factor based on the results of his major survey of the literature, which included reanalyzing 460 sets of data pertaining to performance on cognitive tasks. Carroll (1997) described his ambitious goal for this project as follows: "[The results were] intended to constitute a provisional statement about the enumeration, identification, and structuring of the total range of cognitive abilities known or discovered thus far. In this way it was expected to replace, expand, or supplement previous theories of the structure of cognitive abilities, such as Thurstone's (1938) theory of primary mental abilities, Guilford's (1967) structure-of-intellect theory, Horn and Cattell's (1966) Gf-Gc theory, or Wechsler's (1974) theory of verbal and performance components of intelligence" (p. 124).

This enormous project led Carroll (1993) to propose a three-stratum theory of cognitive abilities. Carroll viewed his theory as an extension of Vernon's (1964, 1965) hierarchical model of intelligence, which specified the importance of verbal and educational experiences, and also spatial, practical, and mechanical abilities. Carroll's theory is based on the results of his factor-analytic review; he subjected correlation matrices of variables from independent studies to exploratory factor analysis. The resulting factors were considered first-order factors, which were next subjected to similar factor-analytic techniques to derive second-order factors. The second-order factors were in turn factor analyzed, leading to the conclusion that the variance contained within them was best accounted for by a single third-order factor. Carroll's model includes more than 60 primary mental abilities (Stratum I), which comprise eight broad abilities (Stratum II) that are generally consistent with those identified in Gf-Gc theory. Carroll posited these eight broad abilities include (1) fluid intelligence, (2) crystallized intelligence, (3) general memory and learning, (4) broad visual perception, (5) broad auditory processing, (6) broad retrieval ability, (7) broad cognitive speediness, and (8) processing speed, which collectively comprised a third-order factor representing general (g) intelligence (Stratum III). In response to Horn and Noll's (1997) questioning of a general intelligence factor, Carroll (2003) conducted confirmatory factor analysis within the standardization sample of the WJ-R. He found that an intelligence model that included g was superior to alternative models specifying either (1) that g is equivalent to Gf or (2) that there is no g.
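Carroll's procedure of factoring the correlations among factors can be illustrated at a small scale. Assuming a single higher-order factor underlies three first-order factors (the correlations below are invented for illustration; Carroll applied the same logic to factors extracted from real correlation matrices), each factor's loading on g follows from Spearman-style triad formulas:

```python
import math

# Invented correlations among three first-order factors
r12, r13, r23 = 0.6, 0.5, 0.4

# If a single higher-order factor g underlies all three, then
# r_ij = g_i * g_j, so each loading is recoverable from a triad:
g1 = math.sqrt(r12 * r13 / r23)   # ~0.866
g2 = math.sqrt(r12 * r23 / r13)   # ~0.693
g3 = math.sqrt(r13 * r23 / r12)   # ~0.577

# The one-factor model reproduces the observed correlations exactly
assert abs(g1 * g2 - r12) < 1e-12
assert abs(g1 * g3 - r13) < 1e-12
assert abs(g2 * g3 - r23) < 1e-12
```

With only three factors the higher-order solution is exactly identified, which is why the three-variable case makes the logic transparent; with more factors the loadings are estimated rather than solved directly.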

An Integration of Carroll's Three-Stratum Model and Extended Gf-Gc Theory

McGrew (1997) first proposed an integrated Cattell-Horn-Carroll (CHC) framework to synthesize the factor-analytic work conducted by Carroll (1993, 1997) and Horn and Noll (1997). He believed this was possible because of the numerous conceptual overlaps between the models. For example, both frameworks included similar Gf and Gc intelligences. Additionally, they similarly proposed that short-term memory abilities (i.e., General Memory and Learning; Short-Term Acquisition and Retrieval [Gsm]) were distinct from storage and retrieval abilities (i.e., Broad Retrieval Ability; Long-Term Associative Storage and Retrieval [Glr]). Furthermore, both models included similar sensory processing abilities (i.e., Gv and Ga), as well as two speed abilities (i.e., Broad Cognitive Speediness or Cognitive Processing Speed [Gs], and Reaction Time/Decision Speed or Correct Decision Speed [Gt]). Given these similarities, McGrew believed the models could be integrated into a comprehensive model that would optimally (1) describe or classify individual subtests included in major intelligence batteries and (2) serve as a theoretical blueprint for future measures.

Although the two models, Carroll's (1993, 1997) three-stratum theory and the Gf-Gc theory (Horn & Noll, 1997), included numerous similarities, McGrew (1997) had to resolve a number of notable discrepancies in order to integrate them. Initially, McGrew did not believe it was important to resolve the most striking difference between the models, the acceptance of a higher-order g factor. Additional discrepancies that needed to be resolved included the placement of narrow abilities under broad abilities. For example, Carroll viewed reading and writing abilities as narrow abilities subsumed by Gc, whereas the Gf-Gc model viewed them as a distinct broad ability (i.e., Reading/Writing; Grw). A similar discrepancy is observed with the placement of quantitative abilities. Carroll viewed quantitative reasoning as a narrow ability subsumed by Gf, whereas the Gf-Gc model posited that quantitative knowledge and reasoning constituted a distinct broad construct (Gq). Carroll viewed phonological awareness (e.g., phonetic coding) as being subsumed by Gc, whereas the Gf-Gc model placed these abilities under a broad auditory processing construct (Ga). The remaining discrepancy noted by McGrew concerned memory: Carroll placed short-term and free-recall memory abilities with learning abilities under the broad General Memory and Learning factor, whereas Horn (1991) differentiated between immediate apprehension and storage/retrieval abilities by keeping broad short-term memory ability (Gsm) distinct from long-term retrieval ability (Glr).

To synthesize these models McGrew (1997) conducted confirmatory factor analysis using 37 measures from the standardization sample of the complete WJ-R battery. After testing alternative models he concluded that an optimal framework (1) retained the distinction between quantitative reasoning/knowledge (Gq) and fluid intelligence (Gf), (2) included a distinct factor for reading and writing abilities (Grw), (3) placed phonological awareness abilities under auditory processing (Ga), and (4) separated short-term memory (Gsm) from storage and retrieval abilities (e.g., associative memory), which fall under a broad retrieval (Glr) ability.

McGrew (2005) stated that Horn and Carroll informally agreed on the CHC umbrella-theory terminology in 1999. However, they continued to disagree on the existence of g. McGrew noted this disagreement initially presented a dilemma for individuals using the CHC model in research or clinical practice because it required the clinician to make a decision regarding whether or not to include g. Encouragingly, this disagreement has since been settled, and the current CHC framework includes a third-order general factor at the apex of the model (Carroll, 2003; Evans, Floyd, McGrew, & Leforgee, 2002; Floyd, Evans, & McGrew, 2003; McGrew & Woodcock, 2001; Taub & McGrew, 2004). Horn and Carroll's general acceptance of CHC theory functioned as a stamp of approval for the most thorough, empirically supported model of cognitive abilities. Evans et al. (2002) described the abundance and range of empirical support for this framework as follows: "[The] CHC theory is grounded in more than half a century of factor analytic research, and developmental studies of cognitive abilities, genetic heritability research, and neurocognitive analyses have contributed to its validity base" (p. 247). Thus, the theory provides a well-supported, superordinate structure from which individual differences in cognitive ability and achievement can be explained.

The integrated CHC model of cognitive abilities is presented in Figure 1. For simplification, g was omitted from the apex of this figure so that it only includes the 10 broad Stratum II dimensions and the 70 narrow Stratum I abilities. Narrow abilities printed in bold text were placed under different broad ability constructs in Carroll's (1993) model. Narrow abilities printed in italic text were included in a broad knowledge/achievement domain by Carroll.

[Figure 1 appears here: a chart of the 10 broad Stratum II abilities (Fluid Intelligence [Gf], Quantitative Knowledge [Gq], Crystallized Intelligence [Gc], Reading and Writing [Grw], Short-Term Memory [Gsm], Visual Processing [Gv], Auditory Processing [Ga], Long-Term Storage and Retrieval [Glr], Processing Speed [Gs], and Decision/Reaction Time/Speed [Gt]), each listed with its associated narrow Stratum I abilities (e.g., Induction under Gf; Lexical Knowledge under Gc; Working Memory under Gsm; Phonetic Coding under Ga; Perceptual Speed under Gs).]

Figure 1. The integrated Cattell-Horn-Carroll (CHC) cognitive abilities framework: Broad and narrow abilities
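The hierarchy in Figure 1 lends itself to a simple lookup structure. The sketch below is abbreviated (only a few narrow abilities per broad domain, with names as they appear in the figure) and also illustrates the kind of broad-ability coverage check that motivates cross-battery assessment:

```python
# Abbreviated sketch of the CHC hierarchy from Figure 1; only a few
# narrow Stratum I abilities are listed under each broad Stratum II domain.
CHC = {
    "Gf":  ["General Sequential Reasoning", "Induction", "Quantitative Reasoning"],
    "Gc":  ["Language Development", "Lexical Knowledge", "General Information"],
    "Gq":  ["Mathematical Knowledge", "Mathematical Achievement"],
    "Grw": ["Reading Decoding", "Reading Comprehension", "Spelling Ability"],
    "Gsm": ["Memory Span", "Working Memory"],
    "Gv":  ["Visualization", "Spatial Relations", "Visual Memory"],
    "Ga":  ["Phonetic Coding: Analysis", "Phonetic Coding: Synthesis"],
    "Glr": ["Associative Memory", "Ideational Fluency", "Naming Facility"],
    "Gs":  ["Perceptual Speed", "Rate-of-Test-Taking", "Number Facility"],
    "Gt":  ["Simple Reaction Time", "Choice Reaction Time", "Correct Decision Speed"],
}

def uncovered(broad_abilities_measured):
    """Return the broad CHC domains a battery leaves unassessed."""
    return sorted(set(CHC) - set(broad_abilities_measured))

# A hypothetical battery measuring only Gf, Gc, Gv, Gsm, and Gs
# leaves five of the ten broad domains uncovered:
print(uncovered(["Gf", "Gc", "Gv", "Gsm", "Gs"]))
# ['Ga', 'Glr', 'Gq', 'Grw', 'Gt']
```

A structure like this makes explicit why no single instrument covers the full taxonomy, which is the gap the cross-battery approach described in the following section is designed to close.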

Consistent with science, in all likelihood the CHC model will be revised as individual differences are better understood over time. For example, while the broad Gf, Gc, Gv, Gsm, Glr, Gs, Gq, and Grw cognitive abilities appear to be the primary components of the CHC model there is tentative support for including tactile (Gh), kinesthetic (Gk), and olfactory (Go) abilities (McGrew, 2005). Similarly, empirical evidence suggests that the cognitive speed portion of the CHC model might be too simplistic to convey individual differences. It may be more accurate to reflect cognitive speed with three broad speed ability domains: broad cognitive processes (Gs); broad decision speed (Gt); and broad psychomotor speed (Gps) (Ackerman, Beier, & Boyle, 2002; O Connor & Burns, 2003; Roberts & Stankov, 1999; Stankov, 2000). Furthermore, Ackerman, Bowen, Beier, and Kanfer (2001) suggested a broad domain reflecting specific general ability knowledge (Gkn) is distinct from Gc, which constitutes an additional possible revision to the framework. In summary, there is mounting evidence that makes clear the current framework is likely to be revised as a more refined understanding of cognitive abilities is collectively developed. Thus, although the current framework is conceptualized as a Complete framework future revisions may render the current structure Incomplete. CHC Theory to Practice Many early standardized intelligence tests were used because they exhibited clinical relevance, not because they corresponded with empirically derived theories (Strauss et al., 2006). Testing measures were historically developed in response to many different notions of intelligence. However, the current trend is for new or revised versions of measures to evaluate CHC cognitive abilities. This is a positive change as it indicates a 26

more widely accepted view of cognitive abilities. Summarizing the extensive work conducted by Woodcock (1990), Horn (1991), Carroll (1993), McGrew (1997), and McGrew and Flanagan (1998), Alfonso et al. (2005) reported that many intelligence batteries developed prior to 1998 provided poor coverage of CHC abilities. For example, only the Cognitive Assessment System (CAS; Naglieri & Das, 1997) satisfactorily evaluated Gv, Gsm, and Gs abilities. Alfonso et al. reported a majority of intelligence batteries published after 1998 provided broader coverage of abilities (e.g., four or five broad cognitive abilities versus two or three broad abilities). This makes evident that the CHC model has impacted test development. As an illustration, a direct descendent of the Binet-Simon Scale (Binet, 1905), the Stanford Binet Intelligence Scales Fifth Edition (SB5; Roid, 2003) now provides coverage for the following CHC factors: Gf, Gc, Gq, Gv, and Gsm (Strauss et al., 2006). The Wechsler Preschool and Primary Scale of Intelligence Third Edition (WPPSI-III; Wechsler, 2002) and Wechsler Intelligence Scale for Children Fourth Edition (WISC-IV; Wechsler, 2003a, 2003b) evaluate Gf, Gc, Gv, Gsm (WISC-III only), and Gs abilities, while earlier versions evaluated fewer constructs (Alfonso et al., 2005). Currently, the most comprehensive measure is the Woodcock-Johnson III Tests of Cognitive Abilities Diagnostic Supplement (WJ-III/DS; Woodcock, McGrew, Mather, & Schrank, 2003), which evaluates Gf, Gc, Gv, Gsm, Glr, Ga, and Gs cognitive abilities. Noteworthy, the CHC model served as a blueprint during test construction of this measure (Woodcock, McGrew, & Mather, 2001a). Given that many psychologists have relied on CHC theory to guide current test development and interpretation suggests the theory is widely accepted as being the most useful model of cognitive abilities (Alfonso et al., 2005). However, a review of recently 27

published tests highlighted that Glr, Ga, and Gs abilities remain inadequately evaluated by intelligence tests (Alfonso et al., 2005). Woodcock (1990) proposed cross battery assessment to increase the number of cognitive abilities evaluated during assessment. His goal was to more satisfactorily evaluate a range of cognitive abilities by combining the components of different measures. The importance of evaluating a range of cognitive abilities cannot be understated. For example, Evans et al. (2002) reported numerous CHC cognitive abilities were associated with reading achievement. It is critically important to thoroughly evaluate all cognitive abilities because an individual s reading difficulties may be due to cognitive inefficiencies less logically related to reading, such as narrow Gs or Glr abilities. Thus, administering subtests from different measures is likely required to evaluate a fuller range of cognitive abilities, as opposed to relying on a single instrument, which would lead to less adequate assessment. McGrew and Flanagan (1998) formally proposed a cross-battery (CB) approach as a means of incorporating the CHC theory and research into practice. This strategy in essence involves selecting subtests across batteries that are known to evaluate specific broad CHC abilities. At face-value, this approach is very much in-line with standard neuropsychological assessment practice. The CB approach is advantageous because it decreases the likelihood results will be skewed by construct-irrelevant variance and construct under-representation (e.g., Messick, 1995). Alfonso et al. (2005) elaborated on the three components of CB assessment. First, the approach is grounded in CHC theory because of the theoretical strengths previously noted. Second, it focuses on selecting subtests across batteries based on their coverage of broad CHC abilities. Thankfully, all subtests of major cognitive and achievement 28

batteries have been classified according to these broad abilities (Alfonso et al., 2005; Flanagan et al., 2002), which permits the clinician to easily select subtests across batteries to increase coverage of constructs. Alfonso and colleagues argued that it is essential to be cognizant of which broad abilities are evaluated by specific subtests so data can be organized into construct-relevant clusters to aid interpretation. The third component of CB assessment focuses on narrower CHC abilities (Stratum I). It is important to focus on narrower abilities (e.g., Math Knowledge, Math Achievement) to ensure that broad CHC abilities (e.g., Gq) are well represented. McGrew and Flanagan (1998) and Flanagan et al. (2000) have outlined how flexible CB approaches can supplement common intelligence batteries to increase coverage of CHC abilities. In line with their recommendations, research has supported combining measures to more thoroughly evaluate CHC abilities. For example, Tulsky, Ivnik, Price, and Wilkins (2003) factor analyzed the combined scores obtained from administering the Wechsler Adult Intelligence Scale, Third Edition (WAIS-III; The Psychological Corporation, 1997) and Wechsler Memory Scale, Third Edition (WMS-III; Wechsler, 1997) and reported a six-factor structure comprising verbal, perceptual, processing speed, working memory, auditory memory, and visual memory constructs, which have conceptual overlap with both broad and narrow CHC abilities. Dean and Woodcock (2003) developed the Dean-Woodcock Neuropsychological Assessment System (D-WNAS) neuropsychological model, which synthesized the CHC model with a cognitive neuropsychological/information-processing theory. Thus, the model is highly relevant to neuropsychological assessment practice. Brinkman et al. (2005) stated that the D-WNAS effectively integrates the assessment of cognitive abilities and

sensorimotor functions. The model arranges CHC abilities in a sequential processing framework that begins with conscious awareness (Gsm), which functions to influence more specific processing. Specific processing ranges from automatic processing (e.g., tactile-kinesthetic thinking [Gtk]) to complex thinking (e.g., novel reasoning [Gf]), and it ultimately produces cognitive or motor performance.

Alternative Models of Intelligence

It should be noted that alternative models of intelligence have been proposed that are unrelated to the lineage running from Spearman's g to the CHC model. While some of these conceptualizations have clinically supported measures, others do not, which raises the question of whether they are clinically relevant. The present section elaborates on four alternative models of intelligence: (1) the Planning, Attention, Simultaneous, Successive (PASS) theory (Naglieri & Das, 2005); (2) the Triarchic theory (Sternberg, 1985, 1988, 2005); (3) the Multiple-Intelligences theory (Chen & Gardner, 2005; Gardner, 1983, 1993, 1999); and (4) the verbal-perceptual-image rotation (VPR) model (Johnson & Bouchard, 2005a, 2005b). Das et al. (1994) developed the PASS theory of intelligence based on neuropsychological theory put forth by Luria (1966, 1973, 1980). As previously mentioned, Luria conceptualized cognitive processes as three separate functional systems working simultaneously to produce conscious activity. Planning (P) is essential for (1) forming strategies to complete tasks, (2) controlling attention, (3) engaging in simultaneous and successive processing, and (4) utilizing knowledge and skills (Das, Kar, & Parrila, 1996). Das et al. (1994) believe that Attention (A) is closely related to Luria's first functional unit and is directed by intention and goals. The remaining PASS

components are simultaneous (S) and successive (S) processing, which reflect Luria's second functional unit. The PASS theory is rooted in neuropsychological theory and is empirically supported (e.g., Das et al., 1994; Naglieri, 1997; Naglieri & Das, 2005). The Das-Naglieri Cognitive Assessment System (CAS; Naglieri & Das, 1997) was specifically developed to assess the PASS theory of intelligence and was minimally influenced by the above-mentioned psychometric conceptualizations of intelligence. Sternberg's (1985, 1988) Triarchic theory of intelligence emphasizes analytic, creative, and practical aspects of intelligence. The analytic aspect of intelligence consists of information-processing skills that guide intelligent behavior. Sternberg notes a cultural preference for teaching and measuring this specific aspect of intelligence during assessment. Creative aspects of intelligence are relied upon to create an optimal match between one's skills and one's external environment. Lastly, the practical aspect of intelligence permits an individual to capitalize on past experiences to process both novel and unfamiliar information successfully. Sternberg's (1985, 1988) theory values assessing metacomponents (i.e., how an individual sequentially processes information, acts, and learns from experiences), performance components (i.e., problem-solving processes), and knowledge-acquisition components (i.e., intellectual functioning processes) when evaluating intelligence (Sternberg, 2005). Metacomponents trigger performance and knowledge-acquisition components, which in turn produce feedback for metacomponents, optimally making the entire system more effective and efficient over time. Although Sternberg's model is an interestingly complex way to conceptualize intelligence and

cognitive processes, it does not correspond with specific measures and thus has limited clinical utility. Gardner's (1983, 1993, 1999) theory of Multiple Intelligences (MI) posits that eight autonomous intelligences comprise an individual's unique profile of intelligence (Chen & Gardner, 2005). The eight intelligences included in a profile are (1) linguistic, (2) logical-mathematical, (3) musical, (4) spatial, (5) bodily-kinesthetic, (6) naturalistic, (7) interpersonal, and (8) intrapersonal. There is clearly some conceptual overlap between several intelligences identified by Gardner and the CHC model (e.g., logical-mathematical intelligence). An important distinction between MI theory and more traditional views of intelligence is the belief that intelligences emerge in response to what is valued in the specific culture of an individual, not what is innate (Gardner, 1999). Chen and Gardner (2005) stated that MI theory is grounded in empirical data collected across disciplines including biology, neuropsychology, developmental psychology, and cultural anthropology. While no single test evaluates all intelligences, MI theory stresses the importance of using intelligence-fair instruments specific to each of the eight identified intelligences. Visser, Ashton, and Vernon (2006a) recently investigated whether MI abilities were truly distinct and theoretically independent from one another in a large adult sample. They selected two tests for each of the eight MI abilities and performed exploratory factor analyses within a combined correlation matrix of these test variables. Their results did not support eight independent abilities. They interpreted the data as suggesting that six of the eight abilities appeared similarly related to general intelligence (i.e., g), whereas bodily-kinesthetic abilities, and to a lesser degree musical abilities, were

unrelated. Gardner (2006) viewed the Visser et al. (2006a) investigation of MI theory as weak because it relied too heavily on tests that included logical components, which inevitably led to the recovery of a general cognitive factor. He stated that MI theory was created specifically to challenge intelligence structures similar to Visser et al.'s (2006a) results. In response, Visser, Ashton, and Vernon (2006b) stated that it is critically important that Gardner clarify his theory by providing falsifiable, testable, MI-based hypotheses that would predict results different from those predicted by existing [cognitive] models. The remaining conceptualization of intelligence to be elaborated upon was recently put forth by Johnson and Bouchard (2005a): the verbal-perceptual-image rotation (VPR) model. This model evolved from their investigation of three competing intelligence models: Carroll's (1993) three-stratum model, the Cattell-Horn (Cattell, 1971; Horn, 1989) Gf-Gc model, and Vernon's (1964, 1965) verbal-perceptual model. Making use of a large heterogeneous sample, they performed confirmatory factor analysis on a dataset that included 42 mental ability tests. They interpreted the results as first indicating that a verbal-perceptual model was superior to the other two models. However, they improved upon this model by adding a higher-order image rotation factor.2 Thus, their revised model identified verbal (V), perceptual (P), and image rotation (R) factors. They found the VPR model superior to the three alternative models. Johnson and Bouchard (2005b) found further support for the VPR model in their reanalysis of data presented in Thurstone's (Thurstone & Thurstone, 1941) Factorial Studies of Intelligence.

2 The importance of image rotation abilities has been established by prior investigations. For example, this ability has been found to predict success in aviation, engineering, and the physical sciences better than more general or verbal abilities (Gottfredson, 2002; Humphreys & Lubinski, 1996; Shea, Lubinski, & Benbow, 2001); thus, there is support for the meaningfulness of a specific image rotation cognitive ability.

Although their support for the VPR model is impressive, their decision to investigate the precursors to the CHC framework rather than the current, refined CHC model is questionable. An important empirical question is whether the proposed VPR framework provides a better fit to data than the differentiated and integrated CHC framework.

Defining the Gap Between Neuropsychological Assessment and Cognitive Abilities Theory

A gap exists between neuropsychological assessment, which has traditionally been atheoretical, and popular models of intelligence and cognitive abilities, which have been derived via statistical procedures (Brinkman et al., 2005). For example, tasks included in the Halstead-Reitan Neuropsychological Test Battery were selected based on their ability to differentiate between neurological and normal conditions, whereas the CHC framework was derived with factor analytic techniques to synthesize theories that were similarly derived. Reitan and Wolfson (1993) stated that there have not been serious attempts to integrate theories of intelligence with brain functioning. Their comment explicitly defines the void between neuropsychological assessment and cognitive theory. A possible explanation for the lack of integration between neuropsychological assessment and historical conceptualizations of intelligence is that the singular concept of intelligence has minimal clinical utility. For example, many traditional intelligence tests were only moderately sensitive to neurological impairment. Reitan and Wolfson (1993) noted historical accounts of individuals who had experienced significant brain damage yet retained their ability to perform well on measures such as the Stanford-Binet Intelligence Scale. Brinkman et al. (2005) stated that these findings can be generalized to

more recent measures as well (e.g., Wechsler Adult Intelligence Scale Revised [WAIS-R; Wechsler, 1981]; Zillmer, Waechtler, Harris, Khan, & Fowler, 1992). Although a global indication of intelligence is unhelpful in differentiating normal from impaired functioning, a theoretical framework that differentiates between many broad and narrow cognitive abilities would likely have much greater clinical utility. While researchers have shifted their understanding of intelligence to include a distinct set of cognitive abilities, there was a time lag before these clinically useful theories were incorporated into broad intelligence instruments (e.g., Alfonso et al., 2005). Fortunately, many new or revised broad intelligence measures have been developed to correspond with the CHC framework, which increases the clinical utility of the theory. Given the prevalence of CHC theory in new intelligence batteries, it will be advantageous for neuropsychologists to understand how frequently used neuropsychological instruments fit this model. This knowledge would foster more accurate synthesis and interpretation of data collected during a clinical evaluation. Thus, it would be optimal to close the gap between theory and practice. The integration of neuropsychological assessment and CHC theory would benefit practicing clinicians and researchers alike. The broad array of abilities included in the CHC framework seems especially fitting for neuropsychological assessment because these abilities have the potential to be more diagnostically sensitive than historical single- or dual-factor conceptualizations of intelligence. The integration would provide neuropsychologists a new way to conceptualize data obtained during an evaluation. For example, the model could serve as a foundation for selecting measures, organizing collected data, and interpreting results (Alfonso et al.,

2005). Working in the other direction, CHC theory would be tremendously strengthened if the neurological processes underlying broad and specific abilities could be further established. This understanding would go a long way toward bridging the divide between cognitive ability theory and neuropsychological assessment.

Question of Interest

The degree to which neuropsychological measures evaluate dimensions consistent with the CHC framework has not been well established. While there is a growing appreciation of CHC theory in new editions of cognitive batteries (e.g., SB5, WJ-III), more focused neuropsychological measures have not been developed or interpreted in relation to this framework. An integration of CHC theory and clinical practice would strengthen both in several respects. First, it would document which CHC abilities are adequately evaluated by instruments, which would increase the clinical utility of instrument and theory alike. Second, it would draw attention to the vast and complex nature of cognitive abilities, thus increasing the likelihood of adequate assessment. Lastly, it is good science to use a standard nomenclature when describing an individual's cognitive abilities and the constructs evaluated by specific instruments. The creation of a common language across sub-disciplines of psychology can increase collaboration, make research more clinically applicable, and foster progress toward better understanding cognitive abilities. In some respects, the CHC framework has the potential to function similarly to the Five Factor Model of personality (Costa & McCrae, 1992), which has provided a comprehensive and unified taxonomy for studying personality (e.g., see Costa & Piedmont, 2003). The CHC model is a potentially complete taxonomic system that could be used to describe all cognitive abilities.

Chapter II

Methodology

General Methods

The degree to which neuropsychological measures are consistent with the CHC framework was evaluated by conducting a thorough review and analysis, or reanalysis, of datasets in the literature that are suitable for factor analysis. In general, this methodology is analogous to Carroll's (1993) general survey of factor analytic results, although the present study focused solely on frequently used neuropsychological instruments. In short, Carroll collected and reanalyzed factor analytic investigations using both exploratory and hierarchical factor analytic procedures. He interpreted the results and derived his three-stratum theory of intelligence. The present study differs in that resulting factor structures were integrated with the CHC theory of cognitive abilities to bridge a gap between theory and practice.

Survey of Neuropsychological Measures

To make the review and reanalysis manageable, a decision was made regarding which relevant measures would be included in the search. Given that the goal of the project

was to determine how neuropsychological measures correspond with CHC abilities, it was important to ensure that a range of tests typically administered during a neuropsychological evaluation was included. Consistent with specific neuropsychological content areas identified by Lezak et al. (2004) and Strauss et al. (2006), it was important to include tests evaluating the following: (1) achievement, (2) attention/concentration, (3) executive functions, (4) language, (5) memory, (6) motor function, (7) visual perception, (8) intelligence batteries, and (9) general neuropsychological batteries. Because achievement and intelligence tests have already been thoroughly investigated and linked with CHC constructs, these tests were not reinvestigated. Recent neuropsychological assessment practice surveys (Camara, Nathan, & Puente, 2000; Rabin et al., 2005) were consulted to identify frequently used neuropsychological tests. Camara et al. investigated which neuropsychological and psychological tests were most often used by neuropsychologists. Rabin et al. did the same but also investigated the popularity of particular tests for assessing specific domains of functioning. They reported detailed information regarding the popularity of (1) neuropsychological assessment tests across all domains, (2) memory tests, (3) attention tests, and (4) executive functioning tests. The top 20 most frequently used tests identified in each survey were targeted in the present review.3 These tests were assigned to one of the nine content areas mentioned above, with the goal being to include three tests in each category. However, Strauss et al.'s (2006) neuropsychological test compendium was also consulted to select additional relevant tests when more were needed in a specific domain

3 Camara et al. (2000) identified the MMPI-2, Beck Depression Inventory, and Rorschach Inkblot Test as the 1st, 11th, and 18th most frequently used tests by neuropsychologists.
The MMPI-2 was identified as the 11th most frequently used test by Rabin et al. (2005). These measures were not included because they are specific to psychopathology and personality.

(e.g., Visual Perception).4 The complete list of targeted tests is presented in Table 1. A total of five tests are listed in the last category because all were listed among the top 20 general instruments in the Rabin et al. (2005) survey.

4 If possible, tests with referenced factor analytic research were preferentially selected.

Table 1

Targeted neuropsychological tests and batteries: Rank-order popularity and percent of respondents who use each test

Each entry lists the test followed by three values: global rank-order popularity in Camara et al. / global rank-order popularity in Rabin et al. / percent who use the test for the domain (a), per Rabin et al.

Achievement
- Wide Range Achievement Test / Rev. / 3rd Ed. / 4th Ed. (Jastak & Jastak, 1978; Jastak & Wilkinson, 1984; Wilkinson, 1993; Wilkinson & Robertson, 2006): 9 / 18 / --
- Wechsler Individual Achievement Test / 2nd Ed. (Psychological Corporation, 1992, 2002): -- / 20 / --
- Woodcock-Johnson Achievement Test Rev. / 3rd Ed. (Woodcock & Johnson, 1989b; Woodcock, McGrew, & Mather, 2001b): -- / -- / --

Attention/Concentration
- Trail Making Test (Reitan & Wolfson, 1985, 1993): 4 / 3 / 63.7
- Paced Auditory Serial Addition Test (Gronwall, 1977): -- / 35 / 43.6
- Stroop Color-Word Test (Stroop, 1935; Jensen & Rohwer, 1966): -- / 30 / 31.5

Executive Functions
- Wisconsin Card Sorting Test (Heaton, 1981; Heaton et al., 1993): 12 / 7 / 75.5
- Halstead Category Test (Reitan & Wolfson, 1985, 1993): 9 / 16 / 40.1
- Controlled Oral Word Association / FAS Word Fluency Test (Benton & Hamsher, 1978, 1989; Benton, Hamsher, Rey, & Sivan, 1994): 5 / 19 / 23.5

Language
- Boston Naming Test / 2nd Ed. (Kaplan, Goodglass, & Weintraub, 1978, 1983, 2001): 8 / 13 / --
- Aphasia Screening Test (Halstead & Wepman, 1959; Reitan & Wolfson, 1985, 1993): 17 / 35 / --
- Multilingual Aphasia Examination (Benton & Hamsher, 1978, 1989; Benton, Hamsher, Rey, & Sivan, 1994): -- / -- / --

Memory
- Wechsler Memory Scale Rev. / 3rd Ed. (Wechsler, 1987, 1997): 3 / 2 / 70.8
- Rey Auditory Verbal Learning Test / California Verbal Learning Test / 2nd Ed. (Rey, 1964; Delis, Kramer, Kaplan, & Ober, 1987, 2000): 14 / 4 / 21.3 (54.3)
- Wide Range Assessment of Memory and Learning / 2nd Ed. (Adams & Sheslow, 1990, 2003): -- / 13 / --

Motor Function
- Finger Tapping Test (Reitan, 1969; Reitan & Wolfson, 1985, 1993): 6 / -- / --
- Hand Dynamometer (Reitan & Davidson, 1974; Reitan & Wolfson, 1985, 1993): 20 / -- / --
- Grooved Pegboard (Matthews & Klove, 1964): 15 / -- / --

Visual Perception
- Hooper Visual Organization Test (Hooper, 1958, 1983): 19 / -- / --
- Rey-Osterrieth Complex Figure Test (Rey, 1941; Osterrieth, 1944; Corwin & Bylsma, 1993; Meyers & Meyers, 1995): 12 / 8 / --
- Judgment of Line Orientation (Benton, Varney, & Hamsher, 1978): -- / -- / --

Intelligence Batteries
- Wechsler Adult Intelligence Scale Rev. / 3rd Ed. (Wechsler, 1981; Psychological Corporation, 1997): 2 / 1 / 63.1 (b)
- Wechsler Intelligence Scale for Children 3rd / 4th Ed. (Wechsler, 1991, 2003a, 2003b): 16 / 5 / 15.9 (b)
- Woodcock-Johnson Rev. / 3rd Ed. (Woodcock & Johnson, 1989a; Woodcock et al., 2001a): -- / 11 / 4.4 (b)

Neuropsychological Batteries
- Halstead-Reitan Neuropsychological Battery (Reitan & Wolfson, 1985, 1993): 7 / 6 / 15.5 (b)
- NEPSY / NEPSY-II (Korkman, Kirk, & Kemp, 1998, 2007): -- / 11 / 4.4 (b)
- Luria-Nebraska Neuropsychological Battery (Golden, Purisch, & Hammeke, 1985): -- / 16 / 3.3 (b)
- Dementia Rating Scale / 2nd Ed. (Mattis, 1988; Jurica, Leitten, & Mattis, 2001): -- / 15 / 3.4 (b)
- Neurobehavioral Cognitive Status Examination (NCSE) / Cognistat (Northern California Neurobehavioral Group, Inc., 1988): -- / 20 / 2.3 (b)

Note. A bolded test name indicates the measure appeared in at least one of the two usage surveys. (a) Percent of respondents who use the measure in the respective domain. (b) Percent of respondents who indicated using the measure as a general neuropsychological assessment instrument.

The present study aimed to locate five factor analytic studies for each test. Given the comprehensiveness of Carroll's (1993) review, which included many of the factor analytic investigations of cognitive abilities published prior to 1985, articles published after this date were initially targeted. Relevant studies were identified and retrieved with four primary search strategies. First, neuropsychological test manuals were obtained, if possible, and reviewed to determine whether relevant data were reported. Second, a literature review was conducted in PsycINFO using the test name in combination with the following keyword combinations: factor, component, or internal, paired with analysis, analytic, or structure. Relevant articles were reviewed to determine whether the authors had referenced additional pertinent studies. Third, the texts of Lezak et al. (2004), Strauss et al. (2006), and Mitrushina, Boone, Razani, and D'Elia (2005) were examined to locate factor analytic investigations of the targeted measures. Fourth, if the previously described search strategies did not identify relevant studies, Carroll's (1993) datasets were reviewed to determine whether it would be feasible to include studies from his comprehensive review. In the event that these search strategies did not locate five studies, an additional literature search was conducted using the Social Science Citation Index to identify all studies that reference the seminal citations for the tests listed in Table 1. Abstracts of these studies were reviewed to determine whether there was adequate reason to believe factor analytic methodology or the associations between tests/test variables were included. Relevant articles were obtained for further review. Strong inclusion criteria are an essential component of a thorough review. For an article to be retained, it must have included one or more of the instruments reported in Table 1. It was not crucial that the investigation include all tests within a battery. This

decision led to the retention of studies that included selected combinations of subtests because of clinical relevance (e.g., administering WAIS-III Picture Completion and Block Design when perceptual difficulties are assumed). Additionally, investigations were retained if they included a combination of targeted and non-targeted instruments. If an investigation included all targeted subtests plus other test variables, or just a subset of the targeted subtests, it is possible that resulting factor structures might be similar to those observed when the full measure is investigated independently. More than five studies were identified for some tests, and it was necessary to determine which would be included in the project. Several considerations guided this selection. Studies with large sample sizes were preferred over those with smaller sample sizes. Additionally, articles that provided a correlation matrix of the data were selected over those for which an approximated correlation matrix would have to be generated (described in detail below). These preferences increased the likelihood that resulting factor structures would be replicable across samples. Studies using current test versions were selected over those using prior editions because of clinical relevance. Further, different types of samples were targeted; however, there was a preference for at least one article to make use of a normative sample. Lastly, the impact rating of the journal in which an article was published was considered. All other factors being equal, it was preferable that articles come from journals with high impact ratings, which suggests higher quality.

Data Collection

Generally consistent with Carroll (1993), the following relevant information was recorded from each study: (1) full citation, (2) sample size, (3) percentage of men included in the sample, (4) mean age, (5) additional important sample characteristics,5 (6) targeted tests included in the dataset, (7) total number of variables analyzed, (8) method of analysis, (9) extraction criteria, (10) number of factors retained, (11) factor rotation, (12) higher-order analyses, and (13) the type of data available. It was documented when any of this information was not available. For reanalysis, studies must include either (1) a correlation matrix describing the associations between test variables or (2) a pattern/factor matrix describing the relationships between test variables and identified dimensions. The latter information was used to generate an approximated/reproduced correlation matrix (e.g., see Tabachnick & Fidell, 2001; Watson & Tellegen, 1985). These matrices were created by multiplying the factor matrix by its transpose (Goldberg & Velicer, 2006). In other words, the columns in the original matrix become the rows in a second matrix, the rows in the original matrix become the columns in the second matrix, and then the two matrices are multiplied. Goldberg and Velicer stated that the difference between an actual and a reproduced correlation matrix decreases as more factors are extracted because residual matrix values decrease. Thus, there is variability in the accuracy of approximated correlation matrices.6

5 For example, specific medical conditions and/or diagnoses are imperative to note. If a sample comprises individuals who have recently experienced an acute injury (e.g., head injury or stroke) or have a chronic illness, the duration between diagnosis and evaluation is noteworthy given corresponding neurological changes. This information may be important in understanding resulting factor structures.
6 Given this limitation, authors were contacted and asked to provide a correlation matrix reporting the associations between test variables. If this information was not provided, an approximated correlation matrix was then generated.
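The matrix multiplication used to reproduce a correlation matrix can be sketched in a few lines. The following Python/NumPy sketch is illustrative only, not the study's actual procedure: the function name and the four-variable, two-factor loadings are hypothetical, and the sketch assumes an orthogonal factor solution so that the loading matrix times its transpose approximates the correlations, with the diagonal reset to unity.

```python
import numpy as np

def reproduced_correlation_matrix(loadings):
    """Approximate a correlation matrix from a factor (pattern) matrix.

    Multiplies the loadings matrix by its transpose (the procedure
    described by Goldberg & Velicer, 2006), then resets the diagonal
    to 1.0 so the result resembles an ordinary correlation matrix.
    """
    loadings = np.asarray(loadings, dtype=float)
    r_approx = loadings @ loadings.T      # (tests x factors) @ (factors x tests)
    np.fill_diagonal(r_approx, 1.0)       # unit self-correlations
    return r_approx

# Hypothetical two-factor loadings for four test variables
factor_matrix = [[0.8, 0.1],
                 [0.7, 0.2],
                 [0.1, 0.9],
                 [0.2, 0.6]]
r = reproduced_correlation_matrix(factor_matrix)
```

Because the residual matrix shrinks as more factors are extracted, reproductions built from solutions with few retained factors will be coarser, which mirrors the accuracy caveat noted in the text.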

In the event that a study included confirmatory rather than exploratory factor analytic procedures, the corresponding author was contacted to determine whether relevant data might be provided. When these data could not be obtained, the article was excluded from the review.

Reanalysis of Data: Exploratory Factor Analytic Methods

The following factor analytic methods were used to reanalyze all data. Consistent methodology is imperative because different techniques can arbitrarily alter findings (Taylor, 1994). Broadly defined, factor analytic methods pertain to three decisions: the method of analysis (e.g., factor analysis versus principal components analysis), the number of factors to retain, and the method of factor rotation. Each of these issues will be described in detail. In practice, whether traditional principal axis factor analysis (PAFA) or principal components analysis (PCA) is conducted within a dataset often makes little difference (Fava & Velicer, 1992; Goldberg & Velicer, 2006; Velicer, Eaton, & Fava, 2000; Velicer & Jackson, 1990). The difference between these approaches is that PAFA extracts factors that explain shared or common variance among the variables, whereas PCA extracts components that explain all the variance (including error variance) among the variables. Differences between the approaches are most likely to emerge when factor definition is weak, potentially because of over-extraction (Goldberg & Velicer, 2006; Velicer & Jackson, 1990). Carroll (1993) used a combination of PAFA and PCA in his review. He first conducted PCA within datasets, applying the scree test and Kaiser's rule (1960) to determine the appropriate number of dimensions to retain. He noted his decision to

further analyze and interpret PAFA factors was "contrary to a considerable body of opinion" (p. 84). However, he believed PCA solutions potentially included inflated factor loadings because they contained specific as well as common variance, which may in turn influence resulting impressions of the data. In contrast to Carroll, the present study utilized only PCA when determining the underlying structure of the data. Given that only substantial factors were retained, it is likely that factors derived from PCA or PAFA would have been similar. Additionally, accurate retention procedures have been developed specifically for PCA as opposed to PAFA (Goldberg & Velicer, 2006); thus, choosing PCA over PAFA increased methodological consistency across analyses. Factor retention decisions are crucially important when conducting factor analysis. Traditionally, the scree test and Kaiser's rule (1960) were believed to satisfactorily determine the number of substantial factors. However, the use of these procedures is problematic in several respects. Kaiser's rule posits that factors should be retained if the corresponding eigenvalue is > 1.00. This methodological decision potentially leads to over-extraction because the number of factors retained is more strongly influenced by the number of variables analyzed than by the underlying structure of the data (see Velicer et al., 2000). Typically, the number of eigenvalues greater than one is approximately equal to one-quarter or one-third of the total number of variables analyzed (Goldberg & Velicer, 2006). Theoretically, the scree test is less problematic. The procedure is based on the belief that the point where a line of plotted eigenvalues breaks or levels off suggests the number of factors to retain. Eigenvalues above this point are identified as common factors, while those below represent error or unique variance (see Velicer et al., 2000). Although the scree test generally performs well

and is accurate with sufficiently large samples and strong factors (Zwick & Velicer, 1982), it relies on a subjective decision about where the leveling off occurs, which in turn potentially leads to different methodological decisions with no strong rationale. Ultimately, the scree test is recommended only as an adjunct to more appropriate methods (Velicer et al., 2000). Given the limitations of these two retention procedures, the present investigation incorporated three alternative extraction criteria, with the goal being to neither under- nor over-extract substantial dimensions. The first of these procedures is parallel analysis (PA), which was also used by Carroll (1993). PA involves creating random parallel data matrices with identical numbers of variables and participants as the actual data. Mean eigenvalues derived from analyses of the random data matrices are compared with those obtained from the actual data. For a factor to be retained, the actual eigenvalue must be larger than the corresponding PA-generated eigenvalue. It is more appropriate to compare obtained eigenvalues with the 95th percentile of PA-generated eigenvalues rather than the mean, because the latter slightly overestimates the number of factors to retain (Cota, Longman, Holden, Fekken, & Xinaris, 1993; Glorfeld, 1995; Longman, Cota, Holden, & Fekken, 1989). Consistent with these recommendations, the present study compared obtained eigenvalues with the 95th percentile of PA eigenvalues when determining how many factors to retain. O'Connor's (2000) syntax was used to conduct PA. The second retention procedure is Velicer's (1976) minimum average partial (MAP) procedure, which considers the average partial correlation matrix after sequentially extracting individual components. When a dimension that contains common variance is extracted, the associations between the remaining, now partialled

variables decrease, and MAP values decrease as well. However, when a dimension is extracted that contains variance attributed uniquely to one variable, the associations between the residualized variables increase, and MAP values thus also increase. The number of components to retain is identified when the average partial correlation reaches its minimum value. O'Connor's (2000) syntax was used to conduct the MAP.

While PA and the MAP are more successful at determining the appropriate number of factors to retain than Kaiser's rule (1960) and the scree test (e.g., Hubbard & Allen, 1987; Velicer et al., 2000; Zwick & Velicer, 1982; Zwick & Velicer, 1986), the remaining retention procedure was incorporated as further protection against over-extraction. Datasets were investigated in combination with random variables to determine at what point a dimension emphasizes randomness (i.e., has one pattern matrix loading ≥ .40), which would clearly suggest over-extraction had occurred. This procedure has been recommended by both Gorsuch (1983) and Wood, Tataryn, and Gorsuch (1996), and it suggests the appropriate number of factors to retain during PCA is one fewer dimension than the first instance in which random error defines a factor. Wood et al. (1996) recommended that the number of random variables included in a matrix be in proportion to the number of actual test variables: when the number of true variables in the dataset is 12 or fewer, the same number of random variables should be introduced into the dataset; when the number of true variables is between 13 and 72, they suggested including 12 random variables; and if the dataset includes more than 72 variables, the number of random variables should equal one-sixth the total number of genuine variables. Factor structures were investigated after including the appropriate number of random variables in the correlation matrix.
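The two retention procedures described above can be sketched in a few lines of code. The following Python/NumPy sketch is illustrative only; it is not O'Connor's (2000) SPSS syntax, and the function names and defaults are hypothetical:

```python
import numpy as np

def parallel_analysis(data, n_iter=500, percentile=95, seed=0):
    """Horn's parallel analysis: keep the k-th component only while its
    observed eigenvalue exceeds the chosen percentile of the k-th
    eigenvalues obtained from random data of the same shape."""
    rng = np.random.default_rng(seed)
    n, p = data.shape
    obs = np.sort(np.linalg.eigvalsh(np.corrcoef(data, rowvar=False)))[::-1]
    rand = np.empty((n_iter, p))
    for i in range(n_iter):
        x = rng.standard_normal((n, p))
        rand[i] = np.sort(np.linalg.eigvalsh(np.corrcoef(x, rowvar=False)))[::-1]
    thresholds = np.percentile(rand, percentile, axis=0)
    n_retain = 0
    for o, t in zip(obs, thresholds):
        if o <= t:
            break
        n_retain += 1
    return n_retain

def map_test(R):
    """Velicer's minimum average partial (MAP): compute the average
    squared partial correlation after partialling out the first m
    components; retain the m at which this average reaches its minimum."""
    p = R.shape[0]
    vals, vecs = np.linalg.eigh(R)
    order = np.argsort(vals)[::-1]
    vals = np.clip(vals[order], 0, None)     # guard tiny negative eigenvalues
    loadings = vecs[:, order] * np.sqrt(vals)
    off_diag = ~np.eye(p, dtype=bool)
    avg_sq = []
    for m in range(1, p - 1):
        L = loadings[:, :m]
        resid = R - L @ L.T                   # residual covariance after m components
        d = np.sqrt(np.diag(resid))
        partials = resid / np.outer(d, d)     # rescale to partial correlations
        avg_sq.append(np.mean(partials[off_diag] ** 2))
    return int(np.argmin(avg_sq)) + 1
```

With data containing two strong uncorrelated factors, both procedures converge on retaining two dimensions, mirroring the agreement between PA and the MAP that was typical for well-structured datasets.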

In many instances these three procedures suggested retaining different numbers of factors. When there was disagreement, the following steps were taken to determine the appropriate number of factors to retain. First, the least differentiated factor structure suggested by the MAP or PA was specified when the combination of genuine and random variables was analyzed, to ensure that random variables did not define a factor. If permissible (i.e., no random variable pattern matrix loading ≥ .40), more complex factor structures were investigated until either (1) a dimension was defined by random variables or (2) a dimension was not defined by genuine variables.

Selecting orthogonal or oblique rotation is an important methodological decision when conducting factor analysis. Orthogonal rotation keeps dimensions uncorrelated with one another, while oblique rotation permits dimensions to be related. In practice, choosing orthogonal or oblique rotation may make a large difference when there are substantial correlations between factors; it makes little difference if dimensions are essentially uncorrelated after rotation. Consistent with Carroll's (1993) methodology, the present study made use of oblique rotation.

Integration of Factors and CHC Theory

The present study differs from Carroll's (1993) review most significantly in that his work led to a new conceptualization of intelligence, whereas this review evaluated the degree to which neuropsychological measures correspond with the CHC framework. Prior to reanalysis, all specific test variables were linked with the narrow CHC ability constructs that most closely matched the task. For example, a memory task that requires recalling one part of a previously learned unrelated word pair when the other part is

presented (e.g., WMS-III Verbal Paired Associates subtest) reflects the narrow ability associative memory (Glr-MA). Test variable descriptions presented in the literature (e.g., Strauss et al., 2006; specific test manuals) were reviewed to ensure there was an accurate understanding of a task prior to matching it with corresponding narrow CHC abilities. The most current description of CHC abilities (McGrew, 2005) was referenced when determining the link between test variables and narrow CHC abilities. It was also helpful to consult the work of Carroll (1993, 1997), Horn and Blankson (2005), and Woodcock (1990) when there was ambiguity in determining connections between tasks and CHC constructs. Test variables were assigned to CHC ability constructs twice, once at the start of data collection and once after.

Identifying connections between tasks and narrow CHC abilities was necessary because it permitted observed factors to be interpreted as reflecting (1) a specific narrow ability, (2) a broad ability identified by the CHC framework (i.e., a combination of narrow abilities subsumed by a single broad CHC ability), or (3) a broad ability that was not identified by the CHC framework (i.e., a dimension of general cognitive ability that includes a combination of narrow abilities that are not subsumed by a single broad CHC construct).

The linking of test variables and narrow CHC abilities was inferred logically, although the process was guided by empirical literature when possible. There is a great deal of research illustrating the relationship between targeted intelligence and achievement subtests and CHC constructs (e.g., Alfonso et al., 2005; Flanagan et al., 2000; Flanagan et al., 2002; McGrew, 1997; McGrew & Flanagan, 1998). However, there

was a limited amount of literature describing the relationship between neuropsychological measures and CHC constructs (McGrew & Flanagan, 1998). Three examples are provided that illustrate the sequential procedures followed when determining the connection between neuropsychological tasks and narrow abilities, and ultimately how observed factors were integrated with CHC theory.

The first example is a dataset that included the WISC-IV Vocabulary, Information, Similarities, and Comprehension subtests. First, each of these test variables was matched with narrow CHC abilities. McGrew and Flanagan (1998) have already defined each of these subtests and matched them with corresponding narrow CHC abilities. The Vocabulary 7 and Similarities 8 subtests reflect the narrow abilities of language development (Gc-LD) and lexical knowledge (Gc-VL). The Comprehension 9 subtest reflects the narrow abilities of language development (Gc-LD) and general information (Gc-KO), whereas the Information 10 subtest reflects only general information (Gc-KO). Each of these narrow abilities is subsumed by the broad ability Gc (see Figure 1). If reanalysis of this dataset produced a single WISC-IV dimension defined by these subtests, it would be apparent that it reflects the broad ability Gc (see Alfonso et al., 2005). Prior knowledge of the link between tests and narrow CHC ability constructs was used to better understand how additional measures are related to the theory. For example, extending the previous illustration, a dataset may include the four above-mentioned WISC-IV subtests and a measure that requires an individual to listen to and comprehend

7 The Vocabulary subtest requires the examinee to define orally presented words.
8 The Similarities subtest requires the examinee to orally explain how a pair of words is similar.
9 The Comprehension subtest requires the examinee to respond orally to an orally presented problem that is related to everyday problems or an understanding of social rules and concepts.
10 The Information subtest requires the examinee to respond to orally presented questions that pertain to common events, objects, places, and people.

oral communications (i.e., MAE Token Test). First, this task would be linked with the narrow construct listening ability (Gc-LS). This task-narrow ability link could then be confirmed based on the results of reanalysis. If a dimension reflected the WISC-IV Vocabulary, Similarities, Comprehension, and Information subtests, as well as the listening comprehension task, it would be interpreted as reflecting Gc because the specific WISC-IV subtests are considered adequate markers or anchors of Gc. Thus, the narrow ability assigned to the listening comprehension task should also be subsumed by Gc.

The last example considers the connection between the Rey-Osterrieth Complex Figure Test (ROCFT; Meyers & Meyers, 1995; initial recall component) and narrow CHC abilities. McGrew and Flanagan's (1998) Intelligence Test Desk Reference (ITDR) was helpful to consider in this instance because it provides narrow and broad ability classifications for various non-intelligence tests. The ITDR was helpful to the extent that there was conceptual overlap between the tests it describes and those targeted in the present review. The ROCFT is similar to the Wide Range Assessment of Memory and Learning (WRAML) Design Memory subtest, which is included in the ITDR. Each task requires an individual to replicate a design from memory. McGrew and Flanagan stated that the WRAML Design Memory subtest reflects the narrow ability visual memory (Gv-MV). 11 Thus, based on the previous classification of a very similar test, it is logical to hypothesize that the ROCFT also reflects Gv-MV. If the ROCFT had high loadings on a dimension that included additional visual memory tests, the dimension could be

11 The narrow ability visual memory reflects the ability to form and store a mental representation or image of a visual shape or configuration during a brief study period and then recognize or recall it later during the test phase.

interpreted as a marker of Gv-MV, and the logically inferred link between the task and CHC theory would be confirmed.

Hierarchical Exploratory Factor Analysis

In some instances reanalysis identified differentiated factor structures that emphasized distinct narrow abilities as separate factors. It was possible to investigate whether these distinct narrow ability factors were meaningfully related to one another by conducting hierarchical exploratory factor analysis (HEFA), which determined whether they hang together under a higher-order broad CHC ability factor. Carroll (1993) conducted similar analyses to create his three-stratum theory of cognitive abilities. HEFA is simple to conduct if the correlations between factors are known. This information was available because reanalysis of all datasets yielded obliquely rotated factors. The correlations between sets of factors were subjected to PCA in the same manner as the correlations between test variables.

Factorial Invariance

The underlying dimensions of specific measures were expected to be similar across samples; thus, factorial invariance was anticipated. Factorial invariance is required when determining whether similar constructs are measured by an instrument under different conditions (e.g., Taub & McGrew, 2004). It is a fundamental testing assumption, which is imperative for the validity of diagnosis and assessment (e.g., Bejar, 1980; Byrne, Shavelson, & Muthén, 1989; Cunningham, 1991; Labouvie, 1980; Meredith, 1993; Rock, Werts, & Flaugher, 1978). Historically, researchers have conceptualized factorial invariance in discrepant ways. For example, some have believed identical variance/covariance matrices are required (e.g., Jöreskog, 1971; Sörbom &

Jöreskog, 1976), whereas a more practical approach requires configural invariance (i.e., test variables across samples have consistent loadings that are either significantly or minimally associated with a given dimension; Horn, McArdle, & Mason, 1983; Thurstone, 1947). The latter, less stringent, approach was consistent with expectations for the current study.

Assessment researchers have questioned whether instruments are invariant across populations because individuals with specific diagnoses, localized lesions, or behavioral tendencies perform in markedly different ways (e.g., have lower mean-level performance) compared to non-impaired or normative samples. For example, Demakis (2006) questioned whether the executive functioning dimensions identified in a mixed neuropsychiatric group (i.e., Boone, Pontón, Gorsuch, Gonzalez, & Miller, 1998) would hold up in additional populations. O'Connor (2002) posed a similar question in the area of personality. He investigated the factorial invariance of 37 different personality and psychopathology measures and found similar factor structures across clinical and normative samples. Taub and McGrew (2004) provided an additional example of factorial invariance that is more relevant to neuropsychological assessment. They found the WJ-III factor structure to be similar across age groups spanning from age 6 to 90+ years. Similarly, Hill, Reddon, and Jackson (1985) stated that Wechsler measures appear to evaluate comparable dimensions across different samples (e.g., age groups; brain injured versus normative). Although Hill et al.'s review was published prior to the development of the WAIS-III, Jones, van Schaik, and Witts (2006) reported factorial invariance across samples for that version as well. Factorial invariance for more specific instruments such as the WCST has also been reported; the WCST has been found to have

a similar three-dimensional structure across different types of samples (Greve, Stickle, Love, Bianchini, & Stanford, 2005). Thus, there is ample evidence suggesting that instruments should work similarly across populations.

Although invariance is generally expected, there are instances when the same instrument has produced different structures across samples. For example, while Jones et al. (2006) reported the WAIS-III factor structure was invariant across many samples, their sample of low-functioning individuals (IQ ≤ 74) produced a two-dimensional structure, which was inconsistent with prior studies that produced four-dimensional structures. It is important to understand what factors may contribute to incongruent dimensional structures between samples. Possible explanations include sample-specific characteristics and methodological limitations (e.g., using unsophisticated factor retention procedures).

The dimensional structure of a measure could appear incongruent across samples in two different ways. First, it is readily apparent there is a lack of congruency when one sample produces more dimensions than another (e.g., a four-dimensional structure in one dataset and a two-dimensional structure in another). Second, two samples may produce the same number of dimensions, but the dimensions may be defined by different test variables. This incongruence may be genuine or it may be the result of trivially small differences in the data that cause the factor axes to be positioned quite differently (e.g., see McCrae, Zonderman, Costa, Bond, & Paunonen, 1996). In the latter situation, the same multidimensional space is represented in both solutions even though the rotation of the factor axes masks this congruence.

Barrett (2005) created software to conduct orthogonal vector matrix comparisons, or maximally congruent orthogonalized factor comparisons. This program was used to evaluate whether seemingly inconsistent factor structures were in fact congruent. His program investigates the congruence between unadjusted target and comparison factor matrices. These procedures simultaneously rotate the full complement of orthogonal factors in a comparison matrix to maximal alignment with a specified target matrix. Optimal alignment is determined by the least squares criterion, such that the program seeks to minimize the sums of squared deviations between the factor coefficients in the comparison and target matrices. Factors are not aligned one by one; rather, orthogonal structure is maintained across all dimensions. The rotational method does not distort the original data or artificially align factors in the absence of genuine congruence. Ultimately, these procedures determined whether seemingly incongruent m-dimensional structures were congruent.
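The core of this comparison procedure can be expressed as a least-squares orthogonal Procrustes rotation. The sketch below is a generic analogue of the approach just described, not Barrett's (2005) actual software; the Tucker congruence helper is one common way to quantify how similar the aligned factors are:

```python
import numpy as np

def procrustes_align(target, comparison):
    """Rotate the full comparison loading matrix toward the target with a
    single orthogonal transformation, minimizing the sum of squared
    deviations between corresponding loadings (least squares criterion).
    All factors are rotated simultaneously, so orthogonal structure is
    preserved across dimensions."""
    u, _, vt = np.linalg.svd(comparison.T @ target)
    T = u @ vt                     # optimal orthogonal rotation matrix
    return comparison @ T

def tucker_congruence(a, b):
    """Tucker's congruence coefficient for each matched pair of factor
    columns; values near 1.0 indicate essentially identical factors."""
    num = np.sum(a * b, axis=0)
    den = np.sqrt(np.sum(a ** 2, axis=0) * np.sum(b ** 2, axis=0))
    return num / den
```

Two solutions that span the same multidimensional space but have differently positioned factor axes will, after this alignment, show near-perfect congruence, whereas genuinely different structures will not, since the rotation cannot manufacture agreement that is not present in the data.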

Chapter III

Results

Test Classification by CHC Construct and Reliability

Although some neuropsychological tests had previously been linked to specific CHC ability constructs (e.g., a confrontation naming task was linked to Gc-VL, Gc-KO, and Gc-LD), it was necessary to match CHC ability constructs with the remaining targeted measures. Based on my understanding of the cognitive requirements needed to successfully complete a test scale, I identified which specific CHC constructs were most relevant. To assess the reliability of my judgments, this process was completed twice, after an interval of at least 5 weeks. The second set of ratings was made independently of the original ratings. On both occasions the same CHC definitions and test descriptions were used to classify measures. Intrarater reliability of my ratings was quantified by computing coefficient kappa, which corrects for chance agreement. Targeted measures that had previously been linked to CHC constructs were not included in reliability analyses.

Cicchetti (1994) recommended the following benchmarks for interpreting kappa coefficients: fair > .40, good > .60, excellent > .75. In relation to these benchmarks, the reliability of linking test variables with broad CHC constructs was excellent (see Table 2; mean kappa = .92; kappa range = .80 to 1.00). It is not surprising that reliability coefficients, while still generally excellent, were slightly lower when test variables were linked to the more narrow CHC constructs (see Table 2; mean kappa = .90; kappa range = .39 to 1.00).

Because base rates are important to understanding kappa values, they are also presented in Table 2. Base rates influence kappa coefficients dramatically, and it is challenging to document reliability when there are few opportunities to classify a test as being related to a construct. For example, the lowest kappa coefficient (Glr-NA kappa = .39) occurred because 1 of 110 test variables was linked to the CHC construct Glr-NA during the first rating (109 times it was determined the construct was not relevant), whereas during the second rating 4 of 110 test variables were linked to this specific construct (106 times it was determined the construct was not relevant) based on my better understanding of the narrow CHC construct.

The reliability of ratings for 5 narrow CHC constructs was only fair to good (Gc-CM = .66; Gv-CS = .66; Gv-SS = .66; Gv-LE = .66; Glr-NA = .39), and ratings and inferences based on these CHC constructs should be treated as tentative. Disagreements between ratings were resolved by carefully reviewing (1) the requirements of the test being rated and (2) the definitions and descriptions of CHC constructs that were linked to the specific test variable.
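Coefficient kappa for this kind of two-occasion dichotomous rating follows directly from the 2 x 2 agreement table. The sketch below is illustrative; the cell counts for the Glr-NA example assume the single Time 1 link was among the four Time 2 links, an assumption that reproduces the reported value of .39:

```python
def cohens_kappa(a, b, c, d):
    """Cohen's kappa from a 2x2 agreement table:
    a = construct linked at both ratings, d = linked at neither,
    b = linked at Time 1 only, c = linked at Time 2 only."""
    n = a + b + c + d
    p_obs = (a + d) / n                                      # raw agreement
    p_exp = ((a + b) * (a + c) + (c + d) * (b + d)) / n**2   # chance agreement
    return (p_obs - p_exp) / (1 - p_exp)

# Glr-NA example from the text: 1 of 110 variables linked at Time 1 and
# 4 of 110 at Time 2; assuming the single Time 1 link was among the four
# Time 2 links gives a = 1, b = 0, c = 3, d = 106.
kappa = cohens_kappa(1, 0, 3, 106)
```

Note how severe the base-rate penalty is here: the two ratings agree on 107 of 110 variables (97% raw agreement), yet because "not relevant" is overwhelmingly the expected response by chance alone, kappa falls to roughly .39.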

Table 2. Intrarater reliability of CHC constructs

CHC Construct: kappa / Freq. of Rating Time 1 / Freq. of Rating Time 2 (narrow constructs indented under their broad construct; n/a = kappa not computable)

Fluid Intelligence- Gf: .91 / 32 / 32
  General Sequential Reasoning- RG: .91 / 21 / 22
  Induction- I: .95 / 10 / 11
  Quantitative Reasoning- RQ: .96 / 14 / 13
  Piagetian Reasoning- RP: n/a / 0 / 0
  Speed of processing- RE: .74 / 4 / 4
Quantitative Knowledge- Gq: .94 / 21 / 23
  Math Knowledge- KM: .91 / 12 / 14
  Math Achievement- A3: 1.00 / 4 / 4
  Quantitative Reasoning- RQ: .96 / 14 / 13
Crystallized Intelligence- Gc: .86 / 38 / 37
  Language Development- LD: .84 / 14 / 14
  Lexical Knowledge- VL: .79 / 4 / 6
  Listening Ability- LS: .91 / 19 / 20
  General Information- KO: .94 / 20 / 18
  Information About Culture- K2: n/a / 0 / 0
  General Science Information- K1: n/a / 0 / 0
  Geography Achievement- A5: n/a / 0 / 0
  Communication Ability- CM: .66 / 3 / 3
  Oral Production and Fluency- OP: 1.00 / 1 / 1
  Grammatical Sensitivity- MY: n/a / 0 / 0
  Foreign Language Proficiency- KL: n/a / 0 / 0
  Foreign Language Aptitude- LA: n/a / 0 / 0
Reading and Writing- Grw: 1.00 / 16 / 16
  Reading Decoding- RD: 1.00 / 9 / 9
  Reading Comprehension- RC: 1.00 / 3 / 3
  Verbal Language Comprehension- V: n/a / 0 / 0
  Cloze Ability- CZ: 1.00 / 1 / 1
  Spelling Ability- SG: 1.00 / 9 / 9
  Writing Ability- WA: 1.00 / 1 / 1
  English Usage Knowledge- EU: n/a / 0 / 0
  Reading Speed- RS: n/a / 0 / 0
Short-Term Memory- Gsm: .97 / 24 / 23
  Memory Span- MS: .88 / 15 / 14
  Working Memory- MW: .94 / 10 / 9
  Learning Abilities- L1: 1.00 / 3 / 3
Visual Processing- Gv: .94 / 33 / 36
  Visualization- Vz: .87 / 8 / 8
  Spatial Relations- SR: .94 / 9 / 10
  Visual Memory- MV: .96 / 17 / 16
  Closure Speed- CS: .66 / 1 / 2
  Flexibility of Closure- CF: .80 / 2 / 3
  Spatial Scanning- SS: .66 / 1 / 2
  Serial Perceptual Integration- PI: n/a / 0 / 0
  Length Estimation- LE: .66 / 1 / 2

  Perceptual Illusions- IL: 1.00 / 3 / 3
  Perceptual Alternations- PN: 1.00 / 4 / 4
  Imagery- IM: n/a / 0 / 0
Auditory Processing- Ga: .95 / 10 / 11
  Phonological Coding Analysis- PC-A: 1.00 / 2 / 2
  Phonological Coding Synthesis- PC-S: 1.00 / 2 / 2
  Speech Sound Discrimination- US: .80 / 2 / 3
  Resistance to Auditory Stimulation Distortion- UR: n/a / 0 / 0
  Memory for Sound Patterns- UM: 1.00 / 2 / 2
  General Sound Discrimination- U3: 1.00 / 3 / 3
  Temporal Tracking- UK: n/a / 0 / 0
  Musical Discrimination and Judgment- UI-U9: n/a / 0 / 0
  Maintaining and Judging Rhythm- U8: 1.00 / 2 / 2
  Sound Intensity/Duration Discrimination- U6: n/a / 0 / 0
  Sound-Frequency Discrimination- U5: 1.00 / 1 / 1
  Hearing and Speech Threshold- UA-UT-UU: 1.00 / 1 / 1
  Absolute Pitch- UP: n/a / 0 / 0
  Sound Localization- UL: n/a / 0 / 0
Long-Term Storage and Retrieval- Glr: .86 / 20 / 23
  Associative Memory- MA: .94 / 9 / 10
  Meaningful Memory- MM: 1.00 / 3 / 3
  Free-recall Memory- M6: .74 / 4 / 4
  Ideational Fluency- FI: n/a / 0 / 0
  Associational Fluency- FA: 1.00 / 1 / 1
  Expressional Fluency- FE: n/a / 0 / 0
  Naming Facility- NA: .39 / 1 / 4
  Word Fluency- FW: 1.00 / 0 / 1
  Figural Fluency- FF: n/a / 0 / 0
  Figural Flexibility- FX: n/a / 0 / 0
  Sensitivity to Problems- SP: 1.00 / 1 / 1
  Originality to Problems- FO: n/a / 0 / 0
  Learning Abilities- L1: 1.00 / 5 / 5
Processing Speed- Gs: .97 / 24 / 23
  Perceptual Speed- P: n/a / 0 / 1
  Rate of Test Taking- R9: .79 / 13 / 14
  Number Facility- N: 1.00 / 5 / 5
  Speed of Reasoning- RE: .74 / 4 / 4
  Semantic Processing Speed- R4: .70 / 9 / 5
Decision Speed/Reaction Time- Gt: .80 / 2 / 3
  Simple Reaction Time- R1: 1.00 / 2 / 2
  Choice Reaction Time- R2: n/a / 0 / 1
  Mental Comparison Speed- R7: n/a / 0 / 0

Note. It is not possible to compute a kappa coefficient when broad and narrow CHC constructs were not linked to test variables during both rating periods. Freq. = Frequency.

Table 3 shows the results of my CHC classifications for each test scale. The table is organized by cognitive domain and illustrates the hypothesized relationships between test variables and CHC ability constructs. To simplify presentation, several decisions were made regarding what information would be included in the table. First, only the most current versions of tests were included. Second, when possible, test variables from a measure were grouped together if they reflected the same CHC ability construct. Lastly, duplicated tasks across measures/batteries were included in only one domain (e.g., the Category Test is listed under the domain Executive Functioning and this information is not duplicated when the HRNB is described in the Neuropsychological Battery section).

Table 3. Hypothesized relationships between tests and CHC broad/narrow ability constructs CHC Broad Ability Constructs Gf Gc Gq Grw Gsm Gv Ga Glr Gs Gt Domain Att. /Concent. TMT 1 Part A RQ RQ MS R9, N Part B RG, RQ RQ MW R9, N PASAT 1 RQ, RE RQ, KM MW N, RE Stroop Word RD R9 Color NA R9 Interference RG NA R4 Executive Func. WCST 1, 2 I, RQ RQ MW, L1 PN L1 Category Test 1 Subtests I-II RQ, I KM, RQ Subtest III RQ, I RQ PN Subtest IV RQ, I RQ MV Subtests V-VI RQ, I KM, RQ Subtest VII RG, RQ KM, RQ L1 L1, MA COWA 1 Phonemic RG, RE FW RE Semantic RG, RE FA RE Language BNT 1, 3 VL, KO, AST 1 MAE 4 Visual Naming 3 LD VL, KO, LD, LS A3 SG, RD, WA VL, KO, LD Token Test RG, RQ LD, LS RQ MW, MS PN MS SR 63

MV CHC Broad Ability Constructs Gf Gc Gq Grw Gsm Gv Ga Glr Gs Gt Memory WMS-III 1 Logical Mem I 5 LS MM Logical Mem II 5 MM Faces I / II MV Verbal PA I / II 5 MA Fam. Pic. I / II MV Verbal Mem. MA Recog. Tasks 6 Wrd. List I / II 5 M6 Vis. Reprod. I 5 MV Vis. Reprod. II 5 MV M6 Vis. Reprod. II Recog. MV MA Vis. Reprod. II Copy SR Mental Control RQ, RE KO KM, RQ MW N, R9, RE Digit Span 5 RG MS Let.-Num. Seq. RG MW Spatial Span 5 MS MV RAVLT 1 All Trials M6 Recog. MA WRAML-2 5 Story Imm. / Delayed 5 LS MM Verbal Imm. / M6 Delayed 5 Story / Verbal MA Recog. 5 Design / Pic. Imm. MA / Recog. 5 Finger Windows 5 MS MV Number-Letter 5 MS Sentence Mem. 5 LD MS Sound Symbol 5 MA 64

Motor Func. FTT 1 Grip Strength 1, 7 PPT 1 Vis. Perc. HVOT CHC Broad Ability Constructs Gf Gc Gq Grw Gsm Gv Ga Glr Gs Gt Vz, Sr R9 R1 ROCFT 1 Copy SR Recall MV M6 Recognition MV MA JOLO Vz, LE Neurop. Bat. HRNB TPT Time Vz, MV, CS R9 TPT Mem. / Loc. Vz, MV Speech Snd. Perc. SG US Rhythm Test U3 Finger Discrim. 7 Fingertip Num. 7 NEPSY 8 Affect Recog. RG MV Animal Sorting I PN Arrows CF Auditory Att. / LS MW Resp. Set Block Construction SR R9 Clocks KO SR Comp. of Instruc. LS, LD, VL Design Fluency RG, RE RE FTT Rep. / Seq. MV R9 R1 Geometric Puzzles RG Vz, IL Inhibition RG KM NA R4 Mem. for Names MV MA 65

CHC Broad Ability Constructs Gf Gc Gq Grw Gsm Gv Ga Glr Gs Gt Phono. Processing RG LD PC-S Picture Puzzles SR Speeded Naming LD NA R4 Statue LS Theory of Mind I Visuomotor Prec. SS R9 LNNB Motor LS, LD KM MS MV, SR U3 R9 Rhythm LS KM U3, U5, UM, U8, UA-UT-UU Tactile Func. LS, LD, KO KM R9 Visual Func. RQ VL, KO, RQ CS, Vz R4 LD, LS Recept. Speech 9 RG LS, LD, SG MS IL US KO, Rc1 SG MS Rc2 RG LS, LD US Rc3 LS, KO Rc4 LS Rc5 LS Rc6 RG LD, KO Express. Speech 9 RG VL, KO, RD, CZ MS PC:A MM R9 LD, CM E1 KO RD E2 MS E3 RD MS PC:A Writing 10 KO, LS KM SG MS W1 KM SG MS W2 KO, LS SG Reading 9 RD, RC, SG R9 Rg1 RD, RC R9 Rg2 KO RD, RC 66

CHC Broad Ability Constructs Gf Gc Gq Grw Gsm Gv Ga Glr Gs Gt Arithmetic KO, LS KM, A3, MS, MW Memory CM MW MV UM, U8 MA, M6, MM Intell. Processes RG, RQ, I CM, LS, KM, A3, R4, N KO, LD RQ DRS 11 Verbal Commands Ls Visual Imitation MV Language Persev. PC:S Alt. Movements L1 L1 Write Name SG Vis. Similarities I IL Ver. Similarities I LD Sent. Generation OP Orientation KO Counting KM CF Sentence Recall MM Word Reading RD Word Recog. MA Vis. Matching CF Vis. Memory MV MA Cognistat 12 Orientation KO Attention MS Comprehension LS Repetition LD MS Constructions MV, SR Memory M6, MA Calculations A3 Similarities I LD Judgment RG SP 67

Note. Broad and narrow CHC variable abbreviations are reported in Table 2. Alt. = Alternating; AST = Aphasia Screening Test; Att. = Attention; Bat. = Battery; BNT = Boston Naming Test; Comp. = Comprehension; Concent. = Concentration; COWA = Controlled Oral Word Association Test; Discrim. = Discrimination; DRS = Dementia Rating Scale; E1 = Simple phonetic reading; E2 = Word repetition; E3 = Reading poly-syllabic words; Express. = Expressive; Fam. = Family; FTT = Finger Tapping Test; Func. = Function; HRNB = Halstead-Reitan Neuropsychological Battery; HVOT = Hooper Visual Organization Test; Imm. = Immediate; Instruc. = Instructions; Intell. = Intellectual; JOLO = Judgment of Line Orientation Test; Let. = Letter; LNNB = Luria-Nebraska Neuropsychological Battery; MAE = Multilingual Aphasia Examination; Mem. = Memory; Neurop. = Neuropsychological; Num. = Number; PA = Paired Associates; PASAT = Paced Auditory Serial Addition Test; Perc. = Perception; Persev. = Perseveration; Phono. = Phonological; Pic. = Pictures; PPT = Purdue Pegboard Test; RAVLT = Rey Auditory Verbal Learning Test; Rc1 = Phonemic discrimination; Rc2 = Using relational concepts; Rc3 = Concept recognition; Rc4 = Verbal-spatial relationships; Rc5 = Word comprehension; Rc6 = Logical grammatical relations; Recept. = Receptive; Recog. = Recognition; Rep. = Repetition; Reprod. = Reproduction; Resp. = Response; Rg1 = Reading complex materials; Rg2 = Reading simple materials; ROCFT = Rey-Osterrieth Complex Figure Test; Sent. = Sentence; Seq. = Sequencing; Snd. = Sound; TMT = Trail Making Test; TPT = Tactual Performance Test; Ver. = Verbal; Vis. = Visual; W1 = Spelling; W2 = Motor writing skills; WCST = Wisconsin Card Sorting Test; WMS-III = Wechsler Memory Scale-Third Edition; WRAML-2 = Wide Range Assessment of Memory and Learning-Second Edition; Wrd. = Word.
1 This test/task is included in the HRNB.
2 All WCST variables were rated similarly.
3 This task has previously been linked to CHC ability constructs.
4 Includes the COWA and tests similar to the BNT, DRS, and LNNB W1.
5 Prior versions of this task had been linked to CHC constructs.
6 Includes the WMS-III Recognition Index score and the Word List II Recognition subtest.
7 The ability required by this test does not correspond with CHC constructs.
8 Includes subtests similar to the BNT, COWA, WMS, and WRAML.
9 LNNB Scale includes more items than are included in the related Subscales.
10 LNNB Scale includes the same items that are included in the related Subscales.
11 Includes subtests similar to the BNT, COWA, WMS, and ROCFT.
12 Includes subtests similar to the BNT.

Overview and Example of the Factor Analyses

The following is a brief overview of how datasets were analyzed and how the results were interpreted in the context of the CHC model, followed by example analyses illustrating these steps. Prior to obtaining a factor solution from a dataset, test variables were reviewed carefully to ensure the data were appropriate for PCA. Next, three factor retention procedures were applied to determine what dimensional structures were appropriate to consider. It was not uncommon for these procedures to produce ambiguous or conflicting results regarding how many factors to retain. Given this, multiple factor solutions produced from the same dataset were reviewed to determine which solution was optimal. Optimal solutions contained more coherent, well-defined dimensions than non-optimal solutions. Finally, retained factors were interpreted in the context of CHC terminology. This last step was guided by the CHC literature and the hypothesized relationships between CHC constructs and test variables presented in Table 3.

Pontón, Gonzalez, Hernandez, and Higareda's (2000) study was randomly selected to illustrate the above-mentioned steps in detail. Pontón et al. investigated how a group of Hispanic adults performed on a battery of 10 neuropsychological measures. Their study included a correlation matrix illustrating the relationships between 17 test variables (ROCFT: Copy and Delayed Recall; Auditory Verbal Learning Test: Immediate Recall, Short Delay, and Long Delay; WAIS-R: Digit Span Total, Digit Span Forward, Digit Span Backward, Digit-Symbol Coding; Color Trails 1 and 2; Pin Test Dominant and Non-Dominant Hand; Boston Naming Test; Block Design; Raven's Progressive

Matrices; COWA). Pontón and colleagues conducted PCA and justified retaining 5 dimensions because there were 5 eigenvalues > 1.00.

Prior to reanalyzing the Pontón et al. (2000) correlation matrix, test variables were reviewed to detect collinearity. This was a crucial step in the analyses because PCA is not possible when there is significant linear dependency between test variables. When significant linear dependency exists between test variables, the resulting correlation matrix is considered not positive definite (NPD) and it is impossible to generate a PCA solution. There were two main reasons for collinearity between test variables in the collected data. The first occurred when one test variable was the sum of two or more other test variables (e.g., the WAIS-III Working Memory Index comprises the Digit Span, Letter-Number Sequencing, and Arithmetic subtests). The second occurred when scores on one test variable were partially dependent on scores from another test variable (e.g., WMS-III Logical Memory II performance is partially dependent on WMS-III Logical Memory I performance).

In many instances authors provided test variable means and standard deviations, and it was possible to use the collinearity diagnostics available through SPSS regression to detect linear dependency problems. For these analyses it is immaterial what variable is predicted, so an artificially generated criterion that was uncorrelated with the predictor variables was utilized. Predictor variables with tolerance values < .10 were considered problematic (Field, 2005) and generally excluded from further analyses. As an exception to this methodological decision, test variables with tolerance values < .10 were retained if (1) the tests were administered independently, even though an individual's performance on one test variable was partially dependent on their performance on another test variable

(e.g., WMS-III Immediate and Delayed Memory subtests) and (2) it was still possible to conduct a PCA with the problematic variables included.

Pontón et al. (2000) did not provide means and standard deviations for their data, so it was not possible to evaluate collinearity between variables by conducting regression analyses. For studies like Pontón et al. where regression diagnostics could not be used to detect collinearity, preliminary PCA determined whether the data were appropriate for further analyses. These analyses were considered preliminary because the goal was not to retain substantive factors, but rather to determine whether it would be possible to conduct PCA at all. For Pontón et al. (2000) it was not possible to conduct PCA with all three Digit Span test variables included because the resulting matrix was NPD. In this case the linear dependency occurred because Digit Span Total is the sum of Digit Span Forward and Digit Span Backward; it can be predicted perfectly from them, which creates the collinearity problem. However, it was possible to conduct PCA when both the Digit Span Forward and Backward variables were included, or when just the Digit Span Total variable was included. Because it was preferable to include primary test variables rather than composite summary variables, the Digit Span Total variable was not included in subsequent analyses.

Once a correlation matrix was determined to be reasonably free of multicollinearity problems, attention turned to identifying the proper number of factors to extract. For all studies this began with using O'Connor's (2000) SPSS syntax to conduct parallel analyses. Figure 2 illustrates the results of the parallel analysis for Pontón et al.'s (2000) study. The figure strongly supports retaining 2 factors and provides some support for

retaining 3 factors (first three EVs = 6.23, 1.87, and 1.34; 95th percentile PA EVs = 1.50, 1.39, and 1.31). It is clear that extracting more than 3 dimensions is inappropriate because the genuine eigenvalues are less than the corresponding eigenvalues generated from 500 random datasets (e.g., fourth factor: EV = 1.00; 95th percentile PA EV = 1.24).

Figure 2. Pontón et al. (2000) parallel analysis. Note. EV = Eigenvalues; 95% PA = Ninety-fifth percentile of randomly generated eigenvalues based on 500 random data sets.

Next, O'Connor's (2000) SPSS syntax was used to conduct the minimum average partial (MAP) procedure. Figure 3 presents the MAP results for the Pontón et al. (2000) study. The results technically support retaining 2 dimensions. However, these results were somewhat ambiguous, and the slope of the MAP curve did not clearly differentiate among the 1-, 2-, or 3-dimensional solutions, as the average squared partial correlations are

nearly identical (MAP values: one extracted factor = 0.031; two extracted factors = 0.026; three extracted factors = 0.031). The average squared partial correlation clearly increased once 4 or more factors were extracted (MAP values: four extracted factors = 0.034; five extracted factors = 0.038), so it is inappropriate to extract 4 or more factors.

Figure 3. Pontón et al. (2000) results for the minimum average partial procedure.

As a third procedure to determine the appropriate number of factors to extract, I examined the actual correlation matrix in combination with 12 random variables. For the Pontón et al. (2000) data, when two factors were extracted, the random variables did not have significant loadings on either dimension (largest random variable loading = .22). When 3 factors were extracted, random variables had significant loadings. The three largest

random variable loadings on the third dimension were greater than those of the genuine test variables (3 largest random variables = .48, .41, .39; 3 largest genuine variables = .45, .34, .33). Thus, these random variable analyses indicated it was appropriate to retain 2 dimensions.

In 23 of 77 datasets it was not possible to conduct PCA on matrices containing both the genuine and random variables because adding the random variables made the matrices NPD (see Footnote 12). To determine whether these unexpected outcomes might be related to the specific sets of random variables included in the analyses, Halstead's (1947) and Schwamm, Van Dyke, Kiernan, Merrin, and Mueller's (1987) datasets were randomly selected from the group of 23 and analyzed in combination with 10 different sets of random variables rather than a single set. In every instance it was still not possible to conduct PCA because of NPD matrices. These findings suggest that the NPD problems would be evident with any set of random variables inserted in the matrices for these 23 datasets. Consequently, this aid for identifying the proper number of factors was not used in these 23 instances.

Based on converging support from the three extraction procedures, 2 factors were extracted from the Pontón et al. (2000) data. The obliquely rotated pattern matrix is presented below in Table 4. (To simplify the presentation, all pattern matrices are reported in Appendix A rather than in the text.) While it was easy to conceptualize the second

Footnote 12. Collinearity diagnostics illustrated why some NPD matrices were produced when genuine and random variables were analyzed. For instance, Wilde et al.'s (2003) dataset included several variables with borderline problematic tolerance values (WMS-III: Family Pictures I = .12, Family Pictures II = .11; Logical Memory I = .21; Logical Memory II = .20), even though it was still possible to conduct PCA.
However, when random variables were added to the matrix, the tolerance values for some genuine variables became more problematic (WMS-III: Family Pictures II = -.07; Logical Memory I = .05; Logical Memory II = .05; Verbal Paired Associates I = .09; Verbal Paired Associates II = .09). This increased collinearity meant PCA could not be conducted. The example illustrates that in some instances introducing random variables into the analyses problematically increases collinearity between variables.

dimension as a concise marker of verbal memory (Glr-M6), the other dimension was more complex: it predominantly emphasized the broad CHC constructs Gv and Gs, and to a lesser degree Gc and Gsm.

Table 4
Rotated pattern matrix for Pontón et al. (2000) (n = 300)

                                                     Factor 1   Factor 2
WAIS-R Digit Symbol Coding (1)                          .80        .10
Block Design                                            .80        .02
Color Trails 2                                         -.76        .01
Raven's Progressive Matrices                            .73        .12
Color Trails 1                                         -.71        .01
Pin Test Dominant Hand                                  .68       -.04
ROCFT Copy                                              .68        .03
Controlled Oral Word Association Test                   .59       -.06
ROCFT Delayed Recall                                    .59        .14
Boston Naming Test                                      .57       -.04
Pin Test Non-Dominant Hand                              .56       -.02
WAIS-R Digit Span Forward                               .50       -.10
WAIS-R Digit Span Backward                              .43        .13
Auditory Verbal Learning Test Short Delay (2)          -.03        .89
Auditory Verbal Learning Test Immediate Recall (3)      .01        .87
Auditory Verbal Learning Test Long Delay                .08        .87
r between factors                                       .34
Eigenvalue                                             6.23       1.87

Note. Loadings ≥ .40 are bolded. ROCFT = Rey-Osterrieth Complex Figure Test. (1) Digit Symbol Coding was slightly altered: the number of target symbols was 6 instead of the 9 in the standardized version. (2) Original words recalled after a distractor list. (3) Original words recalled after 5 immediate learning trials.

The Pontón et al. (2000) study was included because it provided a correlation matrix containing the Boston Naming Test (BNT) and the Rey-Osterrieth Complex Figure Test (ROCFT). It was anticipated that these tests might appear on dimensions reflecting narrow Gc and Gv abilities, respectively. However, specific dimensions reflecting these broad abilities were not observed. It is important to consider the composition of test

variables included in a data matrix when the resulting factor structure appears inconsistent with expectation. Factor and component analyses identify variance that is shared between the items or scales in a matrix, and the specific mix of variables in a given matrix determines whether there is enough shared or common variance for a factor to be identified. Generally speaking, at least three good markers of a factor must be present for the factor to be identified. With this in mind, the Pontón et al. dataset was not ideal for testing the hypothesized relationship between the BNT and Gc because no other test variables in the matrix reflect Gc. Given this, it is impossible for the BNT to appear on a dimension that reflects Gc. It is surprising that the ROCFT did not appear on a dimension reflecting Gv abilities, as there was an adequate number of test variables (i.e., 3) related to Gv.

Unfortunately, it was necessary to include some less-than-ideal datasets to test the hypothesized relationships reported in Table 3. Ideally, all targeted datasets would have included three or more marker variables reflecting the broad CHC ability constructs hypothesized to be related to the test. When a dataset includes an inadequate composition of test variables, the likelihood that hypothesized relationships can be sufficiently evaluated decreases dramatically. Factor structures derived from such data matrices should be interpreted cautiously, as they are poor tests of the hypotheses presented in Table 3.

It was also necessary in some instances to include studies with small sample sizes. This runs against general recommendations for factor analysis because correlation coefficients are less reliable when they are generated from smaller samples (Tabachnick & Fidell, 2001). Although the sample size necessary to conduct factor analysis is related to the number of high-loading marker variables in the solution, it is

generally accepted that a sample size of 100 is poor (e.g., see Comrey & Lee, 1992). Studies with fewer than 100 participants would not have been included if more optimal datasets had been available. The factor structures generated from these studies should be interpreted cautiously, and as a reminder of this limitation, these studies are presented in italic text in the domain-specific tables.

The likelihood that a dataset was adequate to evaluate the hypothesized relationships presented in Table 3 was rated on a 3-point scale. Quality ratings were based on the adequacy of the sample size (n > 100) and whether the composition of test variables permitted hypothesized relationships to be evaluated. A quality rating of 3 indicates that the sample is appropriately large to conduct PCA and the composition of test variables permits hypothesized relationships to be tested. A quality rating of 1 indicates that the sample size is small (n < 100) and/or the composition of test variables does not include an adequate number of marker variables related to the relevant CHC constructs. If a dataset included multiple targeted measures, its quality was rated separately for each hypothesis. For example, with respect to the ROCFT, Pontón et al. (2000) is a high-quality study because it includes an adequate number of Gv marker variables to test the hypothesis that the ROCFT is related to Gv, so it receives a quality rating of 3. With respect to the BNT, the study does not include a sufficient number of Gc marker variables to test the hypothesis that the BNT is related to Gc, so it receives a quality rating of 1.

The analysis of Pontón et al.'s (2000) data illustrates how each of the other 76 data matrices was analyzed. Attention now turns to the more general results, which are presented by cognitive domain. I present information across domains following the same

general structure. First, the targeted tests are introduced and general information is provided regarding the hypothesized associations between the target tests and CHC constructs. Additionally, information about the targeted datasets is provided, including what types of variables were included and whether a correlation matrix or approximated correlation matrix was used in subsequent analyses. Next, domain-specific tables with information regarding the targeted studies for each measure are presented. These tables include (1) sample demographics, (2) how many factors were retained by the author(s), (3) how many variables were analyzed by the author(s), (4) the type of correlation matrix utilized during reanalysis, (5) the number of test variables and the total number of variables included during reanalysis, (6) an indication of the likelihood that the dataset was an adequate test of the hypothesized relationship between test variables and CHC constructs, (7) the results of the factor retention procedures, and (8) the number of factors ultimately retained. Lastly, the results are summarized by test.

Attention/Concentration Measures

The Trail Making Test (TMT), Paced Auditory Serial Addition Test (PASAT), and Stroop test are the targeted attention and concentration measures. In general, it was hypothesized that these measures are predominantly related to Gf and Gs constructs (see Table 3). The TMT was examined in five diverse samples, as indicated in Table 5. Each of these samples contained at least 2 TMT variables, and the total number of variables ranged from 13 (de Jong & Das-Small, 1993) to 39 (Royce, Yeudall, & Bock, 1976). The PASAT was also investigated in five diverse samples, and the number of total variables included in my final analyses ranged from 5 to 17 (see Table 5). Correlation matrices

provided for three samples included 1 PASAT variable (Crawford, Obonsawin, & Allan, 1998; Hartmann, Kruuse, & Nyborg, 2007; O'Donnell, MacGregor, Dabrowski, Oestreicher, & Romero, 1994). The remaining PASAT studies included small samples but reported data that made it possible to generate approximated correlation matrices with multiple PASAT variables reflecting different presentation times (Deary, Langan, Hepburn, & Frier, 1991; Fos, Greve, South, Mathias, & Benefield, 2000). The Stroop test was investigated in five diverse samples, each of which included at least three Stroop variables (see Table 5). The total number of variables in these datasets ranged from 11 to 21.

Table 5. Targeted Attention/Concentration measures

Measure  Study                       Sample (n)              Mean Age   Publ. Fac. (Var.)  Data analyzed (Test/Tot. Var.)  Quality  EV > 1  PA    MAP   RV    Fac. Ext.
TMT      de Jong & Das-Small (1993)  Children (175) (1)      9.0        4 (13)             r-mat. (2/13)                   3        4       3     1-2   2-3   2
         de Jong & Das-Small (1993)  Children (204) (2)      9.0        4 (13)             r-mat. (2/13)                   3        4       3     1     2     2
         Lehto et al. (2003)         Children (108)          10.5       3 (8)              r-mat. (3/14)                   3        4       3     2     NPD   3
         Royce et al. (1976)         Neuro. (176)            40.1       13 (54)            r-mat. (2/39)                   3        11      3-4   2-6   NPD   5
         Salthouse et al. (1996)     Adults (259)            51.4       5 (16)             r-mat. (2/16)                   3        4       1-2   1-2   1-2   2
PASAT    Crawford et al. (1998)      Adults (152)            40.2       3 (12)             r-mat. (1/12)                   2        2       1     1-2   1     1
         Deary et al. (1991)         Diabetic (94)           40.0       3 (11)             E. r-mat. (2/11)                1        3       2     3     NPD   3
         Fos et al. (2000)           College students (74)   25.6       4 (9)              E. r-mat. (4/9)                 1        4       3     3     NPD   3
         Hartmann et al. (2007)      US Veterans (4321)      38.4 (3)   2 (19)             r-mat. (1/17)                   3        3       3     2-3   3     3
         O'Donnell et al. (1994)     Neuro./Psych. (117)     30.2       2 (5)              r-mat. (1/5)                    2        1       1     1     1     1
Stroop   Boone et al. (1998)         Neuropsy./Norm. (250)   55.5       3 (18)             r-mat. (3/18)                   3        5       3-4   4     NPD   4
         de Jong & Das-Small (1993)  Children (175) (1)      9.0        4 (13)             r-mat. (3/13)                   3        4       3     1-2   2-3   2
         de Jong & Das-Small (1993)  Children (204) (2)      9.0        4 (13)             r-mat. (3/13)                   3        4       3     1     2     2
         Graf et al. (1995)          Adults (129)            65-95 (4)  5 (21)             r-mat. (4/21)                   3        5       2     1-4   2     2
         Pineda & Merchan (2003)     College students (100)  18.5       5 (15)             r-mat. (3/11)                   3        4       2-3   1-2   NPD   3

Note. Studies that are presented in italic text had sample sizes < 100. Publ. = Publication; Fac. = Factors; Var. = Variables; EV = Eigenvalue; PA = Parallel analysis; MAP = Velicer's (1976) minimum average partial procedure; RV = Random variables method; Ext. = Extracted; TMT = Trail Making Test; mat. = matrix; NPD = PCA resulted in a not positive definite matrix; Neuro. = Neurologic; PASAT = Paced Auditory Serial Addition Test; E. = estimated; Psych. = Psychiatric; Neuropsy. = Neuropsychiatric; Norm. = Normative. (1) Male sample. (2) Female sample. (3) Based on sample size of 4462. (4) Age range.
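The PA columns in Table 5 (and in the later tables) come from O'Connor's (2000) SPSS syntax, but Horn's parallel analysis is straightforward to reproduce. The sketch below, using numpy, is an illustrative reimplementation rather than O'Connor's code: each genuine eigenvalue is compared against the 95th percentile of eigenvalues from 500 random, uncorrelated datasets of the same dimensions. The eigenvalues beyond the first four reported by Pontón et al. are made up to fill out the 16-variable battery.

```python
import numpy as np

def parallel_analysis(sample_eigenvalues, n, n_vars,
                      n_datasets=500, percentile=95, seed=0):
    """Horn's parallel analysis: retain the leading factors whose sample
    eigenvalues exceed the chosen percentile of eigenvalues obtained from
    random, uncorrelated data of the same dimensions."""
    rng = np.random.default_rng(seed)
    rand_eigs = np.empty((n_datasets, n_vars))
    for i in range(n_datasets):
        data = rng.standard_normal((n, n_vars))
        corr = np.corrcoef(data, rowvar=False)
        # eigvalsh returns ascending eigenvalues; reverse to descending
        rand_eigs[i] = np.sort(np.linalg.eigvalsh(corr))[::-1]
    thresholds = np.percentile(rand_eigs, percentile, axis=0)
    keep = np.asarray(sample_eigenvalues) > thresholds
    n_retain = int(np.argmin(keep)) if not keep.all() else len(keep)
    return n_retain, thresholds

# Pontón et al.'s four reported leading eigenvalues; the rest are
# hypothetical fillers for a 16-variable matrix.
evs = np.array([6.23, 1.87, 1.34, 1.00] + list(np.linspace(0.95, 0.20, 12)))
n_retain, thresholds = parallel_analysis(evs, n=300, n_vars=16)
```

With n = 300 and 16 variables the simulated thresholds land near the reported values (1.50, 1.39, 1.31, ...), so the clear support for 2 factors and the borderline status of the third are reproduced.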

TMT. Reanalysis of de Jong and Das-Small's (1993) data produced a 2-dimensional factor structure in both their male and female samples (see Table A-17; Table A-18). These structures comprised a dimension that predominantly reflected Gs, which included significant TMT loadings, and a dimension that reflected the construct Glr. In a third sample of children, the TMT defined a Gs dimension that was differentiated from dimensions emphasizing Gf and Gsm (see Lehto, Juujärvi, Kooistra, & Pulkkinen, 2003; Table A-38). In the remaining two samples, the TMT appeared on dimensions with diverse cognitive markers that reflected a combination of narrow Gf and Gv constructs, and to a lesser degree Gs constructs (Royce et al., 1976 [Table A-55]; Salthouse, Fristoe, & Rhee, 1996 [Table A-57]). It is notable that TMT Parts A and B did not appear on differentiated dimensions in any of the analyses.

PASAT. The results of my reanalysis were largely dependent on how many PASAT variables were included in the datasets. For example, when 1 PASAT test variable was included among a diverse group of test variables, 1-factor solutions reflecting general cognitive ability were produced (see Crawford et al., 1998 [Table A-15]; Hartmann et al., 2007 [Table A-26]). When 1 PASAT variable was included with test variables reflecting Gf and Gs, it had a meaningful loading on a dimension that reflected these constructs (see O'Donnell et al., 1994 [Table A-49]), which is in line with expectations. When multiple PASAT variables were included in datasets, reanalysis produced factor structures with dimensions specific to these variables. For example, in one small sample a PASAT dimension was differentiated from general Gc and Gv dimensions (Deary et al., 1991; n = 94; Table A-16). In the remaining sample, which

included only 74 college students, a PASAT dimension was differentiated from Digit Span and TMT dimensions (Fos et al., 2000; Table A-19).

Stroop Test. The Stroop test and verbal fluency measures defined single dimensions in two samples that reflected the broad constructs Gs, Glr, and Gf (Boone et al., 1998 [Table A-8]; Pineda & Merchan, 2003 [Table A-52]). Reanalysis of Graf, Uttl, and Tuokko's (1995) data produced a similar dimension. In the remaining two samples (de Jong & Das-Small, 1993), Stroop variables served as the most meaningful markers for dimensions that reflected the broad constructs Gs and Gf (see Table A-17; A-18). It is interesting that in each factor solution the Stroop variables appeared on single dimensions.

Executive Functioning Measures

The Wisconsin Card Sorting Test (WCST), Category Test, and Controlled Oral Word Association (COWA) test are the targeted executive functioning measures. In general, it was hypothesized that these measures are predominantly related to Gf constructs (see Table 3). The WCST was examined in five diverse samples, as indicated in Table 6. The number of WCST variables included in my final correlation matrices ranged from 2 (Salthouse et al., 1996) to 5 (Bowden et al., 1998), and the total number of variables included in these datasets ranged from 5 to 18. The Category Test was also examined in five samples (see Table 6). These samples provided matrices that contained 8 to 39 variables. The Category Test was examined as a single total score in three of these studies (Aftanas & Royce, 1969; Goldstein & Shelly, 1972; Halstead, 1947), and with separate scores for each of the seven subtests in the remaining two studies (Johnstone, Holland, & Hewett, 1997; Royce et al., 1976). The COWA, or a similar task, was

included in five targeted samples (see Table 6). The total number of variables included in these matrices ranged from 5 to 29. Three of these samples included single variables for phonemic and semantic fluency (Graf et al., 1995; Korkman, Kirk, & Kemp, 2007; Pineda & Merchan, 2003). Riva, Nichelli, and Decoti's (2000) data included multiple phonemic and semantic fluency variables, whereas Salthouse et al.'s data included only two phonemic fluency variables.
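Velicer's (1976) MAP procedure, whose results appear in the MAP column of these tables, can likewise be sketched in a few lines of numpy. This is a simplified, illustrative implementation of the squared-partial-correlation criterion, not O'Connor's syntax; the demonstration matrix (six equally intercorrelated markers of a single factor) is hypothetical.

```python
import numpy as np

def velicer_map(R):
    """Velicer's minimum average partial (MAP) test: retain the number of
    components m that minimizes the average squared partial correlation
    remaining after the first m principal components are partialled out."""
    p = R.shape[0]
    vals, vecs = np.linalg.eigh(R)
    order = np.argsort(vals)[::-1]                  # descending eigenvalues
    vals, vecs = vals[order], vecs[:, order]
    loadings = vecs * np.sqrt(np.clip(vals, 0.0, None))
    off_diag = ~np.eye(p, dtype=bool)
    avg_sq = [np.mean(R[off_diag] ** 2)]            # m = 0: raw correlations
    for m in range(1, p):
        partial_cov = R - loadings[:, :m] @ loadings[:, :m].T
        d = np.sqrt(np.clip(np.diag(partial_cov), 1e-12, None))
        partial_corr = partial_cov / np.outer(d, d)
        avg_sq.append(np.mean(partial_corr[off_diag] ** 2))
    return int(np.argmin(avg_sq)), np.array(avg_sq)

# Six equally intercorrelated markers (r = .50) of a single factor:
R_one_factor = np.full((6, 6), 0.50)
np.fill_diagonal(R_one_factor, 1.0)
m_retain, map_values = velicer_map(R_one_factor)
```

For this one-factor matrix the curve bottoms out at m = 1, mirroring the logic applied to Figure 3: the average squared partial correlation rises once genuine common variance has been removed.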

Table 6. Targeted Executive Functioning measures

Measure  Study                      Sample (n)               Mean Age   Publ. Fac. (Var.)  Data analyzed (Test/Tot. Var.)  Quality  EV > 1  PA    MAP   RV    Fac. Ext.
WCST     Boone et al. (1998)        Neuropsy./Norm. (250)    55.5       3 (18)             r-mat. (4/18)                   3        5       3-4   4     NPD   4
         Bowden et al. (1998)       Alcoh. Depd. (151)       40.9       1 (6)              r-mat. (5/5)                    1        2       1     1     1     1
         Bowden et al. (1998)       College students (75)    20.6       1 (6)              r-mat. (5/5)                    1        2       1     1     NPD   1
         Pineda & Merchan (2003)    College students (100)   18.5       5 (15)             r-mat. (4/11)                   3        4       2-3   1-2   NPD   3
         Salthouse et al. (1996)    Adults (259)             51.4       5 (16)             r-mat. (2/16)                   3        4       1-2   1-2   1-2   2
CAT      Aftanas & Royce (1969)     Adults (100)             16-70 (1)  12 (35)            r-mat. (1/27)                   2        9       1-2   1-3   NPD   3
         Goldstein & Shelly (1972)  Neuro./psych. (619)      Adult (2)  4 (25)             r-mat. (1/25)                   2        4       2-3   2-4   2-3   3
         Halstead (1947)            Adults (50)              Adult (2)  4 (13)             r-mat. (1/13)                   1        4       1     2     NPD   2
         Johnstone et al. (1997)    Neuro./psych. (308)      32.6       6 (27)             r-mat. (7/8)                    2        2       1     1     1     1
         Royce et al. (1976)        Neuro. (176)             40.1       13 (54)            r-mat. (7/39)                   3        11      3-4   2-6   NPD   5
COWA     Graf et al. (1995)         Adults (129)             65-95 (1)  5 (21)             r-mat. (2/21)                   3        5       2     1-4   2     2
         Korkman et al. (2007)      Norm. (200)              13-16 (1)  NA                 r-mat. (2/29)                   3        9       3-7   1-3   3     3
         Pineda & Merchan (2003)    College students (100)   18.5       5 (15)             r-mat. (2/11)                   3        4       2-3   1-2   NPD   3
         Riva et al. (2000)         Children (160)           5-11 (1)   2 (5)              r-mat. (4/5)                    2        1       1     1     1     1
         Salthouse et al. (1996)    Adults (259)             51.4       5 (16)             r-mat. (2/16)                   3        4       1-2   1-2   1-2   2

Note. Studies that are presented in italic text had sample sizes < 100. Publ. = Publication; Fac. = Factors; Var. = Variables; EV = Eigenvalue; PA = Parallel analysis; MAP = Velicer's (1976) minimum average partial procedure; RV = Random variable method; Ext. = Extracted; WCST = Wisconsin Card Sorting Test; Neuropsy. = Neuropsychiatric; Norm. = Normative; mat. = matrix; NPD = PCA resulted in a not positive definite matrix; Alcoh. = Alcohol; Depd. = Dependent; CAT = Category Test; Neuro. = Neurologic; Psych. = Psychiatric; COWA = Controlled Oral Word Association Test; NA = Not analyzed. (1) Age range. (2) Mean age not reported.
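The tolerance criterion (< .10; Field, 2005) used to screen variables throughout these reanalyses can be computed directly from a correlation matrix, because a variable's tolerance equals the reciprocal of the corresponding diagonal element of the inverted matrix (i.e., 1/VIF). The sketch below uses simulated data and hypothetical variable names to recreate the Digit Span situation described earlier.

```python
import numpy as np

def tolerances(R):
    """Tolerance of each variable given the others: 1 - R_j^2, which
    equals 1 / [R^-1]_jj. Values below .10 flag problematic collinearity."""
    return 1.0 / np.diag(np.linalg.inv(R))

# Simulated analogue of the Digit Span problem: Total is (almost) the sum
# of Forward and Backward; a fourth, unrelated variable is included for
# contrast. Tiny noise keeps the matrix invertible.
rng = np.random.default_rng(1)
fwd, bwd, other = rng.standard_normal((3, 300))
total = fwd + bwd + 0.05 * rng.standard_normal(300)
R = np.corrcoef(np.column_stack([fwd, bwd, total, other]), rowvar=False)
tol = tolerances(R)
# The first three tolerances collapse far below .10, because any one of
# Forward, Backward, and Total is nearly determined by the other two;
# the unrelated variable keeps a tolerance near 1.0.
```

Note that the collinearity flags the whole set, not just the composite, which is why the reanalyses preferred dropping the composite (Digit Span Total) rather than the primary variables.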

WCST. Reanalysis of the data reported in four different studies produced dimensions that emphasized only WCST variables (Boone et al., 1998 [Table A-8]; Bowden et al., 1998 [Table A-9; Table A-10, n = 75]; Pineda & Merchan, 2003 [Table A-52]). These dimensions predominantly reflected Gf-Gv abilities; they were bipolar, reflecting both accuracy and errors on the task (see Footnote 13). Two of these four solutions included WCST factors that were differentiated from Stroop factors (Boone et al., 1998; Pineda & Merchan, 2003), which is not surprising given that the Stroop requires Gs abilities and the WCST does not. Reanalysis of Salthouse et al.'s (1996; Table A-57) data produced a factor structure that included the WCST test variables on a Gf-Gv dimension with other similar test variables.

Category Test. When data were provided for each of the Category Test subtests, interesting solutions were produced that included factors emphasizing only Subtests 3-7 (Johnstone et al., 1997 [Table A-28]; Royce et al., 1976 [Table A-55]). This is likely because Subtests 3-7 require more complex cognitive processes (i.e., Gf) than the first two subtests. In the remaining samples only overall Category Test variables were reported, and the analyses produced solutions where the Category Test loaded onto dimensions that predominantly reflected Gv abilities (Aftanas & Royce, 1969 [Table A-5]; Goldstein & Shelly, 1972 [Table A-21]; Halstead, 1947 [Table A-24, n = 50]).

COWA. Although there was an increased likelihood that Riva et al.'s (2000) data might have produced differentiated dimensions of phonemic and semantic fluency given

Footnote 13. It is curious that the data reported by Bowden et al. (1998) produced somewhat different single-factor solutions.
It is possible that this difference is related to a potential typographical error in the manuscript, as it is hard to conceptualize why there is a non-existent relationship between the variables Categories Complete and Correct in the college student sample (r = .04) but a strong positive association between these variables in the alcohol dependent sample (r = .62). It is also notable that the college sample is substantially smaller than the clinical sample, and this discrepancy may be contributing to the unexpected results.

the composition of test variables, the data produced a single dimension of speeded naming, which reflects the broad constructs Gs and Glr (see Table A-54). A similar dimension was observed when Korkman et al.'s (2007; Table A-36) data were analyzed, as COWA-like variables appeared on a dimension that primarily reflected speed, naming ability, and verbal fluency. The test had modest factor coefficients on dimensions that emphasized the Stroop in two samples (Graf et al., 1995 [Table A-23]; Pineda & Merchan, 2003 [Table A-52]), which is logical because the Stroop is a speeded naming task that evaluates Gs, Glr, and Gf narrow constructs. An interesting finding is that the COWA had modest coefficients on a dimension that predominantly reflected Glr abilities in the remaining sample (see Salthouse et al., 1996; Table A-57). This structure is consistent with the hypothesized relationship between verbal fluency tasks and Glr (see Table 3). Lastly, although phonemic and semantic fluency tasks require different narrow abilities (Glr-FW versus Glr-FA), these variables always appeared on the same factors, which raises the question of whether they are truly dependent on different abilities (see Footnote 14).

Language Measures

The Boston Naming Test (BNT), Aphasia Screening Test (AST), and Multilingual Aphasia Examination (MAE) are the three targeted language measures. These measures predominantly reflect Gc narrow abilities (see Table 3). The BNT was investigated in five studies (see Table 7). Four studies provided correlation matrices that included 1 BNT

Footnote 14. Reanalysis of Jones and Ayers' (2006; Table A-30) data produced a 2-factor solution that included a phonemic verbal fluency variable loading uniquely onto one dimension and a semantic verbal fluency variable having similar loadings on both dimensions.
Reanalysis of Woodard, Goldstein, Roberts, and McGuire's (1999; Table A-77) data produced a 2-factor solution where a phonemic verbal fluency variable loaded onto both dimensions and the semantic verbal fluency variable loaded onto just one dimension. It is unclear how meaningful these inconsistent findings are because (1) these analyses were conducted on approximated correlation matrices generated from 2-factor PCA solutions and (2) the differentiation between variables was not replicated across other samples.

variable and a total of 5 (Riva et al., 2000) to 16 (Pontón et al., 2000) test variables in my final matrices. Holtzer, Verghese, Xue, and Lipton (2006) provided data that made it possible to approximate a correlation matrix that included 1 BNT variable and a total of 11 test variables. It was only possible to locate four studies that reported relevant AST data (see Table 7). Three of these samples included 1 AST variable, and the total number of variables included in these final datasets ranged from 25 (Goldstein & Shelly, 1972) to 39 (Royce et al., 1976). Williams and Shane's (1986) data are unique because they included only 8 AST variables. Three of the four AST studies included a correlation matrix between test variables; Goldstein and Shelly (1984) provided data that could be used to generate an approximated correlation matrix. The five MAE datasets came from clinical samples (see Table 7). Three correlation matrices were investigated that included 3 MAE variables and 10 to 11 total variables (Cohen, Kelter, & Woll, 1980; Teng et al., 1989). It was necessary to calculate approximated correlation matrices for the remaining two MAE datasets (Jones & Ayers, 2006; Woodard et al., 1999). The approximated Jones and Ayers (2006) dataset included 20 total variables, 5 of which are similar to MAE tests, whereas the approximated Woodard et al. (1999) matrix included a total of 11 variables, 3 of which are related to MAE tests.
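Several of the datasets above were analyzed as estimated correlation matrices ("E. r-mat.") reconstructed from published factor loadings. The sketch below shows that reconstruction logic under the assumption of reported loadings from an orthogonal solution; the function name and the example loadings are illustrative, not values from any of the cited studies.

```python
import numpy as np

def approx_corr_from_loadings(loadings, phi=None):
    """Approximate a correlation matrix from a published factor-loading
    matrix: R ~= L @ Phi @ L.T with the diagonal reset to unity. Phi is
    the factor intercorrelation matrix (identity for an orthogonal
    solution)."""
    L = np.asarray(loadings, dtype=float)
    phi = np.eye(L.shape[1]) if phi is None else np.asarray(phi, dtype=float)
    R = L @ phi @ L.T
    np.fill_diagonal(R, 1.0)   # variables correlate perfectly with themselves
    return R

# Hypothetical 2-factor loadings for four variables:
L = [[0.8, 0.1], [0.7, 0.0], [0.1, 0.6], [0.0, 0.9]]
R_est = approx_corr_from_loadings(L)
# R_est[0, 1] = .8 * .7 + .1 * 0 = .56
```

Because the off-diagonal entries carry only the common variance captured by the published solution, analyses of such approximated matrices inherit that solution's structure, which is exactly the caution raised for the Woodard et al. (1999) and Jones and Ayers (2006) reanalyses.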

Table 7. Targeted Language measures

Measure  Study                      Sample (n)                Mean Age   Publ. Fac. (Var.)  Data analyzed (Test/Tot. Var.)  Quality  EV > 1  PA    MAP   RV    Fac. Ext.
BNT      Holtzer et al. (2006)      Adults (186)              78         3 (12)             E. r-mat. (1/11)                3        3       3     3     1-2   2
         Pontón et al. (2000)       Adults (300)              38.4       5 (15)             r-mat. (1/16)                   1        4       2-3   1-3   2     2
         Riva et al. (2000)         Children (160)            5-11 (1)   2 (5)              r-mat. (1/5)                    1        1       1     1     1     1
         Sewell et al. (1988)       College students (60)     18-28 (1)  2-4 (9)            r-mat. (1/9)                    1        4       2     1-2   NPD   2
         Teng et al. (1989)         Neuro. (54)               68.1       3 (10)             r-mat. (1/10)                   1        3       1     2     NPD   1
AST (2)  Goldstein & Shelly (1972)  Neuro./psych. (619)       Adult (3)  4 (25)             r-mat. (1/25)                   3        4       2-3   2-4   2-3   3
         Goldstein & Shelly (1984)  Neuro./psych. (150)       47.7       5 (30)             E. r-mat. (1/30)                3        5       2     4-5   NPD   2
         Royce et al. (1976)        Neuro. (176)              40.1       13 (54)            r-mat. (1/39)                   3        11      3-4   2-6   NPD   5
         Williams & Shane (1986)    Neuro. (197)              46.0       2 (8)              r-mat. (8/8)                    2        2       1     1     1     1
MAE      Cohen et al. (1980)        Broca's aphasia (20)      18-66 (1)  2 (11)             r-mat. (3/11)                   1        3       1     1-3   NPD   2
         Cohen et al. (1980)        Wernicke's aphasia (20)   18-66 (1)  2 (11)             r-mat. (3/11)                   1        3       1     1-4   NPD   2
         Jones & Ayers (2006)       Neuro. (135)              71.1       2 (21)             E. r-mat. (5/20)                3        2       2     2     2     2
         Teng et al. (1989)         Neuro. (54)               68.1       3 (10)             r-mat. (3/10)                   1        3       1     2     NPD   1
         Woodard et al. (1999)      Neuro. (130)              76.3       3 (12)             E. r-mat. (3/11)                2        2       2     2     2     2

Note. Studies that are presented in italic text had sample sizes < 100. Publ. = Publication; Fac. = Factors; Var. = Variables; EV = Eigenvalue; PA = Parallel analysis; MAP = Velicer's (1976) minimum average partial procedure; RV = Random variable method; Ext. = Extracted; BNT = Boston Naming Test; E. = estimated; mat. = matrix; NPD = PCA resulted in a not positive definite matrix; Neuro. = Neurologic; AST = Aphasia Screening Test; psych. = psychiatric; MAE = Multilingual Aphasia Examination. (1) Age range. (2) It was not possible to locate five relevant AST studies. (3) Mean age was not provided.
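The many NPD entries in Tables 5-7 can be understood through a quick positive-definiteness check. The sketch below builds the correlation structure implied by Digit Span Total = Forward + Backward (the Forward-Backward correlation of .30 is an assumed, illustrative value): the three-variable matrix is exactly singular, while the matrix without the composite is well behaved.

```python
import numpy as np

def is_positive_definite(R, tol=1e-10):
    """A correlation matrix supports PCA (and Cholesky factorization) only
    if all eigenvalues are positive; an exact linear dependency drives the
    smallest eigenvalue to zero, making the matrix NPD."""
    return bool(np.linalg.eigvalsh(R).min() > tol)

r = 0.30                      # assumed Forward-Backward correlation
c = np.sqrt((1 + r) / 2)      # implied correlation of each span with Total
R_full = np.array([[1.0, r,   c],     # Forward
                   [r,   1.0, c],     # Backward
                   [c,   c,   1.0]])  # Total = Forward + Backward
# Dropping the composite (keeping Forward and Backward only) restores a
# usable, positive definite matrix: R_full[:2, :2].
```

This is the same logic that forced the choice, in the Pontón et al. reanalysis, between analyzing the primary Digit Span variables or the Total score, but never all three.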

BNT. Reanalysis of Holtzer et al.'s (2006; Table A-27) data produced a factor structure in which the BNT appeared on a dimension that predominantly reflected narrow Gc abilities. This was the only high-quality BNT dataset included, and the structure is consistent with expectations regarding the relationship between the BNT and CHC theory. The BNT appeared on more diverse dimensions in the remaining samples, likely because those datasets did not include multiple test variables related to Gc. Reanalysis of Sewell, Downey, and Sinnett's (1988; Table A-59) sample of 60 college students yielded a solution in which the BNT had a moderate loading on a Glr-Gc dimension. The BNT appeared on factors reflecting Gv, Gf, and Gs abilities in the remaining samples (Pontón et al., 2000 [Table A-53]; Riva et al., 2000 [Table A-54]; Teng et al., 1989 [Table A-67, n = 54]).

AST. Williams and Shane's (1986) data are interesting because they provided more refined data than just a total AST score, which potentially permits a multifactor AST structure. However, the AST variables they reported were not sufficiently different from one another to justify a multidimensional structure (see Table A-75). Reanalysis of the data reported by Goldstein and Shelly (1972 [Table A-21], 1984 [Table A-22]) generally supported hypothesized connections between the AST and the broad constructs Gc and Grw, as the AST appeared on dimensions that predominantly reflected these broad abilities. In the remaining sample (Royce et al., 1976 [Table A-55]), the AST appeared on a dimension that predominantly reflected Gc and speeded naming, which does not specifically support the hypothesized relationships between the AST and CHC constructs.

MAE. These factor structures should be interpreted cautiously, as three of the five studies were of poor quality (see Table 7). MAE test variables appeared on conceptually

diverse dimensions in three samples (Jones & Ayers, 2006 [Table A-30]; Teng et al., 1989 [Table A-67, n = 54]; Woodard et al., 1999^15 [Table A-77]). Cohen et al. (1980) reported data from two small aphasic samples (n = 20 in each sample). The factor structures produced from this study were inconsistent and challenging to interpret (see Table A-12; Table A-13). Memory Measures The Wechsler Memory Scale (WMS), Rey Auditory Verbal Learning Test (RAVLT), and Wide Range Assessment of Memory and Learning (WRAML) were the targeted memory tests. These measures predominantly reflect the broad CHC constructs Glr, Gv, and Gsm (see Table 3). Three normative samples (Wechsler, 1997) and two clinical samples (Bowden, Cook, Bardenhagen, Shores, & Carstairs, 2004; Wilde et al., 2003) provided relevant WMS-III or WMS-R data (see Table 8). Bowden et al. reported a correlation matrix that included 11 WMS-R and 11 WAIS-R variables. The remaining WMS samples were correlation matrices that included 10 (Wilde et al., 2003) to 23 (Wechsler, 1997) relevant test variables. The RAVLT was investigated in five samples (see Table 8), and the total number of variables included in the correlation matrices ranged from 10 to 21. The number of RAVLT test variables included in these datasets varied from 3 (Graf et al., 1995; Ryan, Rosenberg, & Mittenberg, 1984) to 10 (Baños, Elliott, & Schmitt, 2005). The WRAML was investigated in five samples (see Table 8). ^15 Woodard et al.'s (1999) data produced an interesting factor structure. The extraction procedures clearly indicated it was appropriate to retain two factors, but the resulting structure is curious because every test variable had a significant loading on the first dimension and 6 of the 11 variables also had meaningful loadings on the second dimension. Further, the two dimensions were uncorrelated (r = .024). 
This structure should be interpreted cautiously because PCA was performed on an approximated correlation matrix generated from reported factor loadings. The approximated correlation matrix also included an additional test variable related to psychopathology; this variable was excluded from the PCA because it is unrelated to cognitive ability.
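The approximation described in this footnote can be sketched as follows. As an illustration under the assumption of orthogonal reported loadings (the function names are mine), a correlation matrix is rebuilt as R ≈ ΛΛ' with the diagonal reset to unity; matrices rebuilt this way are exactly the ones that can fail to be positive definite, producing the NPD entries that appear throughout the tables:

```python
import numpy as np

def approx_corr_from_loadings(loadings, phi=None):
    """Approximate a correlation matrix from reported factor loadings:
    R ~ Lambda Phi Lambda' with the diagonal reset to 1 (unit variances).
    Phi defaults to the identity matrix (orthogonal factors)."""
    L = np.asarray(loadings, dtype=float)
    phi = np.eye(L.shape[1]) if phi is None else np.asarray(phi, dtype=float)
    R = L @ phi @ L.T
    np.fill_diagonal(R, 1.0)
    return R

def is_positive_definite(R, tol=1e-10):
    """Approximated matrices can fail to be positive definite, in which
    case PCA results should be treated cautiously (NPD in the tables)."""
    return bool(np.linalg.eigvalsh(R).min() > tol)
```

Because the off-diagonal elements are only reproduced up to the communalities the loadings imply, the rebuilt matrix need not correspond to any real dataset, which is why its smallest eigenvalue can dip below zero.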

Three samples included 6 to 15 WRAML2 variables (Adams & Sheslow, 2003), and two samples provided 9 WRAML variables (Phelps, 1995; Adams & Sheslow, 1990). 

Table 8. Targeted Memory measures

Measure | Study | Sample (n) | Mean Age | Publ. Fac. (Var.) | Data analyzed (Test/Tot. Var.) | Quality Rating for Test | EV > 1 | PA | MAP | RV | Fac. Ext.
WMS | Bowden et al. (2004) | Neuro. (277) | 39.8 | 6 (22) | r-mat. (11/22) | 3 | 4 | 3 | 3-4 | 3 | 3
WMS | Wechsler (1997) | Norm. (400) | 16-29^1 | NA | r-mat. (23/23) | 3 | 6 | 4-5 | 2-8 | 5 | 4
WMS | Wechsler (1997) | Norm. (400) | 30-64^1 | NA | r-mat. (23/23) | 3 | 6 | 3-5 | 2-8 | 3-4 | 4
WMS | Wechsler (1997) | Norm. (450) | 65-89^1 | NA | r-mat. (23/23) | 3 | 5 | 4 | 4-7 | 4 | 4
WMS | Wilde et al. (2003) | Epilepsy (254) | 35.6 | 2 (10) | r-mat. (10/10) | 2 | 3 | 1-2 | 1-2 | NPD | 2
RAVLT | Baños et al. (2005) | Spinal cord (221) | 36.5 | 3 (10) | r-mat. (10/10) | 3 | 1 | 1 | 1 | 1 | 1
RAVLT | Graf et al. (1995) | Adults (129) | 65-95^1 | 5 (21) | r-mat. (3/21) | 3 | 5 | 2 | 1-4 | 2 | 2
RAVLT | Johnstone et al. (2000) | Neuro. (291) | 32.5 | 4 (15) | r-mat. (5/13) | 3 | 2 | 2 | 2-3 | 2 | 2
RAVLT | Ryan et al. (1984) | Neuro./psych. (108) | 45.4 | 4 (14) | r-mat. (3/14) | 3 | 3 | 2 | 3 | NPD | 3
RAVLT | Talley (1986) | LD students (153) | 7-16^1 | 3 (11) | r-mat. (8/11) | 3 | 3 | 1 | 1 | 1 | 1
WRAML | Phelps (1995) | LD students (115) | 10.2 | 3 (9) | r-mat. (9/9) | 3 | 3 | 1 | 1 | 1 | 1
WRAML | Adams & Sheslow (2003) | Norm.^2 (1200) | 5-Adlt^1 | NA | r-mat. (6/6) | 1 | 1 | 1 | 1 | 1 | 1
WRAML | Adams & Sheslow (2003) | Norm.^3 (320) | 5-8 | 3 (6)^4 | r-mat. (15/15) | 3 | 4 | 2-3 | 1 | 2 | 2
WRAML | Adams & Sheslow (2003) | Norm.^3 (880) | 9-Adlt^1 | 3 (6)^4 | r-mat. (15/15) | 3 | 4 | 3 | 4 | 4 | 4
WRAML | Adams & Sheslow (1990) | Norm. (2363) | 5-18^1 | 3 (9) | r-mat. (9/9)^5 | 3 | 2 | 2 | 1 | 2 | 2

Note. Publ. = Publication; Fac. = Factors; Var. = Variables included in analyses; EV = Eigenvalue; PA = Parallel analysis; MAP = Velicer's (1976) minimum average partial procedure; RV = Random variable method; Ext. = Extracted; WMS = Wechsler Memory Scale; Neuro. = Neurologic; mat. = matrix; Norm. = Normative; NA = Not analyzed; NPD = PCA resulted in a not positive definite matrix; RAVLT = Rey Auditory Verbal Learning Test; Psych. = Psychiatric; LD = Learning disabled; WRAML = Wide Range Assessment of Memory and Learning; Adlt = Adult.
^1 Age range. ^2 WRAML2 Index scores. ^3 WRAML2 Subtest scores. ^4 Analyses were conducted on WRAML2 Primary subtests and made use of the total normative sample. ^5 Correlation matrix generated by averaging two normative sample correlation matrices.
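Velicer's minimum average partial procedure (the MAP column in the tables) can also be sketched in a few lines. This is an illustrative implementation of the standard algorithm, not the code used in these reanalyses: principal components are partialled out one at a time, and the retained number of components is the one that minimizes the average squared partial correlation among the residuals.

```python
import numpy as np

def velicer_map(R):
    """Velicer's (1976) minimum average partial (MAP) procedure:
    return the number of components minimizing the average squared
    partial correlation after those components are partialled out."""
    R = np.asarray(R, dtype=float)
    p = R.shape[0]
    eigval, eigvec = np.linalg.eigh(R)
    order = np.argsort(eigval)[::-1]          # descending eigenvalues
    eigval, eigvec = eigval[order], eigvec[:, order]
    off = ~np.eye(p, dtype=bool)
    fm = [np.mean(R[off] ** 2)]               # m = 0 baseline
    for m in range(1, p):
        load = eigvec[:, :m] * np.sqrt(eigval[:m])
        C = R - load @ load.T                 # partial covariances
        d = np.sqrt(np.clip(np.diag(C), 1e-12, None))
        Rp = C / np.outer(d, d)               # partial correlations
        fm.append(np.mean(Rp[off] ** 2))
    return int(np.argmin(fm))                 # 0 = no components survive
```

Because MAP asks whether systematic covariance remains after extraction, it tends to be more conservative than the eigenvalue > 1 rule, consistent with the generally smaller values in the MAP column.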

WMS. Reanalysis of Bowden et al.'s (2004) data produced 3 dimensions that reflected CHC constructs: a verbal memory dimension reflecting related WMS-R verbal subtests (Glr-MM; Glr-MA) was clearly differentiated from dimensions that reflected the broad constructs Gc and Gv, respectively (see Table A-11). My reanalysis of all WMS-III subtests in the three normative reference samples produced a 4-dimensional structure in each (Wechsler, 1997; see Table A-69, Table A-70, Table A-71). These structures are interesting because modality-specific memory tasks appeared on different dimensions and reflect different narrow CHC abilities, which is consistent with the expectation that different WMS subtests are related to different narrow abilities. For example, two samples had differentiated verbal memory dimensions (see Table A-69, Table A-71; Story Memory [Glr-MM] versus Word Lists/Verbal Paired Associates [Glr-M6/Glr-MA]). The visual memory dimensions were inconsistent across samples. For example, in one instance all visual memory tasks loaded onto a single dimension (Wechsler, 1997, Table A-71), whereas other solutions included the Visual Reproduction subtests defining a unique factor (Wechsler, 1997, Table A-70) or being associated with a dimension reflecting Gsm (Wechsler, 1997, Table A-69). A dimension of verbal and visual memory was differentiated from a specific facial memory dimension when Wilde et al.'s (2003) data were reanalyzed (see Table A-74), which is inconsistent with the CHC model. RAVLT. Reanalysis of two datasets produced single factor solutions that reflected Glr (Baños et al., 2005 [Table A-6]; Talley, 1986 [Table A-65]). Across the remaining three samples, the RAVLT appeared on similar dimensions that reflected Glr, which were differentiated from dimensions that reflected Gv (e.g., Johnstone, Vieth, Johnson, &

Shaw, 2000 [Table A-29]), Gf-Gs (Graf et al., 1995 [Table A-23]), and Gc (Ryan et al., 1984 [Table A-56]) abilities. WRAML. Reanalysis of one sample produced a differentiated factor structure that reflected specific narrow CHC constructs: Glr-M6 (Verbal Learning), Glr-MM (Story Memory), Gv-MV (Design and Picture Memory), and Gsm (e.g., Number Letter, Sentence Memory) (Adams & Sheslow, 2003; Table A-4). No other WRAML dataset produced a structure that was similarly differentiated and consistent with the CHC model. For example, two datasets produced general memory factors that were differentiated from factors that predominantly reflected Gsm (Adams & Sheslow, 1990; Table A-1) and Glr-MA (Adams & Sheslow, 2005; Table A-3). General memory factors were observed in the remaining two samples (Adams & Sheslow, 2003 [Table A-2]; Phelps, 1995 [Table A-51]), although one of these datasets was of low quality (see Table 8; Adams & Sheslow, 2003 [Table A-2]). Motor Function Measures The Finger Tapping Test (FTT), Grip Strength Test, and Purdue Pegboard Test (PPT) were the targeted motor functioning tasks (see Table 9). The FTT was hypothesized to be dependent on Gt; it was anticipated that the PPT would be related to Gs. It was not possible to link the Grip Strength Test with CHC constructs. The FTT was investigated in five studies (see Table 9) that included 6 to 39 test variables in my final analyses. Each study presented correlation matrices that included 1 (Francis, Fletcher, Rourke, & York, 1992; Macciocchi, Fowler, & Ranseen, 1992; Teng et al., 1989) or 2 (Goldstein & Shelly, 1972; Royce et al., 1976) FTT variables. The Grip Strength Test was investigated in five different samples (see Table 9), each of which included 1 Grip Strength variable. The

correlation matrices presented in these studies included 12 to 23 test variables. The PPT was also investigated in five different samples, and the number of total variables included in each study ranged from 9 to 39 (see Table 9). Each of these studies provided a correlation matrix that included 1 (Aftanas & Royce, 1969; Francis et al., 1992; Keefe et al., 2006) or 2 (Hartmann et al., 2007; Royce et al., 1976) PPT variables. 

Table 9. Targeted Motor Function measures

Measure | Study | Sample (n) | Mean Age | Publ. Fac. (Var.) | Data analyzed (Test/Tot. Var.) | Quality Rating for Test | EV > 1 | PA | MAP | RV | Fac. Ext.
FTT | Francis et al. (1992) | LD students (722) | 9-12 | 5 (12) | r-mat. (1/12) | 1 | 3 | 3 | 1-2 | 3 | 3
FTT | Goldstein & Shelly (1972) | Neuro./psych. (619) | Adult^1 | 4 (25) | r-mat. (2/25) | 1 | 4 | 2-3 | 2-4 | 2-3 | 3
FTT | Macciocchi et al. (1992) | Neuro./psych. (161) | 32.7 | 3 (9) | r-mat. (1/6) | 1 | 2 | 1 | 1-2 | 1 | 1
FTT | Royce et al. (1976) | Neuro. (176) | 40.1 | 13 (54) | r-mat. (2/39) | 1 | 11 | 3-4 | 2-6 | NPD | 5
FTT | Teng et al. (1989) | Neuro. (54) | 68.1 | 3 (10) | r-mat. (1/10) | 1 | 3 | 1 | 2 | NPD | 1
Grip | Francis et al. (1992) | LD students (722) | 9-12 | 5 (12) | r-mat. (1/12) | 1 | 3 | 3 | 1-2 | 3 | 3
Grip | Sung & Davis (1981) | Afr.-Amer. students (272) | 14-18^2 | 10-14 (17) | r-mat. (1/15) | 1 | 5 | 4 | 1-2 | 3-4 | 4
Grip | Sung & Davis (1981) | Cauc. students (547) | 14-18^2 | 10-14 (17) | r-mat. (1/15) | 1 | 5 | 3-4 | 1-2 | 3-4 | 4
Grip | Sung & Davis (1981) | Hisp. students (188) | 14-18^2 | 10-14 (17) | r-mat. (1/15) | 1 | 4 | 3-4 | 1-3 | 3 | 3
Grip | Tirre & Field (2002) | CC clients (329) | 29.0 | 8 (24) | r-mat. (1/23) | 1 | 7 | 4-6 | 2-5 | 5 | 4
PPT | Aftanas & Royce (1969) | Adults (100) | 16-70^2 | 12 (35) | r-mat. (1/27) | 3 | 9 | 1-2 | 1-3 | NPD | 3
PPT | Francis et al. (1992) | LD students (722) | 9-12 | 5 (12) | r-mat. (1/12) | 3 | 3 | 3 | 1-2 | 3 | 3
PPT | Hartmann et al. (2007) | US Veterans (4321) | 38.4^3 | 2 (19) | r-mat. (2/17) | 3 | 3 | 3 | 2-3 | 3 | 3
PPT | Keefe et al. (2006) | Psych. (1332) | 40.4 | 1 (9) | r-mat. (1/9) | 3 | 1 | 1 | 1 | 1 | 1
PPT | Royce et al. (1976) | Neuro. (176) | 40.1 | 13 (54) | r-mat. (2/39) | 3 | 11 | 3-4 | 2-6 | NPD | 5

Note. Studies that are presented in italic text had sample sizes < 100. Publ. = Publication; Fac. = Factors; Var. = Variables; EV = Eigenvalue; PA = Parallel analysis; MAP = Velicer's (1976) minimum average partial procedure; RV = Random variable method; Ext. = Extracted; FTT = Finger Tapping Test; LD = Students referred for learning disability evaluation; mat. = matrix; Neuro. = Neurologic; Psych. = Psychiatric; NPD = PCA resulted in a not positive definite matrix; Grip = Grip Strength Test; Afr.-Amer. = African-American; Cauc. = Caucasian; Hisp. = Hispanic; CC = Career counseling; PPT = Purdue Pegboard Test.
^1 Mean age not reported. ^2 Age range. ^3 Based on a sample size of 4462.

FTT. Each targeted dataset that included the FTT was rated as being of low quality; thus, it was not possible to adequately test the hypothesized relationship between the FTT and Gt. Goldstein and Shelly (1972) presented data from a diverse set of variables, and reanalysis produced a 3-dimensional solution that included markers related to (1) Gf-Gv, (2) Gc-Gsm, and (3) Gt constructs (see Table A-21). It is encouraging that the FTT provided the only significant loading on the dimension that reflected Gt. The FTT loaded onto more conceptually diverse dimensions across the remaining samples. The FTT appeared on a dimension that reflected a combination of Gt and manual strength when Francis et al.'s (1992) data were analyzed, and the test appeared on a dimension reflecting a combination of Gt and Gs when Royce et al.'s (1976) data were analyzed (see Table A-20, Table A-55). Single factor solutions were produced from the two remaining samples (Macciocchi et al., 1992 [Table A-42]; Teng et al., 1989 [Table A-67, n = 54]), which is not surprising because these datasets included only 6 or 10 test variables (see Table 9). Grip Strength. Each targeted dataset that included the Grip Strength Test was rated as being of low quality, and these results should be interpreted cautiously. The test was included on a somewhat illogical dimension that reflected Gt and manual strength in one sample (Francis et al., 1992, Table A-20). Reanalysis of the remaining four samples also produced confusing factor structures that included the Grip Strength task on dimensions that reflected Gv and Gs (Sung & Davis, 1981 [Table A-62, Table A-63, Table A-64]; Tirre & Field, 2002 [Table A-68]). These results are limited because the datasets did not include other test variables related to manual strength, thus drastically decreasing

the likelihood of finding a strength dimension, and because it was not possible to link the Grip Strength Test to specific CHC constructs. PPT. Reanalysis of Hartmann et al.'s (2007) data produced a 3-factor solution in which the PPT appeared on a dimension that reflected Gs, which was differentiated from dimensions reflecting Gv and predominantly Gf-Gc, respectively (see Table A-25). Similar results were obtained in two additional samples (Aftanas & Royce, 1969 [Table A-5]; Francis et al., 1992 [Table A-20]), as the PPT appeared on factors that reflected Gs. The PPT had mild to weak loadings on dimensions that appeared conceptually related to Gt in one sample (Royce et al., 1976 [Table A-55]), which is inconsistent with expectation. It is somewhat surprising that a complex single factor solution akin to g was produced when Keefe et al.'s (2006) data were analyzed, because the dataset included an adequate number of marker variables for a Gs dimension to be observed (see Table A-31). Visual Perception Measures The Hooper Visual Organization Test (HVOT), Rey-Osterrieth Complex Figure Test (ROCFT), and Judgment of Line Orientation (JOLO) test were the three targeted visual perception measures, which were hypothesized to be related to Gv (see Table 3). The HVOT was investigated in five studies (see Table 10). Three HVOT studies reported correlation matrices that included 1 HVOT test variable and 6 to 26 test variables overall (Aftanas & Royce, 1969; Merten, 2005; Meyers & Meyers, 1995). Two HVOT studies reported information to approximate correlation matrices that included 1 HVOT test variable and 14 total variables (Hermann, Seidenberg, Wyler, & Haltiner, 1993; Tamkin, Kunce, Blount, & Magharious, 1984). The ROCFT, or a very similar task, was investigated in five samples (see Table 10). Correlation matrices were reported in four

studies that included 2 (Pontón et al., 2000) to 5 (Meyers & Meyers, 1995) ROCFT variables and 5 to 17 total variables. The JOLO, or a very similar task, was also investigated in five studies (see Table 10), and each study included only 1 relevant JOLO test variable. Of these five studies, three included correlation matrices with 7 to 20 test variables. Approximated correlation matrices were generated in the remaining JOLO studies, and these matrices included 14 to 16 test variables. Wilde's (2006) dataset is unique because it included test variables that are very similar to the ROCFT and JOLO. The Wilde dataset has a total of 12 test variables. 

Table 10. Targeted Visual Perception measures

Measure | Study | Sample (n) | Mean Age | Publ. Fac. (Var.) | Data analyzed (Test/Tot. Var.) | Quality Rating for Test | EV > 1 | PA | MAP | RV | Fac. Ext.
HVOT | Aftanas & Royce (1969) | Adults (100) | 16-70^1 | 12 (35) | r-mat. (1/27) | 3 | 9 | 1-2 | 1-3 | NPD | 3
HVOT | Hermann et al. (1993) | Epilepsy (200) | 30.9^2 | 3 (14) | E. r-mat. (1/14) | 3 | 3 | 3 | 3 | 3 | 3
HVOT | Merten (2005) | Neuro. (200) | 53.5 | 4 (21) | r-mat. (1/20) | 3 | 4 | 2-3 | 2-4 | NPD | 3
HVOT | Meyers & Meyers (1995) | Neuro. (100) | 52.3 | 5 (7) | r-mat. (1/6) | 3 | 2 | 2 | 1 | NPD | 2
HVOT | Tamkin et al. (1984) | Psych. (43) | 41.3 | 2 (17) | E. r-mat. (1/14) | 1 | 2 | 1-2 | 2 | NPD | 2
ROCFT | Hartmann et al. (2007) | US Veterans (4321) | 38.4^3 | 2 (19) | r-mat. (3/17) | 2 | 3 | 3 | 2-3 | 3 | 3
ROCFT | Meyers & Meyers (1995) | Norm. (601) | 18-89^1 | 5 (7) | r-mat. (5/5) | 2 | 2 | 2 | 1 | 2 | 2
ROCFT | Meyers & Meyers (1995) | Neuro. (100) | 52.3 | 5 (7) | r-mat. (5/6) | 3 | 2 | 2 | 1 | NPD | 2
ROCFT | Pontón et al. (2000) | Adults (300) | 38.4 | 5 (15) | r-mat. (2/16) | 3 | 4 | 2-3 | 1-3 | 2 | 2
ROCFT | Wilde (2006) | Stroke (210) | 61.9 | 2 (12) | r-mat. (2/12) | 3 | 2 | 2 | 2 | 2 | 2
JOLO | Berry et al. (1991) | Adults (87) | 65.0^4 | 3 (16) | E. r-mat. (1/16) | 1 | 3 | 3 | 3 | NPD | 3
JOLO | Hermann et al. (1993) | Epilepsy (200) | 30.9^2 | 3 (14) | E. r-mat. (1/14) | 3 | 3 | 3 | 3 | 3 | 3
JOLO | Merten (2005) | Neuro. (200) | 53.5 | 4 (21) | r-mat. (1/20) | 3 | 4 | 2-3 | 2-4 | NPD | 3
JOLO | Silverberg et al. (2007) | TBI (77) | 43.7 | NA (7) | r-mat. (1/7) | 1 | 1 | 1 | 1 | 1 | 1
JOLO | Wilde (2006) | Stroke (210) | 61.9 | 2 (12) | r-mat. (1/12) | 3 | 2 | 2 | 2 | 2 | 2

Note. Studies that are presented in italic text had sample sizes < 100. Publ. = Publication; Fac. = Factors; Var. = Variables included in analyses; EV = Eigenvalue; PA = Parallel analysis; MAP = Velicer's (1976) minimum average partial procedure; RV = Random variable method; Ext. = Extracted; HVOT = Hooper Visual Organization Test; mat. = matrix; NPD = PCA resulted in a non-positive definite matrix; E. = estimated; Neuro. = Neurologic; Psych. = Psychiatric; ROCFT = Rey-Osterrieth Complex Figure Test; Norm. = Normative; JOLO = Judgment of Line Orientation Test; TBI = Traumatic Brain Injury.
^1 Age range. ^2 Based on sample size of 158. ^3 Based on sample size of 4462. ^4 Based on sample size of 125.

HVOT. Consistent with expectation, in three samples the HVOT loaded onto dimensions that predominantly reflected the broad construct Gv (Hermann et al., 1993 [Table A-26]; Meyers & Meyers, 1995 [Table A-46]; Tamkin et al., 1984 [Table A-66, n = 43]). In one sample the HVOT appeared on a factor that predominantly reflected Gv-Gf (Aftanas & Royce, 1969; Table A-5), and in the remaining sample the HVOT appeared on a factor of general cognitive ability (Merten, 2005; Table A-44). ROCFT. Consistent with expectation, PCA conducted across three samples produced dimensions that reflected the broad constructs Gv and Glr (Hartmann et al., 2007 [Table A-25]; Meyers & Meyers, 1995 [Table A-45, Table A-46]). ROCFT recognition subtests were differentiated from more general Gv-Glr dimensions in two samples (Meyers & Meyers, 1995; see Table A-45, Table A-46), which likely occurred because these tasks are related to Glr-MA. Reanalysis of Wilde's (2006) data produced a factor structure where a task similar to the ROCFT appeared on a dimension that predominantly reflected Gv abilities and, to a lesser degree, Glr and Gs abilities (see Table A-73). It is challenging to interpret the reanalysis of the remaining sample, Pontón et al. (2000), because the ROCFT had significant loadings on a dimension that reflected Gv, Gs, Gc, and Gsm abilities (see Table A-53). JOLO. Consistent with expectation, in two samples the JOLO loaded onto dimensions that reflected Gv (Hermann et al., 1993 [Table A-26]; Merten, 2005 [Table A-44]). Reanalysis of Wilde's (2006) data produced a factor structure that included a JOLO-like task on a dimension that mainly reflected Gv abilities (see Table A-73). It is not surprising that the poor quality studies did not support the hypothesized relationship between the JOLO and Gv. The JOLO appeared on a dimension reflecting Gv and Glr in

one small sample (Berry, Allen, & Schmitt, 1991; n = 87; Table A-7) and on a heterogeneous dimension in another (Silverberg et al., 2007; n = 77; Table A-61). Neuropsychological Batteries/Comprehensive Measures Unlike other domains where three specific measures were targeted, five comprehensive neuropsychological tests were targeted. These tests include the Halstead-Reitan Neuropsychological Battery (HRNB), NEPSY, Luria-Nebraska Neuropsychological Battery (LNNB), Dementia Rating Scale (DRS), and Neurobehavioral Cognitive Status Examination (NCSE/Cognistat). The HRNB, NEPSY, and LNNB are considered more comprehensive neuropsychological batteries, whereas the DRS and NCSE are brief screening measures that evaluate different cognitive abilities. These five measures were hypothesized to be related to many different CHC constructs (see Table 3). The HRNB was investigated in five studies, and it was possible to obtain correlation matrices with 12 to 39 total variables (see Table 11). The number of HRNB variables included in these datasets ranged from 9 (Francis et al., 1992) to 34 (Royce et al., 1976). The NEPSY-II manual (Korkman et al., 2007) included correlation matrices for 5 samples, and each dataset included 14 to 29 NEPSY variables (see Table 11). Additionally, the LNNB was investigated in 5 samples (see Table 11). It was possible to obtain two LNNB correlation matrices. One correlation matrix included 6 total variables and 2 LNNB variables (Macciocchi et al., 1992), and the other included 25 total variables, 11 of which were LNNB variables (Shelly & Goldstein, 1982). Information was provided to generate approximated correlation matrices in the remaining LNNB samples. The approximated correlation matrices included 16 (Goldstein & Shelly, 1984)

to 51 (Moses, 1983) LNNB test variables. Moses (1983, 1984) reported item-level data, and Goldstein and Shelly reported LNNB Index and Factor scales. The brief screening measures included in this project, the DRS and NCSE, were also each investigated in five samples (see Table 11). Woodard, Salthouse, Godsall, and Green (1996) provided a correlation matrix that included all 36 DRS tasks. Colantonio, Becker, and Huff (1993) provided a correlation matrix that included 17 DRS tasks/combinations of tasks. Similarly, Kessler, Roth, Kaplan, and Goode (1994) presented data that made it possible to generate a correlation matrix with 20 DRS tasks/combinations of tasks. Lyness, Hernandez, Chui, and Teng (2006) presented two correlation matrices that included only 4 and 5 DRS index scores, respectively. Four studies reported a correlation matrix or data that made it possible to generate an accurate NCSE correlation matrix (see Table 11), whereas an approximated NCSE correlation matrix was generated from data presented by Man, Tam, and Hui-Chan (2006). Each of these datasets included all 11 NCSE variables. 

Table 11. Targeted Neuropsychological Batteries/Comprehensive measures

Measure | Study | Sample (n) | Mean Age | Publ. Fac. (Var.) | Data analyzed (Test/Tot. Var.) | Quality Rating for Test | EV > 1 | PA | MAP | RV | Fac. Ext.
HRNB | Aftanas & Royce (1969) | Adults (100) | 16-70^1 | 12 (35) | r-mat. (12/27) | 3 | 9 | 1-2 | 1-3 | NPD | 3
HRNB | Francis et al. (1992) | LD students (722) | 9-12 | 5 (12) | r-mat. (9/12) | 2 | 3 | 3 | 1-2 | 3 | 3
HRNB | Goldstein & Shelly (1972) | Neuro./psych. (619) | Adult^2 | 4 (25) | r-mat. (25/25) | 3 | 4 | 2-3 | 2-4 | 2-3 | 3
HRNB | Halstead (1947) | Adults (50) | Adult^2 | 4 (13) | r-mat. (13/13) | 1 | 4 | 1 | 2 | NPD | 2
HRNB | Royce et al. (1976) | Neuro. (176) | 40.1 | 13 (54) | r-mat. (34/39) | 3 | 11 | 3-4 | 2-6 | NPD | 5
NEPSY | Korkman et al. (2007) | Norm. (200) | 3-4^1 | NA | r-mat. (14/14) | 2 | 4 | 1 | 1 | 1 | 1
NEPSY | Korkman et al. (2007) | Norm. (200) | 5-6^1 | NA | r-mat. (21/21) | 3 | 6 | 2-3 | 1-2 | 2-3 | 2
NEPSY | Korkman et al. (2007) | Norm. (600) | 7-12^1 | NA | r-mat. (26/26) | 3 | 8 | 4-5 | 1-3 | 4 | 4
NEPSY | Korkman et al. (2007) | Norm. (200) | 13-16^1 | NA | r-mat. (29/29) | 3 | 9 | 3-7 | 1-3 | 3 | 3
NEPSY | Korkman et al. (2007) | Neuro./psych. (238) | 9.7 | NA | r-mat. (26/26) | 3 | 6 | 3 | 1-4 | 2-3 | 3
LNNB | Goldstein & Shelly (1984) | Neuro./psych. (150) | 47.7 | 5 (30) | E. r-mat. (16/30) | 3 | 5 | 2 | 4-5 | NPD | 2
LNNB | Macciocchi et al. (1992) | Neuro./psych. (161) | 32.7 | 3 (9) | r-mat. (2/6) | 1 | 2 | 1 | 1-2 | 1 | 1
LNNB | Moses (1983) | Mixed^3 (542) | NR | 8 (51)^4 | E. r-mat. (51/51) | 2 | 8 | 8 | 8 | 8 | 8
LNNB | Moses (1984) | Mixed^3 (542) | NR | 5 (42)^5 | E. r-mat. (42/42) | 2 | 5 | 5 | 5 | 5 | 5
LNNB | Shelly & Goldstein (1982) | Neuro./psych. (150) | 47.5 | 4 (25) | r-mat. (11/25) | 3 | 4 | 2 | 4-6 | NPD | 4
DRS | Colantonio et al. (1993) | AD (219) | 72.7 | 3 (17) | E. r-mat. (17/17) | 2 | 3 | 3 | 3 | 2-3 | 3
DRS | Kessler et al. (1994) | Neuro./psych. (185) | 74.9 | 2 (20) | r-mat. (20/20) | 2 | 4 | 2-3 | 3-4 | 2-3 | 3
DRS | Lyness et al. (2006) | Norm.^6 (54) | 70.8 | NA | r-mat. (4/4) | 1 | 2 | 0-2 | 0-1 | 1-2 | 1
DRS | Lyness et al. (2006) | Norm.^7 (30) | 71.2 | NA | r-mat. (5/5) | 1 | 3 | 0-1 | 0-1 | 1-2 | 1
DRS | Woodard et al. (1996) | AD (171) | 75.0 | 4 (36) | r-mat. (36/36) | 2 | 10 | 4 | 4 | NPD | 4
NCSE | Logue et al. (1993) | Psych. (886) | 15-92^1 | NA | r-mat. (10/10) | 2 | 1 | 1 | 1 | 1 | 1
NCSE | Man et al. (2006) | Stroke (148) | 70.4 | 2 (10) | E. r-mat. (10/10) | 2 | 2 | 1-2 | 2 | 1 | 1
NCSE | Osmon et al. (1992) | Neuro./ortho. (36) | 76.6 | NA | r-mat. (10/10) | 1 | 3 | 1 | 1 | NPD | 1
NCSE | Schwamm et al. (1987) | Neuro. (30) | 54.2 | NA | r-mat. (10/10) | 1 | 2 | 1 | 2 | NPD | 1
NCSE | Wiederman & Morgan (1995) | Geropsych. (503) | 74.7 | NA | r-mat. (10) | 2 | 1 | 1 | 1 | 1 | 1

Note. Studies that are presented in italic text had sample sizes < 100. Publ. = Publication; Fac. = Factors; Var. = Variables; EV = Eigenvalue; PA = Parallel analysis; MAP = Velicer's (1976) minimum average partial procedure; RV = Random variable method; Ext. = Extracted; HRNB = Halstead-Reitan Neuropsychological Battery; mat. = matrix; NPD = PCA resulted in a not positive definite matrix; LD = Children referred for learning disability evaluation; Neuro. = Neurologic; Psych. = Psychiatric; NEPSY = NEPSY Neuropsychological Battery; Norm. = Normative; LNNB = Luria-Nebraska Neuropsychological Battery; E. = estimated; DRS = Mattis Dementia Rating Scale; AD = Probable and possible patients with Alzheimer's disease; NCSE = Neurobehavioral Cognitive Status Examination (Cognistat); Ortho. = Orthopedic; Geropsych. = Geropsychiatric.
^1 Age range. ^2 Mean age not reported. ^3 Mixed patient sample included 55% neurologic patients, 33% psychiatric patients, and 12% control subjects. ^4 LNNB items that comprised the Motor Scale. ^5 LNNB items that comprised the Expressive Speech Scale. ^6 Spanish-speaking participants. ^7 English-speaking participants.

HRNB. This section will focus on HRNB tests that have not previously been discussed. In four samples the Tactual Performance Test appeared on factors that predominantly reflected narrow Gv constructs (Aftanas & Royce, 1969 [Table A-5]; Francis et al., 1992 [Table A-20]; Goldstein & Shelly, 1972 [Table A-21]; Royce et al., 1976 [Table A-55]). A coherent dimension of narrow Gc abilities was observed in two samples that included meaningful loadings from WAIS verbal subtests and the AST (Goldstein & Shelly, 1972 [Table A-21]; Royce et al., 1976 [Table A-55]). The Speech Sounds Perception Test and Seashore Rhythm Test were each included in three samples, and the results did not support an association between these measures and Ga. The Speech Sounds Perception Test appeared on a dimension that reflected Gc-Gsm constructs in one solution (Goldstein & Shelly, 1972, Table A-21) and on a general factor of cognitive ability in another (Halstead, 1947 [Table A-24]). It is notable that the latter structure was generated from a sample of 50 men who had recovered from head injury. The test did not have meaningful loadings on any dimension in the remaining solution (Aftanas & Royce, 1969; Table A-5). The Seashore Rhythm Test loaded onto dimensions that reflected Gs-Gt (Aftanas & Royce, 1969 [Table A-5]) and Gv (Royce et al., 1976, Table A-55) in two samples, whereas it was unrelated to the 3 factors in a third (Goldstein & Shelly, 1972, Table A-21). It was also challenging to document consistent relationships between the Finger Number and Finger Discrimination tasks and CHC constructs, as these tasks were included in only 1 and 2 datasets, respectively (see Goldstein & Shelly, 1972 [Table A-21]; Royce et al., 1976 [Table A-55]). NEPSY. Reanalysis of each NEPSY-II dataset produced one or more dimensions that were challenging to describe concisely because they included multiple CHC

constructs (Korkman et al., 2007; Table A-33, Table A-34, Table A-35, Table A-36, Table A-37). The only concise and coherent dimension that appeared across samples reflected the Memory for Faces and Memory for Designs subtests (see Table A-34, Table A-35). These dimensions reflect the broad construct Gv and are consistent with expectation. Overall, it was very difficult to interpret these results as supporting hypothesized connections between specific NEPSY test variables and CHC constructs. A partial explanation for the discrepant factors across solutions is that 4 of the 5 samples comprised different combinations of test variables. LNNB. Reanalysis of LNNB data was limited for some of the same reasons that it was difficult to analyze the NEPSY data. Different combinations of scales, Factor scales, and items were analyzed in different datasets. In one sample, 6 of the 11 LNNB scales defined a general factor of cognition (Shelly & Goldstein, 1982, Table A-60, Factor 1). Three additional dimensions were produced in this solution: the Reading, Writing, and Arithmetic scales loaded onto a dimension that reflected Grw, Gq, and Gsm (Factor 2); the Intellectual Processes scale loaded onto a dimension that reflected Gc (Factor 3); and the Motor, Visual, and Tactile Functions scales loaded onto a dimension that reflected Gv (Factor 4). In another dataset the Reading and Expressive Speech Factor scales appeared on a dimension that predominantly reflected Grw and Gc, and the Receptive Speech and Writing Factor scales loaded equally on that dimension and another that emphasized Gv, Gs, and physical strength (Goldstein & Shelly, 1984, Table A-22). The Motor, Visual, and Tactile Functions scales loaded onto the latter dimension in this solution. It is not surprising that reanalysis of Macciocchi et al.'s (1992) data produced a single factor

of general cognitive ability, because the dataset was rated as being of low quality (see Table 11 and Table A-42). When just the LNNB Motor scale items were analyzed, a complex structure emerged (Moses, 1983, Table A-47). Some of these factors reflected narrow CHC constructs (e.g., Factor 5 reflected Grw-RD), whereas others did not (e.g., Factor 8 reflected an individual's ability to use the right hand). When the Expressive Speech items were analyzed, similar results were observed (Moses, 1984, Table A-48): some dimensions reflected very specific CHC constructs (e.g., Factor 2 reflected Grw-RD) and others were very challenging to conceptualize (e.g., Factor 4 is defined by 4 items that reflect narrow Ga, Grw, and Gs constructs). DRS. The composition of DRS test variables makes it difficult to test some of the hypothesized relationships with the CHC model because there are not an adequate number of specific marker variables. As a consequence, none of the included studies was rated as being of high quality. In two samples, Graphomotor and Construction tasks loaded onto dimensions that reflected Gv (Kessler et al., 1994 [Table A-32: Factor 2]; Woodard et al., 1996 [Table A-76: Factor 3]). The other factors observed across studies were inconsistent, and a majority of these factors were very challenging to interpret because they reflected multiple CHC constructs (e.g., Colantonio et al., 1993 [Table A-14: Factors 1 and 2]; Kessler et al., 1994 [Table A-32: Factors 1 and 3]; Woodard et al., 1996 [Table A-76: Factor 1]). Lyness et al. (2006) reported DRS index scores obtained from two small samples. Single dimensions were retained in each sample, and these dimensions did not include significant loadings from each index score (Table A-40, n = 54; Table A-41, n = 30). 

NCSE/Cognistat. Similar to the DRS, the composition of NCSE test variables makes it difficult to test hypothesized relationships with the CHC model when the measure is analyzed independently. Although specific NCSE tasks require different combinations of CHC abilities, reanalysis of five studies produced single factor solutions (Logue, Tupler, D'Amico, & Schmitt, 1993 [Table A-39]; Man et al., 2006 [Table A-43]; Osmon, Smet, Winegarden, & Gandhavadi, 1992 [Table A-50, n = 36]; Schwamm et al., 1987 [Table A-58, n = 30]; Wiederman & Morgan, 1995 [Table A-72]). These dimensions were heterogeneous in nature, and in most instances all variables had moderate to high loadings on the single dimension. Hierarchical Exploratory Factor Analysis It was anticipated that hierarchical exploratory factor analysis (HEFA) would be conducted on multi-factor solutions to determine whether factors cohere under a higher-order broad CHC construct. Although three or more marker variables are generally needed to identify a distinct factor (Velicer & Fava, 1998), some solutions generated during reanalysis included coherent dimensions that reflected as few as two test variables (e.g., Fos et al., 2000; Table A-19). Given this, further analyses were conducted only on solutions that included four or more factors. This decision is justified because it potentially permits each dimension to be defined by at least two previously identified factors. Table 12 includes each sample that produced four or more factors and the results from two factor retention procedures. Overall, these preliminary analyses indicate there was not clear support for extracting more than a single higher-order dimension (reflecting

manifestations of the construct g) from any factor correlation matrix. Given this, HEFA was not conducted.

Table 12. Targeted samples for hierarchical exploratory factor analysis

                                                        Factors Extracted
Study                      Sample (n)                   EV > 1   PA    MAP
Adams & Sheslow (2003)     Norm.1 (880)                 1        1     0-1
Boone et al. (1998)        Neuropsy./Norm. (250)        1        1     0
Korkman et al. (2007)      Norm. (600)                  1-2      1     0
Moses (1983)               Mixed (542)                  1-2      1     0-1
Moses (1984)               Mixed (542)                  1        1     0-1
Royce et al. (1976)        Neuro. (176)                 1-2      1     0-1
Shelly & Goldstein (1982)  Neuro./Psych. (150)          1        1     1
Sung & Davis (1981)        Afr.-Amer. students (272)    1        1-2   0
Sung & Davis (1981)        Cauc. students (547)         1-2      1     0
Tirre & Field (2002)       CC clients (329)             1-2      1     0
Wechsler (1997)            Norm. 16-29 yo. (400)        1        1     0
Wechsler (1997)            Norm. 30-64 yo. (400)        1        1     0
Wechsler (1997)            Norm. 65-89 yo. (450)        1        1     0-1
Woodard et al. (1996)      AD (171)                     1        1     0

Note. EV = Eigenvalues (number of eigenvalues greater than 1); PA = Parallel analysis; MAP = Velicer's (1976) minimum average partial procedure; Norm. = Normative; Neuropsy. = Neuropsychiatric; Mixed = 55% neurologic patients, 33% psychiatric patients, and 12% control subjects; Neuro. = Neurologic; Psych. = Psychiatric; Afr.-Amer. = African-American; Cauc. = Caucasian; CC = Career counseling; AD = Patients with probable or possible Alzheimer's disease. 1 WRAML2 subtest scores.

Factorial Invariance

The factorial invariance of multidimensional structures was investigated when multiple samples included identical tests and the samples were sufficiently large. Five groups of studies met this criterion and are presented in Appendix B. Barrett's (2005) Orthosim program was used to generate congruency coefficients that reflect whether factors were similar across different samples. Congruence coefficients greater than .90 are typically interpreted as indicating that factors are similar (Barrett, Petrides, Eysenck, & Eysenck, 1998), although more refined benchmarks have also been suggested (e.g.,

.98-1.00 excellent, .92-.98 good, .82-.92 borderline; see MacCallum, Widaman, Zhang, & Hong, 1999). Decisions regarding how many factors to retain in a specific sample were made independent of the decisions made when additional samples were analyzed. In some instances different numbers of factors were retained from datasets that included the same test variables. For example, a single WRAML factor was retained when Phelps's (1995) correlation matrix was analyzed, whereas two WRAML factors were retained from a correlation matrix provided by Adams and Sheslow (1990) that included exactly the same variables. To investigate the congruency of different dimensional structures it was sometimes necessary to either over- or under-extract factors so that similar x-dimensional structures could be compared.

Factor structures largely exhibited borderline to good congruency across samples (see Appendix B). For example, when 2-dimensional WRAML factor structures were extracted from normative and clinical datasets, the congruency coefficients ranged from .85 to .93 (see Table B-1). On the other hand, Sung and Davis (1981) presented data from three samples, and a majority of the congruency coefficients across 3- and 4-dimensional comparisons were less than .90 (see Table B-4). These congruency coefficients are generally lower than those observed between other samples. An additional general finding is that as fewer factors are compared, the congruency between solutions generally increases. For example, the majority of congruency coefficients between three-factor WMS-III solutions are good, whereas many congruency coefficients between four-factor solutions are lower, suggesting less congruency between factor structures (see Table B-5). A similar pattern

of increased congruency as fewer factors are retained was observed with NEPSY factor structures (see Table B-3), but not with the Sung and Davis (1981) data (see Table B-4).
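Two of the quantitative procedures referenced above, parallel analysis (the PA column in Table 12) and the congruence coefficient, can be sketched in a few lines of Python. This is a minimal illustration under simplifying assumptions (a random-normal baseline for parallel analysis, and vector-level Tucker congruence rather than Orthosim's full target-matrix fitting); the function names are invented for the sketch and do not correspond to the software actually used in this project:

```python
import numpy as np

def parallel_analysis(data, n_iter=200, percentile=95, seed=0):
    """Horn's parallel analysis: retain factors whose observed correlation-matrix
    eigenvalues exceed the chosen percentile of eigenvalues from random normal
    data of the same shape (cases x variables)."""
    rng = np.random.default_rng(seed)
    n, k = data.shape
    obs = np.sort(np.linalg.eigvalsh(np.corrcoef(data, rowvar=False)))[::-1]
    rand = np.empty((n_iter, k))
    for i in range(n_iter):
        sim = rng.standard_normal((n, k))
        rand[i] = np.sort(np.linalg.eigvalsh(np.corrcoef(sim, rowvar=False)))[::-1]
    thresholds = np.percentile(rand, percentile, axis=0)
    return int(np.sum(obs > thresholds))

def congruence(x, y):
    """Tucker's congruence coefficient between two factor-loading vectors;
    values above roughly .90 are conventionally read as 'similar' factors."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    return float(x @ y / np.sqrt((x @ x) * (y @ y)))
```

Applied to simulated data with a single strong common factor, `parallel_analysis` retains one dimension, and `congruence` returns 1.0 for proportional loading vectors and 0 for orthogonal ones, which is why over- or under-extraction across samples matters when comparing structures.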

Chapter IV

Discussion

The goal of this project was to determine whether frequently used neuropsychological measures evaluate broad and narrow CHC constructs. Although it was possible to reliably link CHC constructs with neuropsychological test variables (see Table 2), reanalysis of multiple datasets did not always support the hypothesized relationships between tests and CHC theory. The results of this project were used to create Table 13, which is a revised version of Table 3. Table 13 presents the associations between targeted measures and broad CHC constructs that were observed across samples. In this table, "+" indicates the test is related to a specific CHC construct, "--" indicates the test is not related to a specific CHC construct, and "O" indicates the construct was not adequately evaluated across samples. In some instances Table 13 includes information that was obtained from a single high-quality sample when the other targeted samples were of poor quality (e.g., the BNT). The decision was made not to include narrow CHC constructs in Table 13 because nearly all factor solutions identified broad, not narrow, CHC constructs. All targeted measures were included in Table 13.

Table 13. Documented relationships between tests and CHC broad ability constructs

CHC broad ability constructs (columns): Gf Gc Gq Grw Gsm Gv Ga Glr Gs Gt

Att./Concent.
  TMT 1:  -- -- -- +
  PASAT 1:  + O O +
  Stroop:  + O + +
Executive Func.
  WCST 1:  + O O + --
  Category Test 1:  O O O + O
    Subtest III-VII:  + O O + O
  COWA 1:  + + +
Language
  BNT 1,2:  +
  AST 1:  + O + O --
  MAE 3:  O O O O O --
Memory
  WMS-III 1
    Verbal Subtests 4:  O +
    Visual Subtests 5:  +
    Working Mem. 6:  -- O O + --
  RAVLT:  +
  WRAML2:  O -- + +
Motor Func.
  FTT 1:  O
  Grip Strength 1,7
  PPT 1:  +
Vis. Perc.
  HVOT:  +
  ROCFT 1:  + +
  JOLO:  +

Neurop. Bat.
  HRNB:  O O
    TPT:  +
  NEPSY 8:  -- -- O O -- O -- O --
    Vis. Mem. Subtests:  +
  LNNB 9
  DRS 10:  O -- O O O -- O --
    Graph./Const. Tasks:  +
  Cognistat 11:  O -- O -- O

Note. Gf = Fluid Intelligence; Gc = Crystallized Intelligence; Gq = Quantitative Intelligence; Grw = Reading/Writing Intelligence; Gsm = Short-Term Memory; Gv = Visual Processing; Ga = Auditory Processing; Glr = Long-Term Storage and Retrieval; Gs = Processing Speed; Gt = Decision Speed/Reaction Time. TMT = Trail Making Test; PASAT = Paced Auditory Serial Addition Test; WCST = Wisconsin Card Sorting Test; COWA = Controlled Oral Word Association Test; BNT = Boston Naming Test; AST = Aphasia Screening Test; MAE = Multilingual Aphasia Examination; WMS-III = Wechsler Memory Scale-Third Edition; RAVLT = Rey Auditory Verbal Learning Test; WRAML2 = Wide Range Assessment of Memory and Learning-Second Edition; FTT = Finger Tapping Test; PPT = Purdue Pegboard Test; HVOT = Hooper Visual Organization Test; ROCFT = Rey-Osterrieth Complex Figure Test; JOLO = Judgment of Line Orientation Test; HRNB = Halstead-Reitan Neuropsychological Battery; TPT = Tactual Performance Test; Vis. = Visual; Mem. = Memory; LNNB = Luria-Nebraska Neuropsychological Battery; DRS = Dementia Rating Scale; Graph. = Graphomotor; Const. = Construction. 1 This test/task is included in the HRNB. 2 This task has previously been linked to CHC ability constructs. 3 Includes the COWA and a test similar to the BNT. 4 Verbal Memory subtests include the Logical Memory, Word Lists, and Verbal Paired Associates tests, and the Auditory Recognition Delayed score. 5 Visual Memory subtests include the Visual Reproduction, Faces, and Family Pictures tests. 6 Working Memory subtests include the Digit Span, Letter-Number Sequencing, Spatial Span, and Mental Control tests. 7 The ability required by this test does not correspond with CHC constructs. 8 Includes subtests similar to the BNT, COWA, WMS, and WRAML.
9 It was not possible to adequately test relationships between the LNNB and the CHC model because of discrepant datasets. 10 Includes subtests similar to the BNT, COWA, WMS, and ROCFT. 11 Includes subtests similar to the BNT.

There are pros and cons to the simplified presentation of material in Table 13. It is in some ways conservative to report only the associations between CHC theory and test variables that were observed across multiple factor solutions, and this decision decreased the amount of information included in Table 13 (relative to Table 3). For example, because the hypothesized relationship between the FTT and Gt was observed in only one solution (see Goldstein & Shelly, 1972; Table A-21), this specific relationship is not reported. The advantage of including only the associations between CHC theory and test variables that were repeatedly observed is that clinicians can trust that the associations are more reliable and less likely to have been observed by chance.

The value of Table 13 is that it documents the relationships between popular neuropsychological measures and CHC constructs, and its use should improve assessment practice. This information is valuable when conducting cross-battery assessments. For example, if a clinician wanted to more thoroughly evaluate the broad construct Gs, reviewing Table 13 makes clear that the TMT and PPT are measures that could be considered. The PASAT and Stroop test could also be considered, with the understanding that these tasks are not pure measures of Gs because performance is also dependent on additional broad ability constructs (see Table 13).

Utilization of Table 13 in clinical practice is advantageous for several additional reasons. First, it is helpful to reference when planning which measures will be administered during an evaluation to ensure that specific constructs are not underrepresented. For example, if an individual were referred for a neuropsychological evaluation in the context of possible dementia, it would make sense to evaluate the

patient's memory, executive abilities, and verbal skills. These abilities are represented by the CHC framework, and Table 13 facilitates identifying construct-relevant measures. Table 13 can also be used to generate testable hypotheses regarding how a patient might perform during an evaluation. For example, it could be anticipated that a patient with suspected frontotemporal dementia has impaired Glr, Gf, and Gc abilities. Given this, it is likely that their performances on all measures associated with these constructs would be poor. It could also be hypothesized that this patient has intact Gv abilities and will perform better on measures dependent on this broad ability. If an expected pattern of performance is not observed, additional diagnoses would need to be considered; if this patient exhibited intact Glr abilities and only mildly inefficient Gf abilities, a diagnosis of mild cognitive impairment (MCI) may be more appropriate than frontotemporal dementia.

Lastly, it is useful to reference Table 13 when interpreting test data. It is anticipated this will foster more accurate synthesis and integration of testing data obtained from different measures. It is good practice to first interpret an individual's performance within specific broad CHC constructs (e.g., determining the general level of performance on all Gc measures), and then across constructs to determine a general level of ability and whether relative strengths or weaknesses exist.

Implications for Neuropsychological Assessment

There are some consistent findings in Table 13 that are worth highlighting. Each measure in the attention and concentration domain reflected Gs. Similarly, a majority of the visual perception measures reflected Gv, memory tests evaluated Glr, and language measures reflected Gc. These consistent findings raise the question of whether it

would be reasonable to refer to these traditional neuropsychological domains as broad CHC domains. For example, the targeted measures in the attention and concentration domain could be included within a Gs domain. It would be advantageous to refer to traditional neuropsychological domains as CHC broad constructs because this terminology has been adopted by newer cognitive measures (e.g., the WJ-III Tests of Cognitive Abilities) and would facilitate the integration of results across different measures.

Although in many respects it would be helpful to refer to traditional neuropsychological domains as CHC domains, there are potential drawbacks to this shift in conceptualization. For example, the three targeted measures of attention and concentration are associated with Gs; however, not all measures of attention and concentration would be similarly associated with Gs (e.g., the Digit Span task is associated with Gsm, and it would be inaccurate to consider it analogous to measures associated with Gs). A theoretical shift in conceptualizing assessment domains would be challenging because well-known groupings of tests (e.g., attention/concentration measures) would likely be split among more specific groupings (e.g., groupings of tests that reflect Gs and Gsm abilities), which would initially be less familiar to some clinicians. The challenge of learning how to reclassify measures into potentially unfamiliar domains (i.e., CHC constructs) is worth the effort because the groupings would be more homogeneous and consistent with the empirically supported CHC theory.

Neuropsychological assessment practice should improve by using measures specifically developed to correspond with the CHC model (e.g., the WJ-III). Table 13 illustrates that many frequently used neuropsychological measures are multifaceted (i.e.,

are related to multiple CHC constructs), and interpreting these measures in combination with those developed specifically to correspond with CHC theory, which are narrower and unidimensional, will clarify interpretation of multifaceted measures. For example, if an individual performed poorly on the WCST, it could be related to impaired Gf or Gv abilities. If that individual performed poorly on WJ-III subtests related to Gf (e.g., the Concept Formation and Analysis-Synthesis subtests) and within normal limits on subtests related to Gv (e.g., the Spatial Relations and Picture Recognition subtests), it could be inferred that poor performance on the WCST is related to impaired Gf abilities.

Implications for the CHC Model

If the CHC theory is going to be incorporated with and used to guide assessment practice, it may be beneficial to limit discussion of the model to broad constructs as opposed to more specific narrow constructs. The CHC model is complicated and challenging to grasp quickly; emphasizing 10 broad abilities as opposed to 70 narrow abilities would make it more manageable to integrate the theory with clinical practice. This shift in presentation was considered because of the current results. It was infrequent that factor solutions included differentiated dimensions that reflected narrow constructs from the same broad construct (e.g., a factor solution that included differentiated dimensions of Gf-I and Gf-RG). It was much more common for factor solutions to include broad construct dimensions that were composed of related narrower constructs. Directing clinicians' attention to more robust markers of cognitive ability (i.e., Stratum II broad abilities) is more advantageous than directing attention to more specific abilities (i.e., Stratum I narrow abilities) that are sometimes challenging to differentiate.
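The cross-battery use of a Table 13-style summary (e.g., finding candidate measures of a broad construct, and flagging multifaceted ones) can be made concrete as a small data structure. The mapping below is a hypothetical, illustrative subset only; the entries for the TMT, PPT, and HVOT follow the text, while the Stroop's second construct and the function name are assumptions of this sketch:

```python
# Hypothetical, partial encoding of a Table 13-style summary: each test maps to
# the broad CHC constructs it was observed to reflect. Illustrative subset only;
# the Stroop's second construct (Gc) is an assumption of this sketch.
TABLE_13 = {
    "TMT": {"Gs"},
    "PPT": {"Gs"},
    "Stroop": {"Gs", "Gc"},  # not a pure Gs measure per the text
    "HVOT": {"Gv"},
}

def measures_of(construct):
    """Return candidate tests for a broad construct, with each test's full
    construct set so multifaceted measures are visible at a glance."""
    return {test: sorted(constructs)
            for test, constructs in TABLE_13.items()
            if construct in constructs}
```

A query such as `measures_of("Gs")` surfaces the TMT and PPT as relatively pure candidates while exposing the Stroop as multifaceted, mirroring the planning and interpretation workflow described above.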

In some respects, it is better to use the CHC model as a blueprint for test development than as a framework to comprehensively classify preexisting measures. When tests are developed to correspond with specific CHC constructs there is a clear and strong link between them. This is not always the case when preexisting measures are linked with CHC constructs, as measures are sometimes related to a range of broad constructs (see Table 13). For example, it is unclear whether successful WCST performance is more dependent on Gf or Gv abilities. It is acceptable and logical that some measures are related to multiple CHC abilities, but the multiple relationships introduce a degree of ambiguity into interpretation (e.g., what ability best explains poor performance).

The results of this project were a first step in clarifying the relationships between neuropsychological measures and CHC theory, as evidenced by Table 13 being more coherent and usable than Table 3. Empirical data were instrumental in refining Table 3, which identified the micro-skills required to complete specific tasks. This information made it possible to generate Table 13, which reveals that in practice some micro-skills are more salient in performance than others (e.g., empirical evidence did not support the hypothesis that the WCST was related to a narrow Glr ability). Continued work towards clarifying how, and to what degree, measures are related to different CHC constructs seems to be warranted and will improve clinical practice.

Given that the CHC model will likely continue to include narrow abilities, it is curious that the model does not provide an explanation for why different visual memory tests often appear on differentiated dimensions. The CHC model includes multiple narrow Glr abilities that are specific to different types of verbal tasks (see Figure 1), but

all visual memory tasks are associated with one narrow ability (Gv-MV). The current classification of visual memory tests seems to be inadequate, and the results of this project suggest that additional Gv narrow abilities are needed. The WMS-III literature suggests the possibility of a new narrow ability related to facial memory. Three investigations have reported that the Faces subtests are different from other visual memory subtests (Hoelzle, Meyer, Pyykkonen, & Han, in press; Millis, Malina, Bowers, & Ricker, 1999; Tulsky & Price, 2003), which suggests that different specific abilities are involved. The CHC model would be more refined if revisions took these findings into account.

It is not surprising that the Grip Strength test could not be linked with a specific CHC ability construct. It is unclear whether the CHC theory should account for physical strength. On one hand, if the theory strives to explain only cognitive processes, it does not seem necessary that revisions be made to account for the Grip Strength test. However, because neuropsychologists routinely administer the Grip Strength test to generate hypotheses regarding cerebral functioning, it does seem relevant to include static strength in the CHC model. For example, performance differences between left and right hands on the FTT or Grip Strength test that exceed expectation can be strong indicators of lateralized dysfunction (e.g., see Ross, Thrasher, & Long, 1990). Although it seems challenging to determine which CHC broad construct the Grip Strength test is most similar to, reviewing Carroll's (1993) three-stratum model provides justification for grouping the ability with reaction time abilities.

Factorial Invariance

There is debate whether the dimensional structure of tests should be consistent across samples (e.g., see Delis, Jacobson, Bondi, Hamilton, & Salmon, 2003). However, prior research has not appreciated the importance of utilizing accurate factor retention methods, as these methods increase dimensional congruency across samples. It was anticipated that the present study would fill a gap in the literature because it incorporated accurate factor retention methods and included multiple datasets. It was expected that it would be possible to investigate the congruency of multiple groups of studies that included the same combination of test variables. In reality, it was only possible to investigate the congruency of five groups of studies, and the congruency of factor structures was only borderline to good (see Appendix B).

The question of whether using accurate factor retention methods increases the likelihood of observing similar factor structures across different samples is complicated because in many instances these procedures produced ambiguous results (e.g., see Table 8; MAP procedures recommended retaining 2 to 8 factors in two Wechsler [1997] datasets). What can be taken from these results is that in some instances the congruency of factor structures increased as fewer dimensions were extracted (e.g., see Tables B-1, B-3, and B-5). These results demonstrated the value of Barrett's (2005) orthogonal vector matrix comparison program for identifying a core, replicable factor structure across multiple samples.

It would be optimal for future research to investigate the congruency of factor structures that are comprised of well-defined dimensions. It may be possible to use aspects of the collected data (e.g., choose specific groupings of variables across datasets

and investigate the congruency of resulting factor structures) to better understand the impact of factor retention procedures and whether these procedures increase congruency across factor structures.

Methodological Limitations

The most significant limitation of this project is that the results were dependent on the availability and quality of datasets. Some targeted measures were impacted more significantly by this limitation than others. For example, there were many quality WMS datasets to choose from that included large samples and a range of test variables, whereas all FTT and Grip Strength datasets were rated as being of poor quality. When fewer studies were identified by search strategies, it increased the likelihood that less optimal studies (i.e., smaller sample sizes, fewer test variables) were ultimately analyzed. Including datasets of poor quality made it challenging to evaluate whether hypothesized relationships between tests and CHC abilities were correct.

The composition of test variables included in datasets significantly impacted the final solutions. It was more likely that a targeted measure would appear on a dimension consistent with expectation if the dataset included additional measures related to that construct. For example, when two FTT variables were included in a dataset, the final solution included a dimension that reflected Gt (see Goldstein & Shelly, 1972; Table A-21), whereas when only one FTT variable was included in other datasets it loaded onto dimensions that reflected a range of CHC constructs (e.g., see Francis et al., 1992; Table A-20). The FTT was not linked with a specific CHC construct in Table 13 because consistent relationships between the test and extracted dimensions were not observed across multiple samples. It is hypothesized that the FTT would consistently appear on

dimensions that reflected Gt if datasets included an adequate number of Gt marker variables.

An issue not yet fully addressed concerns the validity of my judgments classifying tests according to CHC constructs. Even when an expected factor was observed, such as Gv defined by HVOT scores, one could potentially question whether the underlying dimension really was Gv as opposed to some other construct. Although my classifications were very reliable and based on explicit links between test requirements and CHC definitions, it remains the case that most of the tests were classified based on my judgments rather than the consensus judgments of CHC experts. This is a limitation of the study that potentially impacts the validity of the results. The issue is less salient when datasets include test variables that have previously been linked to CHC theory by others (e.g., McGrew & Flanagan, 1998), because these variables serve as an interpretive safeguard and provide external validation of relationships between measures and CHC constructs.

The reanalysis of comprehensive neuropsychological batteries was significantly limited because datasets included different combinations of test variables. For example, it was challenging to identify how the LNNB was related to CHC theory because some datasets included Index scores while others provided Factor scores or item-level data. The likelihood of observing consistent relationships between the LNNB and CHC theory would increase if multiple LNNB datasets were available that provided similar types of data. However, because one of the many significant limitations of the LNNB is that specific scales require a broad range of abilities (e.g., see Adams, 1980; Crosson &

Warren, 1982), it is likely that even with optimal datasets it would continue to be challenging to identify how the LNNB is related to CHC theory.

Conclusion

This project attempted to bridge the gap between neuropsychological assessment and the CHC model. Reanalysis of multiple datasets produced results that largely clarified relationships between popular neuropsychological measures and CHC theory. These results make clear that it is possible to integrate neuropsychological assessment practice with CHC theory. Continued research towards understanding how neuropsychological measures are related to CHC constructs is recommended. There is reason to believe that if optimal datasets are obtained, it would be possible to document additional relationships between neuropsychological tests and CHC theory. In the event that future research does not support a specific relationship between a neuropsychological test and the CHC theory, this should foster critical questioning of both the measure and the theory, which would ultimately improve the practice of assessment and the understanding of cognitive abilities.

References

Ackerman, P. L., Beier, M. E., & Boyle, M. O. (2002). Individual differences in working memory within a nomological network of cognitive and perceptual speed abilities. Journal of Experimental Psychology: General, 131, 567-589.

Ackerman, P. L., Bowen, K. R., Beier, M. E., & Kanfer, R. (2001). Determinants of individual differences and gender differences in knowledge. Journal of Educational Psychology, 93, 797-825.

Adams, K. M. (1980). In search of Luria's battery: A false start. Journal of Consulting and Clinical Psychology, 48, 511-516.

Adams, W., & Sheslow, D. (1990). Wide Range Assessment of Memory and Learning: Administration manual. Wilmington, DE: Jastak.

Adams, W., & Sheslow, D. (2003). Wide Range Assessment of Memory and Learning, Second Edition: Administration and technical manual. Wilmington, DE: Wide Range.

Aftanas, M. S., & Royce, J. R. (1969). A factor analysis of brain damage tests administered to normal subjects with factor score comparisons across ages. Multivariate Behavioral Research, 4, 459-481.

Alfonso, V. C., Flanagan, D. P., & Radwan, S. (2005). The impact of the Cattell-Horn-Carroll theory on test development and interpretation of cognitive and academic abilities. In D. P. Flanagan & P. L. Harrison (Eds.), Contemporary intellectual assessment: Theories, tests, and issues (2nd ed., pp. 185-202). New York: Guilford.

Banich, M. T. (2004). Cognitive neuroscience and neuropsychology (2nd ed.). New York: Houghton Mifflin.

Baños, J. H., Elliott, T. R., & Schmitt, M. (2005). Factor structure of the Rey Auditory Verbal Learning Test in adults with spinal cord injury. Rehabilitation Psychology, 50, 375-380.

Barrett, P. T. (2005). Orthosim 2 (version 2.0): Target-comparison matrix fitting [Computer software and manual]. Retrieved January 13, 2006, from http://www.pbarrett.net/factor_similarity.htm

Barrett, P. T., Petrides, K. V., Eysenck, S. B. G., & Eysenck, H. J. (1998). The Eysenck Personality Questionnaire: An examination of the factorial similarity of P, E, N, and L across 34 countries. Personality and Individual Differences, 25, 805-819.

Bejar, I. I. (1980). Biased assessment of program impact due to psychometric artifacts. Psychological Bulletin, 87, 513-524.

Benton, A. L., & Hamsher, K. (1978). Multilingual Aphasia Examination. Iowa City, IA: AJA Associates.

Benton, A. L., & Hamsher, K. (1989). Multilingual Aphasia Examination (2nd ed.). Iowa City, IA: AJA Associates.

Benton, A. L., Hamsher, K., Rey, G. J., & Sivan, A. B. (1994). Multilingual Aphasia Examination (3rd ed.). Iowa City, IA: AJA Associates.

Benton, A. L., Varney, N. R., & Hamsher, K. (1978). Visuospatial judgment: A clinical test. Archives of Neurology, 35, 364-367.

Berry, D. T. R., Allen, R. S., & Schmitt, F. A. (1991). Rey-Osterrieth Complex Figure: Psychometric characteristics in a geriatric sample. The Clinical Neuropsychologist, 5, 143-153.

Binet, A., & Simon, T. (1905). Méthodes nouvelles pour le diagnostic du niveau intellectuel des anormaux. L'Année Psychologique, 11, 191-244.

Boone, K. B., Pontón, M. O., Gorsuch, R. L., González, J. J., & Miller, B. L. (1998). Factor analysis of four measures of prefrontal lobe functioning. Archives of Clinical Neuropsychology, 13, 585-595.

Bowden, S. C., Cook, M. J., Bardenhagen, F. J., Shores, E. A., & Carstairs, J. R. (2004). Measurement invariance of core cognitive abilities in heterogeneous neurological and community samples. Intelligence, 32, 363-389.

Bowden, S. C., Fowler, K. S., Bell, R. C., Whelan, G., Clifford, C. C., Ritter, A. J., et al. (1998). The reliability and internal validity of the Wisconsin Card Sorting Test. Neuropsychological Rehabilitation, 8, 243-254.

Brinkman, J. J., Decker, S. L., & Dean, R. S. (2005). Assessing and understanding brain function through neuropsychologically based ability tests. In R. C. D'Amato, E. Fletcher-Janzen, & C. R. Reynolds (Eds.), Handbook of school neuropsychology (pp. 303-326). Hoboken, NJ: John Wiley & Sons.

Byrne, B. M., Shavelson, R. J., & Muthén, B. (1989). Testing for the equivalence of factorial covariance and mean structures: The issue of partial measurement invariance. Psychological Bulletin, 105, 456-466.

Camara, W. J., Nathan, J. S., & Puente, A. E. (2000). Psychological test usage: Implications in professional psychology. Professional Psychology: Research and Practice, 31, 141-154.

Carroll, J. B. (1993). Human cognitive abilities: A survey of factor-analytic studies. New York: Cambridge University Press.

Carroll, J. B. (1995). On methodology in the study of cognitive abilities. Multivariate Behavioral Research, 30, 429-452.

Carroll, J. B. (1997). The three-stratum theory of cognitive abilities. In D. P. Flanagan, J. L. Genshaft, & P. L. Harrison (Eds.), Contemporary intellectual assessment: Theories, tests, and issues (pp. 122-130). New York: Guilford.

Carroll, J. B. (2003). The higher-stratum structure of cognitive abilities: Current evidence supports g and about ten broad factors. In H. Nyborg (Ed.), The scientific study of general intelligence: Tribute to Arthur R. Jensen (pp. 5-22). Amsterdam: Pergamon Press.

Cattell, R. B. (1941). Some theoretical issues in adult intelligence testing. Psychological Bulletin, 38, 592.

Cattell, R. B. (1943). The measurement of adult intelligence. Psychological Bulletin, 40, 153-193.

Cattell, R. B. (1957). Personality and motivation structure and measurement. New York: World Book.

Cattell, R. B. (1971). Abilities: Their structure, growth, and action. Boston: Houghton Mifflin.

Chen, J. Q., & Gardner, H. (2005). Assessment based on multiple-intelligences theory. In D. P. Flanagan & P. L. Harrison (Eds.), Contemporary intellectual assessment: Theories, tests, and issues (2nd ed., pp. 77-102). New York: Guilford.

Cicchetti, D. V. (1994). Guidelines, criteria, and rules of thumb for evaluating normed and standardized assessment instruments in psychology. Psychological Assessment, 6, 284-290.

Cohen, R., Kelter, S., & Woll, G. (1980). Analytical competence and language impairment in aphasia. Brain and Language, 10, 331-347.

Colantonio, A., Becker, J. T., & Huff, J. (1993). Factor structure of the Mattis Dementia Rating Scale among patients with probable Alzheimer's disease. The Clinical Neuropsychologist, 7, 313-318.

Comrey, A. L., & Lee, H. B. (1992). A first course in factor analysis (2nd ed.). Hillsdale, NJ: Lawrence Erlbaum Associates.

Corwin, J., & Bylsma, F. W. (1993). Translations of excerpts from André Rey's "Psychological examination of traumatic brain encephalopathy" and P. A. Osterrieth's "The Complex Figure Copy Test." The Clinical Neuropsychologist, 7, 3-15.

Costa, P. T., & McCrae, R. R. (1992). Revised NEO Personality Inventory (NEO PI-R) and NEO Five-Factor Inventory (NEO-FFI) professional manual. Odessa, FL: Psychological Assessment Resources.

Costa, P. T., & Piedmont, R. L. (2003). Multivariate assessment: NEO PI-R profiles of Madeline G. In J. S. Wiggins (Ed.), Paradigms of personality assessment (pp. 262-280). New York: Guilford.

Cota, A. A., Longman, R. S., Holden, R. R., Fekken, G. C., & Xinaris, S. (1993). Interpolating 95th percentile eigenvalues from random data: An empirical example. Educational and Psychological Measurement, 53, 585-596.

Crawford, J. R., Obonsawin, M. C., & Allan, K. M. (1998). PASAT and components of WAIS-R performance: Convergent and discriminant validity. Neuropsychological Rehabilitation, 8, 255-272.

Crosson, B., & Warren, R. L. (1982). Use of the Luria-Nebraska Neuropsychological Battery in aphasia: A conceptual critique. Journal of Consulting and Clinical Psychology, 50, 22-31.

Cunningham, W. R. (1991). Issues in factorial invariance. In L. M. Collins & J. L. Horn (Eds.), Best methods for the analysis of change: Recent advances, unanswered questions, future directions. Washington, DC: American Psychological Association.

Das, J. P. (2004). Theories of intelligence: Issues and applications. In G. Goldstein & S. Beers (Eds.), Comprehensive handbook of psychological assessment: Intellectual and neuropsychological assessment (pp. 5-23). Hoboken, NJ: John Wiley & Sons, Inc.

Das, J. P., Kar, B. C., & Parrila, R. K. (1996). Cognitive planning: The psychological basis of intelligent behavior. Thousand Oaks, CA: Sage.

Das, J. P., Naglieri, J. A., & Kirby, J. R. (1994). Assessment of cognitive processes. Boston: Allyn & Bacon.

de Jong, P. F., & Das-Smaal, E. A. (1993). Factor structure of standard attention tests for children: A distinction between perceptual speed and working memory. European Journal of Psychological Assessment, 9, 94-106.

Dean, R. S., & Woodcock, R. W. (2003). Dean-Woodcock Neuropsychological Assessment System. Itasca, IL: Riverside.

Deary, I. J., Langan, S. J., Hepburn, D. A., & Frier, B. M. (1991). Which abilities does the PASAT test? Personality and Individual Differences, 12, 983-987.

Delis, D. C., Jacobson, M., Bondi, M. W., Hamilton, J. N., & Salmon, D. P. (2003). The myth of testing construct validity using factor analysis or correlations with normal or mixed clinical populations: Lessons from memory assessment. Journal of the International Neuropsychological Society, 9, 936-946.

Delis, D. C., Kramer, J. H., Kaplan, E., & Ober, B. A. (1987). California Verbal Learning Test. San Antonio, TX: The Psychological Corporation.

Delis, D. C., Kramer, J. H., Kaplan, E., & Ober, B. A. (2000). California Verbal Learning Test-Second Edition, Adult Version. San Antonio, TX: The Psychological Corporation.

Demakis, G. J. (2006). Meta-analysis in neuropsychology: Basic approaches, findings, and applications. The Clinical Neuropsychologist, 20, 10-26.

Ekstrom, R. B., French, J. W., & Harman, H. H. (1979). Cognitive factors: Their identification and replication. Multivariate Behavioral Research Monographs, 79, 84.

Evans, J. J., Floyd, R. G., McGrew, K. S., & Leforgee, M. H. (2002). The relations between measures of Cattell-Horn-Carroll (CHC) cognitive abilities and reading achievement during childhood and adolescence. School Psychology Review, 31, 246-262.

Fava, J. L., & Velicer, W. F. (1992). An empirical comparison of factor, image, component, and scale scores. Multivariate Behavioral Research, 27, 301-322.

Flanagan, D. P., Genshaft, J. L., & Harrison, P. L. (1997). Contemporary intellectual assessment: Theories, tests, and issues. New York: Guilford.

Flanagan, D. P., & Harrison, P. L. (2005). Contemporary intellectual assessment: Theories, tests, and issues (2nd ed.). New York: Guilford.

Flanagan, D. P., McGrew, K. S., & Ortiz, S. O. (2000). The Wechsler Intelligence Scales and Gf-Gc theory: A contemporary approach to interpretation. Boston: Allyn and Bacon.

Flanagan, D. P., Ortiz, S. O., Alfonso, V. C., & Mascolo, J. T. (2002). The achievement test desk reference (ATDR): Comprehensive assessment and learning disabilities. Boston: Allyn & Bacon.

Floyd, R. G. (2005). Information-processing approaches to interpretation of contemporary intellectual assessment instruments. In D. P. Flanagan & P. L. Harrison (Eds.), Contemporary intellectual assessment: Theories, tests, and issues (2nd ed., pp. 203-233). New York: Guilford.

Floyd, R. G., Evans, J. J., & McGrew, K. S. (2003). Relations between measures of Cattell-Horn-Carroll (CHC) cognitive abilities and mathematics achievement across the school-age years. Psychology in the Schools, 40, 155-171.

Fos, L. A., Greve, K. W., South, M. B., Mathias, C., & Benefield, H. (2000). Paced Visual Serial Addition Test: An alternative measure of information processing speed. Applied Neuropsychology, 7, 140-146.

Francis, D. J., Fletcher, J. M., Rourke, B. P., & York, M. J. (1992). A five-factor model of motor, psychomotor, and visual-spatial tests used in neuropsychological assessment of children. Journal of Clinical and Experimental Neuropsychology, 14, 625-637.

Gardner, H. (1983). Frames of mind: The theory of multiple intelligences. New York: Basic Books.

Gardner, H. (1993). Frames of mind: The theory of multiple intelligences (10th anniversary ed.). New York: Basic Books.

Gardner, H. (1999). Intelligence reframed: Multiple intelligences for the 21st century. New York: Basic Books.

Gardner, H. (2006). On failing to grasp the concept of MI theory: A response to Visser et al. Intelligence, 34, 503-505.

Glorfeld, L. (1995). An improvement on Horn's parallel analysis methodology for selecting the correct number of factors to retain. Educational and Psychological Measurement, 55, 377-393.

Goldberg, L. R., & Velicer, W. F. (2006). Principles of exploratory factor analysis. In S. Strack (Ed.), Differentiating normal and abnormal personality (2nd ed., pp. 209-237). New York, NY: Springer.

Golden, C. J., Purisch, A. D., & Hammeke, T. A. (1985). Luria-Nebraska Neuropsychological Battery: Forms I and II. Los Angeles: Western Psychological Services.

Goldstein, G., & Beers, S. (2004). Comprehensive handbook of psychological assessment: Intellectual and neuropsychological assessment. Hoboken, NJ: John Wiley & Sons, Inc.

Goldstein, G., & Shelly, C. H. (1972). Statistical and normative studies of the Halstead Neuropsychological Test Battery relevant to a neuropsychiatric hospital setting. Perceptual and Motor Skills, 34, 603-620.

Goldstein, G., & Shelly, C. H. (1984). Relationship between language skills as assessed by the Halstead-Reitan Battery and the Luria-Nebraska language-related factor scales in a nonaphasic patient population. Journal of Clinical Neuropsychology, 6, 143-156.

Gorsuch, R. L. (1983). Factor analysis (2nd ed.). Hillsdale, NJ: Lawrence Erlbaum Associates.

Graf, P., Uttl, B., & Tuokko, H. (1995). Color- and picture-word Stroop tests: Performance changes in old age. Journal of Clinical and Experimental Neuropsychology, 17, 390-415.

Greve, K. W., Stickle, T. R., Love, J. M., Bianchini, K. J., & Stanford, M. S. (2005). Latent structure of the Wisconsin Card Sorting Test: A confirmatory factor analytic study. Archives of Clinical Neuropsychology, 20, 355-364.

Gronwall, D. M. A. (1977). Paced Auditory Serial-Addition Task: A measure of recovery from concussion. Perceptual and Motor Skills, 44, 367-373.

Guilford, J. P. (1959). Three faces of intellect. American Psychologist, 14, 469-479.

Guilford, J. P. (1967). The nature of human intelligence. New York: McGraw-Hill.

Gustafsson, J. E. (1984). A unifying model for the structure of intellectual abilities. Intelligence, 8, 179-203.

Hakstian, A. R., & Cattell, R. B. (1975). The comprehensive ability battery. Champaign, IL: Institute for Personality and Ability Testing.

Halstead, W. C. (1947). Brain and intelligence: A quantitative study of the frontal lobes. Chicago: University of Chicago Press.

Halstead, W. C., & Wepman, J. M. (1949). The Halstead-Wepman Aphasia Screening Test. Journal of Speech and Hearing Disorders, 14, 9-15.

Hartmann, P., Kruuse, N. H. S., & Nyborg, H. (2007). Testing the cross-racial generality of Spearman's hypothesis in two samples. Intelligence, 35, 47-57.

Heaton, R. K. (1981). Wisconsin Card Sorting Test manual. Odessa, FL: Psychological Assessment Resources, Inc.

Heaton, R. K., Chelune, G. J., Talley, J. L., Kay, G. G., & Curtis, G. (1993). Wisconsin Card Sorting Test manual: Revised and expanded. Odessa, FL: Psychological Assessment Resources.

Hermann, B. P., Seidenberg, M., Wyler, A., & Haltiner, A. (1993). Dissociation of object recognition and spatial localization abilities following temporal lobe lesions in humans. Neuropsychology, 7, 343-350.

Hill, T. D., Reddon, J. R., & Jackson, D. N. (1985). The factor structure of the Wechsler scales: A brief review. Clinical Psychology Review, 5, 287-306.

Hoelzle, J. B., Meyer, G. J., Pyykkonen, B. A., & Han, D. Y. (in press). The Wechsler Memory Scale-Third Edition: Rethinking the factor structure. The Clinical Neuropsychologist.

Holtzer, R., Verghese, J., Xue, X., & Lipton, R. B. (2006). Cognitive processes related to gait velocity: Results from the Einstein Aging Study. Neuropsychology, 20, 215-223.

Hooper, H. E. (1958). The Hooper Visual Organization Test: Manual. Beverly Hills, CA: Western Psychological Services.

Hooper, H. E. (1983). The Hooper Visual Organization Test: Manual. Los Angeles: Western Psychological Services.

Horn, J. L. (1965). Fluid and crystallized intelligence: A factor analytic and developmental study of the structure among primary mental abilities. Unpublished doctoral dissertation, University of Illinois, Champaign.

Horn, J. L. (1968). Organization of abilities and the development of intelligence. Psychological Review, 75, 242-259.

Horn, J. L. (1972). The structure of intellect: Primary abilities. In R. M. Dreger (Ed.), Multivariate personality research (pp. 451-511). Baton Rouge, LA: Claitor's.

Horn, J. L. (1985). Remolding old models of intelligence. In B. B. Wolman (Ed.), Handbook of intelligence: Theories, measurement and applications (pp. 267-300). New York: Wiley.

Horn, J. L. (1988). Thinking about human abilities. In J. R. Nesselroade (Ed.), Handbook of multivariate psychology (pp. 645-685). New York: Academic Press.

Horn, J. L. (1989). Models for intelligence. In R. Linn (Ed.), Intelligence: Measurement, theory, and public policy (pp. 29-73). Urbana, IL: University of Illinois Press.

Horn, J. L. (1991). Measurement of intellectual capabilities: A review of theory. In K. S. McGrew, J. K. Werder, & R. W. Woodcock, WJ-R technical manual (pp. 197-232). Chicago: Riverside.

Horn, J. L. (1998). A basis for research on age differences in cognitive capabilities. In J. J. McArdle & R. W. Woodcock (Eds.), Human cognitive abilities in theory and practice (pp. 57-91). Mahwah, NJ: Erlbaum.

Horn, J. L., & Blankson, N. (2005). Foundations for better understanding of cognitive abilities. In D. P. Flanagan & P. L. Harrison (Eds.), Contemporary intellectual assessment: Theories, tests, and issues (2nd ed., pp. 41-68). New York: Guilford.

Horn, J. L., & Cattell, R. B. (1966). Refinement and test of the theory of fluid and crystallized intelligence. Journal of Educational Psychology, 57, 253-270.

Horn, J. L., McArdle, J. J., & Mason, R. (1983). When is invariance not invariant: A practical scientist's look at the ethereal concept of factor invariance. The Southern Psychologist, 1, 179-188.

Horn, J. L., & Noll, J. (1997). Human cognitive capabilities: Gf-Gc theory. In D. P. Flanagan, J. L. Genshaft, & P. L. Harrison (Eds.), Contemporary intellectual assessment: Theories, tests, and issues (pp. 53-91). New York: Guilford.

Hubbard, R., & Allen, S. J. (1987). An empirical comparison of alternate methods for principal components extraction. Journal of Business Research, 15, 173-190.

Humphreys, L. G., & Lubinski, D. (1996). Assessing spatial visualization: An underappreciated ability for many school and work settings. In C. P. Benbow & D. Lubinski (Eds.), Intellectual talent: Psychometric and social issues (pp. 116-140). Baltimore, MD: Johns Hopkins University Press.

Jastak, J., & Jastak, S. (1978). The Wide Range Achievement Test. Wilmington, DE: Jastak Associates.

Jastak, S., & Wilkinson, G. S. (1984). The Wide Range Achievement Test-Revised. Wilmington, DE: Jastak Associates.

Jensen, A. R., & Rohwer, W. D. (1966). The Stroop Color-Word Test: A review. Acta Psychologica, 25, 36-93.

Johnson, W., & Bouchard, T. J. (2005a). The structure of human intelligence: It is verbal, perceptual, and image rotation (VPR), not fluid and crystallized. Intelligence, 33, 393-416.

Johnson, W., & Bouchard, T. J. (2005b). Constructive replication of the visual-perceptual-image rotation model in Thurstone's (1941) battery of 60 tests of mental ability. Intelligence, 33, 417-430.

Johnstone, B., Holland, D., & Hewett, J. E. (1997). The construct validity of the Category Test: Is it a measure of reasoning or intelligence? Psychological Assessment, 9, 28-33.

Johnstone, B., Vieth, A. Z., Johnson, J. C., & Shaw, J. A. (2000). Recall as a function of single versus multiple trials: Implications for rehabilitation. Rehabilitation Psychology, 45, 3-19.

Jones, J. J. S., van Schaik, P., & Witts, P. (2006). A factor analysis of the Wechsler Adult Intelligence Scale-Third Edition (WAIS-III) in a low IQ sample. British Journal of Clinical Psychology, 45, 145-152.

Jones, S. N., & Ayers, C. R. (2006). Psychometric properties and factor structure of an expanded CERAD neuropsychological battery in an elderly VA sample. Archives of Clinical Neuropsychology, 21, 359-365.

Jöreskog, K. G. (1971). Simultaneous factor analysis in several populations. Psychometrika, 36, 409-426.

Jurica, P. J., Leitten, C. L., & Mattis, S. (2001). Dementia Rating Scale-2. Odessa, FL: Psychological Assessment Resources.

Kaiser, H. F. (1960). The application of electronic computers to factor analysis. Educational and Psychological Measurement, 20, 141-151.

Kamphaus, R. W. (1993). Clinical assessment of children's intelligence. Boston: Allyn and Bacon.

Kaplan, E. P., Goodglass, H., & Weintraub, S. (1978, 1983). The Boston Naming Test: Experimental edition. Boston: Kaplan & Goodglass.

Kaplan, E. P., Goodglass, H., & Weintraub, S. (2001). The Boston Naming Test (2nd ed.). Philadelphia, PA: Lippincott Williams & Wilkins.

Kaufman, A. S., & Kaufman, N. L. (1993). The Kaufman Adolescent and Adult Intelligence Test. Circle Pines, MN: American Guidance Service.

Kaufman, A. S., & Lichtenberger, E. O. (2002). Assessing adolescent and adult intelligence (2nd ed.). Boston: Allyn & Bacon.

Keefe, R. S. E., Bilder, R. M., Harvey, P. D., Davis, S. M., Palmer, B. W., Gold, J. M., et al. (2006). Baseline neurocognitive deficits in the CATIE schizophrenia trial. Neuropsychopharmacology, 31, 2033-2046.

Kessler, H. R., Roth, D. L., Kaplan, R. F., & Goode, K. T. (1994). Confirmatory factor analysis of the Mattis Dementia Rating Scale. The Clinical Neuropsychologist, 8, 451-461.

Korkman, M., Kirk, U., & Kemp, S. (1998). NEPSY: A developmental neuropsychological assessment manual. San Antonio, TX: Psychological Corporation.

Korkman, M., Kirk, U., & Kemp, S. (2007). NEPSY-II: Clinical and interpretive manual. San Antonio, TX: Harcourt Assessment.

Labouvie, E. W. (1980). Identity versus equivalence of psychological measures and constructs. In L. W. Poon (Ed.), Aging in the 1980s (pp. 493-502). Washington, DC: American Psychological Association.

Lehto, J. E., Juujärvi, P., Kooistra, L., & Pulkkinen, L. (2003). Dimensions of executive functioning: Evidence from children. British Journal of Developmental Psychology, 21, 59-80.

Lezak, M. D. (2003). Principles of neuropsychological assessment. In T. E. Feinberg & M. J. Farah (Eds.), Behavioral neurology and neuropsychology (2nd ed., pp. 33-44). New York: McGraw-Hill.

Lezak, M. D., Howieson, D. B., & Loring, D. W. (2004). Neuropsychological assessment (4th ed.). New York: Oxford.

Logue, P. E., Tupler, L. A., D'Amico, C., & Schmitt, F. A. (1993). The Neurobehavioral Cognitive Status Examination: Psychometric properties in use with psychiatric inpatients. Journal of Clinical Psychology, 49, 80-89.

Longman, R. S., Cota, A. A., Holden, R. R., & Fekken, G. C. (1989). A regression equation for the parallel analysis criterion in principal components analysis: Mean and 95th percentile eigenvalues. Multivariate Behavioral Research, 24, 59-69.

Luria, A. R. (1966). Human brain and psychological processes. New York: Harper & Row.

Luria, A. R. (1973). The working brain. New York: Basic Books.

Luria, A. R. (1980). Higher cortical functions in man (2nd ed.). New York: Basic Books.

Lyness, S. A., Hernandez, I., Chui, H. C., & Teng, E. L. (2006). Performance of Spanish speakers on the Mattis Dementia Rating Scale (MDRS). Archives of Clinical Neuropsychology, 21, 827-836.

MacCallum, R. C., Widaman, K. F., Zhang, S., & Hong, S. (1999). Sample size in factor analysis. Psychological Methods, 4, 84-99.

Macciocchi, S. N., Fowler, P. C., & Ranseen, J. D. (1992). Trait analyses of the Luria-Nebraska Intellectual Processes, Motor Functions, and Memory Scales. Archives of Clinical Neuropsychology, 7, 541-551.

Man, D. W., Tam, S. F., & Hui-Chan, C. (2006). Prediction of functional rehabilitation outcomes in clients with stroke. Brain Injury, 20, 205-211.

Matthews, C. G., & Kløve, H. (1964). Instruction manual for the Adult Neuropsychology Test Battery. Madison, WI: University of Wisconsin Medical School.

Mattis, S. (1988). Dementia Rating Scale: Professional manual. Odessa, FL: Psychological Assessment Resources.

McCrae, R. R., Zonderman, A. B., Costa, P. T., Jr., Bond, M. H., & Paunonen, S. V. (1996). Evaluating replicability of factors in the Revised NEO Personality Inventory: Confirmatory factor analysis versus Procrustes rotation. Journal of Personality and Social Psychology, 70, 552-566.

McGrew, K. S. (1997). Analysis of the major intelligence batteries according to a proposed comprehensive Gf-Gc framework. In D. P. Flanagan, J. L. Genshaft, & P. L. Harrison (Eds.), Contemporary intellectual assessment: Theories, tests, and issues (pp. 151-180). New York: Guilford.

McGrew, K. S. (2005). The Cattell-Horn-Carroll theory of cognitive abilities: Past, present, and future. In D. P. Flanagan & P. L. Harrison (Eds.), Contemporary intellectual assessment: Theories, tests, and issues (2nd ed., pp. 136-181). New York: Guilford.

McGrew, K. S., & Flanagan, D. P. (1998). An intelligence test desk reference (ITDR): A cross-battery approach to intelligence test interpretation. Boston: Allyn & Bacon.

McGrew, K. S., & Woodcock, R. W. (2001). Woodcock-Johnson III technical manual. Itasca, IL: Riverside Publishing.

Meredith, W. (1993). Measurement invariance, factor analysis and factorial invariance. Psychometrika, 58, 525-543.

Merten, T. (2005). Factor structure of the Hooper Visual Organization Test: A cross-cultural replication and extension. Archives of Clinical Neuropsychology, 20, 123-128.

Messick, S. (1995). Validity of psychological assessment: Validation of inferences from persons' responses and performances as scientific inquiry into score meaning. In A. E. Kazdin (Ed.), Methodological issues and strategies in clinical research (2nd ed., pp. 241-261). Washington, DC: American Psychological Association.

Meyer, G. J., Finn, S. E., Eyde, L. D., Kay, G. G., Moreland, K. L., Dies, R. R., et al. (2001). Psychological testing and psychological assessment: A review of evidence and issues. American Psychologist, 56, 128-165.

Meyers, J. E., & Meyers, K. R. (1995). Rey Complex Figure Test and Recognition Trial: Professional manual. Odessa, FL: Psychological Assessment Resources.

Millis, S. R., Malina, A. C., Bowers, D. A., & Ricker, J. H. (1999). Confirmatory factor analysis of the Wechsler Memory Scale-III. Journal of Clinical and Experimental Neuropsychology, 21, 87-93.

Mitrushina, M., Boone, K. B., Razani, J., & D'Elia, L. F. (2005). Handbook of normative data for neuropsychological assessment (2nd ed.). New York: Oxford.

Moses, J. A. (1983). An orthogonal factor solution of the Luria-Nebraska Neuropsychological Battery items: I. Motor, Rhythm, Tactile, and Visual Scales. Clinical Neuropsychology, 5, 181-185.

Moses, J. A. (1984). An orthogonal factor solution of the Luria-Nebraska Neuropsychological Battery items: II. Receptive Speech, Expressive Speech, Writing and Reading Scales. The International Journal of Clinical Neuropsychology, 6, 24-28.

Naglieri, J. A. (1997). Planning, Attention, Simultaneous, and Successive (PASS) theory and the Cognitive Assessment System (CAS): A new theory-based measure of intelligence. In D. P. Flanagan, J. L. Genshaft, & P. L. Harrison (Eds.), Contemporary intellectual assessment: Theories, tests, and issues (pp. 247-267). New York: Guilford.

Naglieri, J. A., & Das, J. P. (1997). Das-Naglieri Cognitive Assessment System. Itasca, IL: Riverside.

Naglieri, J. A., & Das, J. P. (2005). Planning, Attention, Simultaneous, Successive (PASS) theory: A revision of the concept of intelligence. In D. P. Flanagan & P. L. Harrison (Eds.), Contemporary intellectual assessment: Theories, tests, and issues (2nd ed., pp. 120-135). New York: Guilford.

Neisser, U., Boodoo, G., Bouchard, T. J., Boykin, A. W., Brody, N., Ceci, S. J., et al. (1996). Intelligence: Knowns and unknowns. American Psychologist, 51, 71-101.

Nolte, J. (2002). The human brain: An introduction to its functional anatomy (5th ed.). St. Louis, MO: Mosby, Inc.

Northern California Neurobehavioral Group, Inc. (1988). Manual for the Neurobehavioral Cognitive Status Examination. Fairfax, CA: Author.

O'Connor, B. P. (2000). SPSS and SAS programs for determining the number of components using parallel analysis and Velicer's MAP test. Behavior Research Methods, Instruments, & Computers, 32, 396-402.

O'Connor, B. P. (2002). The search for dimensional structure differences between normality and abnormality: A statistical review of published data on personality and psychopathology. Journal of Personality and Social Psychology, 83, 962-982.

O'Connor, T. A., & Burns, N. R. (2003). Inspection time and general speed of processing. Personality and Individual Differences, 35, 713-724.

O'Donnell, J. P., MacGregor, L. A., Dabrowski, J. J., Oestreicher, J. M., & Romero, J. J. (1994). Construct validity of neuropsychological tests of conceptual and attentional abilities. Journal of Clinical Psychology, 50, 596-600.

Osmon, D. C., Smet, I. C., Winegarden, B., & Gandhavadi, B. (1992). Neurobehavioral Cognitive Status Examination: Its use with unilateral stroke patients in a rehabilitation setting. Archives of Physical Medicine and Rehabilitation, 73, 414-418.

Osterrieth, P. A. (1944). Le test de copie d'une figure complexe. Archives de Psychologie, 30, 206-356.

Phelps, L. (1995). Exploratory factor analysis of the WRAML with academically at-risk students. Journal of Psychoeducational Assessment, 13, 384-390.

Pineda, D. A., & Merchan, V. (2003). Executive function in young Colombian adults. International Journal of Neuroscience, 113, 397-410.

Pontón, M. O., Gonzalez, J. J., Hernandez, I., Herrera, L., & Higareda, I. (2000). Factor analysis of the Neuropsychological Screening Battery for Hispanics (NeSBHIS). Applied Neuropsychology, 7, 32-39.

Porteus, S. D. (1946). The Porteus maze test: Revised. New York: Psychological Corporation.

Psychological Corporation. (1992). Wechsler Individual Achievement Test. San Antonio, TX: Author.

Psychological Corporation. (1997). WAIS-III-WMS-III technical manual. San Antonio, TX: Author.

Psychological Corporation. (2002). Wechsler Individual Achievement Test-Second Edition. San Antonio, TX: Author.

Rabin, L. A., Barr, W. B., & Burton, L. A. (2005). Assessment practices of clinical neuropsychologists in the United States and Canada: A survey of INS, NAN, and APA Division 40 members. Archives of Clinical Neuropsychology, 20, 33-65.

Raven, J. C. (1938). Guide to using progressive matrices (rev. 1954, 1956). London: H. K. Lewis.

Reitan, R. M. (1969). Manual for administration of neuropsychological test batteries for adults and children. Indianapolis, IN.

Reitan, R. M., & Davidson, L. A. (1974). Clinical neuropsychology: Current status and applications. Washington, DC: V. H. Winston.

Reitan, R. M., & Wolfson, D. (1985). The Halstead-Reitan Neuropsychological Test Battery: Theory and interpretation. Tucson, AZ: Neuropsychology Press.

Reitan, R. M., & Wolfson, D. (1993). The Halstead-Reitan Neuropsychological Test Battery: Theory and clinical interpretation (2nd ed.). Tucson, AZ: Neuropsychology Press.

Reitan, R. M., & Wolfson, D. (2004). Theoretical, methodological, and validation bases of the Halstead-Reitan Neuropsychological Test Battery. In G. Goldstein & S. R. Beers (Eds.), Comprehensive handbook of psychological assessment: Intellectual and neuropsychological assessment (pp. 105-131). New York: John Wiley & Sons, Inc.

Rey, A. (1941). L'examen psychologique dans les cas d'encéphalopathie traumatique. Archives de Psychologie, 28, 286-340.

Rey, A. (1964). L'examen clinique en psychologie. Paris: Presses Universitaires de France.

Reynolds, C. R., & Kamphaus, R. W. (1990). Handbook of psychological and educational assessment of children: Intelligence and achievement. New York: Guilford.

Riva, D., Nichelli, F., & Devoti, M. (2000). Developmental aspects of verbal fluency and confrontation naming in children. Brain and Language, 71, 267-284.

Roberts, R. D., & Stankov, L. (1999). Individual differences in speed of mental processing and human cognitive abilities: Toward a taxonomic model. Learning and Individual Differences, 11, 1-120.

Rock, D. A., Werts, C. E., & Flaugher, R. L. (1978). The use of analysis of covariance structures for comparing the psychometric properties of multiple variables across populations. Multivariate Behavioral Research, 13, 403-418.

Roid, G. H. (2003). Stanford-Binet Intelligence Scales, Fifth Edition. Itasca, IL: Riverside Publishing.

Ross, L., Thrasher, M., & Long, C. J. (1990). Decision strategies in neuropsychology: Volume I. Determination of lateralized cerebral dysfunction. Archives of Clinical Neuropsychology, 3, 273-285.

Royce, J. R., Yeudall, L. T., & Bock, C. (1976). Factor analytic studies of human brain damage: I. First and second-order factors and their brain correlates. Multivariate Behavioral Research, 11, 381-418.

Ryan, J. J., Rosenberg, S. J., & Mittenberg, W. (1984). Factor analysis of the Rey Auditory-Verbal Learning Test. The International Journal of Clinical Neuropsychology, 4, 239-241.

Salthouse, T. A., Fristoe, N., & Rhee, S. H. (1996). How localized are age-related effects on neuropsychological measures? Neuropsychology, 10, 272-285.

Schwamm, L. H., Van Dyke, C., Kiernan, R. J., Merrin, E. L., & Mueller, J. (1987). The Neurobehavioral Cognitive Status Examination: Comparison with the Cognitive Capacity Screening Examination and the Mini-Mental State Examination in a neurosurgical population. Annals of Internal Medicine, 107, 486-491.

Sewell, K. W., Downey, R. G., & Sinnett, E. R. (1988). Convergence and divergence of clinical memory tests. Psychological Reports, 62, 219-297.

Shea, D. L., Lubinski, D., & Benbow, C. P. (2001). Importance of assessing spatial ability in intellectually talented young adolescents: A 20-year longitudinal study. Journal of Educational Psychology, 93, 604-614.

Shelly, C., & Goldstein, G. (1982). Intelligence, achievement, and the Luria-Nebraska Battery in a neuropsychiatric population: A factor analytic study. Clinical Neuropsychology, 4, 164-169.

Silverberg, N. D., Hanks, R. A., & McKay, C. (2007). Cognitive estimation in traumatic brain injury. Journal of the International Neuropsychological Society, 13, 898-902.

Sörbom, D., & Jöreskog, K. G. (1976). Confirmatory factor analysis with model modification (COFAMM): A Fortran IV program. Chicago: National Educational Resources, Inc.

Spearman, C. E. (1904). "General intelligence," objectively determined and measured. American Journal of Psychology, 15, 201-293.

Spearman, C. E. (1927). The abilities of man. London: Macmillan.

Spearman, C. E. (1939). Thurstone's work reworked. Journal of Educational Psychology, 30, 1-16.

Stankov, L. (2000). Structural extensions of a hierarchical view on human cognitive abilities. Learning and Individual Differences, 12, 31-51.

Sternberg, R. J. (1985). Beyond IQ: A triarchic theory of human intelligence. New York: Cambridge University Press.

Sternberg, R. J. (1988). The triarchic mind: A new theory of human intelligence. New York: Viking.

Sternberg, R. J. (2005). The triarchic theory of successful intelligence. In D. P. Flanagan & P. L. Harrison (Eds.), Contemporary intellectual assessment: Theories, tests, and issues (2nd ed., pp. 103-119). New York: Guilford.

Strauss, E., Sherman, E. M. S., & Spreen, O. (2006). A compendium of neuropsychological tests: Administration, norms, and commentary (3rd ed.). New York: Oxford.

Stroop, J. R. (1935). Studies of interference in serial verbal reactions. Journal of Experimental Psychology, 18, 643-662.

Sung, Y. H., & Dawis, R. V. (1981). Level and factor structure differences in selected abilities across race and sex groups. Journal of Applied Psychology, 66, 613-624.

Tabachnick, B. G., & Fidell, L. S. (2001). Using multivariate statistics (4th ed.). Toronto: Allyn and Bacon.

Talley, J. L. (1986). Memory in learning disabled children: Digit span and the Rey Auditory Verbal Learning Test. Archives of Clinical Neuropsychology, 1, 315-322.

Tamkin, A. S., Kunce, J. T., Blount, J. B., & Magharious, W. (1984). The effectiveness of the Weigl Color-Form Sorting Test in screening for brain dysfunction. Journal of Clinical Psychology, 40, 1454-1459.

Taub, G. E., & McGrew, K. S. (2004). A confirmatory factor analysis of the Cattell-Horn-Carroll theory and cross-age invariance of the Woodcock-Johnson Tests of Cognitive Abilities III. School Psychology Quarterly, 19, 72-87.

Taylor, T. R. (1994). A review of three approaches to cognitive assessment, and a proposed integrated approach based on a unifying theoretical framework. South African Journal of Psychology, 24, 183-193.

Teng, E. L., Wimer, C., Roberts, E., Damasio, A. R., Eslinger, P. J., Folstein, M. F., et al. (1989). Alzheimer's dementia: Performance on parallel forms of the Dementia Assessment Battery. Journal of Clinical and Experimental Neuropsychology, 11, 899-912.

Terman, L. M. (1916). The measurement of intelligence: An explanation of and a complete guide for the use of the Stanford revision and extension of the Binet-Simon intelligence scale. Boston: Houghton Mifflin.

Terman, L. M., & Merrill, M. A. (1937). Measuring intelligence. Boston: Houghton Mifflin.

Terman, L. M., & Merrill, M. A. (1960). Stanford-Binet Intelligence Scale: Manual for the Third Revision, Form L-M. Boston: Houghton Mifflin.

Terman, L. M., & Merrill, M. A. (1973). Stanford-Binet Intelligence Scale: Manual for the Third Revision, Form L-M (1972 norms tables by R. L. Thorndike). Boston: Houghton Mifflin.

Thorndike, E. L. (1920). The reliability and significance of tests of intelligence. Journal of Educational Psychology, 11, 284-287.

Thorndike, E. L., Lay, W., & Dean, P. R. (1909). The relation of accuracy in sensory discrimination to general intelligence. American Journal of Psychology, 20, 364-369.

Thorndike, R. L. (1997). The early history of intelligence testing. In D. P. Flanagan, J. L. Genshaft, & P. L. Harrison (Eds.), Contemporary intellectual assessment: Theories, tests, and issues (pp. 3-16). New York: Guilford.

Thorndike, R. L., Hagen, E. P., & Sattler, J. M. (1986). Stanford-Binet Intelligence Scale: Fourth Edition guide for administering and scoring. Itasca, IL: Riverside Publishing.

Thurstone, L. L. (1938). Primary mental abilities. Chicago: University of Chicago Press.

Thurstone, L. L. (1947). Multiple factor analysis. Chicago: University of Chicago Press.

Thurstone, L. L., & Thurstone, T. G. (1941). Factorial studies of intelligence. Chicago: University of Chicago Press.

Tirre, W. C., & Field, K. A. (2002). Structural models of abilities measured by the Ball Aptitude Battery. Educational and Psychological Measurement, 62, 830-856.

Tulsky, D. S., Ivnik, R. J., Price, L. R., & Wilkins, C. (2003). Assessment of cognitive functioning with the WAIS-III and WMS-III: Development of a six-factor model. In D. S. Tulsky, D. H. Saklofske, G. J. Chelune, R. K. Heaton, R. J. Ivnik, R. Bornstein, A. Prifitera, & M. F. Ledbetter (Eds.), Clinical interpretation of the WAIS-III and WMS-III (pp. 149-182). New York: Academic Press.

Tulsky, D. S., & Price, L. R. (2003). The joint WAIS-III and WMS-III factor structure: Development and cross-validation of a six-factor model of cognitive functioning. Psychological Assessment, 15, 149-162.

Tulsky, D. S., Saklofske, D. H., & Ricker, J. (2003). Historical overview of intelligence and memory: Factors influencing the Wechsler scales. In D. S. Tulsky, D. H. Saklofske, G. J. Chelune, R. K. Heaton, R. J. Ivnik, R. Bornstein, A. Prifitera, & M. F. Ledbetter (Eds.), Clinical interpretation of the WAIS-III and WMS-III (pp. 7-41). New York: Academic Press.

Tulving, E. (2002). Episodic memory: Yesterday and today. Invited lecture, International Neuropsychological Society midyear meeting, Stockholm, Sweden, July 25.

Velicer, W. F. (1976). Determining the number of components from the matrix of partial correlations. Psychometrika, 41, 321-327.

Velicer, W. F., Eaton, C. A., & Fava, J. L. (2000). Construct explication through factor or component analysis: A review and evaluation of alternative procedures for determining the number of factors or components. In R. E. Goffin & E. Helmes (Eds.), Problems and solutions in human assessment (pp. 41-71). Boston: Kluwer Academic.
Velicer, W. F., & Jackson, D. N. (1990). Component analysis versus common factor analysis: Some issues in selecting an appropriate procedure. Multivariate Behavioral Research, 25, 1-28.
Vernon, P. E. (1964). The structure of human abilities. London: Methuen.
Vernon, P. E. (1965). Ability factors and environmental influences. American Psychologist, 20, 723-733.
Visser, B. A., Ashton, M. C., & Vernon, P. A. (2006a). Beyond g: Putting Multiple Intelligences theory to the test. Intelligence, 34, 487-502.
Visser, B. A., Ashton, M. C., & Vernon, P. A. (2006b). g and the measurement of Multiple Intelligences: A response to Gardner. Intelligence, 34, 507-510.
Watson, D., & Tellegen, A. (1985). Toward a consensual structure of mood. Psychological Bulletin, 98, 219-235.
Wechsler, D. (1939). The measurement of adult intelligence. Baltimore: Williams & Wilkins.
Wechsler, D. (1974). Manual for the Wechsler Intelligence Scale for Children-Revised. New York: The Psychological Corporation.
Wechsler, D. (1981). Wechsler Adult Intelligence Scale-Revised. New York: The Psychological Corporation.

Wechsler, D. (1987). Wechsler Memory Scale-Revised. San Antonio, TX: The Psychological Corporation.
Wechsler, D. (1991). Wechsler Intelligence Scale for Children-Third Edition. San Antonio, TX: The Psychological Corporation.
Wechsler, D. (1997). Wechsler Memory Scale-Third Edition administration and scoring manual. San Antonio, TX: The Psychological Corporation.
Wechsler, D. (2002). Wechsler Preschool and Primary Scale of Intelligence-Third Edition. San Antonio, TX: The Psychological Corporation.
Wechsler, D. (2003a). Wechsler Intelligence Scale for Children-Fourth Edition administration manual. San Antonio, TX: The Psychological Corporation.
Wechsler, D. (2003b). Wechsler Intelligence Scale for Children-Fourth Edition Integrated. San Antonio, TX: The Psychological Corporation.
Wiederman, M. C., & Morgan, C. D. (1995). The Neurobehavioral Cognitive Status Exam (NCSE) with geriatric inpatients. Clinical Gerontologist, 15, 35-47.
Wilde, M. C. (2006). The validity of the Repeatable Battery of Neuropsychological Status in acute stroke. The Clinical Neuropsychologist, 20, 702-715.
Wilde, N. J., Strauss, E., Chelune, G. J., Hermann, B. P., Hunter, M., Loring, D. W., et al. (2003). Confirmatory factor analysis of the WMS-III in patients with temporal lobe epilepsy. Psychological Assessment, 15, 56-63.
Wilkinson, G. S. (1993). Wide Range Achievement Test 3 (WRAT3). Wilmington, DE: Wide Range, Inc.
Wilkinson, G. S., & Robertson, G. J. (2006). Wide Range Achievement Test 4 (WRAT4): Professional manual. Lutz, FL: Psychological Assessment Resources, Inc.

Williams, J. M., & Shane, B. (1986). The Reitan-Indiana Aphasia Screening Test: Scoring and factor analysis. Journal of Clinical Psychology, 42, 156-160.
Wood, J. M., Tataryn, D. J., & Gorsuch, R. L. (1996). Effects of under- and overextraction on principal axis factor analysis with varimax rotation. Psychological Methods, 1, 354-365.
Woodard, J. L., Goldstein, F. C., Roberts, V. J., & McGuire, C. (1999). Convergent and discriminant validity of the CVLT (Dementia Version). Journal of Clinical and Experimental Neuropsychology, 21, 553-558.
Woodard, J. L., Salthouse, T. A., Godsall, R. E., & Green, R. C. (1996). Confirmatory factor analysis of the Mattis Dementia Rating Scale in patients with Alzheimer's disease. Psychological Assessment, 8, 85-91.
Woodcock, R. W. (1990). Theoretical foundations of the WJ-R measures of cognitive ability. Journal of Psychoeducational Assessment, 8, 231-258.
Woodcock, R. W. (1994). Measures of fluid and crystallized intelligence. In R. J. Sternberg (Ed.), Encyclopedia of human intelligence (pp. 452-456).
Woodcock, R. W., & Johnson, M. B. (1989a). Woodcock-Johnson Psycho-Educational Battery-Revised. Chicago: Riverside.
Woodcock, R. W., & Johnson, M. B. (1989b). Woodcock-Johnson-Revised Tests of Achievement. Itasca, IL: Riverside Publishing.
Woodcock, R. W., McGrew, K. S., & Mather, N. (2001a). Woodcock-Johnson Psychoeducational Battery-Third Edition. Itasca, IL: Riverside Publishing.
Woodcock, R. W., McGrew, K. S., & Mather, N. (2001b). Woodcock-Johnson III Tests of Achievement. Itasca, IL: Riverside Publishing.

Woodcock, R. W., McGrew, K. S., Mather, N., & Schrank, F. A. (2003). Diagnostic Supplement to the Woodcock-Johnson III Tests of Cognitive Abilities. Itasca, IL: Riverside.
Yerkes, R. M. (1919). The measurement and utilization of brain power in the Army. Science, 49, 221-226; 251-259.
Yerkes, R. M. (1921). Psychological examining in the United States Army: Memoirs of the National Academy of Sciences, Vol. 15. Washington, DC: Government Printing Office.
Zachary, R. A. (1990). Wechsler's intelligence scales: Theoretical and practical considerations. Journal of Psychoeducational Assessment, 8, 276-289.
Zhu, J., Weiss, L. G., Prifitera, A., & Coalson, D. (2004). The Wechsler Intelligence Scales for children and adults. In G. Goldstein & S. R. Beers (Eds.), Comprehensive handbook of psychological assessment: Intelligence and neuropsychological assessment (pp. 51-75). Hoboken, NJ: John Wiley & Sons, Inc.
Zillmer, E. A., Waechtler, C., Harris, B., Khan, F., & Fowler, P. C. (1992). The effects of unilateral and multifocal lesions on the WAIS-R: A factor analytic study of stroke patients. Archives of Clinical Neuropsychology, 7, 29-40.
Zwick, W. R., & Velicer, W. F. (1982). Factors influencing four rules for determining the number of components to retain. Multivariate Behavioral Research, 17, 253-269.
Zwick, W. R., & Velicer, W. F. (1986). Comparison of five rules for determining the number of components to retain. Psychological Bulletin, 99, 432-442.

Appendix A

Pattern Matrices

Table A-1
Rotated pattern matrix for Adams & Sheslow (1990) (n = 2363)

                               Factor 1   Factor 2
WRAML Design Memory               .75        .10
WRAML Picture Memory              .72        .20
WRAML Visual Learning             .65       -.01
WRAML Verbal Learning             .56       -.13
WRAML Story Memory                .52       -.25
WRAML Sound Symbol                .44       -.25
WRAML Finger Windows              .34       -.31
WRAML Number-Letter              -.09       -.89
WRAML Sentence Memory             .09       -.84
r between factors: Factor 2      -.36
Eigenvalue                       3.16       1.17

Note. Loadings ≥ .40 are bolded. WRAML = Wide Range Assessment of Memory and Learning.

Table A-2
Unrotated component matrix for Adams & Sheslow (2003) (n = 1200)

                                   Factor 1
WRAML2 Verbal Memory                  .82
WRAML2 Working Memory                 .81
WRAML2 Verbal Recognition             .79
WRAML2 Attention/Concentration        .68
WRAML2 Visual Memory                  .63
WRAML2 Visual Recognition             .63
Eigenvalue                           3.20

Note. Loadings ≥ .40 are bolded. WRAML2 = Wide Range Assessment of Memory and Learning, Second Edition.
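The eigenvalues and unrotated component loadings reported throughout these tables follow from an eigendecomposition of the test intercorrelation matrix. As a minimal sketch of that computation (the 3 × 3 correlation matrix below uses hypothetical values, not figures drawn from any study in this appendix):

```python
import numpy as np

# Hypothetical intercorrelation matrix for three test variables
# (illustrative values only, not taken from any table in this appendix).
R = np.array([
    [1.00, 0.60, 0.50],
    [0.60, 1.00, 0.40],
    [0.50, 0.40, 1.00],
])

# Eigendecomposition of the correlation matrix; eigh returns eigenvalues in
# ascending order, so reverse to descending, mirroring the "Eigenvalue" rows.
eigvals, eigvecs = np.linalg.eigh(R)
order = np.argsort(eigvals)[::-1]
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

# Unrotated component loadings: each eigenvector scaled by the square root
# of its eigenvalue. The first column corresponds to "Factor 1" entries in
# the unrotated component matrices above.
loadings = eigvecs * np.sqrt(eigvals)

print(np.round(eigvals, 2))
print(np.round(loadings[:, 0], 2))
```

Rotated pattern matrices (e.g., the oblique solutions with factor intercorrelations reported here) would additionally apply a rotation such as promax to these loadings; that step is omitted in this sketch.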

Table A-3
Rotated pattern matrix for Adams & Sheslow (2003): 5-8 year olds (n = 320)

                                      Factor 1   Factor 2
WRAML2 Verbal Learning Recall            .77        .06
WRAML2 Verbal Learning Recognition       .74        .14
WRAML2 Verbal Learning                   .71       -.03
WRAML2 Story Recognition                 .62       -.28
WRAML2 Story Memory Recall               .60       -.32
WRAML2 Story Memory                      .59       -.35
WRAML2 Picture Memory Recognition        .54        .04
WRAML2 Design Memory                     .47       -.11
WRAML2 Picture Memory                    .44        .40
WRAML2 Design Recognition                .40       -.06
WRAML2 Finger Windows                    .36       -.25
WRAML2 Sound Symbol Recall               .10       -.83
WRAML2 Sound Symbol                      .13       -.82
WRAML2 Number Letter                     .19       -.48
WRAML2 Sentence Memory                   .41       -.45
r between factors: Factor 2             -.30
Eigenvalue                              5.36       1.46

Note. Loadings ≥ .40 are bolded. WRAML2 = Wide Range Assessment of Memory and Learning, Second Edition.

Table A-4
Rotated pattern matrix for Adams & Sheslow (2003): 9 year olds-adult (n = 880)

                                      Factor 1   Factor 2   Factor 3   Factor 4
WRAML2 Verbal Learning Recall            .91        .02       -.00       -.05
WRAML2 Verbal Learning Recognition       .81        .06        .06       -.08
WRAML2 Verbal Learning                   .79       -.14       -.01       -.12
WRAML2 Number Letter                    -.13       -.89        .02        .05
WRAML2 Symbolic Working                  .06       -.75        .04       -.07
WRAML2 Verbal Working                    .05       -.75        .07       -.08
WRAML2 Sentence Memory                  -.09       -.73       -.06       -.31
WRAML2 Finger Windows                    .22       -.58        .03        .13
WRAML2 Design Memory                     .15       -.06        .79        .15
WRAML2 Design Recognition                .05       -.09        .74        .13
WRAML2 Picture Memory                   -.06        .05        .68       -.17
WRAML2 Picture Memory Recognition       -.09        .03        .58       -.23
WRAML2 Story Memory                      .11       -.11        .08       -.82
WRAML2 Story Memory Recall               .16       -.08        .07       -.82
WRAML2 Story Recognition                 .17       -.08        .05       -.78
r between factors
  Factor 2                              -.37
  Factor 3                               .33       -.32
  Factor 4                              -.31        .36       -.34
Eigenvalue                              6.23       1.56       1.33       1.19

Note. Loadings ≥ .40 are bolded. WRAML2 = Wide Range Assessment of Memory and Learning, Second Edition.

Table A-5
Rotated pattern matrix for Aftanas & Royce (1969) (n = 100)

                                        Factor 1   Factor 2   Factor 3
Hooper Visual Organization Test            .76        .15        .20
Coloured Progressive Matrices              .70       -.11       -.26
Tactual Performance Test (Localization)    .69       -.13        .19
Halstead Category Test                     .65       -.13       -.04
Tactual Performance Test (Memory)          .63       -.07        .11
Proverbs                                   .57       -.13       -.31
Porteus Mazes                              .50       -.17       -.09
Tactual Performance Test (Latency)        -.48        .27       -.38
Grassi Accuracy                            .47        .11        .08
Grassi Time Credits                        .39       -.25       -.05
Binaural Beats                             .27        .20       -.01
Purdue Pegboard Test (Total)              -.11       -.69        .01
Reaction Time Simple                      -.04        .69        .31
Reaction Time Choice                       .02        .68       -.04
Trail Making Test                         -.27        .67       -.04
Seashore Rhythm Test                       .15       -.48       -.21
Symbol Gestalt                             .29       -.44        .03
Sound Localization (Separation)           -.24       -.25        .20
Sound Localization (Localization)          .01        .25       -.06
Minnesota Percepto-Diagnostic             -.24        .01       -.57
Critical Flicker Fusion Test (Mean)        .14       -.12        .53
Retinal Rivalry                            .14       -.37        .44
Apparent Motion                           -.01        .11        .38
Speech Sounds Perception Test              .18       -.20       -.36
Kahn Test (Recall)                         .15       -.30        .33
Muller-Lyer Error                          .00        .23       -.27
Kahn Test (Symbolization)                  .08       -.01       -.14
r between factors
  Factor 2                                -.22
  Factor 3                                -.03       -.11
Eigenvalue                                5.52       2.09       1.73

Note. Loadings ≥ .40 are bolded.

Table A-6
Unrotated component matrix for Baños et al. (2005) (n = 221)

                                    Factor 1
RAVLT Trial IV List A                  .91
RAVLT Trial V List A                   .90
RAVLT Trial VII List A                 .87
RAVLT Trial VI List A                  .86
RAVLT Trial III List A                 .85
RAVLT Trial II List A                  .84
RAVLT Delayed Recognition Hits         .72
RAVLT Trial I List A                   .64
RAVLT Trial I List B                   .53
RAVLT Recognition False Positive      -.34
Eigenvalue                            5.88

Note. Loadings ≥ .40 are bolded. RAVLT = Rey Auditory Verbal Learning Test. Patients were administered 5 learning trials (Trials I-V, List A) prior to the presentation of the alternate-list interference trial (Trial I, List B). Immediate and delayed memory of List A correspond with Trials VI and VII, respectively.

Table A-7
Rotated pattern matrix for Berry et al. (1991) (n = 87)

                                            Factor 1   Factor 2   Factor 3
ROCFT Delayed Memory                           .91       -.09       -.01
ROCFT Immediate Memory                         .87       -.05       -.09
WMS Visual Reproduction Delayed Memory         .84       -.02       -.02
WMS Visual Reproduction Immediate Memory       .83       -.01       -.12
Judgment of Line Orientation                   .71        .26        .17
ROCFT Copy                                     .64        .14        .11
WAIS-R Digit Span Forward                     -.03        .83        .00
WAIS-R Digit Span Reverse                      .20        .78        .06
Controlled Oral Word Fluency                  -.07        .74       -.17
WAIS-R Full Scale IQ                           .46        .52       -.26
WCST Categories Correct                       -.31        .03       -.79
2 & 7 Total Letters Correct                    .00        .22       -.70
TMT Part B                                    -.36        .29        .64
WMS Logical Memory Immediate Memory            .17        .36       -.61
WMS Logical Memory Delayed Memory              .18        .38       -.59
TMT Part A                                    -.49        .14        .59
r between factors
  Factor 2                                     .24
  Factor 3                                    -.27       -.20
Eigenvalue                                    6.70       2.30       1.75

Note. Loadings ≥ .40 are bolded. ROCFT = Rey-Osterrieth Complex Figure Test.

Table A-8
Rotated pattern matrix for Boone et al. (1998) (n = 250)

                                         Factor 1   Factor 2   Factor 3   Factor 4
WCST % of Conceptual Level Resp.            .97       -.07       -.03       -.04
WCST Errors                                -.97        .06        .02        .02
WCST Categories                             .96       -.09       -.05       -.13
WCST Perseveration                         -.86        .12        .02        .01
WMS Visual Reproduction % Retent.           .46        .12        .10        .09
WMS Logical Memory % Retent.                .24       -.00        .08        .15
Stroop Word                                 .05        .88       -.02       -.10
Stroop Color                               -.08        .87       -.01       -.01
Stroop Color-Word Interference             -.11        .82        .04       -.02
Controlled Oral Word Association Test       .08       -.49        .35        .10
ACT Perseveration                          -.11       -.09       -.76        .08
ACT Sequences                               .10        .19       -.67        .21
WAIS-R Digit Span                          -.07        .00        .66        .36
ACT Total                                   .22       -.06        .63        .11
WAIS-R PIQ                                  .02        .01        .06        .88
WAIS-R Digit-Symbol Coding                 -.09       -.28       -.03        .72
WAIS-R VIQ                                  .16       -.07        .41        .56
ROCFT 1                                     .16       -.08       -.22        .52
r between factors
  Factor 2                                 -.31
  Factor 3                                  .21       -.22
  Factor 4                                  .38       -.33        .16
Eigenvalue                                 6.58       2.18       1.68       1.29

Note. Loadings ≥ .40 are bolded. WCST = Wisconsin Card Sorting Test; Resp. = Responses; % Retent. = Percent retention; ACT = Auditory Consonant Trigrams; ROCFT = Rey-Osterrieth Complex Figure Test. 1 Percent retention after a 3-minute interval.

Table A-9
Unrotated component matrix for Bowden et al. (1998): Alcohol-dependent sample (n = 151)

                                  Factor 1
WCST Categories                     -.89
WCST Perseverative Errors            .80
WCST Correct                        -.72
WCST Perseverative Responses         .67
WCST Non-perseverative Errors        .63
Eigenvalue                          2.80

Note. Loadings ≥ .40 are bolded. WCST = Wisconsin Card Sorting Test.

Table A-10
Unrotated component matrix for Bowden et al. (1998): College student sample (n = 75)

                                  Factor 1
WCST Perseverative Errors            .94
WCST Perseverative Responses         .94
WCST Categories                     -.89
WCST Non-perseverative Errors        .85
WCST Correct                         .19
Eigenvalue                          2.80

Note. Loadings ≥ .40 are bolded. WCST = Wisconsin Card Sorting Test.

Table A-11
Rotated pattern matrix for Bowden et al. (2004) (n = 277)

                                      Factor 1   Factor 2   Factor 3
WAIS-R Block Design                      .82       -.21       -.18
WMS-R Visual Reproduction I              .80       -.06       -.02
WMS-R Visual Memory Span                 .78        .05       -.02
WAIS-R Object Assembly                   .74       -.14       -.09
WAIS-R Digit-Symbol Coding               .74       -.05        .14
WMS-R Visual Reproduction II             .72        .05        .17
WAIS-R Picture Completion                .63       -.33       -.04
WMS-R Visual Paired Associates I         .63        .16        .34
WMS-R Figural Memory                     .57        .15        .14
WAIS-R Picture Arrangement               .49       -.30        .21
WMS-R Visual Paired Associates II        .48        .17        .40
WMS-R Mental Control                     .32       -.31        .01
WAIS-R Vocabulary                       -.02       -.90        .08
WAIS-R Information                      -.10       -.88        .12
WAIS-R Comprehension                     .06       -.86        .05
WAIS-R Similarities                      .32       -.64        .01
WAIS-R Arithmetic                        .38       -.56       -.02
WAIS-R Digit Span                        .38       -.41       -.01
WMS-R Logical Memory II                 -.07       -.35        .81
WMS-R Verbal Paired Associates II        .10        .10        .79
WMS-R Logical Memory I                  -.06       -.46        .69
WMS-R Verbal Paired Associates I         .30        .00        .66
r between factors
  Factor 2                              -.39
  Factor 3                               .44       -.19
Eigenvalue                             10.00       2.40       1.56

Note. Loadings ≥ .40 are bolded.

Table A-12
Rotated pattern matrix for Cohen et al. (1980): Broca's aphasia sample (n = 20)

                                            Factor 1   Factor 2
Picture Naming 1                               .99       -.06
Token Test 1                                   .99       -.12
TMT Part A                                     .84       -.01
Peabody Picture Vocabulary Test 1              .74        .11
Color-to-picture Naming                        .73        .11
Sound-to-picture Naming                        .53        .41
Picture Coordination "Situations"             -.16        .90
Picture-to-picture Scale 2 "Situations"        .13        .80
Picture Coordination "Parts"                  -.06        .74
Picture-to-picture Scale 3 "Actions"           .19        .60
Picture-to-picture Scale 1 "Attributes"        .31        .53
r between factors                              .51
Eigenvalue                                    6.00       1.46

Note. Loadings ≥ .40 are bolded. 1 These test variables are similar to MAE test variables.

Table A-13
Rotated pattern matrix for Cohen et al. (1980): Wernicke's aphasia sample (n = 20)

                                            Factor 1   Factor 2
Picture-to-picture Scale 1 "Attributes"        .96       -.22
Peabody Picture Vocabulary Test 1              .93        .03
Picture Naming 1                               .89        .08
Picture-to-picture Scale 3 "Actions"           .75        .09
Token Test 1                                   .73        .17
Sound-to-picture Naming                        .72        .11
Color-to-picture Naming                       -.09        .95
Picture Coordination "Situations"             -.07        .87
Picture Coordination "Parts"                   .07        .68
Picture-to-picture Scale 2 "Situations"        .30        .58
TMT Part A                                     .09        .56
r between factors                              .52
Eigenvalue                                    5.99       1.51

Note. Loadings ≥ .40 are bolded. 1 These test variables are similar to MAE test variables.

Table A-14
Rotated pattern matrix for Colantonio et al. (1993) (n = 219)

                                                        Factor 1   Factor 2   Factor 3
MDRS Priming/Inductive Reasoning (Task X)                  .95       -.27        .11
MDRS Similarities (Task W)                                 .85       -.09        .18
MDRS Similarities Multiple Choice (Task Z)                 .83        .06       -.04
MDRS Differences (Task Y)                                  .76        .15        .02
MDRS Sentence Creation (Task AB)                           .73        .08       -.09
MDRS Identities and Oddities (Task V)                      .66        .26        .05
MDRS Digit Span (Task A)                                   .60        .37       -.15
MDRS Verbal Fluency (Tasks E and F)                        .50        .25        .33
MDRS Graphomotor (Tasks L, M, N, and O)                   -.15        .92        .10
MDRS Construction (Tasks P, Q, R, S, T, and U)            -.03        .88       -.02
MDRS Alternating Movements (Tasks I, J, and K)             .02        .81       -.07
MDRS Attention 1                                           .15        .76        .05
MDRS Recognition of Words/Designs (Tasks AI and AK)        .20        .62        .22
MDRS Verbal Repetitions (Tasks G and H)                    .07        .54        .20
MDRS Commands (Tasks B, C, and D)                          .37        .45       -.18
MDRS Sentence Recall (Tasks AF and AG)                    -.05        .05        .94
MDRS Orientation (Task AC)                                 .20        .11        .80
r between factors
  Factor 2                                                 .51
  Factor 3                                                 .30        .21
Eigenvalue                                                8.52       1.85       1.45

Note. Loadings ≥ .40 are bolded. MDRS = Mattis Dementia Rating Scale. 1 Combined test variable; includes the Design Matching, Word Reading, and Letter Counting tasks (Tasks AD, AH, and AJ, respectively).

Table A-15
Unrotated component matrix for Crawford et al. (1998) (n = 152)

                              Factor 1
WAIS-R Information               .79
WAIS-R Vocabulary                .78
WAIS-R Similarities              .78
WAIS-R Block Design              .73
WAIS-R Arithmetic                .72
WAIS-R Picture Completion        .72
PASAT                            .72
WAIS-R Comprehension             .71
WAIS-R Picture Arrangement       .71
WAIS-R Digit Span                .66
WAIS-R Object Assembly           .63
WAIS-R Digit-Symbol Coding       .54
Eigenvalue                      6.06

Note. Loadings ≥ .40 are bolded. PASAT = Paced Auditory Serial Addition Test.

Table A-16
Rotated pattern matrix for Deary et al. (1991) (n = 94)

                                     Factor 1   Factor 2   Factor 3
WAIS-R Comprehension                    .94       -.04       -.03
WAIS-R Information                      .90        .05        .00
WAIS-R Similarities                     .88        .10       -.13
WAIS-R Vocabulary                       .85       -.06        .23
WAIS-R Arithmetic                       .55        .14        .29
WAIS-R Block Design                     .04        .88       -.03
WAIS-R Object Assembly                  .03        .87       -.02
WAIS-R Picture Completion               .11        .81       -.11
WAIS-R Digit-Symbol Coding             -.13        .65        .44
PASAT (4 second interval)               .13       -.07        .88
PASAT (2 second interval)               .04        .07        .86
r between factors
  Factor 2                              .43
  Factor 3                              .35        .33
Eigenvalue                             5.62       1.61       1.30

Note. Loadings ≥ .40 are bolded. PASAT = Paced Auditory Serial Addition Test.

Table A-17
Rotated pattern matrix for de Jong & Das-Smaal (1993): Boys (n = 175)

                                         Factor 1   Factor 2
Stroop Color                                .79       -.03
Stroop Word                                 .77        .08
Cancellation Test                           .71        .22
Stroop Color-Word                           .65       -.13
1 Minute Reading                           -.60       -.02
WISC-R Digit-Symbol Coding                 -.58       -.03
TMT Part A                                  .54        .11
TMT Part B                                  .45       -.09
WISC-R Digit Span Backward                 -.41        .24
Controlled Oral Word Association Test      -.36        .33
WISC-R Digit Span Forward                  -.32        .26
RAVLT List B                                .13        .90
RAVLT List A Immediate Learning Trials     -.00        .85
r between factors: Factor 2                -.24
Eigenvalue                                 3.97       1.66

Note. Loadings ≥ .40 are bolded. RAVLT = Rey Auditory Verbal Learning Test.

Table A-18
Rotated pattern matrix for de Jong & Das-Smaal (1993): Girls (n = 204)

                                         Factor 1   Factor 2
Stroop Word                                 .75       -.06
Stroop Color                                .74       -.18
Cancellation Test                           .71        .11
TMT Part A                                  .63        .28
1 Minute Reading                           -.61        .20
Stroop Color-Word                           .58       -.38
TMT Part B                                  .56        .12
WISC-R Digit-Symbol Coding                 -.55        .20
Controlled Oral Word Association Test      -.29        .28
RAVLT List B                                .14        .80
RAVLT List A Immediate Learning Trials      .01        .79
WISC-R Digit Span Backward                 -.02        .58
WISC-R Digit Span Forward                  -.13        .19
r between factors: Factor 2                -.26
Eigenvalue                                 4.09       1.62

Note. Loadings ≥ .40 are bolded. RAVLT = Rey Auditory Verbal Learning Test.

Table A-19
Rotated pattern matrix for Fos et al. (2000) (n = 74)

                                         Factor 1   Factor 2   Factor 3
PASAT Subtest (1.2 second interval)         .97        .28        .11
PASAT Subtest (2.0 second interval)         .91       -.11       -.04
PASAT Subtest (1.6 second interval)         .90       -.09       -.07
PASAT Subtest (2.4 second interval)         .80       -.12       -.21
Stroop Test (Color-Word Interference)       .20       -.09       -.00
Digit Span Backward (Max Span)              .12       -.92        .19
Digit Span Forward (Max Span)              -.02       -.90       -.20
TMT Part A                                  .11        .07        .94
TMT Part B                                 -.24       -.11        .84
r between factors
  Factor 2                                 -.26
  Factor 3                                 -.25        .17
Eigenvalue                                 4.08       1.54       1.41

Note. Loadings ≥ .40 are bolded.

Table A-20
Rotated pattern matrix for Francis et al. (1992) (n = 722)

                              Factor 1   Factor 2   Factor 3
WISC Object Assembly             .78        .02        .02
WISC Block Design                .78       -.05        .14
WISC Picture Completion          .70       -.16        .07
Tactual Performance Test         .58        .11       -.03
Target Test                      .56        .27       -.01
Holes Test                      -.24        .76        .25
WISC Coding                      .17        .67       -.18
Mazes Test                       .04        .61        .42
TMT Part A                       .23        .59       -.29
Grooved Pegboard Test            .37        .45        .22
Finger Tapping Test              .04        .15        .72
Grip Strength                    .21       -.11        .69
r between factors
  Factor 2                       .30
  Factor 3                       .17        .14
Eigenvalue                      3.91       1.41       1.18

Note. Loadings ≥ .40 are bolded.

Table A-21
Rotated pattern matrix for Goldstein & Shelly (1972) (n = 619)

                                              Factor 1   Factor 2   Factor 3
Halstead Tactual Performance Test (Time)         .80        .11        .04
Halstead Tactual Performance Test (Location)     .75       -.03       -.18
WAIS Object Assembly                             .73       -.12       -.03
Halstead Tactual Performance Test (Memory)       .72       -.08       -.04
Halstead Category Test                           .68       -.20       -.03
WAIS Block Design                                .67       -.24        .03
Finger-tip Number Writing (Left)                 .62        .26        .25
WAIS Picture Arrangement                         .61       -.31       -.03
Trail Making Test                                .61       -.26       -.01
WAIS Digit Symbol                                .54       -.27        .21
Finger-tip Number Writing (Right)                .52        .14        .37
WAIS Picture Completion                          .50       -.44        .00
Finger Discrimination (Left)                     .46        .07        .34
WAIS Vocabulary                                 -.11       -.94       -.02
WAIS Information                                -.02       -.88       -.10
WAIS Comprehension                               .01       -.80       -.03
WAIS Similarities                                .06       -.79       -.01
Reitan-Heimburger Aphasia Test                   .11       -.67        .14
WAIS Arithmetic                                  .24       -.61       -.00
WAIS Digit Span                                  .04       -.61        .19
Speech Sound Perception Test                     .15       -.55        .29
Seashore Rhythm Test                             .18       -.37        .21
Finger Tapping Dominant                         -.07       -.08        .84
Finger Tapping Non-Dominant                     -.09       -.11        .80
Finger Discrimination (Right)                    .29       -.05        .47
r between factors
  Factor 2                                      -.47
  Factor 3                                       .44       -.17
Eigenvalue                                     10.71       2.61       1.30

Note. Loadings ≥ .40 are bolded.

Table A-22
Rotated pattern matrix for Goldstein & Shelly (1984) (n = 150)

                                        Factor 1   Factor 2
WRAT Reading                               .92       -.20
WRAT Spelling                              .88       -.14
WAIS Vocabulary                            .85       -.05
Rg1 (Reading Complex Material) 1           .82        .00
W1 (Spelling) 1                            .78        .16
WAIS Information                           .74        .12
Aphasia Screening Test                     .72        .22
E3 (Reading Polysyllable Words) 1          .68        .17
Rg2 (Reading Simple Material) 1            .64       -.05
WAIS Digit Span                            .63        .12
WAIS Comprehension                         .63        .13
Speech-Sounds Perception Test              .60        .39
WAIS Similarities                          .58        .28
Rc2 (Using Relational Concepts) 1          .49        .44
Rc6 (Logical Grammatical Relations) 1      .49        .27
E2 (Word Repetition) 1                     .43        .37
E1 (Simple Phonetic Reading) 1             .40        .19
Rc5 (Word Comprehension) 1                 .36       -.11
Rc4 (Verbal-Spatial Relationships) 1       .31        .22
Tactual Performance Test Location         -.25        .78
Luria-Nebraska Vision Scale                .17        .75
Grip Strength (Dominant Hand)             -.15        .75
Luria-Nebraska Motor Scale                 .22        .75
Luria-Nebraska Kinesthetic Scale           .23        .70
Finger Tapping Test (Dominant Hand)        .03        .64
WAIS Block Design                          .21        .60
Rc3 (Concept Recognition) 1                .12        .53
Seashore Rhythm Test                       .26        .48
Rc1 (Phonemic Discrimination) 1            .39        .44
W2 (Motor Writing Skill) 1                 .23        .44
r between factors: Factor 2                .49
Eigenvalue                               13.60       2.37

Note. Loadings ≥ .40 are bolded. WRAT = Wide Range Achievement Test; Rg = Reading; W = Writing; E = Expressive Speech; Rc = Receptive Speech. 1 Luria-Nebraska Factor Scale.

Table A-23
Rotated pattern matrix for Graf et al. (1995) (n = 129)

                                      Factor 1   Factor 2
Color-Word Stroop Test Congruent        -.86        .24
Color-Word Stroop Test Word             -.85        .26
Picture-Word Stroop Test Congruent      -.80        .17
Color-Word Stroop Test Color            -.77       -.02
Card Sorting                            -.68       -.19
Color-Word Stroop Test Incongruent      -.66       -.17
Cancel H                                -.63       -.26
Picture-Word Stroop Test Picture        -.61        .01
Picture-Word Stroop Test Incongruent    -.60       -.20
WAIS-R Digit-Symbol Coding               .57        .33
Verbal Fluency 1                         .49        .22
Digit Span                               .49       -.05
Animal Fluency 1                         .47        .22
WAIS-R Comprehension                     .32        .26
RAVLT (Trials I-V List A)                .06        .82
RAVLT (Trial VI List A)                 -.07        .81
Buschke Free Recall (Trial I)           -.12        .73
RAVLT (Trial I List B)                   .03        .59
Benton Visual Memory Test                .36        .38
WAIS-R Similarities                      .29        .35
WAIS-R Block Design                      .21        .34
r between factors: Factor 2              .39
Eigenvalue                              7.71       2.10

Note. Loadings ≥ .40 are bolded. RAVLT = Rey Auditory Verbal Learning Test. 1 Tasks are similar to the Controlled Oral Word Association Test.

Table A-24
Rotated pattern matrix for Halstead (1947) (n = 50)

                                                      Factor 1   Factor 2
Halstead DVFT (Central Color)                            .88       -.07
Halstead DVFT (Central Form)                             .86        .03
Halstead Finger-Oscillation Test                         .72        .02
Halstead Flicker-Fusion Test                             .62       -.07
Speech Sounds Perception Test                            .55        .15
Halstead Time-Sense Test                                 .34        .27
Carl Hollow-Square Performance Test of Intelligence     -.19        .73
Halstead Category Test                                   .14        .72
Halstead Tactual Performance Test (Localization)         .26        .59
Halstead Tactual Performance Test (Recall)               .12        .59
Henmon-Nelson Tests of Mental Ability                    .28        .59
Halstead DVFT (Peripheral Component)                    -.20        .56
Halstead Tactual Performance Test (Speed)                .21        .46
r between factors: Factor 2                              .32
Eigenvalue                                              4.44       1.68

Note. Loadings ≥ .40 are bolded. DVFT = Dynamic Visual Field Test.

Table A-25
Rotated pattern matrix for Hartmann et al. (2007) (n = 4321)

                                                      Factor 1   Factor 2   Factor 3
Army Classification Battery 1 (Verbal Reasoning)         .94       -.09       -.04
Wide Range Achievement Test                              .88       -.13       -.02
WAIS-R Information                                       .83        .01       -.07
Army Classification Battery 1 (Arithmetic Reasoning)     .82        .04       -.02
General Information Test                                 .76        .02       -.04
Armed Forces Qualification Test                          .73        .22        .03
Controlled Oral Word Association Test                    .62       -.10        .04
Paced Auditory Serial Addition Test                      .52        .05        .15
Pattern Analysis Test                                    .45        .38        .09
California Verbal Learning Test                          .41        .23       -.04
Wisconsin Card Sorting Test                              .31        .19        .14
ROCFT Delayed Recall                                    -.05        .96       -.05
ROCFT Immediate Recall                                  -.05        .96       -.04
ROCFT Copy                                               .04        .62        .09
WAIS-R Block Design                                      .33        .45        .17
Grooved Pegboard Right                                  -.03       -.04        .91
Grooved Pegboard Left                                   -.03       -.01        .90
r between factors
  Factor 2                                               .47
  Factor 3                                               .32        .31
Eigenvalue                                              7.17       1.76       1.37

Note. Loadings ≥ .40 are bolded. ROCFT = Rey-Osterrieth Complex Figure Test. 1 First administration.

Table A-26
Rotated pattern matrix for Hermann et al. (1993) (n = 200)

                                    Factor 1   Factor 2   Factor 3
WAIS-R Vocabulary                      .96       -.09        .01
WAIS-R Similarities                    .91       -.07        .08
WAIS-R Comprehension                   .91       -.03        .00
WAIS-R Information                     .85        .09       -.08
WAIS-R Digit Symbol Coding             .55        .12        .21
WAIS-R Arithmetic                      .51        .30       -.49
Judgment of Line Orientation          -.11        .85       -.13
WAIS-R Object Assembly                -.03        .84        .16
WAIS-R Block Design                    .09        .82       -.09
WAIS-R Picture Completion              .02        .82       -.06
WAIS-R Picture Arrangement             .17        .70        .10
Hooper Visual Organization Test        .02        .64        .55
Facial Recognition Test                .24        .08        .85
WAIS-R Digit Span                      .33        .29       -.41
r between factors
  Factor 2                             .49
  Factor 3                            -.01        .02
Eigenvalue                            6.43       1.90       1.50

Note. Loadings ≥ .40 are bolded.

Table A-27
Rotated pattern matrix for Holtzer et al. (2006) (n = 186)

                                         Factor 1   Factor 2
WAIS 1 Information                          .91        .14
WAIS 1 Vocabulary                           .89        .09
Boston Naming Test                          .79        .00
Verbal Fluency (Phonemic)                   .69       -.14
WAIS 1 Digit Span                           .64       -.01
Verbal Fluency (Semantic)                   .52       -.0
TMT Part A                                  .09        .90
TMT Part B                                 -.08        .83
WAIS 1 Digit-Symbol Coding                  .35       -.65
Free and Cued Selective Reminding Test     -.09       -.62
WAIS Block Design                           .36       -.45
r between factors: Factor 2                -.40
Eigenvalue                                 4.99       1.61

Note. Loadings ≥ .40 are bolded. 1 It is unclear whether included WAIS subtests were from the WAIS-R or WAIS-III.

Table A-28
Unrotated component matrix for Johnstone et al. (1997) (n = 308)

                                    Factor 1
Halstead Category Test Subtest VII     .81
Halstead Category Test Subtest VI      .80
Halstead Category Test Subtest IV      .74
Halstead Category Test Subtest V       .70
TMT Part B                             .58
Halstead Category Test Subtest III     .58
Halstead Category Test Subtest I       .27
Halstead Category Test Subtest II      .17
Eigenvalue                            3.11

Note. Loadings ≥ .40 are bolded. TMT = Trail Making Test.

Table A-29
Rotated pattern matrix for Johnstone et al. (2000) (n = 291)

                                         Factor 1   Factor 2
RAVLT Trial III                             .92       -.03
RAVLT Trial II                              .89       -.01
RAVLT Trial IV                              .88       -.03
RAVLT Trial I                               .86        .12
RAVLT Trial V                               .82       -.08
WMS-R Logical Memory                        .52       -.30
Tactual Performance Test Trial II           .13        .92
Tactual Performance Test Trial III          .04        .90
Tactual Performance Test Trial I            .11        .87
Tactual Performance Test (Localization)     .16       -.66
Tactual Performance Test (Memory)           .13       -.64
Category Test Subtest 7                    -.19        .46
WMS-R Visual Reproduction                   .33       -.43
r between factors: Factor 2                -.53
Eigenvalue                                 6.71       1.73

Note. Loadings ≥ .40 are bolded. RAVLT = Rey Auditory Verbal Learning Test.

Table A-30
Rotated pattern matrix for Jones & Ayers (2006) (n = 135)

                                          Factor 1   Factor 2
WAIS-III Digit Span Backward                 .85       -.18
WRAT3 Reading                                .84       -.27
Indiana University Token Test 1              .80        .14
MMSE Language/Praxis Items                   .79        .04
MMSE Attention Items                         .76        .08
WAIS-III Digit Span Forward                  .71       -.13
Verbal Fluency Phonemic 1                    .70        .11
Figure Copy                                  .68        .18
Boston Naming Test 1                         .60        .12
Poppelreuter Superimposed Figures Test       .58        .09
Greek Cross Drawing                         -.55        .03
Go/No-go Test                                .46        .39
MMSE Registration Items                      .38        .01
CERAD Word List Delayed Recall              -.12        .90
MMSE Recall Items                            .02        .74
CERAD Word List Recognition Trials (Yes)    -.04        .72
CERAD Word List Learning Trial 3             .26        .64
CERAD Word List Recognition Trials (No)     -.04        .60
MMSE Orientation Items                       .37        .53
Verbal Fluency Semantic 1                    .42        .42
r between factors                            .36
Eigenvalue                                  7.97       2.41

Note. Loadings ≥ .40 are bolded. WRAT3 = Wide Range Achievement Test, Third Edition; MMSE = Mini-Mental State Examination; CERAD = Consortium to Establish a Registry for Alzheimer's Disease. 1 These test variables are similar to MAE test variables.

Table A-31
Unrotated component matrix for Keefe et al. (2006) (n = 1332)

                                       Factor 1
WAIS-R Digit-Symbol Coding                .80
Letter-Number Sequencing                  .76
CPT Mean Response Time                    .70
Controlled Oral Word Association Test     .70
HVLT (Recall across 3 Learning Trials)    .68
WISC-III Mazes Raw Score                  .66
Grooved Pegboard 1                        .66
Visuospatial Working Memory               .65
WCST 2                                    .54
Eigenvalue                               4.24

Note. Loadings ≥ .40 are bolded. CPT = Continuous Performance Test; HVLT = Hopkins Verbal Learning Test; WCST = Wisconsin Card Sorting Test. 1 Dominant hand averaged across two trials. 2 Mean of Perseverative Errors (sign reversed) and Categories Complete z-scores (64 cards).

Table A-32
Rotated pattern matrix for Kessler et al. (1994) (n = 185)

                                                 Factor 1   Factor 2   Factor 3
MDRS Search-Count A's (Tasks AD and AE)             .79       -.11       -.08
MDRS Read Word List (Task AH)                       .77        .09        .03
MDRS Match Designs (Task AJ)                        .69       -.37       -.15
MDRS Sentence Generation (Task AB)                  .68        .15        .23
MDRS Word Recognition (Task AI)                     .68       -.14        .16
MDRS Design Recognition (Task AK)                   .67       -.33       -.03
MDRS Identities and Oddities (Task V)               .57       -.25        .12
MDRS Write Name (Task U)                            .33        .07        .09
MDRS Simple Copy (Tasks S and T)                    .12       -.83       -.10
MDRS Complex Copy (Tasks P, Q, and R)              -.01       -.79        .12
MDRS Alternating Movements (Tasks I, J, and K)     -.01       -.76        .09
MDRS Graphomotor (Tasks L, M, N, and O)             .08       -.71        .20
MDRS Sentence Recall (Tasks AF and AG)              .10        .13        .77
MDRS Orientation (Task AC)                          .18       -.10        .73
MDRS Verbal Fluency (Tasks E and F)                 .18       -.11        .68
MDRS Verbal Reasoning (Tasks W, X, Y, and Z)        .21       -.00        .67
MDRS Repeat Phonemes (Tasks G and H)               -.07       -.25        .57
MDRS Digit Span (Task A)                            .26       -.08        .56
MDRS Follow Commands 1                              .11        .02        .56
MDRS Imitate Movements 2                           -.17       -.04        .52
r between factors
  Factor 2                                         -.37
  Factor 3                                          .43       -.31
Eigenvalue                                         8.15       1.91       1.48

Note. Loadings ≥ .40 are bolded. MDRS = Mattis Dementia Rating Scale. 1 Includes subtasks B-1, B-2, C-1, C-2, C-3, and C-4. 2 Includes subtasks D-1, D-2, D-3, and D-4.

Table A-33
Unrotated component matrix for Korkman et al. (2007): 3-4 year old sample (n = 185)

                                       Factor 1
Body Part Identification                  .70
Affect Recognition                        .67
Comprehension of Instructions             .67
Body Part Naming                          .67
Sentence Repetition                       .66
Narrative Memory Free and Cued Recall     .64
Theory of Mind                            .59
Design Copy Process                       .53
Block Construction                        .52
Phonological Processing                   .51
Memory for Designs                        .49
Speeded Naming Combined                   .48
Visuomotor Precision Combined             .44
Statue                                    .42
Eigenvalue                               4.67

Note. Loadings ≥ .40 are bolded.

Table A-34
Rotated pattern matrix for Korkman et al. (2007): 5-6 year old sample (n = 200)

                                       Factor 1   Factor 2
Phonological Processing                   .70       -.02
Sentence Repetition                       .70       -.15
Speeded Naming                            .64       -.04
Comprehension of Instructions             .62        .07
Narrative Memory Free and Cued Recall     .55        .07
Inhibition-Naming                         .54        .12
Block Construction                        .53        .23
Theory of Mind                            .53        .23
Design Copying Process                    .50        .23
Fingertip Tapping Sequences               .49       -.18
Statue                                    .46       -.11
Visuomotor Precision                      .42        .16
Inhibition-Inhibition                     .42        .15
Fingertip Tapping Repetitions             .39       -.28
Auditory Attention                        .38        .09
Arrows                                    .37        .34
Memory for Faces                         -.06        .72
Memory for Designs                        .20        .71
Memory for Faces Delayed                  .01        .69
Memory for Designs Delayed                .12        .65
Affect Recognition                        .17        .17
r between factors: Factor 2               .26
Eigenvalue                               5.42       1.83

Note. Loadings ≥ .40 are bolded.

Table A-35
Rotated pattern matrix for Korkman et al. (2007): 7-12 year old sample (n = 600)

                                       Factor 1   Factor 2   Factor 3   Factor 4
Inhibition-Inhibition                     .61        .09        .05       -.05
Inhibition-Naming                         .61        .14       -.05        .05
Inhibition-Switching                      .61        .09       -.08       -.21
Clocks                                    .47       -.05        .05       -.22
Visuomotor Precision                      .47       -.14        .04        .07
Response Set                              .42        .36        .00       -.11
Speeded Naming                            .39        .26       -.03       -.22
Design Copying Process                    .32       -.10        .32       -.09
Block Construction                        .31       -.16        .18       -.20
Fingertip Tapping Sequences               .14        .77        .10       -.04
Fingertip Tapping Repetitions             .04        .74        .05        .08
Arrows                                    .20       -.28        .19       -.24
Memory for Faces Delayed                 -.23        .12        .75       -.02
Memory for Faces                         -.38        .16        .72       -.12
Memory for Designs Delayed                .22       -.11        .67        .12
Memory for Designs                        .28       -.17        .63        .11
Picture Puzzles                           .31       -.16        .35       -.25
Affect Recognition                        .15        .07        .34       -.09
Word List Interference-Recall            -.15       -.07       -.00       -.76
Word List Interference-Repetition        -.16        .10       -.07       -.73
Comprehension of Instructions             .05        .03        .01       -.70
Narrative Memory Free and Cued Recall     .10       -.05       -.09       -.59
Phonological Processing                   .19       -.03        .12       -.57
Animal Sorting Total Correct Sorts        .11       -.06        .05       -.43
Auditory Attention                        .07        .25        .12       -.40
Geometric Puzzles                         .15       -.22        .13       -.28
r between factors
  Factor 2                               -.03
  Factor 3                                .25       -.08
  Factor 4                               -.38       -.06       -.32
Eigenvalue                               5.77       1.97       1.54       1.34

Note. Loadings ≥ .40 are bolded.

Table A-36 Rotated pattern matrix for Korkman et al. (2007): 13-16 year old sample (n = 200)
Factor 1  Factor 2  Factor 3
Geometric Puzzles  .68  -.15  -.06
Picture Puzzles  .66  -.14  -.31
Comprehension of Instructions  .58  .16  .00
Design Copying Process  .57  -.22  -.24
Memory for Designs  .55  -.01  -.07
Phonological Processing  .55  .02  -.04
Narrative Memory Free and Cued Recall  .54  .15  .13
Arrows  .54  -.23  -.12
Memory for Designs Delayed  .52  .03  -.06
Block Construction  .51  -.32  .00
Inhibition-Switching Combined  .51  .10  .03
Word List Interference-Recall  .48  .22  .23
Clocks  .47  .03  .07
Inhibition-Inhibition  .43  .14  -.13
Word List Interference-Repetition  .40  .19  .12
Animal Sorting (Correct Sorts)  .34  .26  .08
Word Generation-Semantic 1  -.02  .61  -.13
Fingertip Tapping Sequences  .06  .59  .03
Speeded Naming  .17  .51  .05
Word Generation-Initial Letter 1  .09  .47  -.03
Memory for Names Delayed  .01  .45  -.44
Fingertip Tapping Repetitions  -.20  .44  -.05
Memory for Names  .12  .42  -.42
Inhibition-Naming  .26  .29  .06
Memory for Faces  -.14  -.03  -.81
Memory for Faces Delayed  -.04  .06  -.78
Affect Recognition  .34  -.18  -.49
Response Set  .07  .18  -.34
Auditory Attention  .14  .19  -.23
r between factors:
Factor 2  .18
Factor 3  -.18  -.06
Eigenvalue  5.54  2.28  1.99
Note. Loadings ≥ .40 are bolded. 1 Tasks are similar to the Controlled Oral Word Association Test.

Table A-37 Rotated pattern matrix for Korkman et al. (2007): Clinical sample (n = 238)
Factor 1  Factor 2  Factor 3
Picture Puzzles  .77  -.12  -.12
Design Copying Process  .76  -.00  -.02
Arrows  .69  -.06  -.10
Memory for Faces  .64  .08  .11
Memory for Designs  .59  -.06  -.24
Memory for Faces Delayed  .56  .20  .06
Block Construction  .55  .18  -.10
Geometric Puzzles  .53  .08  -.13
Affect Recognition  .53  -.06  -.04
Clocks  .52  .05  -.16
Visuomotor Precision  .52  .05  .18
Memory for Designs Delayed  .49  .04  -.20
Fingertip Tapping Repetitions  -.16  .78  .05
Fingertip Tapping Sequences  -.06  .73  -.03
Response Set  .11  .61  .04
Inhibition-Switching  .13  .50  -.10
Auditory Attention  .09  .45  -.13
Inhibition-Naming  .18  .44  .00
Inhibition-Inhibition  .28  .37  -.09
Speeded Naming  .01  .33  -.29
Word List Interference-Recall  .06  -.04  -.78
Comprehension of Instructions  .04  .13  -.71
Narrative Memory Free and Cued Recall  .10  -.07  -.69
Word List Interference-Repetition  -.17  .24  -.69
Phonological Processing  .11  .13  -.66
Animal Sorting  .29  -.12  -.54
r between factors:
Factor 2  .35
Factor 3  -.43  -.32
Eigenvalue  8.00  1.99  1.60
Note. Loadings ≥ .40 are bolded.

Table A-38 Rotated pattern matrix for Lehto et al. (2003) (n = 108)
Factor 1  Factor 2  Factor 3
Matching Familiar Figures: Reaction time  .77  .26  .02
Tower of London: Perfect solutions  .77  -.19  -.06
Tower of London: Additional moves  -.75  .29  .02
Matching Familiar Figures: Errors  -.74  -.15  -.01
NEPSY Word Fluency  .35  -.30  .24
TMT Part C (Letters)  .05  .88  .05
TMT Part A (Numbers)  .18  .83  .00
TMT Part B (Alternating Numbers and Letters)  -.17  .70  -.06
NEPSY Auditory Attention and Response Set: Part A  -.26  .12  .87
NEPSY Auditory Attention and Response Set: Part B  -.07  .07  .82
Spatial Working Memory: Errors across trials  -.34  .04  -.54
WISC-R Mazes  .38  .08  .48
Spatial Span  .04  -.21  .46
Spatial Working Memory: Strategy score  -.13  .11  -.36
r between factors:
Factor 2  -.16
Factor 3  .31  -.17
Eigenvalue  3.97  2.06  1.62
Note. Loadings ≥ .40 are bolded. TMT = Trail Making Test.

Table A-39 Unrotated component matrix for Logue et al. (1993) (n = 886)
Factor 1
NCSE Constructions  .73
NCSE Repetition  .69
NCSE Similarities  .68
NCSE Calculations  .67
NCSE Naming  .66
NCSE Memory  .65
NCSE Comprehension  .65
NCSE Orientation  .63
NCSE Attention  .55
NCSE Judgment  .48
Eigenvalue  4.13
Note. Loadings ≥ .40 are bolded. NCSE = Neurobehavioral Cognitive Status Examination.

Table A-40 Rotated pattern matrix for Lyness et al. (2006): Spanish speaking participants (n = 54)
Factor 1
MDRS Initiation/Perseveration  .82
MDRS Attention  -.75
MDRS Conceptualization  .24
MDRS Memory  -.10
Eigenvalue  1.30
Note. Loadings ≥ .40 are bolded. MDRS = Mattis Dementia Rating Scale.

Table A-41 Unrotated component matrix for Lyness et al. (2006): English speaking participants (n = 30)
Factor 1
MDRS Construction  .85
MDRS Conceptualization  .73
MDRS Attention  -.37
MDRS Initiation/Perseveration  -.27
MDRS Memory  -.19
Eigenvalue  1.51
Note. Loadings ≥ .40 are bolded. MDRS = Mattis Dementia Rating Scale.

Table A-42 Unrotated component matrix for Macciocchi et al. (1992) (n = 161)
Factor 1
LNNB Memory  .88
LNNB Motor Functions  .88
WAIS-R Verbal IQ  -.82
Buschke Selective Reminding Test  -.69
Finger Tapping Test (Dominant Hand)  -.62
Grooved Pegboard Test (Dominant Hand)  .61
Eigenvalue  3.47
Note. Loadings ≥ .40 are bolded. LNNB = Luria-Nebraska Neuropsychological Battery.

Table A-43 Unrotated component matrix for Man et al. (2006) (n = 148)
Factor 1
NCSE Similarities  .82
NCSE Calculations  .81
NCSE Orientation  .80
NCSE Constructions  .76
NCSE Memory  .76
NCSE Comprehension  .75
NCSE Naming  .74
NCSE Attention  .71
NCSE Repetition  .69
NCSE Judgment  .65
Eigenvalue  5.62
Note. Loadings ≥ .40 are bolded. NCSE = Neurobehavioral Cognitive Status Examination.

Table A-44 Rotated pattern matrix for Merten (2005) (n = 200)
Factor 1  Factor 2  Factor 3
TMT Part A  -.87  -.15  .18
3-D Block Construction  .82  -.09  -.11
WAIS-R Block Design  .76  -.00  .14
Corsi Block Tapping  .73  -.07  .02
Hooper Visual Organization Test  .71  .13  .03
Reaction time to visual stimuli  -.70  -.12  .09
TMT Part B  -.68  -.25  -.04
Standard Progressive Matrices  .66  -.03  .36
Judgment of Line Orientation  .62  -.21  .40
VOSP Silhouettes  .50  .13  .18
Buschke Verbal Memory Delay Recall  .00  .76  .02
Naming Speed  .05  -.76  .03
Immediate Recall of Named Objects  -.11  -.71  -.02
Confrontation Naming  .16  .68  .02
Mini-Mental State  .24  .51  .29
Buschke Verbal Memory Trial 1  -.04  .48  .40
Vocabulary (Multiple Choice)  .07  -.08  .80
WAIS-R Vocabulary  -.06  .20  .78
WAIS-R Digit Span Forward  -.02  .02  .74
WAIS-R Digit Span Backward  .18  .19  .60
r between factors:
Factor 2  .39
Factor 3  .43  .38
Eigenvalue  8.72  2.01  1.44
Note. Loadings ≥ .40 are bolded. VOSP = Visual Object and Space Perception Battery.

Table A-45 Rotated pattern matrix for Meyers & Meyers (1995): Normative Sample (n = 601)
Factor 1  Factor 2
ROCFT Delayed Recall  .94  -.01
ROCFT Immediate Recall  .92  .02
ROCFT Copy  .59  -.01
ROCFT Recognition True Negative  .22  .82
ROCFT Recognition False Negative  -.23  .80
r between factors: Factor 2  -.27
Eigenvalue  2.18  1.31
Note. Loadings ≥ .40 are bolded. ROCFT = Rey-Osterrieth Complex Figure Test.

Table A-46 Rotated pattern matrix for Meyers & Meyers (1995): Neurologic Sample (n = 100)
Factor 1  Factor 2
ROCFT Delayed Recall  .92  -.05
ROCFT Immediate Recall  .91  -.05
ROCFT Copy  .78  .09
Hooper Visual Organization Test  .58  .42
ROCFT Recognition False Negative  .18  .88
ROCFT Recognition False Positive  .57  -.61
r between factors: Factor 2  .06
Eigenvalue  2.98  1.33
Note. Loadings ≥ .40 are bolded. ROCFT = Rey-Osterrieth Complex Figure Test.

Table A-47 Rotated pattern matrix for Moses (1983): Motor scale (n = 542)
LNNB Item  Fac. 1  Fac. 2  Fac. 3  Fac. 4  Fac. 5  Fac. 6  Fac. 7  Fac. 8
4  .78  .01  .24  -.12  .02  .05  .00  -.32
1  .77  .09  -.03  -.01  .04  .01  .00  .11
2  .77  .02  .07  .05  .10  -.07  .04  -.14
21  .77  .01  .01  -.01  .05  .04  -.03  .02
22  .74  .08  .04  .02  -.01  .07  -.12  .02
3  .73  -.00  -.18  -.09  .11  .07  -.04  .22
23  .72  .10  .03  .11  -.01  .05  -.08  .01
33  .63  .04  -.04  .21  -.04  -.00  -.05  .01
17  .20  .01  -.01  .04  .05  -.14  -.12  .19
45  .03  .87  .01  -.04  -.05  -.04  -.05  .03
39  .06  .87  .00  -.01  -.12  -.01  .10  .03
43  -.08  .87  .02  -.09  .01  -.06  -.08  .05
47  .02  .86  .09  -.04  -.02  -.02  -.04  .02
41  .06  .81  .06  .05  .06  .04  .06  -.02
37  .03  .80  -.13  -.05  -.08  .08  .07  .02
27  .02  .34  -.09  .21  .20  .01  .05  -.06
12  .08  .04  .79  -.12  .00  -.00  .02  .08
10  .02  .01  .74  -.06  .03  .08  -.09  .12
16  .01  -.01  .70  .20  -.04  .17  -.07  .15
20  -.02  -.03  .55  .52  .04  .11  -.07  .01
14  .08  .01  .44  .14  .26  -.04  -.02  .41
18  .15  .02  .30  .03  .12  .02  -.03  .06
24  .07  .19  .27  .19  .24  .05  -.22  -.14
25  .12  -.07  -.04  .70  -.02  -.14  .31  .09
51  .12  -.02  .13  .68  .03  -.07  .06  -.06
26  -.02  .01  -.07  .61  .05  .20  -.11  .05
50  -.17  .09  .10  .56  -.01  -.05  -.37  -.12
48  .06  .00  -.02  .53  .06  .14  -.01  .00
49  -.03  .06  -.06  .38  .12  -.04  -.11  .06
32  .18  -.04  .01  .30  .03  .26  .10  .14
31  .15  -.02  .05  .30  -.06  .04  -.04  .04
44  .03  -.08  -.03  -.10  .78  .06  -.11  .14
40  -.00  .00  -.01  .11  .74  .04  -.15  -.08
38  .10  -.09  .06  -.01  .72  .08  .06  .02
46  .14  -.03  .10  .07  .70  -.10  .05  -.06
42  .08  .05  -.03  .12  .34  -.18  -.02  -.05
36  -.04  .05  .00  -.05  .34  .09  .05  .03
29  -.07  .01  .20  .02  .06  .76  .01  -.07
28  .13  -.09  -.27  .17  .21  .67  .05  .10
30  .05  .07  .05  -.10  .10  .60  -.21  -.22
35  .16  .05  .12  .04  -.15  .56  .20  .22
7  .11  .00  .09  -.09  .11  -.00  -.74  .02
5  .12  .01  -.31  .03  .02  .13  -.69  .38
8  .25  -.06  .17  .06  -.04  -.16  -.62  .05
6  .15  -.03  .43  .06  -.04  .10  -.50  -.03
15  -.05  .07  .02  .09  -.07  .06  -.08  .79
9  -.02  -.00  .25  -.09  .07  .01  .02  .67
19  -.01  .01  -.05  .22  -.04  .02  -.28  .64
11  .11  .07  .22  -.17  .10  -.09  .11  .62
13  .03  .05  .46  .11  .25  -.06  .08  .47
34  .02  -.01  .13  .11  -.11  .15  -.12  .18
r between factors:
Factor 2  .19
Factor 3  .26  .07
Factor 4  .28  .08  .12
Factor 5  .32  .15  .12  .21
Factor 6  .13  .03  .13  .12  .06
Factor 7  -.20  -.08  -.17  -.20  -.11  -.07
Factor 8  .23  .04  .21  .17  .07  .10  -.11
Eigenvalue  10.97  4.28  3.02  2.28  2.03  1.96  1.91  1.69
Note. Loadings ≥ .40 are bolded. LNNB = Luria-Nebraska Neuropsychological Battery; Fac. = Factor.

Table A-48 Rotated pattern matrix for Moses (1984): Expressive speech scale (n = 542)
LNNB Item  Factor 1  Factor 2  Factor 3  Factor 4  Factor 5
151  .80  -.13  .14  -.14  .04
149  .73  -.04  .02  -.05  -.18
148  .68  -.32  -.04  -.13  .10
140  .62  .08  -.27  .06  -.05
144  .60  -.07  -.14  .16  .10
138  .49  .05  -.40  -.03  -.11
172  .40  -.07  .12  .35  -.08
145  .33  -.26  -.15  -.30  -.24
174  .28  .02  .01  .17  -.07
173  .28  .01  .00  .16  -.04
169  .28  -.01  -.09  .09  .02
146  -.04  -.83  -.01  -.04  -.11
147  .02  -.81  -.16  -.12  -.01
152  .11  -.66  .14  .08  -.25
150  .06  -.64  -.04  .0  -.02
153  .18  -.63  -.28  -.26  .18
158  -.17  -.47  .02  .45  -.29
159  -.01  -.46  -.31  .09  .10
164  .06  -.16  -.05  .04  -.01
134  .15  .01  -.72  .13  .15
135  .08  -.05  -.64  -.10  -.19
139  -.07  -.06  -.62  .18  -.00
137  .21  -.06  -.62  -.14  -.07
136  -.10  -.30  -.62  .08  .28
142  .10  -.14  -.61  -.19  -.11
133  -.23  -.13  -.54  .21  -.21
154  .34  -.05  -.40  -.26  -.30
160  -.10  -.13  -.38  .22  -.21
155  .30  .14  -.37  .09  -.32
141  .25  -.04  -.35  .26  -.12
143  .02  -.01  -.09  .65  -.14
157  .20  -.17  .03  .49  -.17
170  .20  -.21  -.09  .48  .35
171  .36  -.10  .02  .47  .15
168  .02  .10  -.14  .31  -.01
162  -.06  -.14  .05  .08  -.71
167  -.06  -.18  -.06  -.11  -.54
163  .04  -.06  -.25  .10  -.53
161  .25  -.04  .08  .26  -.50
156  .32  .07  -.19  .11  -.38
166  .09  .12  -.16  -.01  -.25
165  .09  -.17  .06  .02  -.17
r between factors:
Factor 2  -.26
Factor 3  -.31  .29
Factor 4  .23  -.13  -.17
Factor 5  -.26  .17  .23  -.13
Eigenvalue  10.50  2.54  2.07  1.93  1.80
Note. Loadings ≥ .40 are bolded. LNNB = Luria-Nebraska Neuropsychological Battery.

Table A-49 Unrotated component matrix for O'Donnell et al. (1994) (n = 117)
Factor 1
TMT Part B  .82
PASAT (Number correct across 4 trials)  .80
VSAT Cancellation Test  .71
Category Test (Number of errors)  .66
WCST Perseverative Responses  .54
Eigenvalue  2.55
Note. Loadings ≥ .40 are bolded. PASAT = Paced Auditory Serial Addition Test; VSAT = Visual Search and Attention Test; WCST = Wisconsin Card Sorting Test.

Table A-50 Unrotated component matrix for Osmon et al. (1992) (n = 36)
Factor 1
NCSE Comprehension  .78
NCSE Orientation  .77
NCSE Naming  .77
NCSE Memory  .77
NCSE Similarities  .63
NCSE Constructions  .57
NCSE Attention  .57
NCSE Judgment  .56
NCSE Calculations  .45
NCSE Repetition  .43
Eigenvalue  4.14
Note. Loadings ≥ .40 are bolded. NCSE = Neurobehavioral Cognitive Status Examination.

Table A-51 Unrotated component matrix for Phelps (1995) (n = 115)
Factor 1
WRAML Design Memory  .69
WRAML Sound Symbol  .68
WRAML Sentence Memory  .64
WRAML Story Memory  .59
WRAML Finger Windows  .55
WRAML Verbal Learning  .54
WRAML Picture Memory  .51
WRAML Number/Letter  .47
WRAML Visual Learning  .39
Eigenvalue  2.91
Note. Loadings ≥ .40 are bolded. WRAML = Wide Range Assessment of Memory and Learning.
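For an unrotated principal component of standardized variables, the eigenvalue divided by the number of variables gives the proportion of total variance the component explains; for example, Table A-50's first component (eigenvalue 4.14 across 10 NCSE subtests) accounts for roughly 41% of the variance. A minimal sketch of this arithmetic:

```python
def variance_explained(eigenvalue, n_variables):
    """Proportion of total variance explained by a principal component
    of standardized (unit-variance) variables, where total variance
    equals the number of variables."""
    return eigenvalue / n_variables

# Eigenvalue 4.14 across the 10 NCSE subtests of Table A-50:
print(round(variance_explained(4.14, 10), 2))  # 0.41
```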

Table A-52 Rotated pattern matrix for Pineda & Merchan (2003) (n = 100)
Factor 1  Factor 2  Factor 3
WCST Categories  -.90  .04  -.27
WCST Non-perseverative Errors  .81  .08  -.08
WCST Perseverative Errors  .79  -.02  .12
WCST Failure to Maintain Set  .48  .00  -.21
Stroop Color  .07  .83  -.01
Stroop Color-Word  .06  .70  .19
Controlled Oral Word Association Test (Semantic)  -.05  -.50  .06
Controlled Oral Word Association Test (Phonemic)  .12  -.45  .08
Stroop Word  .03  .41  .10
TMT Part A  -.00  .01  .92
TMT Part B  -.01  .06  .86
r between factors:
Factor 2  .09
Factor 3  .05  .16
Eigenvalue  2.76  1.89  1.38
Note. Loadings ≥ .40 are bolded. WCST = Wisconsin Card Sorting Test.

Table A-53 Rotated pattern matrix for Pontón et al. (2000) (n = 300)
Factor 1  Factor 2
WAIS-R Digit Symbol Coding 1  .80  .10
Block Design  .80  .02
Color Trails 2  -.76  .01
Raven's Progressive Matrices  .73  .12
Color Trails 1  -.71  .01
Pin Test Dominant Hand  .68  -.04
ROCFT Copy  .68  .03
Controlled Oral Word Association Test  .59  -.06
ROCFT Delayed Recall  .59  .14
Boston Naming Test  .57  -.04
Pin Test Non-Dominant Hand  .56  -.02
WAIS-R Digit Span Forward  .50  -.10
WAIS-R Digit Span Backward  .43  .13
Auditory Verbal Learning Test Short Delay 2  -.03  .89
Auditory Verbal Learning Test Immediate Recall 3  .01  .87
Auditory Verbal Learning Test Long Delay  .08  .87
r between factors: Factor 2  .34
Eigenvalue  6.23  1.87
Note. Loadings ≥ .40 are bolded. 1 Digit Symbol Coding was slightly altered: 6 target symbols were used instead of the 9 included in the standardized version. 2 Original words recalled after a distracter list. 3 Original words recalled after 5 immediate learning trials.

Table A-54 Unrotated component matrix for Riva et al. (2000) (n = 160)
Factor 1
Semantic Fluency (Animal)  .84
Boston Naming Test  .80
Semantic Fluency (Food)  .78
Phonemic Fluency (Letter B)  .74
Phonemic Fluency (Letter S)  .71
Eigenvalue  3.01
Note. Loadings ≥ .40 are bolded.
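Unrotated component loadings such as those in Table A-54 come from an eigendecomposition of the variables' correlation matrix: the first component's loadings are its leading eigenvector scaled by the square root of the associated eigenvalue. A minimal NumPy sketch with a hypothetical 3 × 3 correlation matrix (illustrative values only, not data from any table in this appendix):

```python
import numpy as np

# Hypothetical correlation matrix for three test variables.
R = np.array([
    [1.00, 0.55, 0.48],
    [0.55, 1.00, 0.51],
    [0.48, 0.51, 1.00],
])

# Eigendecomposition of the symmetric correlation matrix.
eigvals, eigvecs = np.linalg.eigh(R)

# Sort components by descending eigenvalue.
order = np.argsort(eigvals)[::-1]
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

# First-component loadings: eigenvector times sqrt(eigenvalue).
loadings = eigvecs[:, 0] * np.sqrt(eigvals[0])
loadings *= np.sign(loadings.sum())  # eigenvector sign is arbitrary

print(np.round(eigvals[0], 2))  # eigenvalue of the first component
print(np.round(loadings, 2))    # unrotated loadings
```

The squared loadings of a component sum to its eigenvalue, which is why the tables report eigenvalues alongside the loading columns.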

Table A-55 Rotated pattern matrix for Royce et al. (1976) (n = 176)
Factor 1  Factor 2  Factor 3  Factor 4  Factor 5
TPT Memory (Correct)  -.65  .01  -.05  -.12  -.00
TPT Non-preferred (Latency)  .60  .16  .01  .22  -.19
TPT Preferred (Latency)  .59  .10  -.03  .00  -.33
TPT Localization (Correct)  -.58  -.14  .11  -.23  .05
TMT Part B  .57  -.14  .08  .01  -.20
Organic Integrity Test  .53  -.15  -.03  .01  -.11
Seashore Rhythm Test  .53  -.19  .21  -.22  -.06
WAIS Block Design  -.49  .13  -.15  -.39  .07
WAIS Object Assembly  -.45  .12  -.15  -.31  .11
Colored Progressive Matrices II  .43  -.16  .14  .43  .07
Colored Progressive Matrices I  .42  -.12  .15  .32  .07
TMT Part A  .41  -.13  .37  -.04  -.13
Binaural Beats  -.38  -.09  -.37  .05  -.22
WAIS Vocabulary  -.22  .78  -.05  -.03  -.08
Color Cognition Memory  -.19  -.66  .36  -.01  -.10
WAIS Similarities  -.43  .61  -.02  -.08  -.05
Wepman-Jones Test for Aphasia 1  .15  -.58  .10  -.05  -.09
Oral Word Fluency  -.31  .53  -.03  .02  .24
Apparent Motion (Frequency)  .11  .27  .12  -.03  -.08
Halstead Category Test I  -.28  -.28  .71  .05  -.05
Color Cognition Sorting  -.09  -.18  .70  .18  -.05
Face Hand Displacements  .05  .04  .67  .12  .09
Finger Discrimination (Right)  -.02  -.08  .65  .17  -.12
Finger Discrimination (Left)  .12  .18  .55  .24  -.14
Halstead Category Test II  .13  .04  .50  -.08  -.17
Face Hand Omissions  .16  .17  .42  .01  .07
Symbol-Gestalt  -.22  .25  -.34  -.05  .25
Halstead Category Test VII  .05  .15  .15  .72  -.16
Halstead Category Test VI  .04  -.01  .11  .71  -.10
Halstead Category Test V  -.03  .05  .20  .65  -.11
Halstead Category Test IV  .23  -.19  -.16  .62  -.04
Halstead Category Test III  -.02  .19  .11  .52  .03
Minute Estimation  .11  .35  .11  -.47  .05
Colored Progressive Matrices III  .41  -.22  .04  .42  .04
FTT Non-Dominant Hand  .04  -.13  .05  -.07  .78
FTT Dominant Hand  .07  .17  .08  .04  .78
PPT Dominant Hand  -.10  -.02  -.30  -.02  .51
Retinal Rivalry (Frequency)  -.07  -.02  -.01  -.06  .50
PPT Non-Dominant Hand  -.26  -.30  -.17  -.14  .43
r between factors:
Factor 2  -.09
Factor 3  .38  -.09
Factor 4  .32  -.11  .27
Factor 5  -.30  .09  -.20  -.26
Eigenvalue  11.77  2.69  1.98  1.75  1.64
Note. Loadings ≥ .40 are bolded. TPT = Tactual Performance Test; Loc. = Localization; FTT = Finger Tapping Test; PPT = Purdue Pegboard Test. 1 Precursor to the Aphasia Screening Test.

Table A-56 Rotated pattern matrix for Ryan et al. (1984) (n = 108)
Factor 1  Factor 2  Factor 3
RAVLT Trial VI (Short Delay)  .99  -.06  -.13
RAVLT Trials I-V  .91  -.06  .09
WMS Associate Learning  .77  .01  .09
RAVLT Recognition  .76  .08  .02
WMS Logical Memory I  .58  .35  -.00
WAIS-R Vocabulary  -.01  .95  -.05
WAIS-R Information  -.06  .90  -.01
WAIS-R Similarities  .11  .76  .13
WMS Digit Span  -.18  -.08  .91
WAIS-R Block Design  .01  .23  .75
WAIS-R Digit Symbol  .37  -.11  .68
WMS Visual Reproduction I  .35  .06  .49
WAIS-R Picture Arrangement  .27  .19  .49
WMS Mental Control  .07  .25  .37
r between factors:
Factor 2  .40
Factor 3  .43  .38
Eigenvalue  6.56  1.68  1.37
Note. Loadings ≥ .40 are bolded. RAVLT = Rey Auditory Verbal Learning Test.

Table A-57 Rotated pattern matrix for Salthouse et al. (1996) (n = 259)
Factor 1  Factor 2
WCST Perseveration Errors  .86  -.19
WCST Conceptual Learning  .85  -.18
WAIS-R Block Design  .73  .05
TMT Part A  .72  .03
Shipley Abstraction  .71  .17
TMT Part B  .70  .14
WAIS-R Digit Symbol  .68  .22
Letter Comparison (letters similar or diff.)  .65  .14
WAIS-R Object Assembly  .58  .11
Pattern Comparison (patterns similar or diff.)  .57  .31
RAVLT Trial 2  -.11  .85
RAVLT Trial 6  -.01  .80
Paired Associates 2  .01  .69
Paired Associates 1  .10  .68
Verbal Fluency (Letter F) 1  .13  .53
Verbal Fluency (Letter S) 1  .21  .52
r between factors: Factor 2  .56
Eigenvalue  7.49  1.46
Note. Loadings ≥ .40 are bolded. WCST = Wisconsin Card Sorting Test; diff. = different; RAVLT = Rey Auditory Verbal Learning Test. 1 Tasks are similar to the Controlled Oral Word Association Test.

Table A-58 Unrotated component matrix for Schwamm et al. (1987) (n = 30)
Factor 1
NCSE Comprehension  .87
NCSE Attention  .86
NCSE Similarities  .77
NCSE Naming  .73
NCSE Calculations  .73
NCSE Judgment  .72
NCSE Repetition  .72
NCSE Memory  .65
NCSE Constructions  .63
NCSE Orientation  .61
Eigenvalue  5.37
Note. Loadings ≥ .40 are bolded. NCSE = Neurobehavioral Cognitive Status Examination.

Table A-59 Rotated pattern matrix for Sewell et al. (1988) (n = 60)
Factor 1  Factor 2
WMS Memory Passages I  .86  -.07
WMS Memory Passages II  .84  -.04
Quick Test (Verbal Intelligence)  .72  -.0
WAIS-R Information  .69  .08
Boston Naming Test  .59  .18
WMS Visual Reproduction II  -.11  .95
WMS Visual Reproduction I  -.07  .95
WMS Digit Span  .14  .52
WMS Associate Learning  .01  .27
r between factors: Factor 2  .26
Eigenvalue  3.15  1.85
Note. Loadings ≥ .40 are bolded.

Table A-60 Rotated pattern matrix for Shelly & Goldstein (1982) (n = 150)
Factor 1  Factor 2  Factor 3  Factor 4
LNNB Receptive Speech  .54  .12  .38  .10
LNNB Expressive Speech  .53  .21  .37  .04
LNNB Rhythm  .48  .22  .11  .27
WRAT Spelling  -.13  .90  .03  .06
WRAT Reading  -.09  .90  .10  -.09
LNNB Reading  .39  .77  .04  -.18
LNNB Writing  .40  .66  .01  .09
WAIS Digit Span  .07  .59  .08  .14
LNNB Arithmetic  .25  .51  .15  .18
WAIS Arithmetic  -.14  .42  .34  .29
WAIS Comprehension  -.02  -.11  .99  -.03
WAIS Vocabulary  -.08  .16  .93  -.12
WAIS Information  -.04  .23  .70  .07
WAIS Similarities  .08  .03  .68  .15
LNNB Intellectual Processes  .32  .05  .54  .23
WAIS Object Assembly  -.03  -.10  .01  .91
WAIS Block Design  -.10  .17  -.04  .86
WAIS Digit Symbol  .16  -.01  .15  .65
WAIS Picture Arrangement  .10  .01  .24  .59
LNNB Motor Functions  .47  .08  .02  .56
LNNB Visual Functions  .47  .02  .04  .54
WRAT Arithmetic  -.23  .52  .09  .54
LNNB Tactile Functions  .42  .14  -.00  .51
WAIS Picture Completion  .13  .01  .41  .49
LNNB Memory  .30  .07  .31  .35
r between factors:
Factor 2  .32
Factor 3  .38  .61
Factor 4  .41  .43  .55
Eigenvalue  14.77  2.03  1.12  1.01
Note. Loadings ≥ .40 are bolded. LNNB = Luria-Nebraska Neuropsychological Battery; WRAT = Wide Range Achievement Test.

Table A-61 Unrotated component matrix for Silverberg et al. (2007) (n = 77)
Factor 1
TMT Part B  -.83
WAIS Letter-Number Sequencing  .80
Stroop (Interference Trial)  -.79
CVLT Recognition Discriminability  .75
Grooved Pegboard Test (Dominant Hand)  -.75
Judgment of Line Orientation  .71
Biber Cognitive Estimation Test  .55
Eigenvalue  3.88
Note. Loadings ≥ .40 are bolded. CVLT = California Verbal Learning Test.

Table A-62 Rotated pattern matrix for Sung & Davis (1981): African-American student sample (n = 272)
Factor 1  Factor 2  Factor 3  Factor 4
BAB Pitch Discrimination  .88  -.33  -.11  -.08
BAB Vocabulary  .65  .15  .17  .02
BAB Tonal Memory  .62  .13  .17  -.11
BAB Idea Fluency  .55  .07  .05  .44
BAB Analytical Reasoning  .45  .16  -.06  .33
BAB Shape Assembly  .13  .71  .07  .15
BAB Grip (Right)  -.34  .65  -.12  -.17
BAB Paper Folding  .33  .58  .01  .22
BAB Word Associations  -.12  -.05  .83  .08
Seashore Rhythm  .24  .02  .69  -.12
BAB Inductive Reasoning  .13  .47  .51  .01
BAB Writing Speed  -.10  -.13  -.02  .81
BAB Ideaphoria  .23  .17  -.22  .66
BAB Clerical Test  -.04  -.32  .50  .59
BAB Finger Dexterity (Right)  -.08  .16  .16  .53
r between factors:
Factor 2  .15
Factor 3  .20  .08
Factor 4  .26  .11  .21
Eigenvalue  4.26  1.68  1.48  1.35
Note. Loadings ≥ .40 are bolded. BAB = Ball Aptitude Battery.

Table A-63 Rotated pattern matrix for Sung & Davis (1981): Caucasian student sample (n = 547)
Factor 1  Factor 2  Factor 3  Factor 4
BAB Pitch Discrimination  .81  -.06  -.15  -.04
BAB Tonal Memory  .76  -.03  .01  -.13
Seashore Rhythm  .67  .08  .02  -.04
BAB Vocabulary  .56  .01  .18  .15
BAB Analytical Reasoning  .55  .12  .16  .29
BAB Shape Assembly  .20  .76  .06  .15
BAB Grip (Right)  -.13  .69  -.07  -.19
BAB Paper Folding  .28  .52  .19  .34
BAB Clerical  .29  -.46  .29  .33
BAB Writing Speed  .06  -.31  .76  -.10
BAB Ideaphoria  .12  -.06  .74  -.08
BAB Finger Dexterity (Right)  -.30  .10  .62  .08
BAB Idea Fluency  .22  .20  .60  -.06
BAB Word Associations  -.26  -.17  -.17  .80
BAB Inductive Reasoning  .28  .19  .06  .54
r between factors:
Factor 2  .12
Factor 3  .29  .04
Factor 4  .18  .00  .19
Eigenvalue  4.12  1.83  1.33  1.18
Note. Loadings ≥ .40 are bolded. BAB = Ball Aptitude Battery.

Table A-64 Rotated pattern matrix for Sung & Davis (1981): Hispanic student sample (n = 188)
Factor 1  Factor 2  Factor 3
BAB Ideaphoria  .85  -.02  -.18
BAB Idea Fluency  .79  -.29  -.16
BAB Vocabulary  .68  -.21  .16
BAB Analytical Reasoning  .66  -.10  .20
BAB Paper Folding  .59  .46  .10
BAB Writing Speed  .50  .07  .09
BAB Inductive Reasoning  .36  .14  .20
BAB Grip (Right)  -.18  .74  .00
BAB Shape Assembly  .48  .60  -.03
BAB Finger Dexterity (Right)  .02  .52  .22
BAB Clerical  .20  -.48  .18
BAB Tonal Memory  .00  .10  .86
BAB Pitch Discrimination  -.14  -.28  .74
Seashore Rhythm  .06  .19  .62
BAB Word Associations  .20  .04  .46
r between factors:
Factor 2  .03
Factor 3  .31  .08
Eigenvalue  4.01  1.96  1.54
Note. Loadings ≥ .40 are bolded. BAB = Ball Aptitude Battery.

Table A-65 Unrotated component matrix for Talley (1986) (n = 153)
Factor 1
RAVLT Trial V List A  .89
RAVLT Trial III List A  .86
RAVLT Trial IV List A  .83
RAVLT Trial VII (List A Immediate Recall)  .82
RAVLT Trial VIII (List A Delayed Recall)  .77
RAVLT Trial II List A  .76
RAVLT Trial I List A  .52
RAVLT Trial VI (List B Interference List)  .44
WISC-R Digit Span Backward  .32
WISC-R Digit Span Forward  .28
WISC-R FSIQ  .17
Eigenvalue  4.73
Note. Loadings ≥ .40 are bolded. RAVLT = Rey Auditory Verbal Learning Test. Patients were administered 5 learning trials (Trials I-V, List A) prior to the presentation of the alternate list interference trial (Trial VI, List B). Immediate and delayed memory of List A correspond with Trials VII and VIII, respectively.

Table A-66 Rotated pattern matrix for Tamkin et al. (1984) (n = 43)
Factor 1  Factor 2
Hooper Visual Organization Test  .89  -.07
WAIS Block Design  .89  -.07
WAIS Object Assembly  .83  .08
WAIS Digit Symbol  .83  .08
WAIS Picture Completion  .81  .12
WAIS Picture Arrangement  .78  .12
Weigl Color-Form Sorting Test  .76  -.17
WAIS Similarities  .60  .38
Benton Visual Retention Test  .43  .40
WAIS Arithmetic  -.07  .88
WAIS Information  .04  .87
WAIS Vocabulary  .07  .83
WAIS Digit Span  -.08  .83
WAIS Comprehension  .38  .59
r between factors: Factor 2  .47
Eigenvalue  7.76  1.98
Note. Loadings ≥ .40 are bolded.

Table A-67 Unrotated component matrix for Teng et al. (1989) (n = 54)
Factor 1
Digit Cancellation Test  .87
Token Test 1  .85
Visual Memory  .83
Symbol-digit Substitution  .81
Copy Design  .74
Word Fluency (Phonemic) 1  .74
Verbal Memory  .63
Boston Naming Test 1  .51
Digit Span (Forward)  .43
Finger Tapping Test  .43
Eigenvalue  4.93
Note. Loadings ≥ .40 are bolded. MAE = Multilingual Aphasia Examination. 1 These test variables are similar to MAE test variables.

Table A-68 Rotated pattern matrix for Tirre & Field (2002) (n = 329)
Factor 1  Factor 2  Factor 3  Factor 4
BAB Shape Assembly  .79  .15  .08  .03
CAB Mechanical Ability  .72  -.23  -.12  .03
BAB Paper Folding  .72  .23  -.08  -.05
BAB Grip (Right)  .57  -.23  -.00  -.10
BAB Inductive Reasoning  .50  .09  .08  .09
CAB Esthetic Judgment  .40  -.02  -.02  -.02
BAB Analytical Reasoning  .40  .15  -.14  .24
CAB Representative Drawing  .32  .06  -.13  .13
BAB Finger Dexterity (Right)  .09  .54  .01  .02
CAB Associative Memory  .18  .49  -.36  .07
BAB Associative Memory  .07  .45  -.39  -.00
CAB Meaningful Memory  .27  .45  -.20  .10
BAB Word Association  .06  -.43  -.16  .01
BAB Numerical Comprehension  .14  -.17  -.78  -.02
BAB Numerical Reasoning  .22  -.23  -.71  .04
BAB Auditory Memory  .04  .00  -.65  -.01
BAB Clerical Speed  -.18  .25  -.64  -.06
CAB Idea Fluency  -.12  -.10  -.12  .79
CAB Originality  .18  -.06  .24  .72
BAB Idea Fluency  .17  .11  .11  .71
BAB Vocabulary  .04  -.45  -.38  .49
BAB Idea Generation  -.22  .37  -.05  .47
BAB Writing Speed  -.23  .32  -.31  .32
r between factors:
Factor 2  -.01
Factor 3  -.20  -.12
Factor 4  .18  .18  -.25
Eigenvalue  4.72  2.38  1.76  1.54
Note. Loadings ≥ .40 are bolded. BAB = Ball Aptitude Battery; CAB = Comprehensive Ability Battery.

Table A-69 Rotated pattern matrix for Wechsler (1997): 16-29 year olds (n = 400)
Factor 1  Factor 2  Factor 3  Factor 4
WMS-III Logical Memory II Thematic  .88  -.04  -.03  -.05
WMS-III Logical Memory I Thematic  .87  .03  -.05  .01
WMS-III Logical Memory I Recall  .87  .07  -.02  -.11
WMS-III Logical Memory II Recall  .82  .02  .04  -.20
WMS-III Auditory Recognition  .63  .01  .06  -.22
WMS-III Spatial Span Forward  .04  .73  .00  .24
WMS-III Spatial Span Backward  .00  .73  .02  .16
WMS-III Letter-Number Sequencing  .10  .71  -.16  -.11
WMS-III Digit Span  .05  .69  -.05  .04
WMS-III Visual Reproduction II Recognition  .00  .63  .10  -.13
WMS-III Visual Reproduction I  .02  .59  .13  -.26
WMS-III Visual Reproduction II Copy  -.16  .58  .00  -.19
WMS-III Mental Control  .03  .54  .05  -.09
WMS-III Visual Reproduction II  -.01  .46  .11  -.42
WMS-III Faces I  -.15  -.11  .77  -.17
WMS-III Family Pictures I  .33  .12  .74  .30
WMS-III Family Pictures II  .33  .15  .73  .24
WMS-III Faces II  -.22  .01  .73  -.23
WMS-III Verbal Paired Associates II  .10  .13  -.00  -.70
WMS-III Verbal Paired Associates I  .27  .12  -.06  -.64
WMS-III Word Lists II  .23  .07  .13  -.61
WMS-III Word Lists I  .25  .13  .11  -.54
WMS-III Word Lists II Recognition  .22  -.04  .15  -.48
r between factors:
Factor 2  .33
Factor 3  .22  .21
Factor 4  -.28  -.28  -.23
Eigenvalue  7.91  2.35  1.93  1.54
Note. Loadings ≥ .40 are bolded.

Table A-70 Rotated pattern matrix for Wechsler (1997): 30-64 year olds (n = 400)
Factor 1  Factor 2  Factor 3  Factor 4
WMS-III Logical Memory I Thematic  .95  -.12  .02  -.09
WMS-III Logical Memory I Recall  .94  -.04  .15  -.10
WMS-III Logical Memory II Thematic  .93  -.05  .04  -.11
WMS-III Logical Memory II Recall  .91  .00  .09  .00
WMS-III Auditory Recognition  .74  .02  .15  -.02
WMS-III Word Lists I  .49  .20  -.01  .23
WMS-III Word Lists II Recognition  .43  .21  -.21  .29
WMS-III Word Lists II Recall  .43  .33  -.17  .22
WMS-III Verbal Paired Associates I  .38  .35  -.13  .29
WMS-III Visual Reproduction II  -.06  .84  .01  .03
WMS-III Visual Reproduction I  .00  .78  .15  .00
WMS-III Visual Reproduction II Recognition  .01  .76  .16  .03
WMS-III Visual Reproduction II Copy  .00  .59  .13  -.16
WMS-III Verbal Paired Associates II  .35  .37  -.14  .29
WMS-III Digit Span  .07  .09  .67  .12
WMS-III Mental Control  .32  -.12  .64  .16
WMS-III Letter-Number Sequencing  .19  .21  .56  .12
WMS-III Spatial Span Forward  -.04  .22  .56  .05
WMS-III Spatial Span Backward  .05  .31  .49  -.01
WMS-III Faces I  -.12  -.13  .15  .87
WMS-III Faces II  -.11  -.09  .17  .86
WMS-III Family Pictures I  .20  .29  -.10  .46
WMS-III Family Pictures II  .25  .29  -.11  .41
r between factors:
Factor 2  .44
Factor 3  .19  .26
Factor 4  .34  .34  .11
Eigenvalue  8.87  2.17  1.69  1.32
Note. Loadings ≥ .40 are bolded.

Table A-71 Rotated pattern matrix for Wechsler (1997): 65-89 year olds (n = 450)
Factor 1  Factor 2  Factor 3  Factor 4
WMS-III Logical Memory I Thematic  .87  .05  .02  -.01
WMS-III Logical Memory II Thematic  .86  .02  .09  .03
WMS-III Logical Memory I Recall  .80  .17  -.03  .15
WMS-III Logical Memory II Recall  .80  .05  .09  .16
WMS-III Auditory Recognition  .48  .16  .04  .36
WMS-III Digit Span  .10  .76  -.11  -.07
WMS-III Spatial Span Backward  -.04  .68  .21  -.09
WMS-III Mental Control  .14  .67  -.16  -.02
WMS-III Letter-Number Sequencing  .08  .64  .04  .20
WMS-III Spatial Span Forward  -.05  .63  .04  -.02
WMS-III Faces II  .09  -.04  .75  -.13
WMS-III Faces I  .12  -.08  .69  -.12
WMS-III Visual Reproduction I  -.03  .21  .58  .22
WMS-III Family Pictures I  .40  -.04  .55  .08
WMS-III Visual Reproduction II  -.07  .03  .53  .36
WMS-III Family Pictures II  .39  .01  .50  .13
WMS-III Visual Reproduction II Recognition  -.15  .20  .48  .41
WMS-III Visual Reproduction II Copy  -.10  .31  .42  .12
WMS-III Word Lists II Recognition  .05  -.01  -.05  .78
WMS-III Word Lists II  -.03  -.08  -.05  .77
WMS-III Verbal Paired Associates II  .13  -.02  .04  .72
WMS-III Verbal Paired Associates I  .18  -.03  .01  .70
WMS-III Word Lists I  .14  .09  .01  .68
r between factors:
Factor 2  .27
Factor 3  .27  .26
Factor 4  .43  .28  .34
Eigenvalue  8.56  1.98  1.79  1.46
Note. Loadings ≥ .40 are bolded.

Table A-72 Unrotated component matrix for Wiederman & Morgan (1995) (n = 503)
Factor 1
NCSE Comprehension  .75
NCSE Similarities  .75
NCSE Orientation  .74
NCSE Naming  .73
NCSE Judgment  .69
NCSE Constructions  .69
NCSE Calculations  .67
NCSE Repetition  .66
NCSE Memory  .62
NCSE Attention  .50
Eigenvalue  4.69
Note. Loadings ≥ .40 are bolded. NCSE = Neurobehavioral Cognitive Status Examination.

Table A-73 Rotated pattern matrix for Wilde et al. (2003) (n = 254)
Factor 1  Factor 2
WMS-III Logical Memory I  .94  -.21
WMS-III Logical Memory II  .92  -.16
WMS-III Verbal Paired Associates I  .82  .04
WMS-III Verbal Paired Associates II  .81  -.01
WMS-III Letter-Number Sequencing  .63  .08
WMS-III Family Pictures II  .62  .32
WMS-III Family Pictures I  .60  .32
WMS-III Spatial Span  .54  .15
WMS-III Faces I  -.01  .87
WMS-III Faces II  .06  .86
r between factors: Factor 2  .41
Eigenvalue  5.19  1.28
Note. Loadings ≥ .40 are bolded.

Table A-74 Rotated pattern matrix for Wilde (2006) (n = 210)
Factor 1  Factor 2
RBANS List Learning  .86  -.03
RBANS List Recognition  .80  -.01
RBANS Story Memory  .78  -.09
RBANS Story Recall  .77  .09
RBANS List Recall  .74  .08
RBANS Semantic Fluency  .70  .08
RBANS Picture Naming  .69  -.03
RBANS Digit Span  .50  -.02
RBANS Figure Copy 1  -.15  .96
RBANS Line Orientation 2  -.04  .84
RBANS Figure Recall 1  .10  .79
RBANS Coding  .31  .66
r between factors: Factor 2  .36
Eigenvalue  5.33  1.98
Note. Loadings ≥ .40 are bolded. RBANS = Repeatable Battery for the Assessment of Neuropsychological Status. 1 These test variables are similar to ROCFT test variables. 2 This test variable is similar to the JOLO test variable.

Table A-75 Unrotated component matrix for Williams & Shane (1986) (n = 197)
Factor 1
AST Spelling  .75
AST Reading  .73
AST Arithmetic  .72
AST Naming  .64
AST Construction  .63
AST Ideomotor Apraxia  .62
AST Repetition  .61
AST Writing  .59
Eigenvalue  3.52
Note. Loadings ≥ .40 are bolded. AST = Aphasia Screening Test.

Table A-76 Rotated pattern matrix for Woodard et al. (1996) (n = 171)
Factor 1  Factor 2  Factor 3  Factor 4
MDRS Verbal Fluency-Food (Task E)  .76  -.07  -.10  -.01
MDRS Orientation (Task AC)  .73  .04  .05  -.03
MDRS Verbal Fluency-Appearance (Task F)  .68  .09  -.13  -.10
MDRS Similarities (Task W)  .61  .09  -.07  .10
MDRS Differences (Task Y)  .56  .00  -.11  .18
MDRS Verbal Recall-Reading (Task AF)  .54  -.02  .05  -.11
MDRS Consonant Perseveration (Task G)  .53  .02  -.07  .10
MDRS Similarities-Multiple Choice (Task Z)  .51  .08  -.02  .34
MDRS Priming and Inductive Reasoning (Task X)  .50  .14  -.05  .11
MDRS Verbal Recall-Sentence Initiation (Task AG)  .49  -.12  .02  -.11
MDRS Vowel Perseveration (Task H)  .44  .03  -.16  .15
MDRS Counting Distraction 2 (Task AE)  .40  .11  -.22  -.24
MDRS Visual Memory (Task AK)  .35  .15  -.04  .29
MDRS Identities and Oddities (Task V)  .35  -.06  -.26  .31
MDRS Verbal Recognition (Task AI)  .32  .19  .03  .22
MDRS Digit Span (Task A)  .29  .27  -.14  .17
MDRS Double Alternating Movements 2 (Task J)  -.14  .96  .06  -.06
MDRS Double Alternating Movements 1 (Task I)  -.02  .93  .03  -.11
MDRS Alternate Tapping (Task K)  -.08  .93  -.02  -.15
MDRS Counting Distraction 1 (Task AD)  .09  .41  -.01  .08
MDRS Construction Design 4 (Task S)  .06  .06  -.75  -.05
MDRS Construction Design 3 (Task R)  .19  .03  -.72  -.00
MDRS Construction Design 2 (Task Q)  .26  -.03  -.64  -.11
MDRS Construction Design 5 (Task T)  -.05  -.07  -.62  .31
MDRS Graphomotor Design 4 (Task O)  .12  .17  -.59  -.10
MDRS Construction Design 1 (Task P)  .05  -.01  -.47  .18
MDRS Graphomotor Design 3 (Task N)  -.10  .15  -.46  .45
MDRS Graphomotor Design 1 (Task L)  .28  .10  -.42  -.28
MDRS Graphomotor Design 2 (Task M)  -.16  .16  -.35  .26
MDRS Visual Matching (Task AJ)  .19  .25  -.32  .31
MDRS Imitation (Task D)  -.04  -.08  -.11  .74
MDRS Construction Design 6 (Task U)  -.03  .08  -.19  .56
MDRS Two Successive Commands (Task B)  .31  .11  .23  .42
MDRS Single Command (Task C)  .23  .01  .32  .36
MDRS Make Up a Sentence (Task AB)  .31  .13  .01  .34
MDRS Verbal Recognition Presentation (Task AH)  .30  .19  .01  .33
r between factors:
Factor 2  .30
Factor 3  -.27  -.31
Factor 4  .24  .26  -.13
Eigenvalue  9.30  2.60  2.11  1.83
Note. Loadings ≥ .40 are bolded. MDRS = Mattis Dementia Rating Scale.

Table A-77 Rotated pattern matrix for Woodard et al. (1999) (n = 130)
Factor 1  Factor 2
CVLT-9 (Trials 1-5)  .81  -.05
WMS Visual Reproduction I  .79  .07
Verbal Fluency (Semantic) 1  .73  .27
WMS Visual Reproduction II  .72  -.44
CVLT-9 (Long Delay Free Recall)  .70  -.39
WMS Logical Memory I  .66  -.58
MDRS Total  .66  .16
Judgment of Line Orientation  .58  .56
WMS Logical Memory II  .56  -.72
Verbal Fluency (Phonemic) 1  .50  .63
MAE Visual Naming  .55  .58
r between factors: Factor 2  .02
Eigenvalue  4.88  2.33
Note. Loadings ≥ .40 are bolded. CVLT-9 = California Verbal Learning Test (Dementia Version); MDRS = Mattis Dementia Rating Scale; MAE = Multilingual Aphasia Examination. 1 These test variables are similar to MAE test variables.

Appendix B
Factor Structure Congruency Coefficients

Table B-1
Congruency coefficients between Adams & Sheslow (1990) and Phelps (1995) samples

                        Target Matrix        Aligned
Sample                  F1     F2            F1     F2     F3
AS¹ (with P)           .91    .87           .83    .87    .81
P² (with AS)           .93    .85           .88    .86    .77

Note. Congruence coefficients < .90 are bolded for emphasis. Two factors were extracted from Adams & Sheslow (1990) and one factor was extracted from Phelps (1995) in the main analyses (see Table A-1; Table A-51). The datasets were comprised of WRAML test variables. AS = Adams & Sheslow (1990); P = Phelps (1995).
¹ n = 2363. ² n = 115.

Table B-2
Congruency coefficients between de Jong & Das-Small (1993) male and female samples

                        Target Matrix        Aligned
Sample                  F1     F2            F1     F2     F3
Male¹ (with Female)    .98    .94           .98    .92    .95
Female² (with Male)    .98    .93           .98    .92    .94

Note. Congruence coefficients < .90 are bolded for emphasis. Two factors were extracted in the main analyses from each sample (see Table A-17; Table A-18). A diverse range of test variables were included in these datasets.
¹ n = 175. ² n = 204.

Table B-3
Congruency coefficients between Korkman et al. (2007) clinical and 7-12 year old normative samples

                              Target Matrix        Aligned                   Aligned
Sample                        F1     F2            F1     F2     F3          F1     F2     F3     F4
Clinical¹ (with 7-12 yo)     .95    .96           .83    .79    .88         .95    .88    .87    .93
7-12 yo² (with Clinical)     .97    .93           .93    .70    .85         .94    .94    .84    .89

Note. Congruence coefficients < .90 are bolded for emphasis. Three factors were extracted from the Clinical sample and four factors were extracted from the 7-12 year old sample in the main analyses (see Table A-37; Table A-35). The datasets were comprised of NEPSY-II test variables.
¹ n = 238. ² n = 600.

Table B-4
Congruency coefficients between Sung & Davis (1981) samples

                        Target Matrix               Aligned
Sample                  F1     F2     F3            F1     F2     F3     F4
AA¹ (with Cauc.)       .89    .63    .90           .92    .96    .94    .77
AA¹ (with Hisp.)       .86    .53    .85           .89    .81    .84    .81
Cauc.² (with AA)       .89    .70    .81           .91    .96    .95    .78
Cauc.² (with Hisp.)    .69    .63    .75           .88    .57    .74    .73
Hisp.³ (with AA)       .83    .53    .85           .91    .87    .96    .62
Hisp.³ (with Cauc.)    .66    .62    .80           .87    .87    .88    .37

Note. Congruence coefficients < .90 are bolded for emphasis. Four factors were extracted from the African American and Caucasian samples and three factors were extracted from the Hispanic sample in the main analyses (see Table A-62; Table A-63; Table A-64). Each sample included test variables from the Bell Aptitude Test and Seashore Rhythm Test. AA = African American; Cauc. = Caucasian; Hisp. = Hispanic.
¹ n = 272. ² n = 547. ³ n = 188.

Table B-5
Congruency coefficients between Wechsler (1997) normative samples

                            Target Matrix               Aligned
Sample                      F1     F2     F3            F1     F2     F3     F4
16-29 yo¹ (with 30-64)     .99    .98    .96           .99    .84    .89    .93
16-29 yo¹ (with 65-89)     .99    .94    .82           .98    .93    .92    .93
30-64 yo¹ (with 16-29)     .99    .98    .93           .97    .97    .84    .86
30-64 yo¹ (with 65-89)     .99    .95    .93           .96    .89    .87    .97
65-89 yo² (with 16-29)     .99    .92    .84           .98    .97    .93    .85
65-89 yo² (with 30-64)     .99    .96    .94           .99    .91    .97    .72

Note. Congruence coefficients < .90 are bolded for emphasis. Four factors were extracted from each sample in the main analyses. Each dataset was comprised of WMS-III test variables. yo = years old.
¹ n = 400. ² n = 450.
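The congruency coefficients tabulated in Appendix B are conventionally computed as Tucker's congruence coefficient, phi = Σxy / sqrt(Σx² · Σy²), applied column-wise to two factor-loading matrices. A minimal NumPy sketch, assuming this standard formula (the function names are illustrative, not taken from the original analyses):

```python
import numpy as np

def tucker_congruence(x, y):
    """Tucker's congruence coefficient between two factor-loading vectors.

    Ranges from -1 to 1; values near |1| indicate proportional loading
    patterns, and >= .90 is a common (rule-of-thumb) cutoff for factor
    similarity, matching the bolding convention in the tables above.
    """
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    return float((x @ y) / np.sqrt((x @ x) * (y @ y)))

def congruence_matrix(A, B):
    """Pairwise congruence coefficients between the factors (columns) of
    two loading matrices A (p x k1) and B (p x k2), over the same p tests."""
    A = np.asarray(A, dtype=float)
    B = np.asarray(B, dtype=float)
    norms = np.sqrt(np.outer((A * A).sum(axis=0), (B * B).sum(axis=0)))
    return (A.T @ B) / norms
```

Unlike a Pearson correlation between loading columns, phi is computed about zero rather than about the column means, so it is sensitive to the sign and proportionality of loadings, not just their rank order.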