Contribution Analysis: The promising new approach to causal claims. Sebastian Lemire. Ramboll Management Consulting, Denmark.


Author's note

Please direct correspondence to Sebastian Lemire, Rambøll Management Consulting, Hannemanns Allé 53, DK-2300 København S, Denmark. The author would like to thank Steffen Bohni Nielsen and Melanie Kill for their thoughtful comments on an earlier version of this paper.

Abstract

This paper examines the methodological strengths and weaknesses of contribution analysis (CA), awarding particular attention to the ability of CA to identify and determine the extent of influence from alternative explanations. The author argues that CA in its current form and application fares well in terms of identifying salient external factors, but finds itself in need of further methodological elaboration to adequately account for the extent of influence from these factors. As such, CA remains vulnerable to threats to internal validity. In strengthening the merit of causal claims based on CA, attention should be directed not only at CA as an analytical strategy but also at a much broader discussion of how to (re)think validity in the context of evaluation. An outline of the implications of this new categorization for causal claims based on CA and a discussion of how to enhance the credibility of CA conclude the paper.

Introduction

Impact evaluation has for many years received the lion's share of attention in both evaluation theory and practice. Yet, despite the sustained attention to determining the attribution of projects, programs, and policies, there is to this day surprisingly little agreement on viable and methodologically rigorous alternatives to the traditional counterfactual impact designs. Indeed, the received wisdom among many an evaluator remains that rigorous impact evaluation cannot be done in the absence of controlled comparisons. The counterfactual designs stand strong.

One promising alternative to the counterfactual designs is John Mayne's contribution analysis (CA), which seeks to examine attribution through contribution (2008). As suggested by Mayne, CA is useful in instances where it is impractical, inappropriate, or impossible to address the attribution question through an experimental evaluation design. In complex systems, Mayne goes on to argue, experimenting with exogenous variables is not possible or not practical: the counterfactual case cannot be established (2008: 4). Accordingly, the evaluation question must be readdressed by focusing on the extent to which the evaluator can build a case for reasonably inferring causality, that is, the extent to which the intervention can be said to have contributed to a set of observed (positive or negative) outcomes (Mayne, n.d.: 5).

The methodological strength of CA, then, rests on its ability to accommodate, without a counterfactual, the often complex and messy nature of programs by taking into account the nexus of conditioning variables and interactions among program components. This is, perhaps needless to say to the experienced evaluator, in and of itself a strong selling point. Yet, despite the significant theoretical interest and attention awarded contribution analysis, especially in development circles, there

are to this day few examples of the systematic application of contribution analysis (see Dybdal, Bohni and Lemire n.d. for a discussion). i As explained by Mayne: "A stumbling block to using contribution analysis may be that this type of analysis remains seen, especially in some of the evaluation profession, as a very weak substitute for the more traditional experimental approaches to assessing causality" (n.d.: 1).

In advancing CA as a sound methodological alternative to the traditional experimental approaches, the need then is to develop and present CA as a methodologically rigorous alternative. In this effort, I would argue that attention should be awarded to the validity of causal claims based on CA. Confidence in any methodologically sound design, method or analytical approach that aims to produce inferences, especially inferences of cause-and-effect relations, rests on the attention awarded to validity issues. Contribution analysis is no exception.

The overarching purpose of this paper is to advance the application of CA. The aim is two-fold, in that I seek both (1) to advance the theoretical discussion on the validity of causal claims based on CA and (2) to push for further practical application of CA. The paper consists of three sections. The first presents a brief outline of contribution analysis and a note on its ability to identify and determine the influence of external factors. However, in examining the merit of causal claims based on CA, focus should not only be directed at CA as an analytical strategy but should also involve a much broader discussion of how to (re)think validity in the context of evaluation. Thus the second section examines the concept of validity. It discusses the overwhelming dominance of the Campbellian validity model in both research and evaluation and makes the case for rethinking validity in the context of evaluation. The result of the discussion is a new categorization of validity evidence for causal claims.
The third section outlines the implications of this new categorization for causal claims based on CA and

concludes with a discussion of how to enhance the credibility of CA and thereby pave the way for its increased practical application.

Contribution analysis

Contribution analysis (CA) has been presented and conceptually developed by John Mayne through a series of seminal papers (1999, 2001 and 2008). In its development, CA has over time moved away from its original setting in performance measurement and towards a new role in evaluating complex programs in complex settings (Dybdal, Bohni and Lemire n.d.). While the approach has undergone refinements in its methodology and, even more notably, its scope (see Dybdal, Bohni and Lemire n.d. for a detailed account), the underlying logic of and core steps in CA remain more or less the same: (i) elaborating the intervention's theory of change, (ii) identifying key threats to the theory of change's mechanisms, (iii) identifying other contributing factors, and (iv) testing the primary rival explanations.

In the context of applying CA in complex settings, Mayne operationalises CA in the following steps (Mayne, 2008): (1) set out the cause-effect issue to be addressed, (2) develop the postulated theory of change and risks to it, (3) gather the existing evidence on the theory of change, (4) assemble and assess the contribution story and challenges to it, (5) seek out additional evidence, (6) revise and strengthen the contribution story, and (7) in complex settings, assemble and assess the complex contribution story.

According to Mayne, in the context of evaluation CA can primarily address attribution by providing answers to contribution questions such as: Has the program made a difference? How much of a difference? Mayne delineates three types of causal stories:

1. A minimalist contribution analysis can be construed when a theory of change was developed and the expected outputs were delivered. Contribution is based on the

inherent strength of the postulated theory of change and the fact that the expected outputs were observed.

2. A contribution analysis of direct influence can be construed when a theory of change was developed, expected outputs occurred, immediate results were observed, and evidence suggests the program was instrumental in creating those results, in light of other influencing factors.

3. A contribution analysis of indirect influence can be construed when: "It would measure the intermediate and final outcomes (or some of them) and gather evidence that the assumptions (or some of them) in the theory of change in the areas of indirect influence were born out. Statements of contribution at this level would attempt to provide factual evidence for at least the key parts of the whole postulated theory of change" (Mayne n.d.: 25-26).

The distinction between these three types of causal stories has less to do with the extent to which CA can address the magnitude of the contribution and more to do with the relative strength or credibility of the contribution story. The shared denominator for all three causal stories is that the evaluator, through systematic evaluative inquiry, seeks to infer plausible association between the program and a set of relevant outcomes (Mayne, 1999: 5-7). The aim of CA, then, is not to provide proof of a one-on-one, linear causal linkage between a program and its intended outcomes, nor is it to determine the exact contribution of the program. Rather, the aim of CA is to provide evidence beyond reasonable doubt that the program to some degree contributed to the specified outcomes.

It is important to note that Mayne does not commit himself, nor link the causal stories presented above, to any specific evaluation design. He remains insistent that a mix of quantitative and qualitative methods can be used in answering these questions through CA. According to Mayne (n.d.: 7), five

criteria concerning the embedded theory of change are to be met in order to infer "plausible association":

(i) Plausibility: Is the theory of change plausible?
(ii) Implementation according to plan: Was the program implemented with high fidelity?
(iii) Evidentiary confirmation of key elements: To what extent are the key elements of the theory of change informed by or based on existing evidence?
(iv) Identification and examination of other influencing factors: To what extent have other influencing factors been identified and accounted for?
(v) Disproof of alternative explanations: To what extent have the most relevant alternative explanations been disproved?

These collectively serve as the quality criteria of causal stories based on CA. While Mayne continues throughout his conceptual advancement of CA to summarily address the issue of accounting for other influencing factors and assessing their potential influence, he never goes into detail on the subject. As such, there is no operational framework or discussion of what it means to account for other influencing factors, despite their importance being stated repeatedly. Yet the ability to account for the influence of other factors is key in establishing the internal validity of causal claims and inferences based on CA. At the risk of adding to an already overused term, the following presents a discussion of the concept of validity in relation to causal claims and inferences.

Validity: an outline of an elastic term

The concept of validity has for several decades been widely discussed and developed (see Chen 2010 for an overview). ii Unfortunately, but perhaps not unexpectedly, the long-enduring debates have in many instances served to muddy rather than to clarify the waters. As a result, the term has come to

mean many different things to many different people. I suggest that the conceptual murkiness, and even some of the central points of conflict in the ongoing debates, stem from a lack of recognition that the very meaning and application of the term may differ across fields of application, between research and evaluation. iii

For several decades the Campbellian validity model has been dominant in both research and evaluation (Campbell and Stanley, 1963). Campbell and Stanley's delineation between internal validity (i.e., to what extent the design accounts for the influence of external factors) and external validity (i.e., to what extent the conclusions of the study can be generalized) has had a profound influence on the theory and practical application of validity amongst researchers and evaluators alike. The importance of their contribution is axiomatic. Indeed, the most oft-cited categorization of validity evidence still remains that of the Campbellian model's internal and external validity.

External validity, generally stated, concerns the extent to which one may safely generalize the conclusions derived from a study; that is, to what extent the inferences and conclusions are valid for other subjects, other times and other places, or other settings (Mohr 1995, p. 92). This is obviously relevant in the context of research, as the aim of research studies very often revolves around producing knowledge about a specific topic that can be generalized, and in effect applied, to further an academic field. Internal validity, by some considered the sine qua non of validity, is an expression of the extent to which a design accounts for external factors. As such, internal validity constitutes a key component in isolating and determining the magnitude of the impact of a program. The two types of validity are characterized by an inverse relationship (Chen, 2010).
As noted by Mohr, "the less successful a design is in accomplishing the first, the more it is depending on the second for causal inference" (1995, preface).

Despite the heavy influence of the Campbellian validity model, it has not gone unchallenged. Most recently, Huey Chen has questioned the relevance of the model in the context of evaluation, writing that: "Because the Campbellian model was developed for academic research, ideas and principles proposed by the model might not be wholly applicable or relevant to program evaluation. Similarly, issues crucial to program evaluation but not to academic research are, in the Campbellian validity model, most likely ignored" (2010, p. 206). As just one example, Chen argues that the model's emphasis on internal validity as the sine qua non of research may not be as relevant in the context of evaluation (2010). Instead, the relative importance of internal versus external validity must be reconsidered in the context of evaluation (Chen 2010).

My interest is not to engage in debates on the relative importance and weighing of internal versus external validity. In my opinion, internal and external validity together express two overarching types of validity that are necessary to address in research and evaluation. However, I would argue, inspired by Carol Weiss and Chen, that the meaning, purpose and application of these two types of validity need to be clarified in the context of evaluation. Indeed, I think the hard-won clarity that could result from such an effort would not only serve to enhance the credibility of contribution analysis but also further the field of evaluation in general. Simply consider external validity, which concerns the extent to which inferences and conclusions can be generalized to other subjects at other times and places. As noted by Chen, such an open-ended quest for law-like propositions is often more relevant in a research context, whereas it may be extremely difficult or even impossible to achieve in the context of evaluation (2010, p. 207). iv
This is not to say that this interpretation of external validity as statistical generalizability is not relevant for some evaluations. In fact, in the early days of social engineering the aim of randomized controlled trials

was exactly to identify the programs that work and then implement these more widely. More recent developments in the field of evaluation, such as systematic reviews and rapid evidence assessments, also lend themselves well to this traditional interpretation of external validity. However, many and perhaps even most evaluations have a much more practically oriented aim, in that they seek to answer very specific questions and to produce information that supports the practical implementation of programs in other local contexts. Indeed, one oft-cited challenge related to the utilization of information stemming from evaluations is how to translate the generic learning statements from one local context into actual practice in other local contexts. The emphasis on statistical generalization in the Campbellian interpretation of external validity seems less appropriate in these types of evaluations.

Inspired by Chen, and motivated by the field's persistent investment in the utilization of information from evaluations, I would argue that the type of external validity that is particularly relevant for many evaluations ought perhaps to be more in the direction of practical generalizability. This new interpretation of practical generalizability could express the extent to which inferences and conclusions can support the local implementation of the program for other subjects, other times and other places or other settings. Moreover, this may present a welcome twin to Samuel Messick's concept of consequential validity v in the context of test validation, which concerns the extent to which adverse consequences are produced by invalid test interpretation and use (1989). The Campbellian model has been applied without giving due justice to the differences in context, aim and quality criteria of research and evaluation, and it may be a model whose time has come to an end in relation to many evaluations.
The idea is not to replace the Campbellian validity model with an everything-goes approach to validity. I am not advocating an approach that simply allows evaluators to be opportunistic in their choice of validity evidence. Rather, the aim is to develop a framework that is more consistent with the utilization-oriented nature of evaluation.

Inspired by Messick's unitary validity concept, I suggest a new classification for cutting and framing validity evidence for causal claims and inferences (see Table 1 below). The first dimension covers two different types of justification for making causal inferences: the evidential and the consequential. The second dimension is the function of the causal claims and inferences for either theoretical or practical use. According to the classification, the justification for the interpretation of causal claims and inferences is primarily based on an appraisal of the evidential basis (i.e., to what extent have other influencing factors been accounted for in isolating the impact of the intervention, and to what extent can the causal claims be generalized to other subjects at other times and places?) and perhaps secondarily supplemented by an appraisal of the consequential basis (i.e., to what extent have the causal claims resulted in misconception due to flaws in the design or analytical strategy?). Likewise, the justification for the practical use of causal inferences is primarily based on an appraisal of the evidential basis (i.e., to what extent have other influencing factors been accounted for in isolating the impact of the intervention, and to what extent can the causal claims be practically applied to other subjects at other times and places?) and secondarily supported by the consequential basis (i.e., to what extent are the causal claims likely to lead to misapplication due to flaws in the design or analytical strategy?).

A couple of examples might clarify how to apply the content of the table. A research study aiming to examine the causal linkage between smoking and lung cancer would primarily build its case on the evidential basis of internal validity and statistical generalizability. In addition, consequential validity evidence would also strengthen the validity of the causal claims in the study.
In marked contrast, an evaluation of a pilot project on youth advising would more likely aim for the practical use of its causal conclusions and therefore focus the validation effort on internal validity and practical

generalizability. An examination of the consequential validity of the causal conclusions could further strengthen the evaluation.

Table 1. A new framework for validity evidence for causal stories

                         Interpretation of causal        Practical use of causal
                         inferences and claims           inferences and claims
Evidential basis         Internal validity &             Internal validity &
(primary)                statistical generalizability    practical generalizability
Consequential basis      Consequential validity          Consequential validity
(secondary)              (theoretical)                   (practical)

Adapted from Messick 1989.

If we accept this cutting and combining of validity evidence, where does this leave us in our quest to enhance the validity of causal claims based on CA?

Contribution analysis and validity evidence

As mentioned earlier in this paper, published examples of the systematic application of CA are few and far between. Accordingly, the following discussion builds on how CA in its current conceptual state and presentation fares in relation to the proposed categorization of validity evidence (primarily Mayne 1999, 2001 and 2008). It is also important to note that it is the combination of a design and an analytical strategy that collectively enhances the validity of causal claims. Accordingly, the less successful the design is in enhancing validity, the more one might depend on the analytical strategy to do the work. As mentioned earlier, Mayne does not commit himself to any specific designs. As such, the true capacity of CA in terms of realizing validity evidence cannot be determined by examining CA in isolation from specific designs. My examination of CA and validity, then, is more an effort to gauge its relative strengths and weaknesses in justifying causal claims. My focus will be on the evidential basis, as this dimension constitutes the primary source of justification. It is in effect central in making the case for CA as a methodologically rigorous alternative to experimental designs.

Consequently, I will focus my effort on practical generalizability, statistical generalizability and internal validity. Let's consider these in turn.

In my opinion, CA holds great potential in terms of practical generalizability - the extent to which inferences and conclusions can support the local implementation of the program for other subjects, other times and other places or other settings. The inherent focus of CA on understanding the nature and context of causal linkages between a program and a set of desired outcomes provides causal stories that lend themselves well to local implementation in other settings. The emphasis on the development and subsequent refinement of an embedded theory of change leads the evaluator towards a deep and highly applicable understanding of how the nuts and bolts of the program may function and behave in different contexts.

CA, however, holds a weaker position when it comes to statistical generalizability - the extent to which conclusions can be generalized to other subjects, other times and other places or other settings. The focus on systematically examining the nature and context of the causal linkages between the program and the desired set of outcomes is not likely to produce the type of law-like general statements that lend themselves well to generalization. However, and as mentioned above, the true capacity of CA in relation to statistical generalizability remains contingent upon the specific design and methods employed in the evaluation.

Third and finally, I think the real area of improvement in relation to CA and validity revolves around internal validity. While Mayne provides different strategies for identifying the most salient external factors, he never goes into any detail on how to gauge their influence.
The importance of an operational framework for identifying and assessing the influence of external factors is particularly clear given the very aim of contribution analysis, especially in relation to contribution stories of direct and indirect influence. The advantage that CA offers in complex contexts is that it aims to

determine the relative, rather than the specific, contribution of a program to a set of outcomes of interest. This being the case, explicit guidance on how to systematically gauge the magnitude of the contribution of a program relative to other influencing factors is essential. In my mind, an elaborated version of this aspect of CA provides a necessary stepping stone to strengthen the ability of CA to approximate the contribution of programs.

The consequences of this missing stepping stone are real. As just one example, Michael Patton's use of contribution analysis in his advocacy impact evaluation for a major philanthropic foundation (2008) resulted in the following conclusion: "Based on a thorough review of the campaign's activities, interviews with key informants and key knowledgeables, and careful analysis of the Supreme Court decision, we conclude that: The coordinated final-push campaign contributed significantly to the Court's decision" (2008, p. 1, my italics). One might wonder how to interpret the vague yet heavily loaded quantifier "significantly" in the above conclusion. How was it determined that the contribution was significant as opposed to moderate? What does it really mean that the campaign "contributed significantly"? A systematic approach to not only identifying but also gauging the influence of external factors is in my mind called for. I would dare argue that the methodological soundness of CA demands a consistent and rigorous use of strategies that aim to reduce the threat of external factors.

I am well aware that pushing for the increased application of CA requires more than a theoretical discussion of validity issues. In closing this paper, I would like to share what I think are some of the related issues that ought to be discussed and that may serve to strengthen the conceptual and practical advancement of CA. Admittedly, these are only the beginnings.
First, I suggest we need to examine the underlying concept of causality that CA builds on. Examining the underlying concept of causality should involve an examination of the counterfactual

framework that Mayne positions CA against when arguing that contribution analysis does not involve a counterfactual-based argument. It is certainly true that CA offers a viable alternative to counterfactual designs in settings where comparison and control groups are unfeasible. That being said, CA still deals with counterfactual-based questions and is in my mind certainly compatible with some counterfactual designs. As pointed out by Howard White: "Although it may not always be necessary or useful to make the counterfactual explicit, in attributing changes to outcomes to a development intervention it most likely is useful to have an explicit counterfactual. An explicit counterfactual does not necessarily mean that one needs a comparison group, though often it will" (2010, p. 157). As he goes on to argue, the counterfactual may come in the form of different variants of interrupted time-series designs. If we accept this conceptualization of the explicit counterfactual, there is no reason to hold that CA is incompatible with counterfactual-based designs or arguments as such. We may as well keep our doors open to, and be aware of, this particular type of CA.

Second, I think it may further the discussion of CA as an alternative to experimental designs to sharply distinguish between designs and analytical strategies, as these are often conflated. An evaluation design specifies the frequency and placement of measurement points in relation to the intervention being evaluated. It also specifies the demand for a control or comparison group. In doing so, a design delineates the overarching structure of the data collection. An analytical strategy specifies how the data derived from the measurement points will be analyzed and connected with the questions to be answered as part of the evaluation. One might argue that simply comparing experimental designs with an analytical strategy is like comparing apples and oranges. I agree.
However, I also recognize that these comparisons are being made and will continue to be made, and that contribution analysis, as noted by Mayne, is often perceived as a very weak substitute for the more traditional experimental

approaches to assessing causality (n.d.: 1). In advancing CA, I think we have to accept that these comparisons will be made and seek to inform and frame them as best as possible. This involves holding on to the important distinction between a design and an analytical strategy and recognizing that the internal validity of causal claims is contingent upon both the evaluation design and the analytical strategy, collectively.

Third, and in direct extension of the two points above, it may prove rewarding to explore, in practical application or at least in theory, the methodological conditions or types of designs that will strengthen contribution stories based on CA. This has me wondering: Is there really any reason why we can't combine a counterfactual pre-/post design with CA? Are there certain counterfactual or non-counterfactual designs that lend themselves particularly well to CA? Are there types of designs that will strengthen and connect particularly well with the three different types of contribution stories? Are there certain types of counterfactual designs that are required to support contribution stories of direct or indirect influence? In answering these questions we have to be clear on the distinction between counterfactuals and control groups, and between designs and analytical strategies. We also need to engage and systematically apply CA in our evaluations.

Fourth and finally, I would argue that we should continue the methodological discussion on how to enhance and assess the quality of contribution stories. How would we recognize a methodologically sound CA if it were right in front of us? Mayne points towards five criteria, but are there other relevant quality markers? I'm thinking here of a set of quality markers equivalent, but not identical, to the quality markers typically employed in research.
These quality markers may collectively serve as a backbone to strengthen CA as a viable and methodologically credible method that can address attribution through contribution.

Concluding remarks

Contribution analysis (CA) presents a promising and viable alternative to the traditional counterfactual impact designs; indeed, it is my strong belief in the potential of CA that motivates this paper. However, in advancing CA as a methodologically sound way of addressing attribution, a need arises for addressing the validity issues pertaining to CA. Validity is at its core about the extent to which we can invest our trust in a set of inferences. As such, confidence in any methodologically sound design, method or analytical strategy that aims to produce inferences, especially inferences of cause-and-effect relations, is contingent upon the extent to which attention is awarded to validity issues. In order to pave the way for the increased practical application of CA we need to address the validity issues related to CA. However, we also need to make sure that the concept of validity that we employ in building credible causal stories, by way of CA or other strategies, is applicable and relevant to the field of evaluation. It is my modest hope that this paper will foster further discussion of both validity and contribution analysis.

18 References Bickman, L The Function of Program Theory: Using Program Theory in Evaluation, New Directions for Evaluation, 33 pp Campbell D.T., & Stanley, J Experimental and Quasi-experimental Designs for Research. Chicago: Rand McNally Chen, H The Theory-driven Approach to Validity. Evaluation and Program Planning, 10, pp Chen, H Validity in evaluation research: A critical assessment of current issues, Policy and Politics, 16 (1), pp Chen, H The Bottom-Up Approach to Integrative Validity: A New Perspective for Program Evaluation. Evaluation and Program Planning, 33, pp Davidson, E.J Ascertaining Causality in Theory-Based Evaluation, New Directions for Evaluation, 87, pp Dybdal, Bohni and Lemire (n.d.). House, E Unfinished Business: Causes and Values, The American Journal of Evaluation 22 (3), pp Kane, M Current Concerns in Validity Theory, Journal of educational measurement 38 (4), pp Mayne, J Addressing Attribution through Contribution Analysis: Using Performance Measures Sensibly, discussion paper, Office of the Auditor General of Canada. Mayne, J Addressing Attribution through Contribution Analysis: Using Performance Measures Sensibly, Canadian Journal of Program Evaluation, 16 (1), pp Mayne, J Contribution analysis: An approach to exploring cause and effect, ILAC Brief 16, Institutional Learning and Change (ILAC) Initiative, Rome, Italy. Mayne, J. n.d.: Addressing Cause and Effect in Simple and Complex Settings through Contribution Analysis in R. Schwartz, K. Forss, and M. Marra (Eds.): Evaluating the Complex, R. Schwartz, K. Forss, and M. Marra (Eds.), New York, Transaction Publishers (in print). Messick, S Validity in R. L. Linn (Ed.), Educational measurement (3 rd ed.), pp New York, American Council on Education and Macmillan. 18

Patton, M. (2008). Advocacy Impact Evaluation. JMDE, 5(9).

Rogers, P., et al. Program Theory Evaluation: Practice, Promise, and Problems. New Directions for Evaluation, 87.

Rogers, P. Theory-Based Evaluation: Reflections Ten Years On. New Directions for Evaluation, 114.

Scheirer, A.M. Program Theory and Implementation Theory: Implications for Evaluators. New Directions for Program Evaluation, 33.

White, H. (2010). A contribution to current debates in impact evaluation. Evaluation, 16(2).

Notes

i. One published example is that of Michael Patton and his employment of contribution analysis in evaluating a stealth campaign (Patton, 2008).

ii. The literature on internal and external validity in research and evaluation is extensive (see Chen 1988 & 2010 as well as Kane 2001 for a good overview), and space does not allow for a detailed account of the development of the term here. However, some of the trends are particularly relevant in relation to CA and therefore merit our attention. First of all, the concept of validity in the Campbellian tradition pertains specifically to the research design; that is, it is the research design that is being validated. Over the years the common consensus has moved towards validity pertaining to the claims and inferences produced by different designs. Stated differently, it is the causal claims and inferences that are being validated. As a result, any credible combination of design and analytical strategy that seeks to produce sound causal claims has to address internal validity.

iii. New interpretations and conceptual advancements in the area of validity have come most often from the research community, especially from researchers in the area of psychometrics, where test and instrument validation is central (see Kane 2001 and Messick 1989, among others).

iv. Meta-evaluation is the exception.

v. The concept of consequential validity was introduced by Messick in the context of instrumental validity, that is, the validation of tests and measurement instruments. However, it certainly appears relevant given the heavy focus on utilization in the field of evaluation.


More information

What is a Personal Development Plan?... 1. Where did the Personal Development Plan come from?... 2. How does the Personal Development Plan work?...

What is a Personal Development Plan?... 1. Where did the Personal Development Plan come from?... 2. How does the Personal Development Plan work?... What is a Personal Development Plan?... 1 Where did the Personal Development Plan come from?... 2 How does the Personal Development Plan work?... 3 How does it relate to my Antioch School program?... 5

More information

Basel Committee on Banking Supervision. Working Paper No. 17

Basel Committee on Banking Supervision. Working Paper No. 17 Basel Committee on Banking Supervision Working Paper No. 17 Vendor models for credit risk measurement and management Observations from a review of selected models February 2010 The Working Papers of the

More information

Audit Sampling. AU Section 350 AU 350.05

Audit Sampling. AU Section 350 AU 350.05 Audit Sampling 2067 AU Section 350 Audit Sampling (Supersedes SAS No. 1, sections 320A and 320B.) Source: SAS No. 39; SAS No. 43; SAS No. 45; SAS No. 111. See section 9350 for interpretations of this section.

More information

Logic Models, Human Service Programs, and Performance Measurement

Logic Models, Human Service Programs, and Performance Measurement Three Logic Models, Human Service Programs, and Performance Measurement Introduction Although the literature on ment has been around for over two decades now, scholars and practitioners still continue

More information

7 Conclusions and suggestions for further research

7 Conclusions and suggestions for further research 7 Conclusions and suggestions for further research This research has devised an approach to analyzing system-level coordination from the point of view of product architecture. The analysis was conducted

More information

4 April 2008. Also via email: transport.prices-oversight@accc.gov.au. Dear Ms Arblaster

4 April 2008. Also via email: transport.prices-oversight@accc.gov.au. Dear Ms Arblaster 4 April 2008 Ms Margaret Arblaster General Manager Transport and Prices Oversight Australian Competition and Consumer Commission (ACCC) GPO Box 520 MELBOURNE VIC 3001 Also via email: transport.prices-oversight@accc.gov.au

More information

Planning a Class Session

Planning a Class Session Planning a Class Session A Guide for New Teachers by Diane M. Enerson Kathryn M. Plank R. Neill Johnson The Pennsylvania State University 301 Rider Building II University Park, PA 16802 www.schreyerinstitute.psu.edu

More information

Fourth generation techniques (4GT)

Fourth generation techniques (4GT) Fourth generation techniques (4GT) The term fourth generation techniques (4GT) encompasses a broad array of software tools that have one thing in common. Each enables the software engineer to specify some

More information

WHAT IS A JOURNAL CLUB?

WHAT IS A JOURNAL CLUB? WHAT IS A JOURNAL CLUB? With its September 2002 issue, the American Journal of Critical Care debuts a new feature, the AJCC Journal Club. Each issue of the journal will now feature an AJCC Journal Club

More information

PSYCHOLOGY PROGRAM LEARNING GOALS AND OUTCOMES BY COURSE LISTING

PSYCHOLOGY PROGRAM LEARNING GOALS AND OUTCOMES BY COURSE LISTING PSYCHOLOGY PROGRAM LEARNING GOALS AND OUTCOMES BY COURSE LISTING Psychology 1010: General Psychology Learning Goals and Outcomes LEARNING GOAL 1: KNOWLEDGE BASE OF PSYCHOLOGY Demonstrate familiarity with

More information