The Comparison between Qualitative, Quantitative, and Single Subject Design. A Paper presented by Dr. Bander N. Alotaibi





Introduction

The historical importance of qualitative and quantitative designs is partially illustrated by their longevity. Hammersley (1992) has noted that debates about the two methods have roots in the mid-nineteenth century and occurred in sociology in the 1920s and 1930s. Recent attention to the debate started with a revival of the fortunes of qualitative research methods in the 1960s in sociology and psychology, which had been dominated by quantitative methods throughout the 1940s and 1950s (Hammersley, 1992). The two approaches have philosophical roots in the naturalistic and positivistic philosophies, respectively. Essentially all qualitative researchers, regardless of their theoretical differences, reflect some sort of individual phenomenological perspective. Most quantitative research approaches, regardless of their theoretical differences, tend to emphasize that there is a common reality on which people can agree (Newman, 1998). The role of qualitative methods, either alone or in conjunction with quantitative methods, has attracted the attention of various publics associated with administering and evaluating social intervention programs (Wesley, 1996). The phrases qualitative methods and quantitative methods denote not only specific data collection techniques; they are more appropriately conceptualized as paradigms. Filstead (1978) defined a paradigm as a set of interrelated assumptions about the social world which provides a philosophical and conceptual framework for the organized study of that world (p. 34). In general, quantitative methods are supported by the positivist or scientific paradigm, which leads us to regard the world as made up of observable, measurable facts, whereas qualitative methods are supported by the interpretive paradigm, which describes a world in which reality is socially constructed, complex, and ever

changing. The two paradigms come from two quite different and unique traditions. While the collection of attributes that makes up the quantitative paradigm grew out of the natural and agricultural sciences, the qualitative paradigm came from work in social anthropology and sociology (Reichardt & Cook, 1979). The different assumptions about the nature of the world affect not only the research method used, but also the purpose of research and the roles of researchers (Glesne & Peshkin, 1992). Researchers in the quantitative paradigm are concerned with discovering, verifying, or identifying causal relationships among concepts that are derived from a prior theoretical scheme. The assignment of subjects is of concern, and efforts are generally made to use random assignment or other sampling techniques to minimize intervening variables that could affect the results of the research. In contrast, a qualitative researcher prefers the theory to emerge from the data itself. This grounding of theory in data enhances the ability of the researcher to understand, and perhaps ultimately devise, an explanation for the phenomenon which is consistent with its occurrence in the social world (Firestone, 1998).

Characteristics of the two approaches:

Another way to compare and contrast the qualitative and quantitative methods is to identify the characteristics of the two approaches in a journal article, dissertation, or scholarly research study. Qualitative and quantitative researchers do use similar elements in their work: they state a purpose, pose a problem or raise a question, define a research population, collect and analyze data, and present outcomes. In addition, they rely on a theoretical framework and are concerned with rigor. Nevertheless, the way that researchers put these elements together makes for distinctive differences in both the process and the final product (Glesne & Peshkin, 1992).

The two approaches also differ in their use of language and words. One difference is in the point of view used by authors in introductions to qualitative and quantitative studies. Creswell (1998) pointed out that a quantitative introduction is written from the third-person point of view, while a qualitative introduction presents the point of view of the first or second person. Differences also might be found in verb tense and types of questions. In a qualitative study, the present tense is mostly used to connote immediate, direct action, while researchers in a quantitative study employ the past tense to create distance between the written study and the action on which the study is based. Moreover, the introduction in a qualitative study conveys an inductive design and uses literature to provide a rationale for the problem. In a quantitative study, the introduction reflects a more deductive, static design where the literature and theory help to direct the study. A fundamental characteristic of a good qualitative purpose statement is that it implies or expresses the assumptions of the qualitative paradigm. Here, the researcher needs to: a) use qualitative words that convey an evolving, developing study, b) clearly identify the key concept being explored, c) mention the qualitative method of inquiry, and d) identify the unit of analysis. In a quantitative purpose statement, the researcher should include information about the theory to be used, the method of inquiry, the variables, and the intent to compare groups or samples or to relate variables (Ragin, 1987). In regard to research questions, the qualitative researcher uses a limited number of subquestions, which should be descriptive in nature and employ appropriate qualitative language.
In contrast, quantitative questions are stated as objectives and hypotheses, flow from a theory, use language that orders the variables from independent to dependent, and employ standard wording to assist the reader in more clearly understanding the variables in the study (Creswell, 1998). Terms that may not

be understood outside the field of study are defined tentatively in qualitative studies, because the meaning of words will emerge from the informants. These terms are few in number and are defined throughout the proposed study. In contrast, terms in quantitative studies are defined early in the study, in a separate section. The use of theory can be added to the elements that distinguish the two approaches. Newman and Benz (1998) noted that while most researchers in qualitative studies employ an inductive mode of development and tend to place theory toward the end of the study, quantitative researchers tend to employ theories deductively and place them toward the beginning of the study. A qualitative design is used when observing and interpreting reality with the aim of developing a theory that will explain what was experienced, while the quantitative approach is needed when one begins with a theory and tests for confirmation or disconfirmation of that theory. Data collection in qualitative design tends to be multimodal, with the researcher acknowledging his or her personal outlook on the phenomenon. The concept of validity is entirely different from that in quantitative design: it relies heavily on description and the researcher's analysis, and the reader can draw an independent judgment about validity based on the logic and richness of the description. On the other hand, quantitative research requires maximizing the objectivity of data collection. Replication of the findings is paramount in order to show that they are not serendipitous. This procedure leads to an emphasis on reliability and on minimizing threats to internal validity (McWilliam, 1991). Finally, it is important to emphasize that one method is not necessarily better than the other; it all depends on what the researcher is studying or wants to find out.
The best method is the one that answers the research questions most efficiently and with the greatest inference quality.

Single Subject Design

Although single subject designs have strong roots in psychology, the fields of education, rehabilitation, and sport and athletic performance also find them useful. They assist in the analysis of treatments, ranging from visual feedback to improve balance in stroke patients to pharmacological agents, and of the associated dependent variables (Franklin et al., 1997). The utility of single subject designs has been illustrated repeatedly in applied settings, including clinics, schools, homes, and communities, for a variety of populations (Marlow, 1993). Mostly, single subject demonstrations have been used to investigate behavior modification techniques. In particular, the area of applied behavior analysis has strongly established the utility of the design and has extended the range of design options suitable for investigation. Recently, however, single subject designs have received increased attention because they provide a methodological approach that permits experimental investigation with one subject. According to Kazdin (1982), the emergence of the single subject design was a consequence of problems associated with group designs. According to Marlow (1993), research studies were found to have major methodological problems that raised questions about the accuracy of their findings. First, the research studies often had no comparison group, which led to questions about the internal and external validity of the results. Second, group designs tend to pool the results from both successful and unsuccessful programs in order to determine average results, which prevents them from determining what works with whom and what kinds of problems exist. Third, group designs mostly rely on only two measurements, one before the intervention and the other after, which makes it difficult to know what happens between these two measurement points.
Therefore, as a consequence of these problems and the continuing demands for the accountability of programs, researchers were increasingly required to evaluate

their practices, which resulted in identifying specific interventions and their effectiveness through a new methodology known as single system or single subject designs. Basically, single subject designs involve studying a single case by taking repeated measures of one or more dependent variables and systematically applying, and in some designs withdrawing, an independent variable. If the application of the independent variable is associated with systematic changes in the dependent variable, one can infer that the independent variable has caused the changes (Horn & Heerboth, 1982). Despite the fact that single subject designs were originally developed for studying a single organism, they are also applicable to program evaluation in two ways. First, many single-case designs applied to individuals can be replicated, which involves the use of multiple single-case designs in order to determine program effects across many subjects, settings, and types of behavioral problems. Second, groups of subjects or program outcome criteria can be substituted for the individual subject. This approach employs program outcome measures, or variables gathered on groups of clients, as the dependent variable; each data point would then represent either a specific program outcome measure or an average across a number of subjects (Repp, 1983). Kazdin (1982) indicated that of all the characteristics that can be attributed to single subject designs, two are central. First, single subject designs require a continuous assessment of performance over time. Second, intervention effects are replicated within the same subject over time. Additionally, several other characteristics are associated with single subject designs, including: a) the focus on one or a few subjects, b) the evaluation of intervention effects on overt behaviors, and c) the evaluation of data through visual inspection.
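The mechanics described above can be sketched concretely. The following is a minimal illustration, not an implementation of any study cited in this paper: it lays out repeated measures for one hypothetical subject across the phases of an A-B-A-B (withdrawal) design, using invented session values, and computes the average level per phase as one simple aid to the visual inspection of the data.

```python
# Hypothetical A-B-A-B (withdrawal) single subject design.
# Phase labels and session values are illustrative assumptions only.

# Repeated measures of one dependent variable (e.g., occurrences of a
# target behavior per session), recorded session by session.
phases = {
    "A1 (baseline)":     [8, 9, 7, 8, 9],   # independent variable absent
    "B1 (intervention)": [5, 4, 4, 3, 3],   # independent variable applied
    "A2 (withdrawal)":   [7, 8, 8, 9, 8],   # independent variable withdrawn
    "B2 (intervention)": [4, 3, 3, 2, 3],   # independent variable reapplied
}

def phase_mean(scores):
    """Average level within a phase, one simple aid to visual inspection."""
    return sum(scores) / len(scores)

for label, scores in phases.items():
    print(f"{label}: sessions={scores} mean={phase_mean(scores):.1f}")

# If the behavior changes systematically each time the intervention is
# applied, and returns toward baseline when it is withdrawn, the
# replication within the same subject supports the inference of a
# functional relationship between the variables.
```

Note how this structure embodies the two central characteristics Kazdin identifies: performance is assessed continuously over time, and the intervention effect is replicated within the same subject (B1 and B2).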

The goals of single-subject designs are identical to those of group designs. Both are intended to establish the existence of a functional relationship between one or more independent variables and one or more dependent variables, and to rule out extraneous events as causes of changes in the dependent variable(s) (Conners & Wells, 1982). This latter goal is the well-known concept of internal validity. Contrary to the belief of many, single subject designs can be just as internally valid as group designs. Alberto and Troutman (1999) have described the differences between the two designs and compared them in a clear and concise manner. Group designs concentrate on data concerning a number of individuals, while single-subject designs are concerned with single individuals. Group designs are used to ascertain the effects of an independent variable on a dependent variable in samples. For the most part, group-design experiments are conducted with at least two groups: one group, the experimental group, is exposed to the independent variable, whereas the other group, the control group, is not. Data are frequently reported in terms of the mean, or average, performance of all persons in each sample. The researchers then generalize these results to the population of which the sample is assumed to be representative. Researchers use statistical analysis of differences among means to evaluate group experimental designs: mathematical tests are used to estimate whether the differences between groups result from the independent variable or from chance. Most applied behavior analysis researchers prefer to use single-subject designs. This enables them to avoid missing important information about individuals that might otherwise be obscured. Group designs can identify the effects of

variables on the average performance of large numbers of subjects, while single-subject designs identify the effects of variables on a specific behavior of a particular student. Single-subject designs require repeated measures of the dependent variable, which can be recorded on a daily, weekly, or monthly basis; performance is then compared under different conditions or manipulations of the independent variable. One major difference between the two designs is that single-subject designs emphasize clinical significance for an individual, rather than statistical significance among groups. Certain ingredients common to all single-subject designs distinguish them from group designs: a measure of baseline performance, and at least one measure of performance under an intervention condition. They also require at least one replication of results, which allows for the assumption of a functional relationship.

Although the two approaches are based on different assumptions about the nature of the world, which require different instruments and procedures to obtain the desired types of data (Glesne & Peshkin, 1992), many believe that skilled researchers can successfully combine them. The combination of the two methods has provided the type of information that appeals to empiricists and nonempiricists alike. Patton (1980) has argued that, although the systems analysis and behavioral objectives approaches to evaluation are incompatible with a phenomenological perspective and a naturalistic inquiry method, qualitative and quantitative data gathering can proceed at the same time. Creswell (1998) defined a mixed method as one in which the researcher uses multiple methods of data collection and analysis (p. 175). This method might draw on within-method approaches, such as different types of quantitative data strategies (e.g., a survey and an experiment), or might involve between-method approaches

10 drawing on qualitative and quantitative data collection procedures (e.g., a survey and an interview). Hawe (1991) discussed disjunctive and conjunctive combination of qualitative and quantitative methods. He mentioned that disjunctive combination consist of using a) data collection techniques from one paradigm to study one phenomenon or outcome, and b) techniques from the other paradigm to study another phenomenon or outcome. Conjunctive combinations use data from both traditions to investigate the same issue. The use of the two approaches together has been supported by numerous numbers of researchers. For example, Mann (1994) conducted a study in which qualitative data were collected regarding two groups of tutors. One group consisted of three tutors who seemed to have benefited the most from a training program. The control group consisted of those tutors who had not benefited from the training program. The general hypothesis was that the two groups would be different in their conceptualization of their role as well as their effect of the tutee. Qualitative data collection involved weekly journal entries during a 10-week period and a two page written by each tutor. Qualitative analysis of the journal entries showed that those who benefited from the training program were more aware of interpersonal aspects of tutoring, identified conflicts in clearer forms, and resolves them more efficiently than the other group. In a study of the marriage pattern, Meekers (1996) combined qualitative and quantitative methods of data collection and analysis. Qualitative data collection consisted of ethnographic research done by previous researchers to form definition for the types of martial unions in this group. Quantitative data collection involved the use of an event-history survey. Meekers combined the results from the ethnographies and the event-history survey by placing the respondents in the event-history survey in

each of the categories found in the ethnographic findings. The results supported the ethnographic findings, indicating that young couples have begun to deviate from the formation of traditional marriage. Another mixed-method study was conducted by Timberlake (1994), who used the mixed-method approach to data collection and analysis in a single study of homeless children. Data were collected through interviews with homeless children and their mothers as well as questionnaires completed by their teachers. Data analysis consisted of both qualitative and quantitative methods. Quantitative analysis included the calculation of statistical indicators as well as tests of significance between well-adjusted and other children. Qualitative analysis included the development of a 3 × 2 grid based on the children's meanings of homelessness. Results indicated significant differences between the two groups. Generally, the combination of the two methods identified questions and areas that would not have been evident through the use of just one method. Reichardt and Cook (1979) stated that the mixed method should receive more attention for the following reasons. First, research usually has multiple purposes, which must be carried out under the most demanding of conditions; this variety of needs often requires a variety of methods. Second, when used together for the same purpose, qualitative and quantitative approaches can build upon each other to offer insights that neither one alone could provide. Finally, since each of the two approaches has its own biases, using mixed-method techniques can help the researcher triangulate on the underlying truth, by using each approach to check on and learn from the other. In contrast to Reichardt and Cook's (1979) perspective, Marlow (1993) stated that the use of mixed methods may encounter a number of practical obstacles. He described several barriers that can stand in the way of combining qualitative and quantitative methods.
First, the combination of the two methods can be prohibitively expensive. Second, using them together may require too much time. Third, the

use of the mixed method requires adequate training in both methods. For these reasons, Marlow (1993) is not optimistic that mixed methods will become commonplace and help to produce solid and valid research. However, to what extent can one collect data both qualitatively and quantitatively without violating the principles of each method? A researcher who undertakes an essentially qualitative study might wish to include quantitative information to increase the generalizability of the findings. For example, in addition to conducting in-depth interviews and observing in classrooms, the researcher may have an associate observer use a coding scheme. The quantitative principles would be violated if 1) the formal observation did not include interobserver agreement data, 2) the operational definition included subjective elements, 3) the observation was not made across enough subjects, or 4) competing variables were neither measured nor controlled for. On the other hand, a researcher conducting a quantitative study who wishes to include qualitative information, perhaps to compensate for the dryness of the information gathered in a study of special educators about the success of inclusion, may violate the principles of qualitative studies if he or she a) spent too little time observing, b) did not keep field notes, c) only recorded behaviors that he or she had decided ahead of time to document, or d) did not interpret what was seen (McWilliam, 1991). Therefore, a researcher who uses poor statistical analysis in the quantitative portion of the study, with the justification that the open interview in the qualitative portion makes up for the quantitative deficiency, effectively destroys the credibility of the quantitative findings.
Additionally, the empiricist who throws in a case vignette merely provides an example of the quantitative findings; the generative value of grounded research is lost.

In conclusion, it is important to understand that the use of a mixed method would be useful, and could possibly create a higher standard for research, provided it does not diminish the existing standards of either method. Researchers should take into account that combining good qualitative design and good quantitative design will produce valid, credible results.