Planning and Evaluating Health Information Outreach Projects
Booklet Three: Collecting and Analyzing Evaluation Data
2nd edition
Outreach Evaluation Resource Center
2013
National Network of Libraries of Medicine
Collecting and Analyzing Evaluation Data, 2nd edition
Planning and Evaluating Health Information Outreach Projects, Booklet 3

Cynthia A. Olney, PhD
Evaluation Specialist
National Network of Libraries of Medicine Outreach Evaluation Resource Center

Susan J. Barnes, MLS, MS
Assistant Director
National Network of Libraries of Medicine Outreach Evaluation Resource Center
Olney, Cynthia A.
Collecting and analyzing evaluation data / Cynthia A. Olney, Susan J. Barnes. - Seattle, Wash. : National Network of Libraries of Medicine, Outreach Evaluation Resource Center ; [Bethesda, Md.] : National Library of Medicine. (Planning and evaluating health information outreach projects ; booklet 3)
Supplement to: Measuring the difference / Catherine M. Burroughs.
Includes bibliographical references.
1. Health Education organization & administration. 2. Community-Institutional Relations. 3. Information Services organization & administration. 4. Data Collection methods. 5. Data Collection standards. I. Barnes, Susan, MLS. II. Burroughs, Catherine M. (Catherine Mary). Measuring the difference. III. National Network of Libraries of Medicine (U.S.). Outreach Evaluation Resource Center. IV. National Library of Medicine (U.S.). V. Title. VI. Series.
02NLM: WA 590

Additional copies can be ordered or downloaded from: National Network of Libraries of Medicine, Outreach Evaluation Resource Center, Box , University of Washington, Seattle, Washington.

This project has been funded in whole with federal funds from the Department of Health and Human Services, National Institutes of Health, National Library of Medicine, under Contract No. HHS-N C with the University of Washington.
4 Table of Contents Preface...i Acknowledgements... ii Introduction...1 Quantitative Methods...2 Step One: Design Your Data Collection Methods...3 Step Two: Collect Your Data...8 Step Three: Summarize and Analyze Your Data...12 Step Four: Assess the Validity of Your Findings...17 Qualitative Methods...18 Step One: Design Your Data Collection Methods...20 Step Two: Collect Your Data...22 Step Three: Summarize and Analyze Your Data...25 Step Four: Assess the Trustworthiness of Your Findings...29 Take Home Messages...29 References...30 Appendix 1: Examples of Commonly Used Quantitative Methods...31 Appendix 2: Examples of Commonly Used Qualitative Methods...32 Tool Kit: Using Mixed Methods...33 Worksheet 1 Planning a Survey...34 Worksheet 2 Planning an Interview...35 Blank Worksheets...36 Checklist...38
Preface

This booklet is part of the Planning and Evaluating Health Information Outreach Projects series designed to supplement Measuring the Difference: Guide to Planning and Evaluating Health Information Outreach. The series also supports evaluation workshops offered through the Outreach Evaluation Resource Center of the National Network of Libraries of Medicine. The goal of the series is to present step-by-step planning and evaluation methods. The series is aimed at librarians, particularly those from the health sciences sphere, and representatives from community organizations who are interested in conducting health information outreach projects.

We consider health information outreach to be promotional and educational activities designed to enhance community members' abilities to find and use health information. A goal of these activities often is to equip members of a specific group or community to better address questions about their own health or the health of family, peers, patients, or clients. Such outreach often focuses on online health information resources such as the websites produced by the National Library of Medicine. Projects may also include other sources and formats of health information.

We strongly endorse partnerships among organizations from a variety of environments, including health sciences libraries, hospital libraries, community-based organizations, and public libraries. We also encourage broad participation of members of target outreach populations in the design, implementation, and evaluation of the outreach project. We try to describe planning and evaluation methods that accommodate this participatory approach to community-based outreach. Still, we may sound like we are talking to project leaders. In writing these booklets we have made the assumption that one person or a small group of people will be in charge of initiating an outreach project, writing a clear project plan, and managing the evaluation process.
Booklet 1 in the series, Getting Started with Community Assessment, is designed to help you collect community information to assess need for health information outreach and the feasibility of conducting an outreach project. Community assessment also yields contextual information about a community that will help you set realistic program goals and design effective strategies. It describes three phases of community assessment: 1. Get organized; 2. Collect data about the community; and 3. Interpret findings and make project decisions.

The second booklet, Planning Outcomes-Based Outreach Projects, is intended for those who need guidance in designing a good evaluation plan. By addressing evaluation in the planning stage, you are committing to doing it, and you are more likely to make it integral to the overall project. The booklet describes how to do the following: 1. Plan your program with a logic model; 2. Use your logic model for process assessment; and 3. Use your logic model to develop an outcomes assessment plan.

The third booklet, Collecting and Analyzing Evaluation Data, presents steps for quantitative methods (methods for collecting and summarizing numerical data) and qualitative methods (specifically focusing on methods for summarizing text-based data). For both types of data, we present the following steps: 1. Design your data collection methods; 2. Collect your data; 3. Summarize and analyze your data; and 4. Assess the validity or trustworthiness of your findings.

Finally, we believe evaluation is meant to be useful to those implementing a project. Our booklets adhere to the Program Evaluation Standards developed by the Joint Committee on Standards for Educational Evaluation. Utility standards, listed first because they are considered the most important, specify that evaluation findings should serve the information needs of the intended users, primarily those implementing a project and those invested in the project's success.
Feasibility standards direct evaluation to be cost-effective, credible to the different groups who will use evaluation information, and minimally disruptive to the project. Propriety standards uphold evaluation that is conducted ethically, legally, and with regard to the welfare of those involved in or affected by the evaluation. Accuracy standards indicate that evaluation should provide technically adequate information for evaluating a project. Finally, the accountability standards encourage adequate documentation of program purposes, procedures, and results. We sincerely hope that you find these booklets useful. We welcome your comments, which you can e-mail to one of the authors: Cindy Olney at or Susan Barnes at
Acknowledgements

We deeply appreciate Cathy Burroughs' groundbreaking work, Measuring the Difference: Guide to Planning and Evaluating Health Information Outreach, and thank her for her guidance in developing the Planning and Evaluating Health Information Outreach Projects series as a supplement to her publication. We also are grateful to our colleagues who provided feedback for the first edition of the series. To update the series, we were fortunate to work with four reviewers who brought valuable different viewpoints to their critiques of the booklets. We want to thank our reviewers for their insightful suggestions:

Molly Engle, PhD, Professor and Extension Evaluation Specialist, Oregon State University, Corvallis, OR
Sabrina Kurtz-Rossi, MA, Health Literacy Consultant and Principal, Kurtz-Rossi & Associates, Boston, MA
Annabelle Nunez, MA, Assistant Librarian, Arizona Health Sciences Center, Tucson, AZ
Ann Webb Price, PhD, President, Community Evaluation Solutions, Alpharetta, GA

This project has been funded in whole with federal funds from the Department of Health and Human Services, National Institutes of Health, National Library of Medicine, under Contract No. HHS-N C with the University of Washington.
Introduction

This booklet provides tips and techniques for collecting and analyzing information about your outreach projects so that you can make decisions like these:
- When planning a program, you need to understand the needs of program participants so you can choose outreach strategies that are motivational and supportive.
- As you monitor project activities, you will need to decide whether to make changes to your plans.
- As the project nears its end, you will decide how to report the results. You and others invested in the project, referred to as stakeholders, will have to decide how your project made a difference and if your outreach project should be continued.

If you are going to make good decisions about your outreach project (or any other project), you need information, or data. In this booklet we use the word "data" to include numerical and text-based information (such as interview transcriptions or written comments) gathered through surveying, observing, interviewing, or other methods of investigation. During community assessment, data can help you identify groups that are in particular need of health information outreach. Data also can be used to assess the resources and challenges facing your project. While you are implementing your activities and strategies, data can provide you with feedback for project improvement; this is called process assessment, which we described in Booklet 2, Planning Outcomes-Based Outreach Projects. During outcomes assessment, data can provide the basis for you and other stakeholders to identify and understand results and to determine if your project has accomplished its goals. Therefore, much care must go into the design of your data collection methods to assure accurate, credible, and useful information.
To really understand and assess an outreach project, we recommend using multiple and mixed methods when possible. "Multiple methods" means collecting data from more than one source, not relying on a single survey, test, or focus group to provide an adequate program assessment. "Mixed methods" means that various types of information sources are used to assess your project. We provide an example of how to use mixed methods in the Toolkit that starts on page 33.

When possible, project evaluation should combine both quantitative and qualitative methods. Quantitative methods gather numerical data that can be summarized through statistical procedures. Qualitative methods collect nonnumerical data, often textual, that can provide rich details about your project. Each approach has its particular strengths and, when used together, the two can provide a thorough picture of your project. (Note: When we talk about data collection methods, we are referring to procedures or tools designed to gather information. Surveys and interviews are data collection methods. When we compile, summarize, and analyze the data, we use the term "analytic methods.")

This booklet is organized into two sections: one for quantitative methods and one for qualitative methods. After a brief overview, each section focuses on a specific method that is common and applicable to a variety of evaluation projects. In the quantitative section, the survey method has been chosen. For the qualitative section, interviewing is the method addressed. However, we should note that neither surveys nor interviews are limited to collecting one type of data. Either method can be designed to produce qualitative or quantitative data. Often, they are designed to collect a combination of both. You choose the type of method based on the evaluation question you want to answer.
Evaluation questions describe what you want to learn (as differentiated from survey and interview questions, which are carefully worded, sequenced, and formatted to elicit responses from participants). Figure 1 provides an approach to selecting the type of method.

Figure 1: Choosing Type of Method

What are your evaluation questions?

If you are trying to learn: How many? How much? What percentage? How often? What is the average amount?
Choose quantitative methods (see page 2).

If you are trying to learn: What worked best? What did not work well? What do the numbers mean? How was the project useful? What factors influenced success or failure?
Choose qualitative methods (see page 18).
Quantitative Methods

Figure 2: Evaluation Using Quantitative Methods

Step 1: Design Your Data Collection Methods (Surveys)
- Write your evaluation questions
- Develop the data collection tool (i.e., questionnaire)
- Pilot test the questionnaire

Step 2: Collect Your Data (Surveys)
- Decide who will receive the questionnaire
- Maximize response rate
- Check for nonresponse bias
- Provide motivation and information about risks and participants' rights

Step 3: Summarize and Analyze Your Data
- Compile descriptive data
- Calculate measures of central tendency and dispersion
- Simplify data to explore trends
- Provide comparisons

Step 4: Assess the Validity of Your Findings
- Calculate response rate
- Look for low completion rate of specific sections of surveys
- Investigate socially desirable responding

This section will take you through the steps of using quantitative methods for evaluation, as shown above in Figure 2. Any piece of information that can be counted is considered quantitative data, including:
- Attendance at classes or events
- Participation or drop-out rates
- Test scores
- Satisfaction ratings

Quantitative methods show the degree to which certain characteristics are present, such as frequency of activities, opinions, beliefs, or behaviors within a group. They can also provide an average look at a group or population. The advantage of quantitative methods is the amount of information you can quickly gather and analyze. The questions listed below are best answered using quantitative methods:
- What is the average number of times per week that workshop participants search online for health information?
- How many clinics in our outreach project have bookmarked National Library of Medicine resources on at least one of their computers?
- On average, how much did trainees' confidence in using online health information resources improve after training?
- What percentage of participants in a PubMed training session said their skills in using the resource improved as a result of taking the course?
- How many people visited the resource website during the grant period?
- What percentage of visitors to a booth at a health fair showed interest in finding prescription drug information online?
- How likely are participants on average to recommend MedlinePlus to others?
- What percentage of users improved their ability to find good consumer health information as a result of our sessions?

Appendix 1 describes some typical data collection methods for quantitative data.
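To make the arithmetic behind questions like these concrete, here is a minimal Python sketch that computes an average and a percentage from a handful of survey responses. All numbers are invented for illustration, not drawn from a real project.

```python
# Hypothetical post-training survey responses: self-reported
# searches per week for each workshop participant (invented data).
searches_per_week = [2, 5, 3, 0, 7, 4, 3]

# "What is the average number of times per week...?"
average_searches = sum(searches_per_week) / len(searches_per_week)

# "What percentage of participants said their skills improved?"
# One True/False answer per participant (invented data).
said_improved = [True, True, False, True, True, False, True]
pct_improved = 100 * sum(said_improved) / len(said_improved)

print(f"Average searches per week: {average_searches:.1f}")
print(f"Percent reporting improvement: {pct_improved:.0f}%")
```

The same pattern (a sum divided by a count) answers most "how many," "how much," and "what percentage" evaluation questions.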
Step One: Design Your Data Collection Methods

This section will focus on one of the most popular quantitative methods: surveys. This method has been chosen because of its usefulness at all stages of evaluation. Surveys use a standard set of questions to get a broad overview of a group's opinions, attitudes, self-reported behaviors, and demographic and background information. In this booklet, our discussion is limited to written surveys such as those sent electronically or through surface mail.

Write your evaluation questions

The first task in developing the questionnaire is to write the general evaluation questions you want to answer with it. Evaluation questions describe what you want to learn by conducting a survey. They are different from your survey questions, which are specific, carefully formatted questions designed to collect data from respondents related to the evaluation questions. (We will use the term "survey items" when referring to survey questions to distinguish them from evaluation questions.) Listed below are examples of evaluation questions associated with different phases of evaluation:

Community assessment. During the planning stages of an outreach project, you can use surveys to assess your outreach community members' beliefs, attitudes, and comfort levels in areas that will affect your outreach strategies. Evaluation questions may be: What health information resources do people in this community use most often? How many community members are experienced Internet users? If you have a logic model, you should review the input and activities sections to help you focus the community assessment questions. You also should consider the information you might want to gather to check assumptions listed in the assumptions section of the logic model.

Process assessment. Surveys are often used midproject to get participants' feedback about the quality of the activities and products of your outreach project.
So your evaluation questions might be: How do participants rate the effectiveness of our teaching methods? How do participants rate the usefulness of the online resources we are providing? How many people are likely to use the health resources after the training session? How do participants rate the quality of the training session? You should look at the activities and inputs column of your logic model to determine the questions you might want to ask. You also can check the outcomes columns to determine if your survey can help you collect baseline information that will allow you to assess change. Outcomes assessment. At this stage, you use surveys to help assess the results of your outreach project. So questions might include: Do participants use the online resources we taught after they have completed training? Have participants talked with their physicians about something they found at MedlinePlus? How many health care professionals trained in our study said they retrieved information from MedlinePlus to give to a patient? When designing a survey for outcomes assessment, review the outcomes columns of your logic model.
Figure 3: Aligning Evaluation and Survey Questions

Evaluation question: How do participants rate the quality of the training session?

Survey items and response options:
1. How satisfied were you with the information presented during the training session? (Very satisfied / Somewhat satisfied / Neutral / Somewhat dissatisfied / Very dissatisfied)
2. Would you recommend this session to others? (Yes / No / Don't know)
3. Do you think you will use the online resources in the future? (Yes / No / Don't know)

Develop the data collection tool (i.e., questionnaire)

Your next task is to write survey items to help you answer your evaluation questions. One approach is to use a table like that shown in Figure 3, above, to align survey items with evaluation questions. Writing surveys can be tricky, so you should consider using questions from other projects that already have been tested for clarity and comprehension. (Adopting items from existing questionnaires does not mean that you should forgo your own pilot test of your questionnaire.) Journal articles about health information outreach projects sometimes include complete copies of questionnaires. You can also contact the authors to request copies of their surveys. You also could try contacting colleagues with similar projects to see if they mind sharing their surveys. However, if you do copy verbatim from other surveys, always be sure to secure permission from the original author or copyright holder. It also is a collegial gesture to offer to share your findings with the original authors.

Figure 4, on pages 5 through 7, gives you six examples of commonly used item formats. The visual layout of your survey is also important. Commercial websites that offer online survey software give examples of how to use layout, color, and borders to make surveys more appealing to respondents and easier for them to complete.
There are several popular commercial products to create web-based surveys, such as SurveyMonkey (http://surveymonkey.com) and Zoomerang (http://www.zoomerang.com). In most cases, you will want to design online surveys that are accessible to respondents with disabilities. This means that your survey should be available to respondents who use screen reader software, need high contrast in the surveys, or are limited in their keyboard use. SurveyMonkey.com states that its application meets all current U.S. Federal Section 508 certification guidelines and that, if you use one of its standard questionnaire templates, your survey will be 508 compliant. If you are not using SurveyMonkey or one of its templates, you should read tips from SurveyMonkey.com about how to make your questionnaires accessible by visiting its Section 508 Compliancy tutorial page.

Pilot test the questionnaire

Always pilot test your questionnaire before you send it to the target audience. Even if you think your wording is simple and direct, it may be confusing to others. A pilot test will reveal areas that need to be clarified. First, ask one or two colleagues to take the survey while you are present and request that they ask questions as they respond to each item. Make sure they actually respond to the survey, because they may not pick up confusing questions or response options just by reading it. Once you have made adjustments to the survey, give it to a small portion of your target audience and look at the data. Does anything seem strange about the responses? For instance, if a large percentage of people are picking "other" on a multiple-option question, you may have missed a common option.
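A quick tally of pilot responses will show whether "other" is being chosen suspiciously often. The sketch below uses invented pilot data, and the 20% threshold is an arbitrary choice for illustration, not a standard cutoff:

```python
from collections import Counter

# Invented pilot-test responses to a multiple-option item
# ("Where do you get health information?").
pilot_responses = ["Internet", "Other", "Other", "My doctor",
                   "Other", "Internet", "Other", "Television"]

counts = Counter(pilot_responses)
other_rate = counts["Other"] / len(pilot_responses)

# Show how often each option was chosen, most frequent first.
for option, n in counts.most_common():
    print(f"{option}: {n}")

# A high "Other" rate suggests the option list is missing a
# common choice and should be revised before full administration.
if other_rate > 0.2:
    print(f"Warning: {other_rate:.0%} chose 'Other' -- revisit the option list.")
```

The same frequency check also flags items that nearly everyone skips or answers identically, both signs of a question that needs rewording.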
Figure 4: Survey Item Formats

Example 1: Two Options
Have you used MedlinePlus since the training session?
Yes / No / Not sure

Tips:
- This yes-no item works well for collecting factual information, such as people's participation in activities, exposure to publicity materials, or experience with specific online resources.
- Other two-option formats are true/false, support/oppose, and agree/disagree.
- Include a "don't know" or "not sure" option for participants who either cannot remember or are not sure about the information you are requesting.
- Note: In electronic survey design, the convention is to use round radio buttons for questions that direct respondents to choose one option.

Example 2: Best Option
The last time you looked for health information on the Internet, who were you getting it for? (Choose one.)
Myself / A supervisor / A family member / A client / A friend or co-worker / Other (please describe)

Tips:
- Best option items are good for collecting information about the respondent's attributes and behaviors.
- Make sure that choices do not overlap so that each person can easily choose only one response.
- Provide an "other" response for options that are not included on the list.

Example 3: Multiple Options
Where do you get health information? (Check all that apply.)
From my doctor / Television / At the clinic / Friends or family members / Newspapers / The Internet / Magazines / Other (please describe)

Tips:
- This is a faster version of the yes/no format: a check means yes and a blank means no.
- If your list of options is more than 6 or 7 items, use a yes-no format instead. If the list is too long, people may not consider every item. When forced to respond, they are more likely to look at each item.
- Use "Other" even if you think you have listed all possible responses. People will use this option if they are not sure where their option fits.
Note: In electronic survey design, the convention is to use square check boxes for questions that direct respondents to choose all applicable options.
Figure 4: Survey Item Formats, continued

Example 4: Rating Scales

Version 1: Please select the option that indicates your level of agreement with the statement: "Because of the training session, I am much more confident about my ability to find information about my health concerns."
Strongly Disagree / Somewhat Disagree / Uncertain / Somewhat Agree / Strongly Agree

Version 2: Please select the option that indicates your level of agreement with the statement: "The exercises were helpful."
Not at all helpful ... Very helpful

Tips:
- These two formats are good for collecting information from respondents about their attitudes, feelings, beliefs, and opinions.
- The numbers should match the tone and intensity of the response labels. For example, associate a 1 with the most negative response label and a 5 with the most positive.
- A neutral point is usually recommended for participants who do not have strong opinions in either direction about the item.
- Some survey designers use even numbers of responses. This approach works best for items eliciting opinions or attitudes, particularly if you think people may be reluctant to voice their opinions and will "hide" in the neutral response option.
- You can provide as many response choices as you want, but most experts believe 5-7 options are adequate.
- Keep the direction of the responses consistent throughout the questionnaire. For example, always have the most negative response option on the far left and the most positive option on the far right.

The design stage also entails seeking approval from appropriate committees or boards that are responsible for the safety and well-being of your respondents. If you are working with a university, most evaluation research must be reviewed by an Institutional Review Board (IRB). Evaluation methods used in public schools often must be approved by the school board, and community-based organizations may have their own review processes that you must follow.
Because many evaluation methods pose little to no threat to participants, your project may not require a full review. Therefore, you should consider meeting with a representative from the IRB or other committee to find out the best way to proceed with submitting your evaluation methods for approval. Most importantly, it is best to identify all these review requirements while you are designing your methods. Otherwise, your evaluation may be significantly delayed. Once you have pilot-tested the survey and obtained the required approvals, you are ready to administer it to your entire sample.
Figure 4: Survey Item Formats, continued

Example 5: Rank-Order
Listed below are different health topics that could be included on a consumer health website. Rank the topics in terms of how important each is to you, with 1 as the most important and 7 as the least important.
Specific health conditions / Wellness information / Alternative medicine / Prescription drugs / Health insurance, Medicaid, Medicare / Clinical trials / Health news

Tips:
- This format should be avoided. Ranking items is a difficult task for respondents. Also, you may force respondents to rank two items that are of equal importance to them. When possible, choose a rating scale (Example 4) instead of a rank-order item.
- Statistical analysis of rank-ordered items is very tricky because responses across individuals are not comparable. Using the item above as an example, two people may rank prescription drugs as the most important feature of a website relative to the other features in the list. However, the first respondent may think everything on the list is important and the second may think nothing is important, so a 1 tells you nothing about the strength of the importance to each respondent.
- To analyze this type of data, the best you can do is show how many times an item was ranked, for instance, as 1 or 2.

Example 6: Open-Ended
List at least two important things you learned in the training session today:

Tips:
- This format yields qualitative data, but it is often helpful in interpreting the statistical information you gather on your survey. To analyze open-ended questions, use the methods described beginning with Step Three of the Qualitative Methods section of this booklet on page 27.
- Avoid starting a survey with open-ended questions. Open-ended questions can be overwhelming, and people may choose not to take the survey. Draw the respondent in with some interesting, easy quantitative questions and save your open-ended questions for later in the survey.
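The rank-order tally suggested in Example 5's tips (counting how often each item was ranked 1 or 2) can be sketched as follows. The responses and item names are invented for the example:

```python
from collections import Counter

# Invented rank-order responses: each dict maps a website topic to
# the rank (1 = most important) one respondent assigned it.
responses = [
    {"Specific health conditions": 1, "Prescription drugs": 2, "Health news": 3},
    {"Prescription drugs": 1, "Specific health conditions": 2, "Health news": 3},
    {"Health news": 1, "Prescription drugs": 2, "Specific health conditions": 3},
]

# Count how many times each item was ranked 1 or 2 -- about the
# only defensible summary for rank-order data, since ranks are
# not comparable across respondents.
top_two = Counter()
for response in responses:
    for item, rank in response.items():
        if rank <= 2:
            top_two[item] += 1

for item, n in top_two.most_common():
    print(f"{item}: ranked 1 or 2 by {n} respondent(s)")
```

Note that this tally deliberately avoids averaging the ranks, for the reason given in the tips: a "1" from one respondent does not carry the same weight as a "1" from another.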
Step Two: Collect Your Data

Decide who will receive the questionnaire

As part of planning your survey, you will decide whether to collect data from a sample (that is, a subgroup) of your target population and generalize the responses to the larger population, or to collect data from all participants targeted by the survey. Sampling is used when it is too expensive or time-consuming to send a survey to all members of a group, so you send the survey to a portion of the group instead. Random sampling means everyone in the population has an equal chance of being included in the sample. For example, if you want to know how many licensed social workers in your state have access to online medical journals, you probably do not have to survey all social workers. If you use random sampling procedures, you can assume (with some margin of error) that the percentage of all social workers in your state with access is fairly similar to the sample percentage. In that case, your sample provides adequate information at a lower cost compared with a census of the whole population. For details about random sampling, see Appendix C of Measuring the Difference.

With smaller groups, it is possible to send the survey to everyone. In this case, any information you summarize is a description of the group of respondents only. For instance, if you survey all seniors who were trained in your outreach project to use MedlinePlus and 80% of them said they used it at home one month after the session, you can describe how many of your trainees used MedlinePlus after training. This percentage provides important information about a result of your outreach project. However, you cannot make the generalization that 80% of all trained seniors use MedlinePlus at home after they are trained, because you have not randomly sampled from among all seniors who received training on the resource.
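For illustration, here is a minimal sketch of drawing a simple random sample with Python's standard library. The population size and sample size are invented for the example; a real sampling frame would be your actual list of population members.

```python
import random

# Invented sampling frame: ID numbers standing in for all licensed
# social workers in the state (the full population of interest).
population = list(range(1, 1201))   # 1,200 people, IDs 1-1200

random.seed(42)  # fixed seed so the draw is reproducible

# Simple random sample: every member of the population has an
# equal chance of being selected, with no one chosen twice.
sample = random.sample(population, k=120)  # a 10% sample

print(f"Sample size: {len(sample)}")
print(f"First five sampled IDs: {sorted(sample)[:5]}")
```

Drawing the sample from a complete, up-to-date list of the population is what justifies generalizing the sample percentage (within a margin of error) to the whole group.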
Maximize response rate

The quality of your survey data depends heavily on how many people complete and return your questionnaire. Response rate refers to the percentage of people who return a survey. When a high percentage of people respond to your survey, you have an adequate picture of the group. But when you have a high percentage of nonresponders (members of your sample who did not complete your questionnaire), you cannot be sure whether they share characteristics that might affect the accuracy of your interpretation of your findings. For example, the nonresponders may have been less enthusiastic than responders and were not motivated to complete the questionnaire. If they had actually responded, you may have found lower levels of satisfaction in the total group. If the survey was administered electronically, the responders may be more computer-literate than nonresponders. Without participation of these nonresponders, your findings may be favorably biased toward the online resources that you are promoting. The problem with low response rate is that, while you may suspect bias when your response rate is low, it is difficult to confirm your suspicion or determine the degree of bias that exists.

Statisticians vary in their opinions of what constitutes a good response rate. Sue and Ritter reviewed survey literature and reported that the median response rate was 57% for mailed surveys and 49% for online surveys. In our experience talking with researchers and evaluators, 50% seems to be the minimal response rate acceptable in the field of evaluation. Figure 5, on page 9, defines a typical protocol for administering mailed surveys. Studies show that these procedures are effective for surveys sent either through regular mail or e-mail [7, 8, 9]. Because electronic surveys are becoming increasingly popular, we have provided additional tips for increasing their response rate:
- Keep your survey as simple as possible so it will load quickly.
- Avoid questions that require a response before allowing participants to move on to the next question. If respondents cannot or do not know how to answer a question, they will likely not complete the questionnaire if they cannot move beyond that question.
- Give participants an estimated time required to complete the survey. Use a progress bar that tells them how much of the questionnaire they have completed and how much remains.
- Scrolling seems to produce better completion rates than forcing respondents to page through screens with only one or two questions each.
- Be sure to start your survey with questions that are easy and interesting to answer. Do not start with open-ended questions because they may make the survey seem overwhelming to respondents. Demographic questions should be at the end of the survey because respondents find them boring or, in some cases, offensive.

Figure 5: How to Administer Surveys

1. Send a Pre-Survey Letter. Always send a personalized pre-survey letter (print surveys) or email (electronic surveys) to the target audience from someone influential or well-liked by the group. Let them know that a survey will be sent within the next week. For pre-notification emails, have your influential endorser's name or email address in the "from" or subject field if possible.
2. Send the Survey. Within a week after the pre-survey letter, send the survey with a personalized cover letter (e.g., "Dear Jane Smith") or a personalized email with a link to the survey. (Microsoft Word will create mail-merged email messages and put them in Outlook's outbox.)
3. Send a Personalized Reminder. Within a week after sending the survey, send a personalized reminder postcard or email to nonresponders.
4. Send a Second Reminder. Within two weeks, send email or another survey to nonresponders, again with a personalized cover letter.
5. Keep Track of Undeliverable Surveys. If you mail surveys, be sure to use first-class mail so undeliverable surveys are returned to you. If you send surveys through email, keep track of the returned emails and, if possible, send print surveys to those participants. This mixed-method approach has been shown to increase response rates for electronic surveys.

From: Dillman DA et al. Internet, mail, and mixed-mode surveys

Incentives may help your response rate. For both online and mailed surveys, token incentives (as small as $5) usually are shown to be more effective if they are sent along with your first request for respondents' participation (the pre-notification letter). Typically, promises to send incentives after respondents complete a survey have little effect on response rate [8, 9, 10]. There are exceptions to this finding. One study of incentives with web-based surveys showed that promising to enter respondents into lotteries for $25-$50 gift cards was more effective than pre-paid or promised incentives. While incentives seem to boost response rate, most survey researchers think that making multiple contacts has an equal or greater positive effect on response rates compared with incentives.
So if you have to choose between incentives or postage for follow-up postcards and replacement surveys, choose the latter.

Check for nonresponse bias

Getting a high response rate can be difficult even when you implement procedures for maximizing it. Because survey researchers have faced declining response rates in recent years, the field of public opinion research has produced a number of studies on the relationship between low response rate and bias (called nonresponse bias). Some survey researchers have concluded that the impact of nonresponse is lower than originally thought. If you fail to get a return rate of 50% or more, you should try to discern where the bias might be:

- If resources allow, attempt to contact nonresponders with a short version of the survey to assess the level of bias in the sample.
- Compare your findings against information you collected through focus groups, interviews, and other qualitative methods to see if the numbers are consistent with survey findings.
- Check how closely your respondents match the demographics of your sample. For instance, if you find that a higher proportion of hospital librarians in your sample responded compared with public librarians, you can speculate about how responses to your questions may not be generalizable to public librarians.
- Compare responses of subgroups to further explore bias associated with low representation of a particular subgroup. Using the example from the bullet above, you can compare your public librarians' and hospital librarians' responses to see if your concerns are warranted.
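One quick way to explore nonresponse bias is to compare response rates across known subgroups. This sketch uses invented counts for the librarian example above, not data from any actual survey:

```python
# Hypothetical counts: surveys sent to and returned by each subgroup.
sent = {"hospital librarians": 120, "public librarians": 180}
returned = {"hospital librarians": 78, "public librarians": 72}

overall_rate = sum(returned.values()) / sum(sent.values())
print(f"Overall response rate: {overall_rate:.0%}")  # 50%

# Markedly different subgroup rates suggest the overall results may
# over-represent the views of the higher-responding group.
for group in sent:
    print(f"{group}: {returned[group] / sent[group]:.0%}")  # 65% vs. 40%
```

Here the overall rate looks acceptable, but the imbalance (65% versus 40%) would justify comparing the two groups' answers before generalizing.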
The bottom line is that you should explore your sample for nonresponse bias. You may decide, in fact, that you should not analyze and report your findings. However, if you believe your data are still valid, you can report your response rate, potential biases, and the results of your exploration into nonresponse bias to your stakeholders. They can then judge for themselves the credibility of your data.

Provide motivation and information about risks and participants' rights

The correspondence around surveys is an important part of the overall survey design. The pre-notification letter is your first contact with respondents, and the impression you create will determine the likelihood that they will ultimately complete the questionnaire. It needs to be succinct and motivational. The cover letter also is a motivational tool for inducing them to participate, and it should inform them, before they begin the survey, of their rights and the potential risks of participation. This is called informed consent, and it is part of the Program Evaluation Standards described in the booklet preface (specifically, the propriety standard). If you must have your project reviewed by an institutional review board (IRB) or some other type of review group, you should get specific details of what should be in the letter. (The IRB will want to see your correspondence as well as your questionnaire.) Your reminder notices are your last attempt to motivate participation. They generally are kept short and to the point. Figure 6, on page 11, provides a checklist for creating the letters and emails used in survey distribution.

Once you have received the last of your surveys, you have accumulated raw data. To see the patterns and trends that can inform project decisions, you must first summarize the raw data so you can then interpret them.
Figure 6: Checklists for Writing Survey Correspondence

1. Checklist for pre-notification letters/emails:
- For mailed surveys, send the letter with the signature that your respondents know and trust. For electronic communication, use an electronic signature or the name and signature line of the person they know and trust. Also, for electronic questionnaires, use that person's email address in the FROM field if possible.
- Briefly describe the project and why it is important that they respond. (Emphasize how it is important to them, their families, and their communities, as well as how their responses are important to you and your organization.)
- Explain when and how (e.g., mail, email with a link) they can expect the questionnaire and who will send it to them.
- Thank them in advance for their help.

2. Checklist for cover letters/emails:
- Briefly describe your project and why you want respondents to reply to your questionnaire.
- Directly ask recipients to participate.
- Explain how the respondents' participation will benefit them (e.g., by helping the organization improve service; by making their opinions heard).
- Appeal to the respondent's altruism by describing how his or her participation will help others.
- State the sponsors of the study.
- Explain how to complete the questionnaire (e.g., "click on the link below" or "complete the enclosed questionnaire and return it in the enclosed, postage-paid envelope"). You may want to have the link near the top of the message.
- Describe who will see their individual responses and how you will maintain confidentiality. Usually, confidentiality is maintained by removing identifiable information from responses before they are shown to others and by reporting only summary data.
- Describe any risks to the respondent if he or she chooses to respond. (If you don't know of any risks, state that.)
- Indicate the voluntary nature of their participation and their right to withhold answers at any point in the survey.
- Describe incentives. (An incentive may include a promise to send them the results of the questionnaire and a description of how you used the results.)
- Include a statement that the participant is free to refuse to participate and can stop responding at any time.
- Request that they respond by a certain date.
- Give contact information in case a respondent has further questions.

3. Checklist for reminder letters/emails:
- State that you are sending a reminder for them to complete the questionnaire.
- Thank those who have already responded. (Note: You can send targeted follow-ups to those who have not responded, but include the thank-you statement anyway. It adds an air of politeness, and it communicates to the nonresponders that some of their peers have responded.)
- Request that those who have not responded do so by a given date.
- For electronic questionnaires, include a link to the questionnaire in every follow-up. For print questionnaires, send a reminder postcard for the first follow-up and a letter with another copy of the questionnaire for the second follow-up.
Step Three: Summarize and Analyze Your Data

Compile descriptive data

The first step in analyzing quantitative data is to summarize the responses using descriptive statistics, which help identify the main features of the data and discern any patterns. When you have a group of responses for one question on your survey, that group of responses is called a response distribution or a distribution of scores. Each question on your survey, except open-ended ones, creates a distribution of scores. Descriptive statistics describe the characteristics of that distribution.

For some survey question distributions, you want to see how many respondents chose each possible response. This will tell you which options were more or less popular. You start by putting together a table that shows how many people chose each of the possible responses to that question. You then should show the percentage of people who chose each option. Percentages show what proportion of your respondents chose each answer. They convert everything to one scale so you can compare across groups of varying sizes, such as when you compare the same survey question administered to training groups in different years. Figure 7 shows you how to construct a frequency table.

Calculate measures of central tendency and dispersion

You also should determine the most representative score of a distribution, called the measure of central tendency. Depending on the nature of your score distribution, you will choose among three measures:

- Mode. The most frequent response.
- Median. The score that is in the middle of the distribution, with half of the scores above and half below. To find it, sort your distribution from highest to lowest ratings, then find the number that equally divides the distribution in half. (If you have an even number of scores, add the two most central scores and divide by two to calculate the median.)
- Mean. The average response in your distribution. It is computed by adding all responses and dividing by the number of respondents who answered the question.

You also need to calculate the spread of your scores so you can know how typical your measure of central tendency is. We call these measures of dispersion, the most frequently reported being the range (the lowest and highest scores reported) and the standard deviation (an index of spread, with a higher standard deviation meaning a bigger spread of scores).

Figure 7: Frequency Table

Question: Please indicate your level of agreement with this statement: "I am more confident about finding prescription drug information on the Web because of this training session." N=100

Response             Response value   Frequencies   Percent   Valid Percent
Strongly Disagree    (1)              0             0.0%      0.0%
Somewhat Disagree    (2)              2             2.0%      2.1%
Uncertain            (3)              5             5.0%      5.2%
Somewhat Agree       (4)              36            36.0%     37.1%
Strongly Agree       (5)              54            54.0%     55.7%
Total                                 97            97.0%     100.0%
Missing                               3             3.0%

Definitions:
- N: Number of people responding to the survey. (Note: 100 people returned a survey, but only 97 responded to this particular question.)
- Frequencies: The number of respondents choosing each response.
- Percent: The number choosing that response divided by the number of people who completed the survey.
- Valid Percent: The number choosing that response divided by the number of respondents who answered the question. In this example, 100 people completed the survey, but only 97 actually responded to this particular question.
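The Percent/Valid Percent distinction in Figure 7 comes down to the choice of denominator. A minimal sketch using the frequencies from Figure 7:

```python
# Frequencies from Figure 7: 100 surveys returned, 97 answered this item.
frequencies = {
    "Strongly Disagree": 0,
    "Somewhat Disagree": 2,
    "Uncertain": 5,
    "Somewhat Agree": 36,
    "Strongly Agree": 54,
}
n_surveys = 100                      # everyone who returned a survey
n_valid = sum(frequencies.values())  # everyone who answered this question (97)

for response, count in frequencies.items():
    percent = count / n_surveys * 100      # denominator: all returned surveys
    valid_percent = count / n_valid * 100  # denominator: item respondents only
    print(f"{response:<18} {count:>3} {percent:5.1f}% {valid_percent:5.1f}%")

missing = n_surveys - n_valid
print(f"Missing: {missing} ({missing / n_surveys:.0%})")
```

Spreadsheets and statistical packages produce the same table; whichever tool you use, report which denominator each percentage is based on.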
You do not always report all central tendency and dispersion measures. The ones you use depend on the type of data collected by a given item. Figure 8, on page 14, shows how you would represent these measures for three levels of data.

The first level is called nominal-level data. It is considered first level because the only information you get from a respondent's answer is whether he or she does or does not belong in a given category. Table A shows a very typical nominal-level question: "Where do you live?" For this categorical data, the measure of central tendency (the most representative response) is the mode. Percentages tell you how responses disperse among the options.

Table B describes responses to the same question used in Figure 7, on the previous page. The data collected in Table B's question give you information about intensity. As you read the response options in Table B from left to right, each option indicates a higher level of agreement with the statement. So if someone marked "strongly agree," that respondent indicated a higher degree of agreement than a respondent who marked "somewhat agree." Because we can rank responses by their degree of agreement, they are considered ordinal-level data. For ordinal-level data, the most representative score is the median. In the example in Table B, the median score is 5: if you sorted all 97 responses, the middle score would be a 5 ("strongly agree"). The range of responses shows how widespread the ratings were in the distribution. For this question, all responses fell between 2 ("somewhat disagree") and 5 ("strongly agree").

In Table C, the interval/ratio-level data provide even more information than the question in Table B. The question asked respondents how many times they visited their public library in the past 30 days. As with the question in Table B, a higher number means more.
A respondent who visited 4 times visited more often than a person who visited 2 times. But notice that you also can describe how much more, because each visit counts an equal amount. So you know that the respondent who went to the public library 4 times visited twice as often as the person who went 2 times. (There is a difference between interval-level and ratio-level data that we will not discuss here because both are described with the same statistics. If you are interested, this difference is described in any basic statistics textbook.) For this level of data, the most representative score usually is the mean (also known as the average), and the standard deviation is an index of how far the scores scatter from the mean. If you have a relatively normal distribution (something you probably know as a bell-shaped distribution), then approximately 68% of scores will fall between one standard deviation below and one standard deviation above the mean.

The standard deviation is really most useful in understanding statistical tests of inference, such as t-tests and correlations. It may not be particularly meaningful to you in itself, but if you report a mean, it is good practice to report the standard deviation. For one thing, it tells readers how similar the people in your sample were in their responses to the question. It also is another way to compare samples that responded to the question.

Notice that we qualified our statement about the mean being the most representative central tendency measure for interval/ratio-level data. You also will notice that, in Table C, the median and range are reported as well as the mean and standard deviation. Sometimes you may get a couple of extremely high or low scores in a distribution that have too much effect on the mean. In these situations, the mean is not the most representative score, and the standard deviation is not an appropriate measure of dispersion.
For the data presented in Table C, let's say the highest number of visits was 30 rather than 7. For the fictional data of this example, changing that one score from 7 to 30 visits would alter the mean from 1.8 to 2.7. However, the median (which separates the top 50% of the distribution from the bottom 50%) would not change. Even though we increased that highest score, a median of 1 would continue to divide the score distribution in half. If your mean and median are not similar, then the median and range are the measures you should report, even if your scores are interval/ratio-level. How big a difference is too big? Check your distribution for extreme scores. If they are not obvious or you aren't sure, report both.

In fact, there are other reasons you might want to report the median and range rather than the mean and standard deviation for interval/ratio-level data. There are times when the average doesn't make as much sense as a median when you report your findings to others. In Table C, it may make more sense to talk about 2 visits rather than 1.8 visits. (No one really paid 1.8 visits, right?) The range is also easier to grasp than the standard deviation.

One other note: There are times that you might see people report means and standard deviations for data similar to what you see in Table B. Discussions among measurement experts and statisticians about the appropriateness of this practice have been long and heated. Our opinion is that our discussion here reflects what makes sense most of the time for health information outreach. However, we also can see the other point of view and would not discredit this practice out of hand.

There are other ways to use tables to help you understand your data. Figures 9-12, on pages 15 and 16, show formats that will help you analyze your descriptive data. After you compile a table, write a few notes to explain what your numbers mean.
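The booklet's raw data for Table C are not published, so the sketch below uses made-up visit counts; it simply demonstrates the effect described above, that one extreme score pulls the mean but leaves the median alone:

```python
from statistics import mean, median

# Invented visit counts for 15 respondents (illustrative only).
visits = [0, 0, 0, 0, 0, 1, 1, 1, 1, 2, 2, 2, 2, 3, 7]
print(round(mean(visits), 1), median(visits))  # 1.5 1

# Replace the single highest score with an extreme one: the mean
# jumps from 1.5 to 3.0, but the median still splits the data at 1.
visits[visits.index(7)] = 30
print(round(mean(visits), 1), median(visits))  # 3.0 1
```

When the mean and median diverge like this, reporting the median and range gives a more representative picture.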
Figure 8: Measures of Central Tendency and Dispersion

Table A: Nominal-Level (Categorical) Data
Question: Where do you live? N=144
Response options: Delaware, New York, New Jersey, Pennsylvania
Measure of central tendency: Mode (56 responses in the most frequent category)
Measure of dispersion: [none]
- Mode: The mode is the most frequent response.

Table B: Ordinal-Level (Ranked) Data
Question: Please indicate your level of agreement with this statement: "I am more confident about finding prescription drug information on the Web because of this training session." N=100

Response                     Response value   Percent   Valid Percent
Strongly Disagree            (1)              0.0%      0.0%
Somewhat Disagree            (2)              2.0%      2.1%
Neither Agree nor Disagree   (3)              5.0%      5.2%
Somewhat Agree               (4)              36.0%     37.1%
Strongly Agree               (5)              54.0%     55.7%

Median: 5
Range: Lowest score = 2, Highest score = 5
- Median: The median is the score in the middle of the distribution, with half of the scores above and half below.
- Range: The range shows the lowest and highest scores reported.

Table C: Interval/Ratio-Level (Equal Distance) Data
Question: How many times did you visit your public library in the last 30 days? N=25 (Responses were open-ended.)

Mode: 2 visits
Median: 1 visit
Range: Lowest = 0, Highest = 7
Mean (average): 1.8
Standard deviation: 1.9
- Mean: The mean is the average response in your distribution. It is computed by adding all responses and dividing by the number of respondents who answered the question.
- Standard deviation: The standard deviation gives you an idea of how closely scores cluster around the mean. A higher standard deviation means a bigger spread of scores. In a normal (bell-shaped) distribution, about 68% of scores will fall between one standard deviation below and one standard deviation above the mean.
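The 68% figure in the standard deviation definition above can be checked empirically. This sketch simulates roughly bell-shaped data (the numbers are invented) and counts how many scores land within one standard deviation of the mean:

```python
import random
from statistics import mean, stdev

random.seed(0)
# Simulate 10,000 roughly normal scores (mean 50, SD 10; values invented).
scores = [random.gauss(50, 10) for _ in range(10_000)]

m, s = mean(scores), stdev(scores)
share = sum(1 for x in scores if m - s <= x <= m + s) / len(scores)
print(f"{share:.0%} of scores fall within one SD of the mean")  # about 68%
```

Real survey data are rarely this tidy, which is one more reason to inspect the distribution before leaning on the mean and standard deviation.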
The 2013 Officer Training Experiment Wesley G. Skogan, Northwestern University This evaluation focuses on the Chicago Police Department's efforts to reshape its relationship with minority communities in
Onboarding Program Supervisor s Guide Supervisor s Guide Introduction This guide has been developed for supervisors to support an effective and successful onboarding process for new employees. As a supervisor,
68 S ECTION 5: DEVELOPING AND IMPLEMENTING AN EVALUATION PLAN EVALUATION METHODS Although indicators identify what you will look at to determine whether evaluation criteria are met, they do not specify
1 21st Annual Conference on Distance Teaching and Learning click here -> Using Online Surveys to Evaluate Distance Education Programs Elaine Strachota, PhD Distance Learning Coordinator and Faculty Milwaukee
EVALUATION METHODS TIP SHEET QUANTITATIVE METHODS: Quantitative data collection methods consist of counts or frequencies, rates or percentages, or other statistics that document the actual existence or
Whether and How to Apply to Ph.D. Programs in Political Science: FAQs Matthias Staisch, The University of Chicago This Version: October 4, 2013 1 Do I need a Ph.D. to attain my professional goals? Before
About the CAHPS Item Set for Addressing Health Literacy Introduction... 1 Why Is This Information Important?... 1 Contents of the CAHPS Item Set for Addressing Health Literacy... 1 Using the Results of
Content Sheet 13-1: Overview of Customer Service Role in quality management system This module will describe basic elements that are essential for developing an effective customer service program. Customer
Northumberland Knowledge Know Guide How to Analyse Data - November 2012 - This page has been left blank 2 About this guide The Know Guides are a suite of documents that provide useful information about
Welcome to today s Coffee Break presented by the Evaluation and Program Effectiveness Team in the Division for Heart Disease and Stroke Prevention at the Centers for Disease Control and Prevention. *Note:
Technical report How to Get More Value from Your Survey Data Discover four advanced analysis techniques that make survey research more effective Table of contents Introduction..............................................................2
Service Satisfaction Survey Summary Report Regional Services Division Prepared by and Corporate Planning Branch October 214 Executive Summary The Service Satisfaction Survey was launched on March 26 th
The Michigan State University Institute for Health Policy (IHP) is recruiting a sample of office based primary care providers to be interviewed concerning their perceptions of the Meaningful Use of electronic
Using Excel to Solve Business Problems: Simple Predictive Analytics Curtis Seare Copyright: Vault Analytics July 2010 Contents Section I: Background Information Why use Predictive Analytics? How to use
An NCQA Issue Brief 2000 L Street, NW Washington, DC 20036 888-275-7585 www.ncqa.org Putting Reliable Health Care Performance Measurement Systems into Practice By Joachim Roski, PhD MPH; Vice President,
Breastfeeding in Ontario Evaluating Breastfeeding Programs and Initiatives What is evaluation? Evaluation means different things to different people. Fundamentally, evaluation means getting feedback about