CHIS 2015-2016 Sample Design and Survey Methodology (SDSM) Technical Advisory Committee (TAC) Meeting Summary



Date: December 16, 2014, 9:00am-11:30am
Chair: Abdelmonem A. Afifi, PhD, Dean Emeritus & Professor of Biostatistics & Mathematics, UCLA School of Public Health

Members and Participants Present:

CHAIR
1. Abdelmonem A. Afifi, UCLA School of Public Health

TAC CONTACTS
1. Matt Jans, UCLA Center for Health Policy Research
2. Jane Kil, UCLA Center for Health Policy Research

MEMBERS
1. Lisa Carley-Baxter, Research Triangle Institute International
2. Thomas Belin, UCLA School of Public Health
3. Stephen Blumberg, National Center for Health Statistics, CDC
4. Linda Bourque, UCLA School of Public Health
5. J. Michael Brick, Westat Statistical Group
6. Trent Buskirk, Marketing Systems Group
7. Leah Melani Christian, The Nielsen Company
8. William Cumberland, UCLA School of Public Health
9. Jill Dever, Research Triangle Institute International
10. Sherman Edwards, Westat Statistical Group
11. Ned English, NORC, University of Chicago
12. Rachel Harter, Research Triangle Institute International
13. Courtney Kennedy, Abt SRBI
14. Paul Lavrakas, NORC, University of Chicago
15. Patricia Lee, California Department of Health Care Services
16. Sunghee Lee, Institute for Social Research, University of Michigan
17. Joel Moskowitz, UC Berkeley School of Public Health
18. Kristen Olson, University of Nebraska, Lincoln
19. Andy Peytchev, Research Triangle Institute International
20. Emilia Peytcheva, Research Triangle Institute International
21. David Roe, Research Triangle Institute International
22. Michael Stern, NORC, University of Chicago

EX-OFFICIO
1. David Grant, UCLA Center for Health Policy Research
2. Ninez Ponce, UCLA Center for Health Policy Research

CHIS STAFF: UCLA
1. Nedghie Adrien, UCLA Center for Health Policy Research
2. May Aydin, UCLA Center for Health Policy Research
3. Tara Becker, UCLA Center for Health Policy Research
4. Carlo Carino, UCLA Center for Health Policy Research
5. Akbar Esfahani, UCLA Center for Health Policy Research
6. Royce Park, UCLA Center for Health Policy Research
7. Priya Thaker, UCLA Center for Health Policy Research
8. Yueyan Wang, UCLA Center for Health Policy Research
9. Hongjian Yu, UCLA Center for Health Policy Research

Meeting Notes

Welcome
Afifi, David Grant, and Ninez Ponce welcomed members to the SDSM TAC meeting.
  o David Grant gave an update about the new CHIS data collection vendor. RTI is the new vendor for CHIS 2015-2016. He welcomed RTI and thanked Westat for 13 years of outstanding service and input in the CHIS sample design and survey methodology. Acknowledgements to Sherm Edwards, Mike Brick, and Ismael Flores Cervantes.

Goals and Themes
Matt Jans highlighted goals and themes of the TAC meeting:
  o Goals: (1) generate a list of ideas for future designs and research based on the current design, (2) describe changes for CHIS, (3) describe the Building Healthy Communities study, and (4) discuss future designs.
  o Themes: (1) address-based sampling, (2) data collection mode, (3) incentive structures and amounts, and (4) non-response follow-up methods.

Core Elements of CHIS Design for the 2015-2016 Cycle
Matt Jans reviewed the core elements of the CHIS design:
  o CHIS methods reports: http://chis.ucla.edu/chis/design/pages/methodology.aspx
  o CHIS interviews are done via telephone. The sampling frames are landline RDD, cell phone RDD, and surname lists. Pre-notification letters with $2 incentives are sent when sampled phone numbers are matched to addresses.
  o The CHIS population is the non-institutionalized CA population, stratified by counties and smaller county groups, with some sub-county strata. Forty-four strata represent 58 counties, with 17 of the smallest counties grouped into three strata. Most strata have a target of 400 or more interviews.
  o CHIS has two estimation goals: (1) county-level estimates and (2) state-level estimates for the largest race/ethnicity groups via oversamples.

  o CHIS has been conducted biennially since 2001 and became a continuous survey in 2011.

Matt Jans reviewed CHIS response rates (RRs) over time for the landline and cell samples:
  o All response rates are in the methodology reports online.
  o There's been a slight decline in the landline overall RR and an increase in the cell phone overall RR among the household, adult, child, and adolescent surveys.
  o Linda Bourque asked about extended interviews vs. overall interviews. Matt Jans replied that screener RR x extended RR = overall RR. There was a big jump from 2009 to 2011 in the cell phone screener interviews, likely due to increased usage of cell phones (i.e., respondents don't need to be home, etc.). Bill Cumberland asked about adjustments for people who have both cell and landline. Mike Brick replied it's a dual-frame design with a composite estimator for the overlap. The composite factor in the last cycle (CHIS 2011-2012) was 0.5.
  o Overall RRs have leveled off since 2009. The major drops were between 2005 and 2009, as is common in other telephone surveys. Courtney Kennedy asked if this was related to the switch from cell screening to overlap. Mike Brick replied it's due to population change (e.g., who has a cell phone and how they use it). A screen-out cell-only design was only used in CHIS 2007 (the cell phone sample pilot test); since 2009 it's been an overlap design.
  o Screener and extended RRs dropped between 2005 (49.8%) and 2007 (35.6%), and the reasons are unknown. Possible explanations offered by the TAC were the start of the recession and changes in technology.
  o Teen RRs have always been lower, because interviewers need to obtain permission from parents. Trent Buskirk asked if the screener drop was due to lack of contact or to more ineligibles being encountered. Mike Brick replied that the noncontact rate did not change during the 2005-2007 period.
  o The state RR is 31.6% for landlines, with smaller and more rural counties having higher RRs than larger counties with dense/urban populations.
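The response-rate decomposition (screener x extended = overall) and the dual-frame composite estimation discussed above can be sketched numerically. All figures in the sketch are illustrative, not actual CHIS results, except the 0.5 composite factor reported for CHIS 2011-2012.

```python
# Sketch of two calculations from the discussion; numbers are illustrative.

def overall_rr(screener_rr: float, extended_rr: float) -> float:
    """Overall RR = screener RR x extended (interview) RR."""
    return screener_rr * extended_rr

# e.g., a 50% screener rate and 70% extended rate imply a 35% overall rate
print(round(overall_rr(0.50, 0.70), 3))  # 0.35

def composite_estimate(y_ll_overlap: float, y_cell_overlap: float,
                       lam: float = 0.5) -> float:
    """Blend estimates for the landline/cell overlap domain from the two
    frames, using composite factor lam (0.5 in CHIS 2011-2012)."""
    return lam * y_ll_overlap + (1.0 - lam) * y_cell_overlap

# e.g., overlap-domain prevalence estimates of 22% (landline frame) and
# 26% (cell frame) blend to 24% with lam = 0.5
print(round(composite_estimate(0.22, 0.26), 3))  # 0.24
```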
    San Francisco has always had the lowest RRs (SF is both a county and a city: a small area with a younger, denser, more affluent, and more educated population). Ninez Ponce also suggested a regional effect: the survey identifies itself as UCLA, and saying "University of California" might work better for Northern California residents. Mike Brick has used local help before (e.g., letterheads from county health departments) and it lowered the RRs. Urban places are harder to survey in general. It's hard to add cell phone data to county RRs, since county isn't known for the non-respondents. Ninez Ponce suggested asking county during the cell phone screener, instead of waiting until the end of the survey, and many TAC members agreed. Andy Peytchev commented that the county assigned vs. the county of residence varies, and knowing the county early will help CHIS manage the non-response sample and the specific goals by county.
  o For adult sample characteristics, age groups had a 50% or higher cell phone adult RR vs. landline, except 65+. Paul Lavrakas asked about the denominator for the adult RR characteristics sample. Matt Jans replied the denominator is cases where the screener was completed; sample adult age is asked in the screener. There is a continuing jump in cell phone RRs. Looking at the extended RR allows us to study the relationship of response rate with demographics independent of differences in contact rate and screener response.

What's planned in 2015-2016

  o Increase the cell phone sample to 50% landline and 50% cell (in the past it has been 80% landline and 20% cell).
  o Modify the incentive structure and experiment with promised incentives for non-respondents. The $2 incentive will be kept for CHIS 2015-2016. Andy Peytchev added that information about nonresponse is limited. Paul Lavrakas commented that there is no theoretical justification for a one-size-fits-all incentive, but it is ethical. Trent Buskirk commented that if CHIS uses ABS, county could be added to the record by default, saving a screener question; this could be validated later if necessary.
  o Oversamples: ages 0-64 living in Covered California rating regions, and African Americans and Asians enrolled in Covered CA. There are 19 Covered CA rating regions. There's a contract with Covered CA and CHBRP. CHIS is experienced with oversampling Asians (12% of the CA population). An African-American oversample is more challenging (7-8% of CA). David Grant added it will be the general African-American population (not just Covered CA enrollees). Linda Bourque added that LA County has the highest number of African Americans in CA. Tom Belin suggested including Asian subgroups in these oversamples (e.g., Korean and Vietnamese populations represent different waves of immigration). Increasing RRs is important, but so is a firm, scientific foundation for understanding the related nonresponse bias.
  o Responsive/adaptive design. Andy Peytchev explained that it's flexibility in the study design to increase RRs.

David Grant gave a brief overview of Building Healthy Communities (BHC) and the BHC oversample:
  o BHC is a 10-year public health initiative of The California Endowment, which targets 14 communities across CA. There was a BHC oversample in 2009. To help evaluate the BHC initiative, CHIS will collect 400 interviews per community in 2015-2016. ABS will be used, with a combination of telephone match and a mail screener to collect telephone numbers. Pre-paid and promised incentives will be tested.
    A sub-sample of non-respondents will be contacted in person.
  o Courtney Kennedy asked if mail surveys were ruled out over literacy or instrument length. Matt Jans replied that CHIS is not mailed due to language and skip patterns. David Grant added that a reduced mail version should be kept in mind for future CHIS administrations.

Open Discussion

Address-based sampling (to reduce costs, improve response rates; one-size-fits-all vs. targeting with Census/ACS information or other frame data)
  o ABS has been tested in two BHC regions; results were decent considering the communities had hard-to-survey characteristics.
  o Rachel Harter has used ABS with pre-paid and promised incentives (in neighborhoods targeted for improved health interventions in LA County). The survey was in English and Spanish. Mail worked better than phone.

  o Paul Lavrakas has found it effective to send a pre-paid incentive ($2) and also add a promised incentive if the household sends their contact info (unpublished Nielsen research).
  o It was suggested that reviewing Census tracts or blocks can help decide where to send translated materials. Afifi brought up a multi-language letter. Matt Jans added that letters are fine (and we do them), but not surveys (too lengthy, and too many languages if trying to replicate the entire CHIS design).
  o Trent Buskirk wondered if the screener could include asking about languages spoken, to plan the interview for the respondent.

Data collection mode (concurrent vs. sequential, mode choice, mode independent of sample frame)
  o Joel Moskowitz added that the U.S. has 165 million smartphone users; perhaps CHIS should consider an optional online mode, where skip patterns and languages are not a problem. ("Global Smartphone Use Continues to Climb, Studies Show": http://bit.ly/1wuolwx)
  o Matt Jans added that a web survey would need to adapt to PC, mobile, and tablet.
  o Ned English (for Michael Stern) added that he does ABS and asks respondents to visit a URL; non-web respondents get a mail follow-up, with telephone last. It's important to consider what the survey experience will be online (smartphone vs. PC). Trent Buskirk added that studies have shown 20-25% of web completes are from smartphones, and if surveys are not optimized for them, about 70-85% of smartphone respondents drop out of the survey.
  o Stephen Blumberg added that push-to-web is typically done for cost, rather than for RRs. Ninez Ponce replied that cost is more of a pushback than dropping RRs: surveys saw a drop in 2007 and many questions arose about non-response bias, but after those concerns settled, cost is the major challenge.
  o Paul Lavrakas added that ABS is not rewarding from a cost and data quality standpoint. Mike Brick countered that he had a good experience moving from telephone to mail (NHES), with huge reductions in coverage error and improvement in RRs; data quality was comparable.
    The ABS design cost 75% of the telephone survey.
  o Tom Belin added that it might help to start with studying non-response bias. He noted that the BHC results could help lay a foundation.
  o Paul Lavrakas added that non-response bias differs based on RRs; a reference is Bob Groves's 2006 POQ article on the relationship of non-response bias and response rate. Mike Brick added that measurement is the main problem in CHIS: changing mode might bring estimate changes from the mode change itself, more so than from nonresponse bias.
  o Afifi asked if non-respondents vs. respondents have been compared. David Grant added that it was done in 2007, led by Sunghee Lee. Differences were not systematic (only one significant difference of all compared).
  o Ninez Ponce added that in CHIS history, the mode has stayed the same, with cell phones being the biggest change.
  o For NHES, Mike Brick added that the estimates did not change when ABS was implemented. For the Angler survey, the estimates changed dramatically (call vs. mail). Mike Brick added that NHES is more comparable to CHIS than Angler is. Kristen Olson added to expect differential effects: shifting from interviewer-administered to self-administered modes will benefit the sensitive questions but may hurt the complex questions. Measurement errors are estimate-specific; there are trade-offs, and it depends on which estimates are the most valued.

Incentive structures and amounts
  o Incentives were briefly mentioned earlier ($2 advance, $10 promised, and differential incentives for non-response). Paul Lavrakas had suggested earlier studying neighborhood characteristics to differentiate the incentives. Paul Lavrakas added that Nielsen did this before, gathering information during the screening and mailing packets based on the call. This could be incorporated into the responsive design/double-sample in the next cycle. Trent Buskirk described how Nielsen used neighborhood data (block-level Census data) to build a propensity score; ABS will help build models to get respondents on the phone. Per Trent Buskirk: http://www.aapor.org/aaporkentico/standards-Ethics/Institutional-Review-Boards/IRB-FAQs-for-Survey-Researchers.aspx
  o Joel Moskowitz asked if an IRB would approve a differential incentive. Matt Jans added that a fixed incentive could work. Stephen Blumberg added that incentive amounts were recently discussed at the PRIM&R meeting. The general view was that incentive amounts are not an ethical issue, as long as the amounts do not rise to a level where one feels coerced to participate and ignore the risks of the research. No incentive is thought to be coercive ("Payment is never coercive": Emily Largent et al., "Misconceptions about coercion and undue influence: reflections on the views of IRB members," Bioethics. 2013 Nov;27(9):500-7).
  o Other incentive options: Bill Cumberland asked if anyone has had any success with lottery tickets (as the incentives themselves). Linda Bourque added that there's a state issue with using the lottery. Paul Lavrakas cautioned that the literature has shown cash works better than gift cards, at least for pre-paid incentives.
Non-response follow-up methods (mode ordering, maximum call attempts before mode switch, incentive change, response or mode prediction at each call)
  o Andy Peytchev added that it's more feasible to predict the likelihood of responding at the next call than the specific call window. He also reminded the group to keep in mind the difference between the mode of contact and the mode of interview.
  o Kristen Olson recommended that web should be a lead-off mode, not a follow-up mode, due to the cost savings: starting with web and going to mail is better for cost than the other way around.
  o Trent Buskirk added that paradata is useful, easy, and free. The screener mode will help predict the follow-up mode and tailor non-response methods.
  o Patricia Lee added that she worked on a population survey in RI (tobacco). It was mail first, then a phone call follow-up, and it increased RRs.
  o Per Kristen Olson, a recent mail + face-to-face mode experiment: http://fmx.sagepub.com/content/26/2/141.abstract
  o Trent Buskirk added that ABS phone append rates can vary from 40-55% depending on geography.
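The call-level prediction idea raised above (predicting the likelihood of responding at the next call from paradata) could be sketched as follows. The features, coefficients, and threshold here are entirely hypothetical, invented for illustration; a real model would be fit to CHIS paradata.

```python
import math

# Hypothetical next-call response-propensity score built from paradata.
# Coefficients below are made up for illustration, not fit to any data.
def next_call_propensity(n_prior_attempts: int,
                         any_prior_contact: bool,
                         screener_by_mail: bool) -> float:
    """Logistic score in (0, 1); higher = more likely to respond next call."""
    logit = (-0.8
             - 0.15 * n_prior_attempts           # each failed attempt lowers odds
             + 1.2 * (1 if any_prior_contact else 0)
             + 0.5 * (1 if screener_by_mail else 0))
    return 1.0 / (1.0 + math.exp(-logit))

# A case with 4 failed attempts but prior contact; low-scoring cases might
# be routed to a cheaper mode or a nonresponse subsample instead of more calls.
score = next_call_propensity(n_prior_attempts=4,
                             any_prior_contact=True,
                             screener_by_mail=False)
print(round(score, 3))
```

Such scores could feed the responsive-design decisions discussed for 2015-2016, e.g., switching mode or changing the incentive once a case's propensity drops below a chosen cutoff.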

Decisions and Themes for Follow-up

This section is a synthesis of discussion points, themes, and actionable ideas created after the meeting, with some reference to comments made during the meeting.

Group A: Immediately Actionable (2015-2016 Cycle)
1. Adding county to the screener
   a. We will discuss and plan with RTI
2. Sub-sampling for non-response and increasing incentives
   a. This is part of RTI's data collection plans for 2015-2016
   b. Explore incentive changes tailored to characteristics of the respondent or area rather than a one-size-fits-all approach
3. Nonresponse rate/bias studies
   a. Matt Jans is leading this for the 3MC multi-cultural survey methods conference and book, focusing on community/neighborhood effects and the influence of cultural norms on nonresponse.
   b. Considerations about nonresponse bias analysis
      i. Can analyses be done continuously throughout data collection, during discrete stages of data collection, or only after data collection is completed?
         1. Estimation during data collection could allow for responsive design and survey completion goals based on nonresponse rate/bias estimates instead of cost, number of completed interviews, or date.
      ii. Stephen Blumberg added a point on nonresponse bias: response rate declines since 2007 have been due more to the extended interview than to the screening interview. Lack of participation in the extended interview seems more related to survey content or interviewer practices than to contact.
4. Flexibility and adaptive design is a theme in this upcoming cycle.
   a. Given concerns about changes in methods leading to changes in estimates, it will be helpful to be able to isolate the reason for a change in an active survey.
5. Targeting African-American respondents
   a. Explore options with auxiliary data and frame appends
   b. Use others' lists for CHIS planning, especially for oversampled counties (Census, MSG)
   c. See Josh Pasek's appended data article in POQ or JSSM (mentioned by Mike Brick)

Group B: Actionable in Future, but Need Some Research/Planning
1. Mode ordering
   a. Use demographic data to target linguistically isolated areas for the screener and to choose which CHIS language interview to administer
2. Measurement error
   a. Mike Brick mentioned that CHIS needs to think more about measurement error in the future, so we should start doing this (both in conjunction with a mode switch and independent of it). We have a good handle on coverage and nonresponse in CHIS, but do much less with measurement error in general, other than what we do to design it away in question development and pretesting.