Evaluating Public Information and Advocacy Campaigns



Glenn O'Neil
Evaluation consultant, Owl RE, Switzerland
Lecturer, International University in Geneva, Switzerland
PhD candidate, Methodology Institute, London School of Economics and Political Science, UK
glenn.oneil@gmail.com, goneil@iun.ch, G.A.O'Neil@lse.ac.uk

Abstract

Increasingly, non-governmental organisations and international organisations use public information and advocacy campaigns to support their goals. Existing methodologies are rarely applied to evaluate these campaigns. However, meaningful evaluation of campaigns is possible by taking into account their specific nature while meeting the minimum requirements of evaluation. This paper discusses lessons learnt in evaluating campaigns and the particular challenges faced in assessing international campaigns. Although a standard methodology has yet to emerge, this paper describes the desired outcomes that many campaigns share and the evaluation methods that have been used successfully to assess them.

Keywords: advocacy, campaigns, communications

Introduction

Public information and advocacy campaigns are increasingly used by non-governmental organisations and international aid organisations to support their goals: Doctors without Borders (MSF) campaigns for access to essential medicines; Oxfam International campaigns on trade issues; the World Health Organisation campaigns on tobacco control; and the International Committee of the Red Cross campaigns for a ban on cluster munitions. The two types of campaign use similar communication methods but differ in the goals they pursue. A public information campaign classically seeks to change the knowledge, attitudes or behaviours of a defined target audience; an example would be a campaign to educate people about the use of clean water. An advocacy or "public will" campaign classically seeks to mobilise concerned audiences and organisations to push for changes in the activities, policies or practices of governments and companies; an example would be a campaign to pressure governments to allocate more funds to development aid [1]. In this paper, the two types are treated interchangeably, as they share common traits when it comes to evaluation.

In theory, methodologies exist for evaluating campaigns, ranging from the scientific rigour of a true experimental design, through quasi-experimental designs, to simply examining trend data and confirming that the campaign activities were carried out and that the desired change occurred [2]. In reality, existing methodologies are rarely applied and little evaluation is undertaken. A number of reasons are put forward for this absence: the impracticality and complexity of the methodology required (particularly for experimental designs); the vagueness of campaign design, which can make evaluation near impossible; the lack of resources and know-how for evaluation; and the absence of an evaluation culture amongst campaign organisers [3]. Attempts that are made to evaluate campaigns in the area of development aid are often superficial and focus on campaign "outputs": the production and distribution of campaign material and consequent pick-up by the media. Although outputs are of interest, a focus on campaign outcomes is more significant for evaluation, as discussed further below.

Lessons learnt in evaluating campaigns

Nevertheless, the experience of this author, drawn from working directly with organisations to evaluate campaigns and from examining published campaign evaluations [4], indicates that meaningful evaluation of campaigns is possible. Evaluation methodology needs to be adapted to the specific nature of campaigns while still meeting the minimum requirements of evaluation (such as considering other factors that could also explain the changes seen). Some of the lessons learnt by this author in evaluating campaigns are as follows:

Outlining the pathway from aim to action to change: Many campaigns are weak in detailing the logical model (or "theory of change") through which change is supposed to occur. Detailing the aims of a campaign, its actions, the desired outcomes, the indicators and the data to be gathered pushes campaign organisers at an early stage to consider key questions such as "can we really change X with action Y?", "what do we want to achieve?" and "how can we measure our success?". Intermediate variables (relays in the causal chain) and external factors that could be rival explanations for any change observed are also useful additions. A simplified logical model for a fictional campaign follows [5]:

Example: Fictional Campaign Simplified Logical Model

Organisational goal: Fight corruption globally in the health sector.

Campaign objective: By 2008, raise awareness globally of corruption in medical facilities amongst members, partners and health staff; governments take a stand on the issue.

Communication objectives:
- By end 2008, at least 50% of members, partners and health staff in 20 key countries are aware of the issue of corruption in medical facilities.
- By end 2008, at least 10 Ministries of Health publicly take a stand to combat corruption in medical facilities.

Communication activities:
- Create a website on corruption issues.
- Hold five press conferences in key regions.
- Create training packs for members/partners.
- Establish a coalition with peak bodies.
- Conduct 10 meetings with health ministries.
- Etc.

Outcome and evaluation indicators:
- Increase in visibility: number of items published in the media.
- Change to knowledge (awareness): level of awareness about corruption amongst members, partners and health staff.
- Change to behaviour (individual): number of actions taken by members and partners to endorse the campaign.
- Change to behaviour (government): number of governments that publicly take a stand to combat corruption.

Evaluation methods:
- Visibility: media monitoring of relevant media.
- Knowledge (awareness): survey of a representative sample of members, partners and health officials in 20 key countries.
- Behaviour (individual): tracking mechanism to record the number of members who sign up for the campaign and take website actions (sign the petition and refer the campaign to a friend), and the number of partners that endorse and use campaign material.
- Behaviour (government): tracking mechanism to record the number of government public statements on corruption (information sourced through the media and reports from members and partners).
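To make such a pathway concrete and checkable, the model can be recorded in a structured form. The sketch below is a minimal, hypothetical illustration in Python (the class and field names are this author's invention, not an established format); it encodes part of the fictional campaign above so that each outcome is explicitly tied to a measure, a method and a target.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Indicator:
    """One desired outcome, tied to its measure, method and target."""
    outcome: str   # e.g. "Change to knowledge (awareness)"
    measure: str   # what is counted or assessed
    method: str    # how the data will be gathered
    target: str    # the interim or final target, stated up front

@dataclass
class LogicalModel:
    goal: str
    objectives: List[str]
    activities: List[str]
    indicators: List[Indicator] = field(default_factory=list)

# Hypothetical encoding of (part of) the fictional campaign above.
campaign = LogicalModel(
    goal="Fight corruption globally in the health sector",
    objectives=[
        "By end 2008, at least 50% of members, partners and health staff "
        "in 20 key countries are aware of corruption in medical facilities",
        "By end 2008, at least 10 Ministries of Health publicly take a stand",
    ],
    activities=[
        "Create website on corruption issues",
        "Hold five press conferences in key regions",
        "Conduct 10 meetings with health ministries",
    ],
    indicators=[
        Indicator("Change to knowledge (awareness)",
                  "Level of awareness amongst members, partners and health staff",
                  "Survey of a representative sample in 20 key countries",
                  ">= 50% aware by end 2008"),
        Indicator("Change to behaviour (government)",
                  "Number of governments publicly taking a stand",
                  "Tracking of public statements via media and partner reports",
                  ">= 10 ministries by end 2008"),
    ],
)

# Gaps become visible early: every indicator must name a method and a target.
for ind in campaign.indicators:
    print(f"{ind.outcome}: {ind.method} -> {ind.target}")
```

Recording the model in this way makes omissions visible at the design stage: an objective without an indicator, or an indicator without a data source, shows up immediately.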

Building evidence of change: For the vast majority of campaigns, undertaking evaluation in line with scientific standards that demonstrate causality is simply not possible [6]. But this is rarely asked of campaign evaluation. Organisations (and the donors that support them) are interested in seeing a series of pieces of evidence that, together, indicate that change has occurred and what part of this change can realistically be attributed to the campaign. For example, determining the influence of a campaign in changing the practices of a given audience may involve interviewing audience members in addition to directly observing their actions on the ground, and then reaching some conclusions as to the influence or not of campaign elements on those actions.

Focusing on interim targets: Many advocacy campaigns have ambitious aims that are the equivalent of long-term impacts, such as "increase the percentage of GDP allocated to development aid". Consequently, campaigns can be harshly judged as failing if they do not meet these ambitious aims quickly. What is required is the identification of interim targets that are indicators of progress towards achieving campaign aims. In the example above, interim targets could be supportive statements made by government officials or related policy developments.

Focusing on outcomes: As mentioned above, many organisations today focus on the more superficial outputs of their campaigns: attendance levels at events, visits to websites or number of mentions in the media. Instead of jumping to try to measure impact (broader and longer-term change), a more realistic but still significant focus is on outcomes: what changes were achieved by the campaign in a shorter time frame. In public information campaigns, outcomes are typically changes to the knowledge, attitudes and behaviour of audiences.

Use of proxy or existing data: Although this author advocates focusing on outcomes and interim targets, he is well aware that resource limitations often mean that it is simply not possible for campaign organisers to measure changes to the knowledge, attitudes and behaviour of audiences using standard tools such as questionnaires and interviews. One solution is the use of proxy or existing data as indicators of change, such as the analysis of reports, policy/legislation schedules, records and media/web coverage. Often the absence of interim targets results in available data not being recorded and considered (such as the number of organisations that support a campaign). Examining web and media coverage can never replace directly measuring public sentiment or support, but it can be a useful proxy measure that is relevant to a campaign evaluation. For some campaigns, however, it may not be appropriate: campaign activities such as person-to-person communication, coalition-building and lobbying do not aim to generate media coverage, rendering it an irrelevant measure.
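As a minimal illustration of such a tracking mechanism, the Python sketch below tallies recorded campaign actions into the kind of proxy indicators described above. The event log, action names and threshold are entirely invented; in a real campaign the records would come from a sign-up database, web analytics or manually kept field reports.

```python
from collections import Counter
from datetime import date

# Hypothetical event log, as a campaign website or field office might keep it.
# Each record: (date, action, actor).
events = [
    (date(2008, 3, 1), "petition_signed", "member"),
    (date(2008, 3, 2), "campaign_referred", "member"),
    (date(2008, 3, 5), "material_endorsed", "partner"),
    (date(2008, 4, 9), "public_statement", "government"),
]

# Tallies per action type serve as proxy indicators of behaviour change
# (individual and government) in the logical model.
tally = Counter(action for _, action, _ in events)
for action, count in tally.most_common():
    print(f"{action}: {count}")

# Interim target check against a hypothetical threshold.
print("Government target met:", tally["public_statement"] >= 10)
```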

Evaluation outcomes and methods

While not wanting to prescribe a definitive methodology for evaluating campaigns, this author notes that many campaigns share similar desired outcomes, which can be matched to appropriate evaluation methods that have been used successfully in past evaluations [7]:

Outcome: Campaign activity implementation*
Evaluation methods: Event/activity tracking and statistics: all records that indicate activities have been undertaken, such as the number of events held, advertisements placed, publications produced, press releases issued, radio interviews undertaken, etc.

Outcome: Media and online visibility
Evaluation methods: Media monitoring (software or manual), web metrics software, media distribution statistics and content analysis (software or manual): counting mentions in media and/or websites, visitor statistics for campaign website(s), and content analysis of mentions (tone, placement, influence, etc.).

Outcome: Change to knowledge, attitudes and behaviour
Evaluation methods: Surveys, interviews and focus groups with target audiences, tracking mechanisms and web metrics software: preferably canvassing target audiences before, during and after a campaign to assess whether the desired changes have occurred. Tracking mechanisms (such as registering the number of phone enquiries, the amount of donations, medical appointments taken, etc.) and web metrics (counting the number of people who register for a campaign, refer the campaign to a friend, send an email of support, etc.) can be useful proxy or direct measures of behaviour change.

Outcome: Change to policies, activities and practices of targeted institutions
Evaluation methods: Case studies, observation studies, tracking mechanisms and monitoring of the support and changes of targeted institutions (public and private sector): case studies are useful for exploring and detailing correlations between campaign activities and policy developments. Observation studies of institutions by field staff/volunteers can complement policy tracking (checking whether policies are enacted in practice). Tracking mechanisms can monitor statements by officials, changes to policy, legislation, etc.

* Campaign activity implementation is strictly speaking an output of a campaign rather than an outcome, but it is listed here as it is often of interest to campaign organisers.
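To illustrate the media and online visibility outcome above: a content analysis ultimately reduces to coded mentions tallied along dimensions such as language and tone. The sketch below uses invented records (the outlets, codes and values are hypothetical); note also the caveat in the next section about the coverage gaps of automated monitoring tools.

```python
from collections import Counter

# Hypothetical monitoring records: (outlet, language, tone). In practice
# these come from a monitoring service or manual coding of clippings.
mentions = [
    ("Outlet A", "en", "positive"),
    ("Outlet B", "ar", "neutral"),
    ("Outlet C", "en", "positive"),
    ("Outlet D", "ru", "negative"),
]

by_language = Counter(lang for _, lang, _ in mentions)
by_tone = Counter(tone for _, _, tone in mentions)

print("Mentions by language:", dict(by_language))
print("Mentions by tone:", dict(by_tone))
# A heavy skew towards one language here may reflect the monitoring
# tool's coverage rather than the campaign's actual reach.
```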

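For the knowledge, attitudes and behaviour outcome, the core computation is a before-and-after comparison of the target audience. The sketch below uses invented survey figures and a simple two-proportion comparison (a technique this author adds for illustration; the paper itself prescribes no particular test). It assumes independent pre- and post-campaign samples and, as stressed above, says nothing by itself about rival explanations for the change.

```python
from math import erf, sqrt

def awareness_change(pre_aware, pre_n, post_aware, post_n):
    """Change in awareness rate with an approximate two-sided p-value
    from a two-proportion z-test (normal approximation)."""
    p1, p2 = pre_aware / pre_n, post_aware / post_n
    pooled = (pre_aware + post_aware) / (pre_n + post_n)
    se = sqrt(pooled * (1 - pooled) * (1 / pre_n + 1 / post_n))
    z = (p2 - p1) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return p2 - p1, p_value

# Invented figures: 180 of 600 respondents aware before the campaign,
# 310 of 620 aware afterwards.
change, p = awareness_change(180, 600, 310, 620)
print(f"Awareness rose by {change:.1%} (p = {p:.4f})")
```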
Challenges in evaluating international campaigns

Finally, there are particular challenges in evaluating international campaigns. In addition to the many challenges of undertaking international evaluation in general, the following have been observed:

Advocacy campaigns often aim to change the practices and policies of government agencies. In many countries there is little transparency in the evolution of policies and practices, so it is difficult to know whether a campaign is having an impact. This is particularly evident in campaigns that address issues considered sensitive by governments: for example, the application of human rights standards by police forces or the treatment of minorities by immigration officials.

Evaluation of the outcomes of public information campaigns often requires direct canvassing of audiences through standard research methods (e.g. surveys, interviews, focus groups). In some contexts, access to audiences may be difficult for reasons ranging from security to cultural issues. However, local research agencies are increasingly able to undertake such research competently; this author has worked with competent research generated by local agencies in countries ranging from Armenia to the Congo.

Media monitoring remains an often used (and misused) tool to measure campaign results. Automated media monitoring services must be used with caution, as they are often western-centred and search only for mentions in internet versions of print media. Consequently, media important to a campaign (e.g. Arabic, Russian or local media) may not be covered by such automated tools, which also generally lack radio and TV monitoring.

Campaign evaluation often seeks to identify which communication channel was the most effective in conveying a message. The use of channels (from radio to person-to-person contact) varies widely between countries, and in-depth interviews are an appropriate method (among others) to probe with audiences how they learnt of campaign messages and which channels were the most used and convincing [8].

Campaign organisers have worked for many years in the absence of an evaluation culture, and the level of interest in and focus on evaluation varies across countries and organisations. It is therefore important that these professionals be involved in evaluation activities often and as early as possible in the campaign lifespan. Involvement can range from maintaining media monitoring on the campaign to working with external evaluators to devise the most appropriate evaluation methods.

Conclusion

Evaluation of public information and advocacy campaigns will continue to evolve and become more commonplace for a number of reasons: the move towards results-based management in organisations, which obliges campaigns to set measurable objectives [9]; the increasing interest of donors in evaluating all aspects of development work; the increasing professionalism of campaign organisers; and simply the growth in the number of campaigns undertaken.

Notes

[1] These definitions are further expanded upon in Guidelines for Evaluating Nonprofit Communications Efforts, Communications Consortium Media Center (2004), p. 6.
[2] For further details on evaluation methodologies proposed for communication campaigns and activities, refer to Dorfman, Ervice & Woodruff (2002), Lindenmann (2002), Broom & Dozier (1990) and Watson & Noble (2007).
[3] For a further discussion of the lack of evaluation in communications in general, see Macnamara (2006).
[4] A collection of published campaign evaluation reports can be found on the Communication Initiative website: http://www.comminit.com/en/sections/terms/36%2c11/250%2c253%2c256/q
[5] A version of this fictional example was originally published in O'Neil (2007). Further examples of logical models for actual campaigns can be found in Coffman (2003).
[6] For a further discussion of this issue, see Kennedy & Abbatangelo (2004), p. 10.
[7] These outcomes and methods are adapted from Guidelines for Evaluating Nonprofit Communications Efforts, Communications Consortium Media Center (2004), p. 13, and Cabanera-Verzosa (2003), p. 49, and are based on the experience of the author in evaluating campaigns.
[8] For an informative discussion on identifying the most effective communication channels of a campaign, see Kennedy & Abbatangelo (2004), p. 17.
[9] The growth and consequent challenges of results-based management in organisations are described well in Mayne (2007).

References

Broom, G. & Dozier, D. (1990). Using Research in Public Relations. Englewood Cliffs, NJ: Prentice Hall.

Cabanera-Verzosa, C. (2003). Strategic Communication for Development Projects. World Bank. Retrieved 28 August 2008 from: http://siteresources.worldbank.org/extdevcommeng/resources/toolkitwebjan2004.pdf

Coffman, J. (2003). Lessons in evaluating communications campaigns: Five case studies. Harvard Family Research Project. Retrieved 28 August 2008 from: http://www.mediaevaluationproject.org/hfrp2.pdf

Dorfman, L., Ervice, J. & Woodruff, K. (2002). Voices for change: A taxonomy of public communications campaigns and their evaluation challenges. Berkeley, CA: Berkeley Media Studies Group. Retrieved 28 August 2008 from: http://www.mediaevaluationproject.org/b2.pdf

Communications Consortium Media Center (2004). Guidelines for Evaluating Nonprofit Communications Efforts. Retrieved 28 August 2008 from: http://www.mediaevaluationproject.org/paper5.pdf

Kennedy, G. & Abbatangelo, J. (2004). Guidance for Evaluating Mass Communication Health Initiatives: Summary of an Expert Panel Discussion. Centers for Disease Control and Prevention. Retrieved 28 August 2008 from: http://www.healthcommunication.net/evaluating_mass_comm.pdf

Lindenmann, W. (2002). Guidelines for Measuring the Effectiveness of PR Programs and Activities. Institute for PR. Retrieved 28 August 2008 from: http://www.instituteforpr.org/files/uploads/2002_measuringprograms.pdf

Macnamara, J. (2006). Two-tier evaluation can help corporate communicators gain management support. Prism (online PR journal), Vol. 4, Issue 2. Retrieved 28 August 2008 from: http://praxis.massey.ac.nz/fileadmin/praxis/files/journal_files/evaluation_issue/commentary_MACNAMARA.pdf

Mayne, J. (2007). Challenges and Lessons in Implementing Results-Based Management. Evaluation, Vol. 13, No. 1.

O'Neil, G. (2007). A Visual Pathway: Approaches and challenges in evaluation of IO/NGO communication campaigns. Communication Director, Vol. 2.

Watson, T. & Noble, P. (2005). Evaluating Public Relations: A Basic Guide to Public Relations Planning, Research & Evaluation (Second edition). London: Kogan Page.