Papers and Chapters 2014-03-EN, December 2014

EVALUATION AS A TOOL FOR DECISION MAKING: A REVIEW OF SOME MAIN CHALLENGES

Marie-Hélène Boily, Hervé Sokoudjou 1

1. Issue

Results-based management has established program evaluation as a fundamental component of performance measurement and monitoring. Through its place in the decision-making chain, evaluation contributes to the development of organizations and to the (re)definition of their general and specific orientations.

While the need to assess the performance of public policies, programs and projects is now widely accepted, evaluation's management function is not yet unanimously recognized. This is due to a number of barriers that undermine decision makers' perception of its audibility, objectivity and credibility, that is, its relevance as a management tool. These elements are among the challenges facing the discipline, which must be addressed in order to increase its usefulness to organizations.

2. Lessons learnt

The 2012 Symposium of the Quebec Programme Evaluation Society (SQEP, acronym in French) highlighted some of the challenges encountered by evaluation as a tool for decision making. These challenges are manifold, owing to the complex organizational, social and political environments in which evaluations are conducted, and they concern various areas such as leadership, participation, innovation, accessibility and the relationship to credible judgment. Let us consider each of these elements in turn.

1 The ideas expressed in this paper are those of the authors and do not necessarily reflect those of IDEA. They are presented solely for the purpose of sharing ideas and practices.
a. Leadership: Like any scientific process, conducting quality evaluations requires several prerequisites: a rigorous methodology 2, a good understanding of institutional mechanisms, a keen sensitivity to context 3, a constant awareness of the evaluation's ultimate purpose (namely its use) and, finally, the leadership of the evaluator (Carden, 2012). The importance of this last element lies in the evaluator's ability to win stakeholders' support for the modalities of the exercise (namely, the relevance of the criteria and tools used to carry out the evaluation). Given the multiple issues and pressures (both internal and external to the participating organizations) that surround the conduct of an evaluation, the evaluator's leadership is the guarantee of the independence, impartiality and objectivity of the approach. Beyond the foregoing parameters, the value of leadership in achieving quality evaluations also stems from its close affinity with another key element: participation.

b. Participation: The success of an evaluation strategy necessarily involves the active participation of all stakeholders, which is the token of their acceptance and appropriation of the future results 4. Such collaboration is far from automatic. It stems from a genuine desire on the evaluator's part to associate stakeholders with the implementation of the approach, an attitude that favors key practices such as regular consultation and the validation of the criteria and tools selected for the exercise. This participatory approach, which relies mainly on an iterative, argumentation-centered process, is increasingly encouraged. According to Daigneault (2012), this trend stems from: (1) the idea that stakeholder participation increases the use of evaluation findings (a view supported by numerous studies showing a positive correlation between the two factors), and (2) the growing demand for studies based more on qualitative methods. Greater participation by organizations raises the challenge of striking a balance between participation and the independence of the evaluation, along with the challenge of the innovation that should regularly be applied to evaluation instruments and methods.

2 Dubois (2012) extends the scientific rigor underpinning quality evaluations (which are more likely to feed into decision making) to the definition of logic models and response models, which are familiar exercises in performance measurement strategies.

3 It is worth noting that the influence of context is even more significant: Contandriopoulos et al. (2010) have also identified it as an essential catalyst of the use of evaluations.

4 Brousselle (2012) links this element to three key factors: the context, the quality of evaluation models and the role of the evaluator. From this perspective, stakeholder involvement appears to be an essential parameter because of its close affinity with these three concepts.
c. Innovation: Innovation is one of the means available to the evaluator for getting closer to the concerns of project or program managers (Lobé, 2012). This challenge requires regular adjustment of the evaluator's methods and continuous adaptation of his tools. The regular reinvention of these devices demonstrates the flexibility and adaptability inherent in evaluation. It also reflects the effort the evaluator must make to develop approaches that integrate the expectations of the actors involved in organizations' decision-making chains. The pursuit of innovation must, however, respect the decisive criterion of accessibility.

d. Accessibility: The accessibility discussed here concerns the tools and results of the evaluation, and is understood as their capacity to be easily understood by stakeholders. This feature requires that the instruments and conclusions of the analysis be presented in language that is clear to all (actors and decision makers alike). The value of this parameter lies in its ability to attract (and retain) the attention of the various agents involved in the deployment of the evaluation, with a view to securing their commitment to the process. The accessibility criterion reflects the evaluator's inclination to involve all stakeholders, through efforts to adapt the evaluation's outputs to their level of understanding; it therefore calls for strong communication skills. The availability of evaluation results is the sine qua non of their viability. As such, it plays a key role in how all the actors involved assess their credibility.

e. The relationship to credible judgment: This factor presents the evaluator with the challenge of producing consensus on the final results, founded on a quality evaluation process. The credibility in question here refers to "a judgment (...) both scientifically valid and acceptable in the eyes of stakeholders" (Hurteau, 2012). The objective is to produce evaluation outputs likely to win the intellectual assent of their sponsors, i.e. to increase their potential for acceptance. Beyond the credibility factor, this element also raises the question of the feasibility of the evaluation's recommendations, that is, their ability to be implemented by program managers, as well as the quality of the information on which the evaluation is based. The relationship to credible judgment therefore directly affects an essential condition of the usefulness of evaluation findings, and establishes itself as a determining factor of evaluation's value as a decision-support tool.

Two major findings emerge from the points examined above: (1) they are highly correlated and potentially vectors of a ripple effect, and (2) they can be regarded as best practices with which any evaluator should become acquainted in order to produce quality outputs and, therefore, increase his contribution to the decision-making process of organizations.
Note: For comments on this case study, please contact idea@idea-international.org. This case study may be reproduced provided the sources are indicated.
BIBLIOGRAPHY:

CARDEN, Fred, «Parvenir à l'utilisation : Évaluation, leadership, décision», Communication presented at the Symposium of the Quebec Programme Evaluation Society (SQEP). Theme of the Symposium: Evaluation for decision making, Montreal, October 2012.

CONTANDRIOPOULOS, Damien, et al. (2010), "Knowledge exchange processes in organizations and policy arenas: an analytical systematic review of the literature", The Milbank Quarterly, 88(4): 444-482, quoted by BROUSSELLE, Astrid, «De l'utilisation des résultats d'évaluation : Proposition pour une réinterprétation du rôle du contexte, des modèles évaluatifs et de l'évaluateur», Communication at the Symposium of the Quebec Programme Evaluation Society (quoted above).

DAIGNEAULT, Pierre-Marc, «La participation favorise-t-elle l'utilisation de l'évaluation ? Un examen systématique de la littérature quantitative», Communication at the Symposium of the Quebec Programme Evaluation Society (quoted above).

DETHIERS, Jean-Louis, «Se faire entendre, se faire comprendre, se rendre utile, trois défis de l'évaluateur», Communication at the Symposium of the Quebec Programme Evaluation Society (quoted above).

DUBOIS, Nathalie, «La coconstruction d'une compréhension partagée de l'intervention : Comment les travaux d'évaluation peuvent nourrir la prise de décision ?», Communication at the Symposium of the Quebec Programme Evaluation Society (quoted above).

HURTEAU, Marthe, «L'évaluation axée sur le jugement crédible : Pour une plus grande utilisation des décisions qui en découlent», Communication at the Symposium of the Quebec Programme Evaluation Society (quoted above).

LOBÉ, Christine, «Innover pour répondre aux besoins du décideur : Exemple d'outil d'aide à la décision dans le cadre d'une intervention en santé», Communication at the Symposium of the Quebec Programme Evaluation Society (quoted above).