The SO WHAT factor... Impact Evaluation Strategies for Teacher Educators
Purpose

This paper arose out of the Training and Development Agency for Schools (TDA) information and communication technology (ICT) in initial teacher training (ITT) impact evaluation project 2008/09. It is intended to provide an accessible resource, with practical ideas and models for evaluating the impact of a technology intervention from its inception to completion. One of the key findings of the evaluation was that one size definitely doesn't fit all when selecting the framework or model of evaluation. We hope the ideas below will be of use to those implementing a wide range of innovative projects who require a customisable evaluation that can encompass the breadth of creative work being carried out in ITT with ICT. The main evaluation methods used in the ICT in ITT evaluation can be seen in the main report at (insert web link), where we also put forward an embryonic model for determining project success factors.
Contents

Section one: Evaluation - what is it and why do it?
Section two: Guiding principles for evaluation
Section three: Major frameworks and evaluation models: a. Kirkpatrick's evaluation of training; b. Guskey's evaluation of staff development; c. Logic frame model; d. Self-review framework
Section four: Evaluation tools: tried and tested models for evaluating training and ICT programmes/projects/interventions
Section five: Sources of further information: from the UK and elsewhere on evaluating ICT. Includes sites that offer downloads of evaluation guides
Section six: An example of the use of an evaluation framework
Acknowledgments
Section one: Evaluation - what is it and why do it?

People use different terminology when they talk about evaluation, and they hold different perspectives on its nature and purpose. Bennett (2003) notes that, while there has been ongoing debate for several decades over the nature and purpose of evaluation, evaluation forms an important area of research in education (p15). Easterby-Smith (1986) states his own three reasons for evaluating more succinctly: proving, improving, and learning (p13). This document aims to make the purpose of, and approaches to, evaluation clearer by concentrating on the evaluation of ICT in education.

Definitions of evaluation abound: Bennett (2003) offers 13 without concluding an overall definition. One biased towards education evaluation is from Nevo (1995), who suggests that it is 'an act of collecting systematic information regarding the nature and quality of educational objects' (p11), which suggests that it is a combination of description and judgement. The UK Evaluation Society (1994) also highlights the collection of information, saying evaluation is 'an in-depth study which takes place at a discrete point in time, and in which recognised research procedures are used in a systematic and analytically defensible fashion to form a judgement on the value of an intervention'.

How such collection of information or research is organised may direct us to Scriven's (1967) idea of two forms of evaluation: formative evaluation, which supports the development of your project, and summative evaluation, for assessing the final impact of a project. Goodall et al (2005) support this: 'Effective evaluation of CPD will usually need to serve two main purposes: summative evaluation (does the programme/activity improve outcomes?) and formative assessment (how can the programme/activity be improved?)' (p37).
They go on to be critical of CPD evaluation practice and offer their own model of evaluation, the route map (found in the examples of evaluation practice in this report at section 5k). In considering ICT in education, the formative function will include the evaluation of instructional materials and pedagogic processes. This may relate to either the development or use of materials and the delivery of learning. A definition that appears relevant to ICT issues in education is from Stern (1988), who suggests: 'Evaluation is any activity that, throughout the planning and delivery of innovative programmes, enables those involved to learn and make judgements about the starting assumptions, implementation processes, and outcomes of the innovation concerned.' Guskey (1998) offers his own definition (adapted from the Joint Committee on Standards for Educational Evaluation, 1994): 'Evaluation is the systematic investigation of merit or worth' (p1), proposing that it is a structured, measured and measurable approach. Chelimsky (1997) sums up why we evaluate: 'We look to evaluation as an aid to strengthen our practice, organisation and programmes' (p101). To do this, all critics agree that the reason or reasons for the evaluation should be stated before any evaluation takes place. This is reinforced by Guskey (2002), who reminds us that good evaluation is built in from the outset of the professional development programme or activity, not added on at the end. The Research Councils UK (2005) emphasise this too, confirming that evaluation is a process that takes place before, during and after a project. It includes looking at the quality of the content, the delivery process, and the impact of the project or programme on the audience(s). Some evaluation frameworks incorporate a model planning process for a project as well as an evaluation framework for the project, eg, logic frame models.
Guskey (2002) helps to explain 'Why evaluate?': 'The processes and procedures involved in evaluation present an endless list of challenges that range from very simple to extremely complex. Well-designed evaluations are valuable learning tools that serve multiple audiences. They inform us about the effectiveness of current policies or practices, and guide the content, form, and structure of future endeavours. Poorly designed evaluations, on the other hand, waste time, energy and other valuable resources. Good evaluations do not have to be costly, nor do they require sophisticated technical skills. What they require is the ability to ask good questions and a basic understanding about how to find valid answers. Good evaluations provide information that is sound, useful, and sufficiently reliable to use in making thoughtful and responsible decisions about projects, programs, and policies' (p1).
Section two: Guiding principles for evaluation

In carrying out evaluations, participants should decide why and how they will carry them out. Drawing on the experience of CeDARE, Hadfield (2008) proposes five sets of principles that participants should consider for any evaluation.

A. Principles for identifying the focus and purpose of evaluation. Evaluations should:
1. cover the four key levels of access and participation, participant learning, participant behaviour, and organisational impact
2. have clear foci that are at least in part co-constructed with participants and address their needs as well as those of providers
3. be directed towards outcomes which can be communicated to and used by key stakeholders within the theme, and
4. balance the amount of effort to conduct them with the potential benefit of their outcomes.

B. Principles for building on what is already known. Evaluations should:
1. have convincing arrangements for accessing and building upon existing evidence and knowledge of effective practice, and
2. wherever possible use existing frameworks and tools that are already live within the system.

C. Principles for gathering evidence. Evaluations should:
1. try as far as possible to reuse and/or increase use of relevant evidence that has already been collected
2. ensure, as far as possible, that the process of collecting any new evidence is a learning experience for those involved
3. have clear strategies for triangulation, by collecting different sorts of evidence from different groups in more than one context, and
4. follow recognised ethical guidelines for both collection and storage.

D. Principles for analysing and interpreting. Evaluations should:
1. analyse existing data before collecting additional forms
2. use or adapt existing frameworks if they are well recognised and regarded
3. balance a search for consistent themes with contradictory messages and unexpected outcomes, and
4. include practical arrangements for checking interpretations and summaries.

E. Principles for communication and feedback. Evaluations should:
1. report back in forms and ways that are accessible and appropriate to key audiences
2. where possible, use short timely feedback loops rather than rely on summative feedback, and
3. generate a short summary of key learning and impact that can be fed to others.
Section three: Major frameworks and evaluation models

This section (bearing in mind the principles above) identifies some major frameworks for evaluation and provides links to approaches and models of practice in evaluation for use in a variety of situations. It draws on methods of practice from the research by CeDARE (2009) ICT in ITT survey analysis report and on selected examples from the literature and the internet.

a. Kirkpatrick's evaluation of training model

The evaluation research that provided the stimulus for this paper used evaluation models developed by Kirkpatrick for evaluating training. This also included approaches for impact evaluation based on the work of Hooper and Rieber (1995) and Fisher (2006) for applying this evaluation to ICT development and impact on trainees and trainers (details on the Kirkpatrick evaluation model are found in section 4). Kirkpatrick developed his four-step model for the evaluation of training and development in business organisations and, according to this model, evaluation should begin at level one and then, as time and budget allow, move sequentially through levels two, three and four. Each successive level represents a more precise measure of the effectiveness of the training programme, but at the same time requires a more rigorous and time-consuming analysis. The model consists of four stages, originally described as steps but since 1996 considered as levels, and is applicable to all forms of programme evaluation, including ICT in ITT.

Level one: Reactions - what the participants in the programme felt about the project/programme, normally measured by the use of reaction questionnaires based upon their perceptions. Did they like it? Was the material relevant to their work? A tool such as a 'happy sheet' is often utilised at this level.
Level one evaluation is viewed by Kirkpatrick as the minimum requirement, providing some information for the improvement of the programme.

Level two: Learning - this moves the evaluation on to assessing the changes in knowledge, skills or attitude with respect to the programme/project objectives. Measurement at this level is more difficult, and formal or informal testing or surveying is often used, preferably pre- and post-programme.

Level three: Behaviour - evaluating at this level attempts to answer the question: are the newly acquired skills, knowledge or attitude being used in the everyday environment of the learner? Measuring at this level is difficult, as it is often not easy to predict when the change of behaviour will occur; important decisions may therefore have to be made as to when to evaluate, how often to evaluate and how to go about the evaluation. In the ICT in ITT project, questionnaires to determine changes in practice were used, with questions based on a modified e-maturity scale from the work of Hooper and Rieber (1995).

Level four: Results - this level seeks to evaluate the success of the programme in terms of results for the organisation, usually stated as improvements in quality. Determining the improvements in quality of practice is probably the most difficult aspect of this evaluation framework. (Summary adapted from Tamkin, Yarnall and Kerrin [2002].)

Arguments for the use of this model

The four-level model can facilitate professional development evaluations because it describes how evaluation can be conducted and how it can be useful at each level. There are many examples of its use worldwide, and it is practical and simple to use.
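As a small, purely illustrative aid (the data structure and function names below are our own, not Kirkpatrick's), the four levels and their typical instruments, as summarised above, could be held in a simple lookup that builds a sequential evaluation plan:

```python
# Hypothetical sketch: Kirkpatrick's four levels, guiding questions and
# typical instruments as summarised above. Structure is illustrative only.

KIRKPATRICK_LEVELS = {
    1: ("Reactions", "Did participants like it? Was it relevant to their work?",
        "reaction questionnaire ('happy sheet')"),
    2: ("Learning", "Did knowledge, skills or attitudes change?",
        "pre- and post-programme testing or surveying"),
    3: ("Behaviour", "Are the new skills used in the everyday environment?",
        "follow-up questionnaires and observation over time"),
    4: ("Results", "What were the results for the organisation?",
        "quality-of-practice indicators"),
}

def evaluation_plan(max_level):
    """Kirkpatrick advises starting at level one and, as time and budget
    allow, moving sequentially upwards; build the corresponding plan."""
    plan = []
    for level in range(1, max_level + 1):
        name, question, instrument = KIRKPATRICK_LEVELS[level]
        plan.append(f"Level {level} ({name}): {question} [{instrument}]")
    return plan
```

A project with limited budget might call `evaluation_plan(2)` and stop at learning measures, reflecting the model's advice that each further level costs more analysis effort.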
Arguments against the use of this model

The main criticisms of the approach are based on the fact that the model has been used mainly at level one: the satisfaction of learners with the training they have received. It is also considered that the immediate reactive response from learners at the end of their training does not clearly link to the other levels. Such an evaluation may be useful for gauging trainer satisfaction but may not help identify what has been learned.

Comment

According to the study by Tamkin et al (2002), 'the overall conclusion is that the [Kirkpatrick] model remains very useful for framing where evaluation might be made' (p.xiii). The CeDARE ICT in ITT survey analysis used the multiple levels of the Kirkpatrick model to determine what was already known from reviewing previous project evaluations of ICT data collection, and to identify a suitable sampling framework for investigation at greater depth, ie, at Kirkpatrick's levels three and four.

b. Guskey's five levels of professional development evaluation

Guskey modified Kirkpatrick's model for use in evaluating staff development in education. He comments that the Kirkpatrick model had only limited use in education because it lacked explanatory power: it was seen as helpful in addressing a broad range of 'what' questions, but fell short when it came to explaining 'why'. This five-level model is one that was advocated in a study by Goodall et al (2005), who noted that Guskey's model was adapted from Kirkpatrick's (1959) model. Goodall et al went on to suggest their own route map, drawing on their experience of reviewing the work of Guskey.

Level 1: Participants' reactions
- What questions are addressed? Did they like it?
- How will information be gathered? Usually a questionnaire at the end of the session
- What is measured or assessed? Initial satisfaction with the experience

Level 2: Participants' learning
- What questions are addressed? Did participants learn what was intended?
- How will information be gathered? Assessments, demonstrations, reflections, portfolios
- What is measured or assessed? New knowledge and skills of participants

Level 3: Organisation support and change
- What questions are addressed? What was the impact on the organisation? Were sufficient resources made available? Were mentors or coaches used?
- How will information be gathered? Questionnaires, minutes of meetings, interviews, focus groups
- What is measured or assessed? The organisation's advocacy, support, accommodation, IT resources, facilitation

Level 4: Participants' use of new knowledge and skills
- What questions are addressed? Did participants effectively apply the new skills?
- How will information be gathered? Questionnaires, interviews, reflection, observation, portfolios
- What is measured or assessed? Degree and quality of implementation

Level 5: Student learning outcomes
- What questions are addressed? What was the impact on students? Did it affect student achievement? Did it influence student well-being? Is student attendance improving?
- How will information be gathered? Student records/results, questionnaires, participant portfolios, focus groups
- What is measured or assessed? Student learning outcomes: performance and achievement; attitude and disposition; skills and behaviours

From: Guskey, TR (2000) Evaluating professional development. Thousand Oaks, CA: Corwin Press.
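A brief illustrative sketch (our own, not from Guskey) shows how the five levels could be held as data; since Guskey advises planning from student outcomes backwards, the planning order simply reverses the list:

```python
# Illustrative only: Guskey's five levels, named as in the table above.
GUSKEY_LEVELS = [
    "Participants' reactions",
    "Participants' learning",
    "Organisation support and change",
    "Participants' use of new knowledge and skills",
    "Student learning outcomes",
]

def planning_order():
    """Plan backwards from level 5 (student learning outcomes) to level 1,
    so that each level's evidence serves the level above it."""
    numbered = list(enumerate(GUSKEY_LEVELS, start=1))
    return list(reversed(numbered))
```

Iterating over `planning_order()` yields level 5 first, which matches the advice to begin evaluation planning with the student-outcome questions.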
Arguments for the use of this model

It is designed for staff development in an educational context. The end product is a model that is very useful in guiding the implementation and evaluation of a programme. It is straightforward to use.

Arguments against the use of this model

As with Kirkpatrick, the model is said to be simplistic. There is also no recognition of the time lag necessary between the first three levels and the last two. To evaluate levels four and five requires the new knowledge or skills identified in levels one to three to be applied in practice and to have an impact on students' learning outcomes. These learning outcomes will have to be recognised and measured over time in order to evaluate whether the intervention has brought about new teaching approaches that have been embedded and are successful.

Comment

In using this model, Guskey suggests that you start with the questions at level five as a basis for planning your evaluation. A recent study by Davis et al (2009) confirmed that multi-level evaluation of professional development does indeed apply to ICT-related teacher training: 'Therefore we recommend that all five of Guskey's levels be consistently adopted for the evaluation of ICT training' (p146).

c. Logic frame model evaluation

The Kellogg Foundation (2000) describes the approach: a logic model presents a picture of how your effort or initiative is supposed to work. It explains why your strategy is a good solution to the problem at hand. Effective logic models make an explicit, often visual, statement of the activities that will bring about change and the results you expect to see for the community and its people. A logic model helps maintain the momentum of the planning and evaluation process and participant involvement by providing a common language and point of reference. A detailed model indicates precisely how each activity will lead to desired changes.
In the UK, the logic frame model has usually been used for planning and evaluating large-scale projects in developing countries; however, it is now seen as a relevant model whenever evaluation is considered. A logic model is 'a plausible, sensible model of how a programme is supposed to work' (Bickman, 1987, p5). It serves as a framework and a process for planning to bridge the gap between where you are and where you want to be. It provides a structure for clearly understanding the situation that drives the need for an initiative, the desired end state, and how investments are linked to activities for targeted people in order to achieve the desired results. A logic model is the first step in evaluation: it describes the sequence of events thought to bring about benefits or change over time. The elements of the logic frame model are resources, outputs, activities, participation, short-, medium- and longer-term outcomes, and the relevant external influences (Wholey, 1983, 1987). Sunra et al (2003) describe the logic model as a visual link of programme inputs and activities to programme outputs and outcomes, showing the basic logic for these expectations: 'The logic frame model is an interactive tool, providing a framework for programme planning, implementation and evaluation' (p6). It was one of the models reflected on by Glenaffric Ltd (2007) in constructing its evaluation model for the Joint Information Systems Committee (JISC). See the complete model in section five of this document.

At its simplest, the logic model may be illustrated by diagram 1, a chain from inputs (programme investments), through outputs (activities and participation), to short-, medium- and long-term outcomes.

Diagram 1. A simple logic frame model
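To make the simple chain of diagram 1 concrete, a minimal sketch might represent a programme as linked lists of elements. All class and method names here are hypothetical illustrations, not part of any published logic frame tool:

```python
from dataclasses import dataclass, field

@dataclass
class LogicModel:
    """Minimal sketch of diagram 1: inputs -> outputs -> outcomes."""
    inputs: list = field(default_factory=list)         # programme investments
    activities: list = field(default_factory=list)     # outputs: what we do
    participation: list = field(default_factory=list)  # outputs: whom we reach
    outcomes: dict = field(default_factory=lambda: {
        "short": [], "medium": [], "long": []})

    def chain(self):
        """One-line summary of the model, element counts in chain order."""
        n_outcomes = sum(len(v) for v in self.outcomes.values())
        return (f"inputs({len(self.inputs)}) -> activities({len(self.activities)}) "
                f"-> participation({len(self.participation)}) -> outcomes({n_outcomes})")
```

Filling in each list during planning, then checking that no link in the chain is empty, mirrors the model's role as a planning-and-evaluation bridge.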
In practice the diagram is likely to end up being more complex as each of the areas under consideration is set out in more detail. See diagram 2, which expands the chain into: the situation (needs and assets, symptoms versus problems, stakeholder engagement, priorities, taking account of mission, vision, values, mandates, resources, local dynamics, collaborators and competitors); inputs (what we invest: staff, volunteers, time, money, research base, materials, equipment, technology, partners); activities (what we do: conduct workshops and meetings, deliver services, develop products, curriculum and resources, train, provide counselling, assess, facilitate, partner, work with media); outputs (whom we reach: participants, clients, agencies, decision-makers, customers); short-term outcomes (learning: awareness, knowledge, attitudes, skills, opinions, aspirations, motivations); medium-term outcomes (action: behaviour, practice, decision-making, policies, social action); and ultimate impact (conditions: social, economic, civic, environmental), with assumptions and external factors running throughout, and an evaluation cycle of focus, data collection, analysis and interpretation, and reporting.

Diagram 2. A more complex logic frame model. This diagram is taken from the University of Wisconsin-Extension program found at http://www.uwex.edu/ces/pdande

Arguments for the use of this model

It integrates planning, performance measurement and evaluation in one model. A logic frame model describes a programme and its theory of change. It is useful in helping to focus an evaluation. Furthermore, suggest Taylor-Powell and Henert (2008), the process can facilitate team building and stakeholder buy-in, as well as ensuring that implicit programme assumptions are made explicit. Evaluators have found the logic frame model process useful in a wide range of small and complex programmes and interventions in industrial, social and educational contexts.
A logic frame model presents a plausible and sensible model of how the programme will work under certain conditions to solve identified problems (Bickman, 1987). Thus the logic frame model is the basis for a convincing story of the programme's expected performance. A manager has to both explain the elements of the programme and present the logic of how the programme works. Patton (1997) refers to a programme description such as this as an 'espoused theory of action', that is, stakeholder perceptions of how the programme will work.

Arguments against the use of this model

The approach can be too simple as an evaluation framework, as it appears to assume that all projects are linear. It is perceived as rigid and can lead to the oversimplification of complex social processes. The structure of the logic frame model suggests that everything will go according to plan: programme activities, outcomes and goals are all laid out in advance, as are the indicators with which to monitor them. As such, there is no provision for a change in project direction, nor a space for learning to be fed into project implementation. Although the logic frame model can be altered during the course of a project, many commentators note that such models are rarely revisited (Earle, 2003, p2). The most common limitations are that a logic frame model represents intention, not reality, and that it focuses on expected outcomes, so people may overlook unintended outcomes (positive and negative).
Comment

Evaluators have played a prominent role in using and developing the logic frame model. This may be why it is often called an evaluation framework. Development and use of logic model concepts by evaluators continues to result in a broad array of theoretical and practical applications, say Taylor-Powell and Henert (2008).

d. The self-review framework for ICT

This framework has been designed specifically for use by schools to assess the e-maturity of the school as an institution. The framework divides into eight elements which will support and challenge a school to consider how effectively it is using ICT. Staff from schools are able to sign up to use the framework on the Becta website: https://selfreview.becta.org.uk/about_this_framework On registering to use the framework, the site offers clear guidelines for using it in your own context. There are also case studies and video clips available to support and challenge your school/organisation. The eight elements, and what each asks a school to do, are:

1. Leadership and management: develop and communicate a shared vision for ICT; plan a sustainable ICT strategy.
2. Curriculum: plan and lead a broad and balanced ICT curriculum; review and update the curriculum in the light of developments in technology and practice; ensure pupils' ICT experiences are progressive, coherent, balanced and consistent.
3. Learning and teaching: plan the use of ICT to enhance learning and teaching; meet pupils' expectations for the use of ICT; encourage teachers to work collaboratively in identifying and evaluating the impact of ICT on learning and teaching.
4. Assessment: assess the capability of ICT to support pupils' learning; use assessment evidence and data in planning learning and teaching across the whole curriculum; assess the learning in specific subjects when ICT has been used.
5. Professional development: identify and address the ICT training needs of your school and individual staff; provide quality support and training activities for all staff in the use of ICT, sharing effective practice; review, monitor and evaluate professional development as an integral part of the development of your school.
6. Extending opportunities for learning: understand the needs of your pupils and community in their extended use of ICT; ensure provision is enhanced through informed planning, resulting in quality of use of ICT within and beyond the school; review, monitor and evaluate opportunities to extend learning within and beyond your school.
7. Resources: ensure learning and teaching environments use ICT effectively and in line with strategic needs; purchase, deploy and review appropriate ICT resources that reflect your school improvement strategy; manage technical support effectively for the benefit of pupils and staff.
8. Impact on pupil outcomes: demonstrate how pupils can make good progress in ICT capability; be aware of how the use of ICT can have a wider positive impact on pupils' progress; review pupil attitudes and behaviour and how the use of ICT can impact positively on pupil achievement.
Arguments for the model

The Becta site offers a number of positive comments from users about the use of the model.

Arguments against the model

There are no negative comments about the model on the Becta site.

Comments

The Next Generation Learning Charter is a four-level scheme to encourage schools' engagement with, and progress through, the self-review framework. On registering with the framework, a school is asked to sign the charter, saying it will undertake a review of the use of ICT in the school during the next three years. When a school has reached a benchmark level in three of the eight elements, it can receive a recognition-level certificate. The ICT Mark accreditation is reached after an assessor's visit confirms that the school has reached the nationally agreed standard in all eight elements of the framework. The criteria for judging the ICT excellence awards are based on the highest levels in the framework, and form the top level of the charter. https://selfreview.becta.org.uk/about_next_generation_learning_charter

The test-bed e-maturity model

This e-maturity model (emm) has been identified and used successfully in other project evaluations. The details of the e-maturity models developed by a team from Manchester Metropolitan and Nottingham Trent Universities in their ICT test-bed project can be viewed at: http://www.evaluation.icttestbed.org.uk/methodology/maturity_model The evaluation assessed the effectiveness of the implementation of ICT in educational organisations in relation to five key themes. It comprised a range of methodologies, including a survey, a maturity model, action research, qualitative investigation, and benchmarking of performance data. The development of the maturity models was funded by Becta/DfES, and copyright of the models remains with Jean Underwood and Gayle Dillon (authors). Permission to reproduce the models, or any part of them, must be sought from the authors directly.
Section four: Evaluation tools

These are the elements of evaluation that provide the data to evaluate the indicators, processes and outcomes of ICT-based projects. Such evaluation tools sit within the broad structure of an evaluation model and provide the detailed data from which conclusions may be drawn. In the development of evaluation methodology it is important to ensure that 'we develop research designs that capture what is important rather than what is measurable' (Coburn, 2003, p9). For this we have to consider a number of factors and, in her research on teaching and learning reform in schools, Coburn has identified four aspects of scale that she considers vital to the success of projects designed to bring about reform in practices. Scale is usually considered as the increasing take-up of a particular reform; Coburn suggests that evaluators should redefine scale in four dimensions, as current views are too limiting and take-up alone does not indicate change. The four dimensions of scale are:

Depth: the impact and recognition that the reform has on the individual, ie, whether they have changed their behaviour and understand and use the new pedagogy of the reform.
Sustainability: is the capacity of the organisation increased to enable all staff to maintain these changes?
Spread: the reform in terms of the understanding and acceptance of its principles and norms, not just in schools but in local authorities and collaborative groups.
Shift in reform ownership: the reform is no longer an external reform controlled by a reformer but becomes an internal reform, with authority held by the school and the teachers within it, who have the capacity to sustain, spread and deepen the reform principles themselves.
The identification and measurement of these dimensions requires a range of complex tools, some of which are available and some of which have to be developed in order to gather the data that will inform the evaluation of each dimension. Some of these issues were identified and measured in the CeDARE evaluation methodology. The summaries of the following evaluation methods are from the CeDARE (2009a) ICT in ITT survey analysis. They formed some of the tools required to categorise data and to define and measure objectives in the survey.

a. The research of Hooper and Rieber (1995) has helped to support the recognition of indicators of the capacity of staff to spread and own changes in the use of ICT. They proposed a model of technology in the classroom set out in what they defined as the five phases of adoption of ICT by staff. The phases are:

Familiarisation: a teacher's initial experience with ICT. The teacher participates in an ICT training programme but does not then go on to use the information.
Utilisation: the teacher tries out the ICT in their classroom but does not expand on its use. If the technology was taken away on Monday, hardly anyone would notice on Tuesday. This is the beginning of an understanding of ICT.
Integration: the teacher decides to use the technology for something specific in their lesson, and if the technology is unavailable then the lesson is unable to proceed. If the teacher overcomes this hurdle they are likely to move to the next phase.
Reorientation: the teacher uses ICT to develop learner-centred approaches and change their own approach to lessons.
Evolution: the teacher uses ICT to continue to develop new approaches to teaching and learning with such methods as enquiry-based learning, in which the whole learning environment is changed.

At the reorientation and evolution stages the capacity for reform is clearly recognised.

A model of adoption of ICT in the classroom, adapted from Hooper and Rieber (1995): 1 Familiarisation, 2 Utilisation, 3 Integration, 4 Reorientation, 5 Evolution.

Using questionnaires built around this model would help the evaluator to recognise the development and capacity for reform of a member of staff.

b. The use of a modified e-maturity model has helped to identify and measure the impact of ICT development and to identify the improvements and capacity of organisations. Examples of the emm evaluation tool may be found on the Becta website and on the ICT test-bed site:

http://feandskills.becta.org.uk/display.cfm?resid=38834&page=1886&catid=1868

This is the updated version of a tool capable of assessing and improving the use of technology across the further education (FE) and skills sector. A common approach for all providers, including colleges, work-based learning organisations, and adult and community education centres, is now out for consultation. The initial model is built around four levels of maturity: beginning, developing, performing, outstanding.
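Questionnaire responses keyed to the five adoption phases can be summarised numerically. In this sketch the phase names follow Hooper and Rieber, but the 1-5 scoring scheme and the function names are our own illustration:

```python
# Phase names follow Hooper and Rieber (1995); the ordinal scoring is
# an illustrative assumption, not part of their model.
PHASES = ["familiarisation", "utilisation", "integration",
          "reorientation", "evolution"]

def phase_score(phase):
    """Map a self-reported phase to an ordinal score, 1 (lowest) to 5."""
    return PHASES.index(phase.strip().lower()) + 1

def cohort_mean(responses):
    """Mean adoption score for a group of questionnaire responses."""
    scores = [phase_score(r) for r in responses]
    return sum(scores) / len(scores)
```

Scores of 4 or 5 would correspond to the reorientation and evolution stages, where, as noted above, the capacity for reform is clearly recognised.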
There is also a self-development package on the use and development of the emm from a Victoria University of Wellington website: http://www.utdc.vuw.ac.nz/research/emm/

E-learning maturity model: the underlying idea that guides the development of the emm is that the ability of an institution to be effective in any particular area of work is dependent on its capability to engage in high-quality processes that are reproducible and able to be extended and sustained as demand grows. This site provides a step-by-step guide to developing your evaluation questions on capability.

A number of pilots have shown that an e-maturity model has the potential to identify the development and capacity for reform in an organisation. Chapman (2006) carried out a pre-event questionnaire using the levels in the e-maturity FE and skills developmental model, followed up with a post-event questionnaire some 18 months later. The effects of the training and its influence on change in pedagogy could be clearly recognised from this evaluation.

c. Fisher et al (2006) have endeavoured to determine how teachers might learn with digital technologies, using the work of Shulman and Shulman, who propose that ICT affords learners the opportunity to engage with activities. Trainee teachers and their learners may discover that technology provides a suitable vehicle for enquiry-based learning, in which the teachers have changed learning practice and collaborate in the learning process. This might be recognised in an evaluation of the use of technology by noting whether teachers are ready, willing and able to teach as a result of the affordances for learning, using what Loveless (2006) calls clusters of purposeful activity. 
These are separated into: vision for education; motivation to learn and develop practice; professional knowledge, understanding and practice; and reflection and learning in community, as a basis for questions to individuals or focus groups. There is an example of its use in a questionnaire in the CeDARE (2009a) ICT in ITT survey: see questions 13 and 14.
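Questionnaire responses can be mapped onto the five phases of adoption in a straightforward way. The sketch below is illustrative only: the item wording, the use of one yes/no item per phase, and the "highest consecutive phase affirmed" scoring rule are our assumptions, not part of the CeDARE instruments.

```python
# Illustrative sketch: scoring a questionnaire against Hooper and Rieber's
# five phases of ICT adoption. Items and scoring rule are assumptions.

PHASES = ["familiarisation", "utilisation", "integration",
          "reorientation", "evolution"]

# Hypothetical yes/no items, one per phase, ordered by depth of adoption.
ITEMS = [
    "I have had some initial experience with ICT (e.g. a training programme).",
    "I have tried ICT in my classroom, but my lessons do not depend on it.",
    "Some of my lessons cannot proceed without the technology.",
    "I use ICT to support learner-centred approaches.",
    "ICT has changed my whole learning environment (e.g. enquiry-based learning).",
]

def adoption_phase(answers: list) -> str:
    """Return the highest consecutive phase the respondent affirms."""
    phase = 0
    for answered_yes in answers:
        if not answered_yes:
            break
        phase += 1
    return PHASES[phase - 1] if phase else "pre-adoption"

# A respondent who affirms the first three items but not the last two
# would be placed at the integration phase.
print(adoption_phase([True, True, True, False, False]))
```

Aggregating such phase scores across a cohort, before and after an intervention, would give the evaluator a simple picture of development and capacity for reform.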
Section five: Information on evaluation
Examples of resources available to support and guide evaluation of ICT

Sources of further information on tools and ideas for evaluating ICT, from the UK and elsewhere. Some ICT-specific models of evaluation practice, including data collection methods and approaches to assessment, may be found in handbooks from a number of sources. The methodologies are too detailed and comprehensive to review in this document, so this section provides a list of websites and titles that may be accessed for further comprehensive information. As with other areas of ICT, the models are subject to change and development. Major sources of advice and information on evaluation methodology for ICT will be found on the Becta http://www.becta.org.uk and JISC http://jisc.ac.uk websites.

a. Title: Educator's guide to evaluating the use of technology in schools and classrooms
Link: http://www.gao.gov/policy/10_1_4.htm
Sponsor: American Institutes for Research for the US Dept of Education
Scope: Evaluating technology use in elementary and secondary schools
Audience: Anyone conducting technology evaluation in schools
Format: Available both as web pages and as an Adobe PDF document

b. Title: The learning technology dissemination initiative (LTDI)
Link: http://www.icbl.hw.ac.uk/ltdi/evalstudies/es_all.pdf
Scope: A range of case studies and ideas on evaluation in one downloadable text. The LTDI has put together a collection of papers on evaluation, LTDI: evaluation studies, that offers a number of case-study examples. The paper by Professor Barry Jackson, Middlesex University, Evaluation of learning technology implementation, is particularly relevant.
c. Title: A guide to logic model development
Link: http://www.ojp.usdoj.gov/bja/evaluation/guide/documents/cdc-logicmodel-development.pdf
Scope: Sundra DL, Scherer J and Anderson LA (2003) present A guide to logic model development for CDC's Prevention Research Center. This US website has a very helpful guide to the production of a logic model framework and many case-study examples.

d. Title: A practical guide to evaluation
Link: http://www.rcuk.ac.uk/aboutrcuk/publications/corporate/evaluationguide.htm
Scope: This is, as it states, a practical guide for anyone drawing up an evaluation of a technology project. The guide is designed for those who lead projects intended to engage general audiences in science, social science, engineering and technology, and the social, ethical and political issues that new research in these areas raises. It is intended to help project managers evaluate individual projects, regardless of their experience of evaluation.
e. Title: A practical guide to evaluation methods for lecturers
Link: http://www.icbl.hw.ac.uk/ltdi/ltdi-pub.htm#cookbook
Scope: This offers step-by-step guides to a range of approaches to evaluation.

Recipe pages: a list of the methods with summaries, or a page grouping recipes by their use. Step-by-step guides to the time, resources and process involved in different evaluation methods, with hints relating to the stages of the process and links to related pages.

Information pages: the recipes are linked to information pages that aim to provide some basic practical suggestions and advice, applicable to a range of different evaluation methods.

Preparation pages: sections have been included to provide a framework for the planning and preparation process involved prior to carrying out an evaluation. These aim to encourage you to think in more detail about who the evaluation is for, what you are going to be evaluating, and how best you might carry out such an evaluation study.

Testing, refining and presentation pages: the final sections of the cookbook encourage you to think of your evaluation study as an ongoing process used to make improvements in teaching and learning. Guidance is provided to encourage you to reflect on ways in which you can act on your results and/or write up your findings in an evaluation report.

Serving suggestions: these sections demonstrate some of the cookbook evaluation methods put into practice. The short exemplars outline the aims and objectives of various evaluation studies, the main findings from these studies, and some reflections on these findings.
f. Title: Six steps to effective evaluation: a handbook for programme and project managers (the JISC handbook on evaluation, commissioned by JISC from Glenaffric Ltd, 2007)
Link: http://www.jisc.ac.uk/media/documents/programmes/digitisation/SixStepsHandbook.pdf
Scope: This offers a logic model framework approach for evaluating technology projects. Glenaffric Ltd states that this handbook may be useful for anyone engaged in development activities in the innovative use of ICT to support education and research (p1). The six steps form a cycle, each with an associated product: 1. identify stakeholders (stakeholder analysis); 2. describe project and understand programme (logic model); 3. design evaluation (evaluation plan); 4. gather evidence (evaluation data); 5. analyse results (coding frame); 6. report findings (evaluation reports).

g. Title: The evalkit
Link: http://www.jiscinfonet.ac.uk/resources/evalkit/index_html
Scope: This is a directory of ICT evaluation tools and toolkits for use by the education sector, covering the broad areas of curriculum development, media selection, resource selection, quality assurance, and evaluation of ICT development projects. Please note that it is not the aim of JISC to review or rate the tools and toolkits held within the database but to raise the awareness of the education community of the ICT evaluation toolkits and tools that are currently available. The toolkit database, with a list of links and downloads for wide-ranging sources of evaluation tools, may be found at http://www.jiscinfonet.ac.uk/resources/evalkit/toolkit-database
h. Title: (i) The Kellogg Foundation logic model development guide; (ii) The Kellogg Foundation evaluation handbook
Link: (i) http://www.wkkf.org/pubs/tools/evaluation/pub3669.pdf (ii) http://www.wkkf.org/pubs/tools/evaluation/pub770.pdf
Scope: These are useful guides to planning and evaluating projects, with step-by-step advice. The program logic model is defined as a picture of how your organisation does its work: the theory and assumptions underlying the programme. A program logic model links outcomes (both short- and long-term) with programme activities/processes and the theoretical assumptions/principles of the program. The W.K. Kellogg Foundation logic model development guide, a companion publication to the evaluation handbook, focuses on the development and use of the program logic model: We have found the logic model and its processes facilitate thinking, planning, and communications about program objectives and actual accomplishments. The Kellogg Foundation also has a useful evaluation toolkit, found at http://www.wkkf.org/default.aspx?tabid=90&CID=281&ItemID=2810002&NID=2820002&LanguageID=0 : We believe anyone who is seeking to design a useful evaluation can benefit from it.
i. Title: The 2002 user-friendly handbook for project evaluation (National Science Foundation)
Link: http://www.nsf.gov/pubs/2002/nsf02057/nsf02057_1.pdf
Scope: A clear guide to setting out an evaluation framework for a project. Although based on science projects, there are approaches applicable to the use of technology. The handbook discusses quantitative and qualitative evaluation methods, suggesting ways in which they can be used as complements in an evaluation strategy. As a result of reading this handbook, it is expected that program managers will increase their understanding of the evaluation process and NSF's requirements for evaluation, as well as gain knowledge that will help them to communicate with evaluators and manage the actual evaluation.

j. Title: Online evaluation resource library (OERL)
Link: http://oerl.sri.com/
Scope: A collection of resources for people seeking information on evaluation. OERL's mission is to support the continuous improvement of project evaluations. Sound evaluations are critical to determining project effectiveness. To this end, OERL provides a large collection of sound plans, reports, and instruments from past and current project evaluations in several content areas, and guidelines for how to improve evaluation practice using the website resources. OERL's resources include instruments, plans and reports from evaluations that have proved to be sound and representative of current evaluation practices. OERL also includes professional development modules that can be used to better understand and utilise the materials made available.

k. 
Title: The route map
Link: http://publications.dcsf.gov.uk/default.aspx?pagefunction=productdetails&PageMode=publications&ProductId=RR659&
Scope: These materials are intended for use by CPD leaders/coordinators, participants and providers, departments, teams, schools and LEAs. They are an edited version of materials produced as part of a two-year, DfES-funded research project undertaken by the Universities of Warwick and Nottingham. Appendix 8 of the report Evaluating the impact of continuing professional development in schools sets out a model for evaluating the impact of CPD in schools. It offers a series of steps to follow and questions to ask.
Section six: A worked example of an evaluation model, the logic model framework, applied to an ICT in ITT project

The example is based on a small-scale technology development in an employment-based initial teacher training (EBITT) programme making up the Dorset Teacher Education Partnership (DTEP).

Title of the evaluation: Evaluation of the use of a virtual learning environment (VLE): improving reflective practice and self-assessment of progress against the QTS standards and supporting practice.

The information for the evaluation is drawn from a short video of individuals talking about the use of the VLE in their practice (CeDARE, 2009b). See VIDEO CASE-STUDY (insert link to Dorset VLE study).

Any evaluation should consider the five principles of evaluation:
- identify the focus and purpose of evaluation
- build on what is already known
- gather evidence
- analyse and interpret
- communicate and feed back.
(See the main document for more detail.)

The use of the logic frame model ensures that these principles are adhered to, as it encourages participants to think clearly about:
a. inputs: what is invested in the project
b. outputs: what is done as part of the project
c. outcomes/impact: what results are achieved by the project.

The logic framework model offers both a vehicle for planning and a framework for evaluation. A logic model helps us match evaluation to the actual program so that we measure what is appropriate and relevant (Taylor-Powell and Henert, 2008, p1). In its simplest form the logic frame is made up of three elements which logically link activities and effects: inputs (programme investments), outputs (activities and participation) and outcomes/impact (short-, medium- and long-term).

Diagram 1. A simple logic framework model: inputs, outputs, outcomes/impact

To use this model, the first step is to complete a flow model from need to final impact. 
When this is completed it will:
- provide a plan for future evaluation
- identify the outcomes that should be measured, and
- provide a guide as to the evaluation tools to be used.

To draw up a simple logic frame model of the DTEP project you will need to review the video and refer to the model below and the guidelines available at http://www.uwex.edu/ces/lmcourse/
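As an illustration, the three-element frame can be captured in a simple data structure from which the outcomes to be measured fall out directly. This is a hypothetical sketch: the class name, field names and example content are ours (loosely echoing the DTEP example), not a tool used in the project.

```python
# Illustrative sketch of a logic frame model as a data structure.
# Fields follow the three-element model: inputs -> outputs -> outcomes/impact.
from dataclasses import dataclass, field

@dataclass
class LogicFrame:
    situation: str                 # why the project was set up
    inputs: list                   # what is invested in the project
    activities: list               # what is done (outputs)
    participation: list            # who is reached (outputs)
    outcomes: dict = field(default_factory=dict)  # keyed "short"/"medium"/"long"

    def outcomes_to_measure(self) -> list:
        """Flatten the outcomes in term order: these are what the evaluation must measure."""
        return [o for term in ("short", "medium", "long")
                for o in self.outcomes.get(term, [])]

frame = LogicFrame(
    situation="Paper portfolios are unwieldy to assess",
    inputs=["funding", "equipment", "staff time"],
    activities=["install VLE", "train trainees"],
    participation=["all trainees", "some mentors"],
    outcomes={"short": ["trainees actively use the VLE"],
              "medium": ["trainees share materials"],
              "long": ["communities of practice across the organisation"]},
)
print(frame.outcomes_to_measure())
```

Completing such a structure at the planning stage gives you, for free, the list of outcomes the evaluation must go on to measure.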
Ideally, the logic model should be drawn up at the beginning of a project and should involve all stakeholders. This will identify the focus and purpose of the evaluation from the outset. Try to complete the model step by step using a blank flow chart (a more detailed teach-yourself guide may be found at http://www.uwex.edu/ces/pdande/evaluation/evallogicmodel.html). You need to:
- identify why the project was set up (situation and priorities)
- note what resources are anticipated for the project (inputs)
- identify the activities to be carried out by the project and who will participate in them (outputs), and
- state the proposed results from the project at short-, medium- and long-term timescales (outcomes/impacts).

We have detailed below a completed logic frame model for the DTEP project as an example. We have also provided the evaluation outcomes from the project to illustrate how a logic model would have helped with both planning and evaluation.

Situation: a wheelbarrow full of paper (the portfolio) at the end of the year, to be used in assessing QTS.

Priorities (aims): to improve 1. reflective practice and 2. self-assessment of progress against the QTS standards.

Inputs (what we invest in the project): TDA funding; equipment; consultants; staff time.

Activities (what we do): install software and provide laptops; involve consultants; train trainees and some staff; encourage use of the VLE; facilitate communities of practice.

Participation (who we reach): all trainees receive training; some mentors and trainers receive training; managers and partners made aware of the use of the VLE; Ofsted and consultants to be made aware of the use of the VLE.

Short-term outcomes (learning): trainees actively use the VLE; assessable work stored on the VLE; trainees use the VLE for reflection, communication and communities of practice.

Medium-term outcomes (action): trainees wish to personalise their VLE access; more staff to have more training on the VLE; assessment of work on the VLE; other staff to join the communities of practice; more issues to be identified to maximise the use of the VLE; trainees start to share materials and develop communities of practice.

Long-term outcomes (the ultimate impacts: conditions): all staff actively involved in using the VLE; the organisation supports all staff in their use of the VLE; communities of practice at work throughout the organisation, all using the VLE as the basis for that communication; reflective practice developed.

Note: the work of CeDARE (after Coburn, 2003) in evaluating ICT in ITT has identified the importance of considering the impact of technology in terms of:
- Depth: how do you ensure it impacts on classrooms and in different contexts? How does it impact on beliefs and attitudes?
- Scope: how do you actively involve a critical mass of people in trying out the technology and changing their practice?
- Transfer of ownership: how do you ensure people own the technology and the intervention set up by a project?

The traditional outcomes/impacts in the logic framework evaluation should therefore be considered under the headings of depth, scope and transfer of ownership rather than learning, action and conditions.

Depth: change in pedagogy, using the VLE not only for storage of evidence but also to develop teaching resources; trainees use the VLE to reflect on their evidence and to carry out self-assessment against the QTS standards.

Scope: trainees share their evidence with other trainees and mentors for their reflective comments; communities of practice developing using the VLE; more staff actively involved in using the VLE with trainees, including other partner schools.

Transfer of ownership: personalising the VLE; developing the potential of the VLE from storage to a pedagogic vehicle for all staff; all staff changing their working practices to fully utilise the VLE.

Assumptions: all trainees will use the VLE effectively after a short training session; all mentors will actively use the VLE in their relationship with their trainee; Ofsted will be able to use the VLE store of evidence for assessment.

External factors: current limits on the use of the VLE as a result of the supplier contract; the impact of the use of the VLE should be considered using depth, scope and transfer of ownership.

Diagram 2. The logic framework model for the DTEP project
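The relabelling of outcomes from learning/action/conditions to depth/scope/transfer of ownership can be sketched as a simple mapping. The one-to-one correspondence below (learning to depth, action to scope, conditions to transfer of ownership) is our reading of the text, and the outcome statements are abbreviated examples, not the project's full wording.

```python
# Illustrative sketch: regrouping logic-model outcomes under the
# CeDARE (after Coburn, 2003) headings. The one-to-one mapping of the
# traditional headings to the new ones is an assumed reading.

HEADING_MAP = {
    "learning": "depth",                    # impact on classrooms, beliefs, attitudes
    "action": "scope",                      # a critical mass changing their practice
    "conditions": "transfer of ownership",  # people come to own the intervention
}

def regroup(outcomes: dict) -> dict:
    """Relabel outcomes recorded under learning/action/conditions."""
    regrouped = {}
    for old_heading, statements in outcomes.items():
        new_heading = HEADING_MAP.get(old_heading, old_heading)
        regrouped.setdefault(new_heading, []).extend(statements)
    return regrouped

outcomes = {
    "learning": ["trainees use the VLE to self-assess against the QTS standards"],
    "action": ["more staff use the VLE with trainees"],
    "conditions": ["all staff personalise and own the VLE"],
}
print(regroup(outcomes))
```

Headings outside the mapping pass through unchanged, so the same routine works for partially relabelled models.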
The findings of the evaluation from the video were as follows.

Scope of the implementation: planning of the project was limited in its scope, and a number of assumptions were made on the basis of little evidence. The training provided for the trainees and other staff was very limited, and unsupported post-training. The VLE has meant that the wheelbarrow of paper is no longer needed. Trainees found the VLE very useful, not only in storing their evidence but also in developing other areas of their work. It is still uncertain whether Ofsted will accept assessment of trainees via a VLE.

Depth of engagement: for some trainees the use of the VLE changed the way they worked and developed their reflective practice. New communities of practice were established by trainees. Few mentors changed their practice. During the project a number of other issues were identified and now need to be developed.

Transfer of ownership: the future potential of the VLE has been recognised by the trainees and managers involved in the project. Some trainees were developing their own communities of practice with other users of the VLE. Some trainees were changing their practice as a result of the availability of resources within the VLE. The partnership has recognised the potential of a VLE to develop, change and improve practice for more than just trainees.

Recommendations: the project has met its aims but has also highlighted the limitations of outlook of those original aims. The project leaders now need to:
- involve more staff in the use of the VLE
- develop future training events to meet the needs of other groups of staff in using the VLE
- involve Ofsted in discussions of future developments, and
- monitor the impact of the use of the VLE on teaching and learning for trainees and other staff. 
Comment: for those who have watched the video and followed the steps in the model, it is anticipated that similar recommendations would be suggested. The logic framework model should enable a straightforward evaluation of any project. This example has used a limited range of activities, but the model has the potential to be used in either a simple or a multifaceted project.

Acknowledgments
This document was conceived and funded in support of the Training and Development Agency for Schools (TDA) evaluation by Becta. Thanks are due to Malcolm Hunt at Becta, who guided the process, and to Dr Michael Stokes at the University of Wolverhampton, who conceptualised the document and drew the strands together.
References
Bennett, J, 2003. Evaluation Methods in Research, London: Continuum
Bickman, L, 1987. The Functions of Program Theory. In Bickman, L (Ed) Using Program Theory in Evaluation: New Directions for Program Evaluation, no 33, pp5-18, San Francisco, CA: Jossey-Bass Publishers
CeDARE, 2009a. ICT in ITT Survey, Final Report, Wolverhampton: University of Wolverhampton
CeDARE, 2009b. Teacher Trainees: Virtual Learning Environments (VLEs) in Learning and Teaching, DTEP Case-Study, Wolverhampton: University of Wolverhampton
Chapman, R W C, 2006. From Coordinated to Innovative: Investigating Change Management in the Use of Electronic Learning Technologies within a Large Further Education College, Unpublished MA dissertation, University of Wolverhampton
Chelimsky, E, 1996. Thoughts for a New Evaluation Society, Keynote speech at UK Evaluation Society Conference, London, 19-20 September
Davis, N, Preston, C, and Sahin, I, 2009. ICT Teacher Training: Evidence for Multi-Level Evaluation from a National Initiative, British Journal of Educational Technology, vol 40, no 1, pp135-148
Easterby-Smith, M, 1986. Evaluation of Management Education, Training and Development, Hants, UK: Gower
Fisher, T, Higgins, C, and Loveless, A, 2006. Teachers Learning with Digital Technologies: A Review of Research and Projects, Report 14, Futurelab Series
Goodall, J, Day, C, Harris, A, and Lindsay, G, 2005. Evaluating the Impact of Continuing Professional Development, DfES RB659, London: DfES
Guskey, T R, 1998. Evaluation Must Become an Integral Part of Staff Development, Journal of Staff Development, vol 19, no 4
Guskey, T R, 2001. JSD Forum: The Backward Approach, Journal of Staff Development, vol 22, no 3, p60
Guskey, T R, 2002. The Age of Our Accountability, Course Outline, University of Kentucky
Hooper, S, and Rieber, L P, 1995. Teaching with Technology. 
In Ornstein, A C (Ed) Teaching: Theory into Practice, pp154-170, Needham Heights, MA: Allyn and Bacon
Kirkpatrick, D L, 1959. Techniques for Evaluating Programmes, Journal of the American Society of Training Directors, vol 13, no 11, pp3-9
Nevo, D, 2006. Evaluation in Education. In Shaw, I F, Greene, J C, and Mark, M M (Eds) The Sage Handbook of Evaluation, pp451-460, London: Sage Publications Ltd
Patton, M, 1997. Utilisation-Focused Evaluation, 3rd Edition, Thousand Oaks, CA: Sage Publications
RCUK, 2008. An Introduction to Evaluation, http://www.rcuk.ac.uk/aboutrcuk/publications/corporate/evaluationguide.htm accessed 11 March 2009
Scriven, M, 1967. The Methodology of Evaluation. In Stake, R E (Ed) AERA Monograph Series on Curriculum Evaluation, No 1, Chicago: Rand McNally
Stern, E, 1990. The Evaluation of Policy and the Politics of Evaluation, The Tavistock Institute of Human Relations Annual Review
Shulman, L S, and Shulman, J H, 2004. How and What Teachers Learn: A Shifting Perspective, Journal of Curriculum Studies, vol 36, no 2, pp257-271
Sundra, D L, Scherer, J, and Anderson, L A, 2003. A Guide to Logic Model Development for CDC's Prevention Research Center, Centers for Disease Control and Prevention, http://www.ojp.usdoj.gov/bja/evaluation/guide/documents/cdc-logic-modeldevelopment.pdf accessed 12 March 2009
Tamkin, P, Yarnall, J, and Kerrin, M, 2002. Kirkpatrick and Beyond: A Review of Models of Training Evaluation, Report 392, London: Institute of Employment Studies
Taylor-Powell, E, and Henert, E, 2008. Developing a Logic Model: Teaching and Training Guide, Madison, WI: University of Wisconsin-Extension, Cooperative Extension, Program Development and Evaluation, http://www.uwex.edu/ces/pdande accessed 12 March 2009
UK Evaluation Society, Glossary, http://www.evaluation.org.uk/resources/glossary accessed 5 March 2009
Wholey, J, 1983. Evaluation and Effective Public Management, Boston: Little, Brown
Wholey, J, 1987. Evaluability Assessment: Developing Program Theory. In Bickman, L (Ed) Using Program Theory in Evaluation: New Directions for Program Evaluation, no 33, San Francisco: Jossey-Bass