
Evaluation: the absolute basics

Whilst I’ve done some work around evaluation in the past, I’ve needed to brush up on it since joining the Mozilla Foundation. This post reflects some hours spent in Durham University’s Education Library earlier this week.

[Image: Office of Mayhem Evaluation]

Introduction

Evaluation is a contested term. Even people working in the field can’t agree on what it means:

“No single-sentence definition will suffice to fully capture the practice of evaluation.”

(Patton, 1982:4, quoted in Clarke, 1999)

However, the following general guidance is useful:

“The most important purpose of evaluation is not to prove but to improve.”

(Stufflebeam and Shinkfield, 1985:2, quoted in Clarke, 1999)

And among the definitions I’ve come across, the one I like best is this:

“Evaluation is the systematic investigation of the merit or worth of an object (program) for the purpose of reducing uncertainty in decision making.”

(Mertens, 1998:219)

The ‘evaluand’

Like every field, evaluation has its own jargon. For example, the evaluand is the subject being evaluated. Mertens (1998:231) asks the following useful questions in relation to the evaluand:

  1. Is there a written description of what is to be evaluated?
  2. What is the status of the evaluand? Relatively stable and mature? New? Developing? How long has the program been around?
  3. In what context will (or does) the evaluand function?
  4. Who is the evaluand designed to serve?
  5. How does it (the evaluand) work? Or how is it supposed to work?
  6. What is it supposed to do?
  7. What resources are being put into the evaluand (e.g. financial, time, staff, materials, etc.)?
  8. What are the processes that make up the evaluand?
  9. What outputs are expected? Or occur?
  10. Why do you want to evaluate it?
  11. Whose description of the evaluand is available to you at the start of the evaluation?
  12. Whose description of the evaluand is needed to get a full understanding of the program to be evaluated?

Dimensions to evaluation

There are many dimensions to evaluation, the best known being the distinction between formative and summative. Formative evaluation is carried out while a program is still developing, with the aim of improving it, and its audience is those within the program being evaluated. Summative evaluation, by contrast, judges the overall worth of a program, and its audience is those outside it (such as policy makers, funders, or the general public).

According to Malla Reddy (2000:3), other dimensions over and above formative vs. summative should also be considered, including:

  • Insider vs. Outsider
  • Experimental vs. Illuminative
  • Democratic vs. Bureaucratic
  • Product vs. Process
  • Quantitative vs. Qualitative

Many books and articles have been dedicated to each side of the last of these distinctions. Very basically, quantitative evaluation focuses on ‘hard’ numbers, whereas qualitative evaluation focuses on ‘soft’ experience.

Planning an evaluation

Mertens (1998:230) suggests the following steps when planning an evaluation study:

Focusing the evaluation

  • Description of what is to be evaluated
  • The purpose of the evaluation
  • The stakeholders in the evaluation
  • Constraints affecting the evaluation
  • The evaluation questions
  • Selection of an evaluation model

Planning the evaluation

  • Data collection specification, analysis, interpretation, and use strategies
  • Management of the evaluation
  • Meta-evaluation plans

Implementing the evaluation

  • Completing the scope of the work specified in the plan

O’Sullivan (2004:7) gives a brief overview of the various evaluation models or approaches from which to choose:

  1. Objectives – focuses on objectives to determine degree of attainment
  2. Management – focuses on information to assist program decision makers
  3. Consumer – looks at programs and products to determine relative worth
  4. Expertise – establishes peer and professional judgements of quality
  5. Adversary – examines programs from pro and con perspectives
  6. Participant – addresses stakeholders’ needs for information

To be honest, all of this seems a little over-the-top for some of the things I’ll be evaluating. That’s why I found Colin Robson’s book Small-scale Evaluation useful. Robson (2000:46) suggests using the following questions in an evaluation:

  1. What is needed?
  2. Does what is provided meet client needs?
  3. What happens when it is in operation?
  4. Does it attain its goals or objectives?
  5. What are its outcomes?
  6. How do costs and benefits compare?
  7. Does it meet required standards?
  8. Should it continue?
  9. How can it be improved?

Structuring an evaluation report

Robson (2000:122) also suggests how to structure an evaluation report:

  • Heading – make it short and clear
  • Table of contents – simple list of headings and page numbers (without subheadings)
  • Executive summary – key findings and conclusions/recommendations
  • Background – a one-page setting of the scene: why the evaluation was carried out, what questions you are seeking answers to, and why the findings are likely to be of interest
  • Approach taken – when, where and how the study was carried out (detail goes in appendices)
  • Findings – the largest section giving answers to the evaluation questions with the main message going at the beginning and using subheadings where necessary
  • Conclusions/recommendations – draws together main themes of the report and their implications
  • Appendices – information needed by the audience to understand material in the main report (including references)

He also suggests including the names/contact details of the evaluators.

Conclusion

This brief overview of evaluation should enable me to be more confident when evaluating things for my day-to-day role. Hopefully it’s also given you enough of a starting point to carry out your own evaluations.

Image CC BY-NC-SA xiaming

References

  • Clarke, A. (1999) Evaluation Research: an introduction to principles, methods and practice, London: Sage
  • Malla Reddy, K. (ed.) (2000) Evaluation in Distance Education, Hyderabad: Booklinks Corporation
  • Mertens, D. (1998) Research Methods in Education and Psychology: integrating diversity with quantitative and qualitative approaches, London: Sage
  • O’Sullivan, R.G. (2004) Practicing Evaluation: a collaborative approach, London: Sage
  • Robson, C. (2000) Small-scale Evaluation, London: Sage
