2.1 What is program evaluation?

At its most basic, evaluation is a judgement about how well something is working. The word ‘evaluation’ comes from Old French and Latin words relating to ‘value’. So evaluation is essentially putting a value on a program or activity.
There are many methodologies which can be used for evaluation, from simple clinical audits to rigorous randomised controlled trials. In Australia, there has been national discussion about the best way to evaluate breastfeeding services. At a national level, this will require consistent data systems and agreement about what information to collect. At the service level, you can make these decisions yourself and get started right away to learn more about how you are performing.
There are a number of ways to begin planning an evaluation. It can be helpful, before you get started, to ask some questions about whether in fact an evaluation is appropriate for your particular project. For instance, evaluation might be a good idea but there may be funding constraints, or there might be difficulties with getting accurate data. Four questions are provided below that you might consider before you begin planning your evaluation project.
Four standards of evaluation:
1. Utility (is it useful?)
2. Feasibility (is it practical?)
3. Propriety (is it ethical?)
4. Accuracy (are the findings correct?)
Baker et al. (2000)
The difference between program and project evaluation is largely one of scale. Projects are often smaller-scale activities which take place within a shorter timeframe. Programs are usually bigger, longer-term commitments, and may include a number of projects. Both use the same tools and approaches, but project evaluations may be more focussed on a single objective or outcome, whereas program evaluations may be more complex and look at a wide range of objectives. For the purposes of this toolkit we will use the term ‘program evaluation’ to include both program and project evaluation.
An evaluation may focus on the processes of a program or service (this is sometimes called a formative evaluation), or on the impacts or outcomes of a program or service (this is sometimes called a summative evaluation), or on both.
Process or ‘formative’ evaluation looks at the processes that make a service function. People often conduct process evaluations when no outcome data is yet available, for instance when a service hasn’t been running for very long, or when it will take a long time to see changes in health status. Process evaluations may look at who is using the service (and whether there are any access problems for potential users), how services operate, what structures, protocols or pathways are in place and whether they are effective. It is more about the function of a service than about the outcome of a service.
Outcome evaluations, sometimes called ‘summative’ evaluations, seek to assess what impact the service has had on its users, and what outcomes have occurred as a result of the service. Some people separate impact evaluations from outcome evaluations, with impact evaluations focusing on the intermediate stages on the way to the ultimate outcomes. Outcomes may be clinical, behavioural or attitudinal, or relate to changed structures and processes.
An evaluation may focus on processes or outcomes within the same project, or may only consider one aspect, for instance a process evaluation looking at how antenatal breastfeeding education is delivered and who is accessing it.
‘Monitoring’ is a word often paired with evaluation. While evaluation may be the judgement, monitoring is the process that can help you get there, by providing ongoing feedback through regular data collection and review. The difference between monitoring and evaluation is discussed in section 2.3 below.
2.2 Why undertake program evaluation?

Evaluation is undertaken for a range of purposes, although ultimately all evaluations are undertaken to find out whether a program is operating as well as it might be. Some of the benefits of evaluation include:
- finding out what is working and what is not working
- identifying whether there is a good fit between planning and practice
- identifying ways of improving program or project quality
- identifying any current or ongoing program or project risks
- identifying whether any alternatives might work better
- demonstrating the appropriateness, effectiveness and efficiency of a program or activity to funding bodies and the community at large
- identifying any unintended consequences (negative or positive)
- responding more effectively to clients’ needs and improving program or project targeting
- learning what training is required for staff to perform well
- demonstrating adherence to, or establishing new service standards
- sharing good practice.
Despite these benefits, evaluation is sometimes neglected or avoided for a number of reasons:
- it can be overlooked in the excitement of getting a program or project under way
- it can be seen as ‘diverting’ funds away from service delivery (especially if the funds for a program or project are limited)
- it is seen as complex or a task ‘only for the experts’
- it is seen as a burden on staff
- if it turns up adverse results, it might be perceived as a threat to the program, an organisation’s reputation, or people’s jobs.
These concerns can be weighed against some practical counterpoints:
- knowing the impact your program has made can be motivating for staff
- quantifying the impact of a service can help secure further funding
- building simple evaluation techniques into the daily routine can minimise the burden for any one person, or for bigger efforts you can partner with external evaluators
- learning about your service and its performance as you go means you can continue to adjust, respond and improve the way you assist mothers and babies.
2.3 What is the difference between monitoring and evaluation?

Monitoring is the ongoing, regular collection of agreed sets of data, and then the process of analysing what that data means. The purpose of monitoring is to be alert to how a project is developing or performing, and to respond to any issues or concerns that arise and are evident in the data. Monitoring is a task which can stand alone as a part of good practice, but it can also provide data which is useful in an evaluation.
Evaluation, on the other hand, is the systematic comparison of program objectives to outcomes, assessing the extent to which a program has achieved what it was established to do. Often, if an evaluation runs for a long period of time, say for several years, the first part of the evaluation consists largely of data monitoring, for example observing differences in breastfeeding uptake and maintenance in a population over time. The evaluation then involves making an assessment about what has led to those differences, whether the changes are significant and what the consequences might be.