The following questions should provide the foundation for the programme and the evaluation:
Ideally, programme managers and personnel should work together with those involved in programme evaluation to integrate a monitoring and evaluation component from the very beginning of programme design.
The outcomes to be measured in an evaluation should be based on the programme’s theory regarding risk and protective factors related to violence, the programme’s objectives, maturity and activities, as well as the resources available for the evaluation. For instance, it may be unrealistic to expect a media campaign to produce long-lasting changes in behaviour unless it is part of a broader prevention effort (Valle et al., 2007).
Ethical issues deserve special attention when gathering baseline and evaluation data related to the prevention of sexual or intimate partner violence. Several documents have been produced on the ethical issues around research on violence against women. Although they do not relate specifically to evaluating violence prevention programmes, the lessons they offer remain useful in this context (refer to the tools section below).
Ensuring that the evaluation is ethical includes making certain that anyone who is asked to share information during the course of the evaluation is informed about the following key aspects of the evaluation:
Because self-reporting is subject to denial and minimization, partner reports are the most valid and reliable measure for project evaluation, particularly when assessing programmes for perpetrators. Consequently, whenever possible and safe, partner contact should be attempted (Mullender and Burton, 2000).
Include adequate funding in the budget for monitoring and evaluation activities
Even though emphasis is placed on ensuring that programmes are able to ‘show results’, programmers and donors often budget inadequate funds for evaluation. Programme managers should ensure that evaluation is adequately budgeted and built into the programme from the beginning.
Carry out baseline data collection
Collecting baseline data is essential for measuring change over time: without a point of comparison, a programme cannot measure change. Although a post-test-only design may be all that is feasible, it cannot assess change resulting from the programme, since there will be no baseline data to compare against.
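The point above can be illustrated with a minimal sketch using entirely hypothetical survey figures (the variable names and values are illustrative assumptions, not drawn from any real programme):

```python
# Illustrative sketch only: hypothetical baseline and endline figures
# showing why a point of comparison is needed to measure change.

def percentage_point_change(baseline_rate, endline_rate):
    """Return the change in a measured rate between baseline and endline."""
    return endline_rate - baseline_rate

# Hypothetical data: share of respondents reporting the outcome of interest
baseline = 0.42   # measured before the programme began
endline = 0.31    # measured after the programme

change = percentage_point_change(baseline, endline)
print(f"Change: {change * 100:+.0f} percentage points")

# With a post-test-only design, only `endline` would exist; the 0.31
# figure alone could not tell us whether anything had changed.
```

The arithmetic is trivial by design: the sketch exists only to make concrete that the endline figure is uninterpretable as a measure of change without the baseline.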
Formative evaluation falls under two broad categories:
1) Programme or approach formulation – carried out in the early stages of planning to inform the design of the programme.
2) Pre-testing – undertaken to test whether the materials, messages, approach, etc. are understood, feasible, likely to be effective, or have any unanticipated effects (Valle et al., 2007).
Formative evaluation can help determine the extent of violence in the community; the factors that contribute to or protect against violence; the community context in which the prevention approach will be conducted, including gender norms held by the community; and ways to tailor the approach to increase its relevance and likelihood of achieving the desired results (Valle et al., 2007).
Process evaluation describes the programme and determines whether the programme is being delivered as intended. Process evaluations may look at staffing, programme content and delivery, and the numbers and characteristics of participants (Valle et al., 2007).
Outcome evaluation determines whether the programme is meeting, or progressing toward, its goals for preventing violence. It should be conducted once programmes are established and running consistently (Valle et al., 2007).
Economic evaluation includes cost-analysis, cost-effectiveness analysis and cost-benefit analysis (Valle et al., 2007).
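The three types of economic evaluation named above can be reduced to simple arithmetic. The sketch below uses hypothetical figures (the costs, outcome counts, and monetised benefits are assumptions for illustration only):

```python
# Illustrative sketch with hypothetical figures: the three economic
# evaluation types, reduced to their simplest arithmetic.

programme_cost = 50_000.0     # hypothetical total cost of delivery
cases_prevented = 25          # hypothetical estimate of outcomes achieved
benefit_per_case = 4_000.0    # hypothetical monetised benefit per case

# Cost analysis: what did the programme cost to deliver?
total_cost = programme_cost

# Cost-effectiveness analysis: cost per unit of outcome achieved
cost_per_case = programme_cost / cases_prevented

# Cost-benefit analysis: monetised benefits relative to costs
benefit_cost_ratio = (cases_prevented * benefit_per_case) / programme_cost

print(f"Cost per case prevented: {cost_per_case:.0f}")
print(f"Benefit-cost ratio: {benefit_cost_ratio:.1f}")
```

In practice each of these requires careful attribution of outcomes to the programme and defensible monetisation of benefits; the sketch shows only the structure of the three calculations.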