What are some lessons learned about monitoring and evaluating programmes with men and boys?

Last edited: October 30, 2010

Think about the ‘big picture’ questions before the evaluation is planned

The following questions should provide the foundation for the programme and the evaluation:

  • What are the known risk factors for violence against women and girls?
  • How does the programme hope to prevent or change these patterns of violence?
  • How does the programme reflect the knowledge of the risk factors and protective factors associated with violence (Valle et al., 2007)?

Integrate a monitoring and evaluation component from the onset of the initiative

Ideally, programme managers and personnel should work together with those involved in programme evaluation to integrate a monitoring and evaluation component from the very beginning of programme design.

Select outcomes that are realistic and match the scope of the programme

The outcomes to be measured in an evaluation should be based on the programme’s theory regarding risk and protective factors related to violence, the programme’s objectives, maturity and activities, as well as the resources available for the evaluation. For instance, it may be unrealistic to expect a media campaign to produce long-lasting changes in behaviour unless it is part of a broader prevention effort (Valle et al., 2007).

Adhere to strict ethical and confidentiality considerations in data collection

Ethical issues deserve special attention when gathering baseline and evaluation data related to the prevention of sexual or intimate partner violence. Several documents have been produced on the ethical issues around research on violence against women. Although they do not specifically address evaluating violence prevention programmes, the lessons they offer are still useful in this context. (Refer to the tools section below.)

Ensuring that the evaluation is ethical includes making certain that anyone who is asked to share information during the course of the evaluation is informed about the following key aspects of the evaluation:

  • The purpose of the evaluation
  • That participation is voluntary and can be discontinued at any time
  • What they are required to do
  • What information will be asked of them
  • Whether providing information poses any risks to them
  • How the information will be gathered
  • When the information will be gathered, including any contacts for follow-up information
  • Who will have access to the information they provide
  • How privacy and confidentiality will be ensured, and what the limits to confidentiality are, if any (e.g. in some contexts, reporting is mandatory if there is reason to believe that the individual may harm themselves or others). It is important to know, and act in accordance with, any local laws that may limit confidentiality.
  • How evaluation information will be used
  • Whom to contact if they have questions or concerns

Be aware of the limitations of certain self-reported data

Because self-reporting is subject to denial and minimization, partner reports are the most valid and reliable measure for project evaluation, particularly when assessing programmes for perpetrators. Consequently, whenever possible and safe, partner contact should be attempted (Mullender and Burton 2000).

Include adequate funding in the budget for monitoring and evaluation activities

Even though emphasis is placed on ensuring that programmes can ‘show results’, programme designers and donors often allocate inadequate funds for evaluation in their budgets. Programme managers should ensure that evaluation is adequately budgeted and built into the programme from the beginning.

Carry out baseline data collection

Collecting baseline data is essential for measuring change over time: without a point of comparison, a programme cannot measure change. Although a post-test-only design may be all that is feasible, it cannot assess change resulting from the programme, since there is no baseline data against which to compare.

When should a programme be evaluated?

  • Evaluations can be conducted with new or existing programmes.
  • Depending on the evaluation design, baseline data may have to be collected prior to the intervention taking place.
  • Ongoing evaluation can help a programme respond to changing community characteristics and needs (Valle et al., 2007).

What are the different types of evaluations?

Formative evaluation falls under two broad categories:

1) Programme or approach formulation – carried out in the early stages of programme planning in order to help design the programme.

2) Pre-testing – undertaken to test whether the materials, messages, approach, etc. are understood, feasible, likely to be effective, or have any unanticipated effects (Valle et al., 2007).

Formative evaluation can help determine the extent of violence in the community, the factors that contribute to or protect against violence, the community context in which the prevention approach will be implemented (including the gender norms held by the community), and ways to tailor the approach to increase its relevance and likelihood of achieving the desired results (Valle et al., 2007).

Process evaluation describes the programme and determines whether the programme is being delivered as intended. Process evaluations may look at staffing, programme content and delivery, and the numbers and characteristics of participants (Valle et al., 2007).

  • Was the programme carried out as planned? How many activities (e.g. trainings, campaigns, workshops) were conducted?
  • Did the programme run into logistical or practical difficulties?
  • What modifications were made along the way and why?
  • Did the programme reach the number of men and boys intended?
  • What are participants’ perceptions of and satisfaction with the programme?

Outcome evaluation determines whether the programme is meeting or progressing toward its goal for preventing violence. It should be conducted once programmes are established and running consistently (Valle et al., 2007).

  • Is the programme having the intended effect? For example, did the programme produce changes in gender behaviours and norms?
  • What is the specific strategy and what is the frequency and duration required to achieve that result?

Economic evaluation includes cost-analysis, cost-effectiveness analysis and cost-benefit analysis (Valle et al., 2007).

  • What are the resources needed to conduct, replicate, or expand the programme?
  • What are the costs and benefits of different approaches?
  • Do programme benefits outweigh programme costs?