What are the practical steps for planning an evaluation that is right for the programme and organization?

Engage stakeholders - Stakeholders include those with a legitimate interest in the prevention programme, including staff, funding agencies, board members, policy makers, community members, partner organizations, gatekeepers to different sources of information, and individuals who benefit from or participate in the programme. They can help prioritize the questions to be asked, develop a logic model, determine the methods to be used and the information to be gathered, interpret the results and ensure that the evaluation is culturally sensitive and acceptable to the community (Valle et al., 2007).

Describe the programme - Agreeing on a clear description of the programme will help to determine the proper evaluation questions and activities. Developing a logic model may help capture the essential elements of the programme and evaluation activities (for a clear description of how to develop a logic model, please look at Valle et al., 2007).

Focus the evaluation design – The following elements should be considered when developing the evaluation design (CDC 1999):

  • Purpose – What is the intent of conducting this evaluation? Gaining insight to inform the design of a programme? Improving practice or services?
  • Users – Who are the specific people who will receive or benefit from the evaluation?
  • Uses – How will the evaluation results be used?
  • Questions – What are the most important questions to be answered by the evaluation?
  • Methods – What methods will provide information to answer the questions?
  • Agreements – How will the evaluation plan be implemented using available resources? What safeguards are in place to ensure that all ethical standards are met and all ethical concerns are raised?

Gather credible evidence – Building on the baseline data and evaluation plan developed during the programme design phase, determine what data to collect, who will provide it, when evaluation activities will take place, where data will be collected and what data collection methods will be used.

Analyse results – Determining in advance how findings will be analyzed will help ensure that the data collection plan provides the information needed and will also help determine what expertise and resources are needed to analyze the data.

Ensure use and share lessons learned - A plan should be made to identify the audiences for dissemination (e.g. media, policy makers, organizations); to determine how results will be reported; and to decide which reporting formats are most appropriate for different audiences (e.g. TV, radio, web, print, testimonials).

What factors should determine the choice of evaluation?

The type of evaluation that will be needed will depend on a number of factors, including:

  • Programme’s maturity
  • Duration of the project intervention (whether enough time has elapsed for results to reasonably be expected)
  • Goals of the evaluation
  • Human and financial resources available
  • Time available for evaluation
  • Whether a baseline was implemented

How much evaluation is needed?*

  • If there are limited resources:
      - Some formative research
      - Process indicators
      - Post-intervention-only focus groups
  • If there are modest resources for evaluation:
      - Formative research
      - Process indicators
      - Simple pre- and post-intervention quantitative data with no control group
  • If there is a higher level of resources:
      - Extensive formative research
      - Multiple-site data collection, including a control group (or a delayed intervention group)
      - Triangulation with partners/women
      - Qualitative and quantitative data collection throughout

*This information was adapted from Gary Barker’s presentation “Evaluating Work with Boys and Men”. For the complete presentation, please click here.

What are the options for outcome evaluation designs?

| Design | Collect pre-programme data | Implement programme/strategy | Collect post-programme data | Collect follow-up data |
|---|---|---|---|---|
| Post-test only | No | Yes | Yes | Perhaps |
| Pre- and post-test | Yes | Yes | Yes | Perhaps |
| Pre- and post-test with comparison group | Yes (both groups) | Yes (programme group); No (comparison group*) | Yes (both groups) | Perhaps (both groups) |
| Post-test with comparison group | No | Yes (programme group); No (comparison group*) | Yes | Perhaps (both groups) |
| Randomized controlled trial (RCT) | Yes (both groups) | Yes (programme group); No (comparison group*) | Yes (both groups) | Perhaps (both groups) |
| Time series | Yes, multiple times | Yes | Yes | Yes, multiple times |

Source: Valle et al., 2007

 

Field test and evaluate new tools and interventions

It is essential to monitor and evaluate each new tool or intervention. Even when an intervention has been effective in other settings, this does not guarantee that it will work in a new country or region, or in a different language.

Pretest new or adapted materials

Pre-testing increases the likelihood that the proposed messages will be received by the audience as the programme intended. The audience must be able to understand and respond in a positive way to the prevention materials. The following approaches can be used in pre-testing:

  • Trial runs – This approach allows a programme to test portions of the proposed approach, or the entire approach, on a small scale with a group similar to the one with which the approach will ultimately be used. This will show whether the prevention approach is conveying the intended message and allow the programme to assess whether any aspect of it is offensive, harmful or ineffective (Valle et al., 2007).
  • Readability testing – Reviews and feedback from people who are similar to the programme’s intended audience will enable a programme to produce reader-friendly materials, including materials for different literacy levels. Various word-processing programmes, such as WordPerfect or Microsoft Word, provide ‘readability estimates’, i.e. the age/grade level that should be able to read the material. The Gunning Fog Index is another instrument that does the same thing (Valle et al., 2007). See more information on calculating the Gunning Fog Index in English.
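The Gunning Fog Index mentioned above combines average sentence length with the percentage of ‘complex’ (three-or-more-syllable) words. The following is a minimal sketch only, using a crude vowel-group syllable counter; the published formula also excludes proper nouns, compound words and words made complex only by common suffixes, which this sketch does not attempt.

```python
import re

def gunning_fog(text):
    """Estimate the Gunning Fog Index:
    0.4 * (average sentence length + percent of complex words)."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)

    def syllables(word):
        # Crude approximation: count groups of consecutive vowels.
        # Real readability tools use pronunciation dictionaries.
        return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

    complex_words = [w for w in words if syllables(w) >= 3]
    avg_sentence_len = len(words) / len(sentences)
    pct_complex = 100.0 * len(complex_words) / len(words)
    return 0.4 * (avg_sentence_len + pct_complex)
```

For a sentence like “The cat sat on the mat.” the estimate is 0.4 × (6 + 0) = 2.4, i.e. roughly an early primary-school reading level; dense, polysyllabic prose scores far higher.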

Choose realistic outcomes when designing an evaluation

Although violence prevention programmes may ultimately strive to change behaviours associated with violence perpetration, it often takes a long time to see such changes, requiring that programmes collect data about outcomes over a long period of time. Therefore, more realistic outcomes of many prevention programmes may be to change proximal factors that contribute to violence with the ultimate goal of preventing violent behaviour.

What are some of the proximal outcomes that may be used in place of longer-term behaviour outcomes?

At the individual level of the ecological framework, these may include documenting changes in knowledge, attitudes, skills and behavioural intentions. However, one must keep in mind that the relationship between these proximal outcomes and actual behaviours varies (Valle et al., 2007).

  • Knowledge relates to how well people understand, or how much they objectively know about, a concept. Although an important measure, simply changing knowledge about violence against women or appropriate behaviour is unlikely to prevent violence, just as changing knowledge about the negative consequences of smoking does not necessarily change smoking behaviour (Valle et al., 2007).
  • Attitudes refer to how people subjectively think, feel, or believe, such as whether men think that violence is acceptable. Although attitudes seem to relate to behaviour, it is unclear whether changes in attitudes lead to changes in behaviour (Valle et al., 2007).
  • Skills refer to people’s ability to behave or perform in a certain way. Teaching skills may increase the likelihood that individuals will be able to perform behaviours, though it does not necessarily ensure that they will do so (Valle et al., 2007).
  • Behavioural intentions refer to a person’s subjective appraisal of whether or not they will perform a behaviour given a specific, future situation. This may include, for instance, prevention strategies that encourage bystanders to intervene to prevent violence against women or to discourage conversations that derogate women (Valle et al., 2007).


Be aware that evaluating one-session prevention programmes or single media spots may not be very useful

Although these brief approaches can be an important complement within comprehensive programmes, they are unlikely, on their own, to result in long-lasting prevention of sexual and intimate partner violence unless they are part of a multi-faceted effort. They may also be difficult to evaluate, given that people are bombarded with many messages each day and a single message will probably have minimal impact (Valle et al., 2007).

Recognize both the complexity and the importance of evaluating behavioural change

Evaluating programmes in the area of prevention of violence against women is challenging for a variety of reasons, including:

  • violence prevention requires multiple strategies and sectors, making it difficult to attribute outcomes to a single intervention;
  • defining and measuring levels of violence against women is methodologically challenging;
  • changing norms may require long-term investment; and
  • some changes produced may be counterintuitive, for instance, it is possible that an intervention may lead to greater reporting of violence and consequently to increased levels of violence as measured by the number of cases reported.

Be aware that qualitative evaluations are not necessarily less complex or costly

Although collecting qualitative data for evaluation purposes may seem like a less costly alternative to a community-based survey, collecting and analyzing qualitative data (such as focus group data) is complex and requires specific skills and experience on the part of evaluators. It is therefore not necessarily a simpler or less expensive option. Some organizations have the expertise to collect quantitative data but not qualitative data (and vice versa). For example, the least expensive and least complex evaluation method for workshop-type interventions is a pre- and post-test questionnaire administered to the men and boys who participate in the intervention. This technique has its own limitations, however, such as being unable to assess whether changes are sustained over time, and the possibility that a programme’s ‘success’ is actually the result of pretest sensitization, with participants learning how to answer the questions ‘correctly’.
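The pre- and post-test comparison described above reduces, at its simplest, to a mean-change calculation. A sketch with entirely hypothetical scores (a real evaluation would use a validated instrument and an appropriate significance test):

```python
# Hypothetical attitude scores for the same six respondents before and
# after a workshop (scale and values invented for illustration only).
pre  = [2.1, 2.4, 1.9, 2.8, 2.2, 2.5]   # baseline (1 = least equitable)
post = [2.6, 2.9, 2.3, 3.1, 2.4, 3.0]   # same respondents afterwards

changes = [b - a for a, b in zip(pre, post)]
mean_change = sum(changes) / len(changes)
print(f"mean change: {mean_change:+.2f}")

# Caveat from the text: without a comparison group, this change cannot
# be attributed to the programme alone (e.g. pretest sensitization may
# inflate it), and it says nothing about whether change is sustained.
```

The caveat comments restate the limitations noted above; the design tells you that scores moved, not why, nor for how long.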

Document the ‘how’ and the ‘how not to’

Most programmes tend to document what changes were achieved, not the process by which they were achieved. The process of ‘how’ a programme is able to accomplish attitudinal and behavioural changes needs to be explored further, and the field could benefit greatly from ‘failure’ stories as well as ‘success’ stories, though few are willing to document the former.

 

Examples of initiatives working with men and boys that incorporated robust evaluations within their programme:

Soul City (South Africa). Soul City, a multi-media health promotion and social change project initiated in South Africa and currently implemented in various countries, addressed various aspects of violence against women in its fourth series. The evaluation of this series provides one of the most comprehensive evaluation designs in work with men and violence against women. See the Soul City IV Case Study and Evaluation Summary.
Stepping Stones (South Africa). Stepping Stones is a training package in gender, HIV, communication and relationship skills. The second edition of the South African adaptation of Stepping Stones underwent rigorous evaluation through a cluster randomized controlled trial that showed that Stepping Stones significantly improved a number of reported risk behaviours in men, with a lower proportion of men reporting perpetration of intimate partner violence across two years of follow-up and less transactional sex and problem drinking at 12 months. In women, there were self-reported increases in transactional sex at 12 months, but not at 24 months. For more information see the evaluation summary.
Program H (Brazil). Program H is a set of methodologies to motivate young men to critically reflect about rigid norms related to manhood and how they influence their lives in different spheres: health, personal relations, sexual and reproductive health, and fatherhood. Program H implemented a rigorous evaluation of its initiative in Brazil where they were able to demonstrate improved attitudes towards violence against women and other issues among young men exposed to weekly educational workshops and a social marketing campaign. For more information see the evaluation summary.
Yaari Dosti (India). Yaari Dosti is the adaptation of Program H (developed in Brazil) by the Horizons Program, CORO for Literacy, MAMTA, and Instituto Promundo. The team conducted operations research to examine the effectiveness of the interventions in improving young men’s attitudes toward gender roles and sexual relationships, and in reducing HIV risk behaviours and partner violence. In India, impact evaluation data documented a decrease in self-reported use of violence by men against women as a result of programme interventions. For more information see the evaluation summary.

Tools to assist in monitoring and evaluating programmes with men and boys:

Evaluating Work with Boys and Men (Instituto Promundo). This PowerPoint presentation by Gary Barker provides an overview of the ‘why’ and ‘how’ of evaluating gender-transformative initiatives with men and boys. Available in English.

International Men and Gender Equality Survey [IMAGES] (International Centre for Research on Women and Promundo) is one of the most comprehensive survey instruments developed to understand men’s behaviours and attitudes related to gender equality (including violence against women) – and changes in those attitudes and behaviours over time.  The survey is also implemented with women to compare attitudes and behaviours between the two. The men’s questionnaire is available in English and Portuguese.  The women’s questionnaire is available in English and Portuguese.

Measuring the Impact of Gender-Focused Interventions (Julie Pulerwitz). This PowerPoint presentation reviews the development of scales to measure gender-related dynamics and describes their application in evaluating the impact of three different initiatives: Stepping Stones, Program H and Sexto Sentido. Available in Spanish. See the ppt.

Gender-Equitable Men (GEM) Scale (Instituto Promundo, Population Council). The Gender-Equitable Men (GEM) scale is used to assess attitude change, recognizing it as an important step toward achieving (and subsequently measuring) behaviour change. The scale, which has been shown to be psychometrically valid, has been used as an evaluation tool in interventions with men in a myriad of diverse countries, such as Brazil, Ethiopia and India. The scale seeks to assess how much a given group of adult or young men adhere to or believe in a rigid, non-equitable and violent version of masculinity. How men respond to the scale is highly associated with their self-reported use of violence against women. In Brazil, for example, young men who scored in the least equitable third of the population were four times more likely to have reported using violence against a female partner than were men who scored more equitably (Pulerwitz et al., 2006).

See a brief summary of the GEM scale in English. See the questionnaire in English, Spanish and Portuguese and the questionnaire used in Ethiopia.
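The tercile comparison reported for Brazil can be sketched as follows. All scores and outcomes below are invented for illustration (and deliberately chosen to echo the fourfold difference); a real analysis would score the validated GEM items themselves.

```python
# Hypothetical (score, reported violence 0/1) pairs for 12 respondents,
# where a lower score indicates less equitable attitudes. Invented data.
respondents = [
    (1.2, 1), (1.5, 1), (1.8, 1), (2.0, 1),
    (2.3, 0), (2.5, 0), (2.8, 1), (3.0, 0),
    (3.2, 1), (3.5, 0), (3.7, 0), (3.9, 0),
]

ranked = sorted(respondents)          # least equitable scores first
third = len(ranked) // 3              # size of each tercile

def violence_rate(group):
    """Share of a group reporting use of violence."""
    return sum(v for _, v in group) / len(group)

least_equitable = ranked[:third]      # bottom tercile of scores
most_equitable = ranked[-third:]      # top tercile of scores
print(f"least equitable tercile: {violence_rate(least_equitable):.2f}")
print(f"most equitable tercile:  {violence_rate(most_equitable):.2f}")
```

Comparing reported violence rates across attitude terciles is what lets an evaluation link scale scores to behaviour, as in the Brazil finding cited above.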

Arizona Rape Prevention and Education Project (University of Arizona, USA). The Evaluation Measures Web Page offers references and information on measures used to study behaviours and attitudes related to rape, which are also used when evaluating rape prevention and education programmes. Available in English.

Sexual and Intimate Partner Violence Prevention Programmes Evaluation Guide (Centers for Disease Control, USA). This guide presents an overview of the importance of evaluation and provides evaluation approaches and strategies that can be applied to sexual violence and intimate partner violence programmes. Chapters provide practical guidelines for planning and conducting evaluations; information on linking programme goals, objectives, activities, outcomes, and evaluation strategies; sources and techniques for data gathering; and tips on analyzing and interpreting the data collected and sharing the results. The Guide discusses formative, process, outcome, and economic evaluation. Hard copies of these publications can be ordered.

Measuring Violence-Related Attitudes, Behaviours, and Influences Among Youths: A Compendium of Assessment Tools (2nd edition) by the CDC (US). This compendium provides researchers and prevention specialists with a set of tools to assess violence-related beliefs, behaviours, and influences, as well as to evaluate programmes to prevent youth violence. It may be particularly useful for those new to the field of youth violence prevention but, for more experienced researchers, it may serve as a resource to identify additional measures to assess the factors associated with violence among youth. Available for download.

Measuring Intimate Partner Violence Victimization and Perpetration: A Compendium of Assessment Tools (Centers for Disease Control, US). This compendium provides researchers and prevention specialists with a compilation of tools designed to measure victimization from and perpetration of intimate partner violence. It includes over 20 scales. Available for download.

Violence against Women and Girls: a Compendium of Monitoring and Evaluation Indicators (MEASURE Evaluation, USAID), by Shelah Bloom (2008), provides a variety of indicators used to monitor and evaluate violence against women programmes. Section 7.3, starting on page 228, provides various indicators that are used to monitor and evaluate programmes with boys and men. Available in English.

Measures for the assessment of dimensions of violence against women. A compendium (Michael Flood). Unpublished. Melbourne: Australian Research Centre in Sex, Health & Society, La Trobe University. This is a compendium of measures for the assessment of dimensions of violence against women. It also includes measures regarding gender and sexual norms and attitudes. However, it does not cover measures related to child abuse, child sexual abuse, or sexual harassment. Available for download in English.

Putting Women First: Ethical and Safety Recommendations for Research on Domestic Violence Against Women (WHO). These recommendations emerged from discussion of the approach to be taken for the WHO Multi-country Study on Women’s Health and Domestic Violence Against Women. They focus in particular on the ethical and safety considerations associated with conducting population-based surveys on domestic violence against women. However, many of the principles identified are also applicable to other forms of quantitative and qualitative research on this issue. Available in English, French and Spanish.

WHO Ethical and Safety Recommendations for Researching, Documenting and Monitoring Sexual Violence in Emergencies (2007). This document applies to all forms of inquiry about sexual violence in emergencies. In total, eight recommendations are offered (see Part III). Collectively, these recommendations are intended to ensure that the necessary safety and ethical safeguards are in place prior to commencement of any information gathering exercise concerning sexual violence in emergencies. In each case, accompanying text sets out key safety and ethical issues that need to be addressed and the questions that must be asked when planning any information collection exercise involving sexual violence. These should also inform decisions about whether such an exercise should be undertaken. Wherever possible, the discussion is supported by boxed examples of good practice drawn from experience in the field in both emergency and non-emergency settings. For further information on a range of topics, users are referred to the list of additional resources and suggested further reading included as an Annex to this document. Available in English and French.