Monitoring is a form of evaluation or assessment, though unlike outcome or impact evaluation, it takes place shortly after an intervention has begun (formative evaluation), throughout the course of an intervention (process evaluation), or midway through it (mid-term evaluation).
Monitoring is not an end in itself. It allows programmes to determine what is and is not working well, so that adjustments can be made along the way, and to assess what is actually happening against what was planned.
When monitoring activities are not carried out directly by the decision-makers of the programme, it is crucial that the findings are coordinated and fed back to them.
Information from monitoring activities can also be disseminated to different groups outside the organization, which helps promote transparency and provides an opportunity to obtain feedback from key stakeholders.
There are no standard monitoring tools and methods; these will vary according to the type of intervention and the objectives outlined in the programme.
Outcome evaluations measure programme results or outcomes, which may be both short- and long-term.
Impact evaluation measures the difference between what happened with the programme and what would have happened without it. It answers the question, “How much (if any) of the change observed in the target population occurred because of the programme or intervention?”
Rigorous research designs are needed for this level of evaluation. It is the most complex and intensive type of evaluation, incorporating methods such as random selection, control and comparison groups.
These methods serve to isolate the effect of the programme from other factors that might explain observed changes.
For example, an impact evaluation of an initiative aimed at preventing sexual assaults on women and girls in town x through infrastructural improvements (lighting, more visible walkways, etc.) might also look at data from a comparison community (town y) to assess whether reductions in the number of assaults seen at the end of the programme could be attributed to those improvements. The aim is to rule out other factors that might have influenced the reduction in assaults, such as training for police or new legislation.
While impact evaluations may be considered the “gold standard” of monitoring and evaluation, they are challenging to conduct and may not be feasible for many programmes. Nor are they always called for, or even appropriate for the needs of most programmes and interventions looking to monitor and evaluate their activities.
An evaluation of the impact of a campaign to raise awareness of the provisions of a recently enacted law on violence against women, for example, would need to incorporate:
baseline data on awareness of the law’s provisions prior to the campaign for the intervention group;
endline data on awareness of the law’s provisions after the campaign for the intervention group;
baseline data on awareness of the law’s provisions prior to the campaign for a closely matched control group not exposed to the campaign; and
endline data on awareness of the law’s provisions after the campaign for a closely matched control group not exposed to the campaign.
Endline data from the control group allows the programme to see whether external or additional factors might have influenced the level of awareness among those not exposed to the campaign. If the study design does not involve a randomly assigned control group, it is not possible to make a definitive statement regarding any differences in outcome between areas with the programme and areas without it.
However, if statistically rigorous baseline studies with randomly assigned control groups cannot be conducted, useful and valid baseline and endline information can still be collected.
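The logic of comparing baseline and endline measurements across intervention and control groups can be sketched as a simple difference-in-differences calculation. The figures below are invented purely for illustration:

```python
# Hypothetical awareness rates (% of respondents aware of the law's provisions).
# All figures are invented for illustration only.
intervention = {"baseline": 30.0, "endline": 65.0}
control = {"baseline": 28.0, "endline": 40.0}

# Change within each group over the campaign period.
change_intervention = intervention["endline"] - intervention["baseline"]
change_control = control["endline"] - control["baseline"]

# The control group's change estimates what would have happened anyway
# (e.g. through media coverage of the new law). Subtracting it from the
# intervention group's change isolates the campaign's estimated effect.
estimated_effect = change_intervention - change_control

print(f"Change in intervention group: {change_intervention:.1f} points")
print(f"Change in control group:      {change_control:.1f} points")
print(f"Estimated campaign effect:    {estimated_effect:.1f} points")
```

This arithmetic only supports causal claims when the control group is closely matched (ideally randomly assigned), which is why the design above calls for all four measurements.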
Evaluation requires technical expertise and training. If the programme does not have this capacity in-house, external evaluators should be hired to assist.
Guidance Note on Developing Terms of Reference (ToR) for Evaluations (UNIFEM, 2009). Available in English.

Once an evaluation is completed, a comprehensive report should be drafted to document the programme intervention’s results and findings.

Guidance: Quality Criteria for Evaluation Reports (UNIFEM, 2009). Available in English.

The evaluation report (or a summary of the report where appropriate) should be disseminated to staff, donors and other stakeholders.

Guidance Note on Developing an Evaluation Dissemination Strategy (UNIFEM, 2009). Available in English.
For additional monitoring and evaluation reports by sector, see the following sections:
M&E Fundamentals: A Self-Guided Minicourse (Frankel and Gage/MEASURE Evaluation, 2007). Available in English.
Monitoring and Evaluating Gender-based Violence Prevention and Mitigation Programs (USAID, MEASURE Evaluation and Inter-agency Gender Working Group). The PowerPoint and handouts are available in English.
Monitoring and Evaluating Gender-Based Violence: A Technical Seminar Recognizing the 2008 '16 Days of Activism' (Inter-agency Gender Working Group/USAID, 2008). Presentations available in English.
Sexual and Intimate Partner Violence Prevention Programmes Evaluation Guide (Centers for Disease Control and Prevention). The guide presents information for planning and conducting evaluations; information on linking programme goals, objectives, activities, outcomes, and evaluation strategies; sources and techniques for data gathering; and tips on analyzing and interpreting the data collected and sharing the results. It is available for purchase in English.
A Practical Guide to Evaluating Domestic Violence Coordinating Councils (Allen and Hagen/National Resource Center on Domestic Violence, 2003). Available in English.
Building Data Systems for Monitoring and Responding to Violence Against Women (Centers for Disease Control and Prevention, 2000). Available in English.
Sexual Violence Surveillance: Uniform Definitions and Recommended Data Elements (Centers for Disease Control and Prevention, 2002). Available in English.
Using Mystery Clients: A Guide to Using Mystery Clients for Evaluation Input (Pathfinder, 2006). Available in English.
A Place to Start: A Resource Kit for Preventing Sexual Violence (Sexual Violence Prevention Programme of the Minnesota Department of Health). Evaluation tools available: Community Assessment Planning Tool; Evaluation Planning Tool; Opinions About Sexual Assault; Client Satisfaction Survey; Participant Feedback Form; Teacher/Staff Evaluation of School Presentation; Program Dropout Form.
National Online Resource Center on Violence Against Women Evaluation page.
Gender Equality and Human Rights Responsive Evaluation (UN Women, 2010). Available in English. See also the UN Women online guide to gender equality and human rights responsive evaluation in English, French and Spanish.
Putting the IPPF Monitoring and Evaluation Policy into Practice: A Handbook on Collecting, Analyzing and Utilizing Data for Improved Performance (International Planned Parenthood, 2009). Available in English.