Practical tips for developing the M&E framework

Last edited: January 03, 2012



1. Design it in a participatory manner, e.g. in a workshop with the campaign team/alliance and experienced facilitators who can advise on methods. Intended users of the information generated by M&E should be involved in every step, from planning to implementation of the M&E framework, to make sure it serves the purpose of the campaign and is “owned” by all relevant stakeholders. Participatory data gathering involving target audiences can be a good way to enroll new activists.

Example: As part of its Phase I impact assessment, the We Can campaign in South Asia trained teams of volunteers – young women and men who were part of its target audiences – to facilitate and take notes in hundreds of interviews and focus group discussions with the campaign audience. The process deepened the volunteers’ understanding of social issues in their communities and strengthened their commitment to the campaign (Aldred & Williams, 2009, We Can: The Story So Far, New Delhi).

See the We Can evaluation.

2. Build the M&E framework around a theory of change or a logic model. In a log-frame approach, which can be suitable for campaigns for institutional change, envision the results chain (inputs, activities, outputs, outcomes, impact) and assess what information on each aspect is crucial for campaign management. Causal links, i.e. explanations of how and why campaign activities lead to the desired results, also need attention. For behaviour-change campaigns, less linear, multi-dimensional theories of change may be more effective at taking complex realities into account. See Theories of Change in Campaigning in the Campaign Planning section of this module. (See also Getting Started: A Self-administered Guide to Theory of Change Development and Advocacy Evaluation Planning, by Organizational Research Services on behalf of the Annie E. Casey Foundation, 2009.)

3. Be clear about what you must know and when, and focus on that. Limit data gathering to information essential to the intended users, so as to keep the amount of data manageable and the workload of those collecting it reasonable. In most campaigns, information is needed on:

  • Process, to verify whether key campaign activities take place as planned;
  • Outcomes, to verify the key results the campaign achieves – including unexpected and undesirable outcomes (e.g. negative reactions to images displayed on campaign posters), which you need to know about in order to react effectively;
  • External factors that have a strong influence on the campaign and its outcomes, especially factors identified as risks, but also potential new opportunities.

4. Choose benchmarks and indicators wisely and sparingly. Do not overload yourself with excessive data gathering. There is always a trade-off between effective use of the resources available and scientific rigor.

5. Determine timelines and responsibilities. What data needs to be collected and analyzed, by whom, when and how? Monitoring data must be collected and recorded regularly so as to yield meaningful information for the intended users.

6. Determine how to share findings within the campaign team and beyond – at what intervals and in what forms (written reports, staff meetings, workshops or other forms of communication).

7. Create an atmosphere of openness and trust around monitoring and evaluation activities. There should not be any “right” or “wrong” answers to questions asked in monitoring or evaluation; positive and negative critique should be valued equally. Reward honest feedback, even when it is at times discouraging.

8. Promote a culture of regular consultation and feedback, e.g. through systematic briefings and debriefings.

9. Follow ethical guidelines applicable to social research in general and to research on violence against women and girls in particular.