Overview and key considerations

Last edited: December 29, 2011

Monitoring and evaluation of security sector initiatives is critical for ensuring the effectiveness of programmes, although there is no “one-size-fits-all” or “gold standard” approach (Holland, 2010). Monitoring and evaluation should inform the way in which initiatives develop, and can result in more efficient preventative action by security personnel, improved quality of responses provided to survivors, and an increased likelihood that perpetrators will be apprehended, prosecuted and sentenced. In addition to the general guidance on Monitoring and Evaluation in the Programming Essentials section, key considerations when developing an effective monitoring and evaluation system for security initiatives include (OECD/DAC, 2011):

  • Use a participatory, people-centred approach to foster ownership and strengthen capacity. This should involve both institutions and actors implementing initiatives (e.g. donors, government, security personnel, oversight bodies, women’s organizations) as well as the individuals and communities that are the ultimate programme beneficiaries – particularly female survivors. Consultation may take place during the design phase (e.g. to establish programme objectives and measures of success), or as part of ongoing monitoring (e.g. through evaluating progress and making decisions on how to adapt programme activities and priorities). This approach has several benefits and can specifically:

    • Encourage security institutions to set up internal systems and processes to evaluate their activities, setting their own performance criteria and self-evaluating progress against objectives, which can improve commitment to meeting performance goals.

    • Foster ownership of the initiatives as well as strengthen national capacities for data collection, monitoring and evaluation. It can improve the effectiveness and contributions of oversight bodies, which may work with security institutions to jointly set objectives and performance indicators to be regularly monitored, and it better informs oversight bodies of the context in which they make recommendations to institutions.

    • Empower female survivors of violence, or the women’s organizations that support them, who may lack the information, capacity and authority to undertake effective monitoring and evaluation of security institutions. For example, they may have limited knowledge about security programmes and processes and lack access to decision-makers, particularly within security institutions (International Alert, 2008). Providing opportunities for women to articulate their security needs and experiences and to determine how the quality of services and the response of security personnel should be measured is a critical component of providing survivor-centred services.

    • Facilitate a transparent, impartial and credible monitoring and evaluation process, where results are made widely available.

    • Help to address potential mistrust that may exist between stakeholders (i.e. civil society, including women’s groups, and security institutions) by engaging groups in relationship-strengthening activities.

  • Involve evaluators with the right skill set. To maximize the relevance and value of assessment processes, initiatives should engage independent evaluators who combine knowledge of the security sector and of monitoring and evaluation with expertise in violence against women, especially in low-resource settings. A general lack of gender awareness on evaluation teams can reduce the attention given to the impact of programmes on survivors and on existing gender inequality more broadly (Popovic, 2008).

  • Allocate adequate financial resources to processes. Maintaining the quality and frequency of data collection as part of ongoing monitoring enables programme implementers to assess the short-term outputs and outcomes of initiatives, and is critical to allow for adjustments in programme activities and plans, which may be necessary to ensure progress is made on established targets and objectives. At a minimum, between 3 and 10 percent of the programme budget should be allocated to monitoring and evaluation activities, with allocations closer to 10 percent in more fragile contexts (UNIFEM, 2009; DFID, 2010).

  • Identify the scope of security actors most engaged by an initiative. Security institutions are often broad entities comprising many different sections and departments (e.g. field officers, administrative staff, management and oversight), which vary significantly in their purpose, function and orientation. It is important to identify the specific individuals and groups targeted by an initiative to facilitate the process of tracking, evaluating and, where possible, attributing changes. This is particularly important as each section within an institution is often interlinked with others (OECD/DAC, 2011). Where a multi-sector approach is used, it may be more effective to monitor and evaluate the work of security actors as part of a larger response mechanism, rather than trying to measure the impact of the sector in isolation. Monitoring may also focus on a single institution (e.g. police) or a specific system (e.g. criminal justice, covering police, justice and prisons), which is more realistic since most programmes do not engage the full range of sector actors.

  • Use a combination of data sources to address gaps in data. National systems and capacities for data collection, handling, storage and analysis within security institutions may be weak or non-existent. Obtaining accurate data on women’s and girls’ experiences of violence, which is widely under-reported, is particularly challenging for a variety of reasons. Inaccurate and insufficient information about the nature and extent of violence, especially sexual violence, impedes efforts to address it effectively (International Alert, 2007; Roth, Guberek and Green, 2011). Even where data is collected, it is often not properly systematized to allow changes to be tracked. While standardizing data and information systems, monitoring and evaluation plans should use a mix of qualitative and quantitative methods, including police records, crime victimization surveys, and more in-depth studies on women’s and girls’ experiences with security responses to incidents of violence.

For example, see The War at Home - GBV Indicators Project (Gender Links and the Medical Research Council, 2011), which combines police and other administrative records with data from samples of men and women self-reporting on perpetrating and experiencing violence, to present an overview of gender-based violence in the Gauteng province of South Africa and provide recommendations for improving the handling and management of cases by police and other service providers.


  • Explore the feasibility of joint evaluations. Partnerships between security institutions and civil society organizations, particularly women’s organizations, may be an effective approach to encourage collaboration on the issue and increase the credibility and validity of reported results and progress achieved by the sector (such as the “Observatorio” group for the Chilean National Action Plan on Security Council Resolution 1325). Where such cooperation is not possible, women’s organizations and other civil society groups can be encouraged to set up independent and/or shadow monitoring systems. See, for example, the Case Study reviewing Women’s Police Stations in Latin America.