
Define purpose of evaluation

Early in the planning stages of an evaluation it is important to clarify the reasons for undertaking it and to ensure that everyone participating in its implementation is in agreement. Lack of discussion about this can result in confusion when deciding which indicators to collect, what kind of data are relevant, what methods and expertise are needed, and what could be considered 'successful' adaptation. The two fundamental questions are 'have we done things right?' (that is, the things we said we would do in the adaptation plan) and 'were they the right things?' (how relevant were they? Will they enable us to be less vulnerable or to adapt better?). A third question might be 'how should we measure these things?'. Ideally, evaluations bring in a mixture of different types of information (scientific, political, legal and technical, as well as local knowledge).


As well as finding effective ways to understand the context of interest, presenting the scientific understanding of likely hazards, or the political situation of that context, e.g. through a science-policy dialogue, can enable a deeper understanding to emerge for everyone involved. Aspects essential for creating sustainable solutions may not be adequately captured through local indicators, and scientific indicators could be used to provide a fuller picture. Having a wider understanding of the whole system can help in identifying points of leverage for catalysing change, informing decision-making in the change process, informing the facilitation strategies of research teams and supporting evidence-based policy making.

An evaluation may have more than one purpose and it is important for everyone involved to understand what they are.

Different reasons for undertaking evaluations in adaptation projects are listed in UKCIP's AdaptME guidance (Pringle, 2011):

To evaluate effectiveness1
To determine whether an intervention has achieved the outputs and outcomes originally intended. For this it is essential that the objectives (outputs and outcomes) are clearly specified at the start. Understanding effectiveness is important in the context of adaptation, as we are still learning which interventions are most effective, in what circumstances and why. It is also important to consider whether what was intended was actually appropriate or needed, and what the potential is for the intervention to result in maladaptation, e.g. in terms of economic efficiency, social equity and technical appropriateness.

To assess efficiency
Evaluators may want to determine the efficiency of the intervention, including assessing the costs, benefits and risks involved and the timeliness of actions. This may mean applying economic evaluation techniques in which the costs and benefits are calculated in financial terms. Such an approach may be required in the context of adaptation, as additional investments will need to be assessed and justified.
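As a minimal sketch of what such an economic evaluation might involve, the snippet below discounts the costs and benefits of a hypothetical adaptation intervention to a net present value (NPV) and benefit-cost ratio (BCR). All figures, the 10-year horizon and the discount rate are invented for illustration; a real appraisal would follow whatever discounting rules apply in the relevant jurisdiction.

```python
def npv(cash_flows, rate):
    """Discount a list of yearly cash flows (year 0 first) at `rate`."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

# Hypothetical intervention: 100k upfront cost, 5k/yr maintenance,
# and 25k/yr of avoided flood damages over a 10-year horizon.
years = 10
costs = [100_000] + [5_000] * years
benefits = [0] + [25_000] * years

rate = 0.035  # assumed discount rate; choose per local appraisal guidance
pv_costs = npv(costs, rate)
pv_benefits = npv(benefits, rate)

print(f"NPV: {pv_benefits - pv_costs:,.0f}")   # positive => benefits exceed costs
print(f"BCR: {pv_benefits / pv_costs:.2f}")    # > 1 => benefits exceed costs
```

Note that in adaptation appraisal the 'benefit' stream is often avoided damage, which depends on uncertain climate outcomes, so such figures are usually tested under several scenarios rather than taken as a single estimate.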

To understand equity
The impacts of climate change will be experienced unevenly, both spatially and temporally, and the consequences of climate change will also vary as a result of the differing vulnerability of individuals and communities. Thus equity and justice are important factors to consider when evaluating the appropriateness and effectiveness of adaptation interventions. This may raise questions about the effects of the project on different social groups and their ability to engage in the intervention (procedural justice) and to benefit from it; whether the intervention has targeted the 'right' people; and whether certain groups are exposed to disproportionate risks, bear additional costs or suffer disbenefits as a result of the intervention.

To provide accountability
There may be a contractual or procedural requirement to undertake an evaluation to ensure that commitments, expectations, and standards are met. This is especially true where public money has been invested in adaptation and evidence is needed to illustrate the achievements and challenges of the project. Accountability may overlap with efficacy and efficiency considerations, for example to account for an investment in terms of its costs and benefits.

To assess outcomes
An evaluation may seek to provide an understanding of the outcomes of an intervention and the impacts it has had. This can be challenging, as there is a need to disentangle the outcomes that can be attributed to the intervention from those resulting from a range of other variables. In the context of climate change this can be made harder still, as there may be a long time period before outcomes can be assessed. In addition, the avoidance of negative consequences can be a successful adaptation outcome yet be hard to measure and assess, precisely because those consequences have been avoided. The assessment of outcomes tends to be associated with summative evaluation approaches and the use of impact indicators.

To improve learning
Learning should permeate all evaluations; we should always be looking to improve our understanding of adaptation interventions, what works and why2. However, the reality is that the creativity and time invested in learning can vary considerably between evaluations. This can be the result of a tension between learning ('what happened and why?') and accountability ('have we done what we said we would?') and the limitations placed upon monitoring and evaluation processes. Recognising these tensions and identifying who should be learning what and how, can help achieve learning objectives. Learning can occur in different spaces, within and between organisations, communities and sectors. Given the complex nature of adaptation, we should look to combine our own learning objectives with broader societal learning about adaptation. While some information may be commercially sensitive, much of the time sharing knowledge and experience of adaptation makes sound business sense, helping to make future adaptation interventions more efficient and cost effective.

To improve future interventions
The purpose of your evaluation may be to strengthen future activities and interventions, either at the end of a project (to inform future projects) or mid-way through an ongoing project. This would suggest a strong focus on learning in the design of your evaluation and, where appropriate, the use of a formative approach. Given that we are at an early stage in adapting to climate change, this should be a strong consideration in all evaluation processes.

To compare with other evaluations
You may wish to compare the experiences, results and other learning from different evaluations, to understand how the impact of an adaptation intervention varies across locations or communities and how it relates to differences in management, etc., or to compare the implementation and outputs of one adaptation option with those of another.

The choice of purpose or purposes will have an obvious influence on the type of indicators to be developed, the type of data to be collected, the level of detail required, etc., which in turn dictates the complexity of the evaluation process, with implications for the resources, staffing and time needed to collect the data. For example, Defra (2010), 'Measuring adaptation to climate change - a proposed approach', suggests that a snapshot of the adaptation status of the UK could be obtained by collecting and interpreting data for four components:

  1. Level of embedding: the degree to which climate risk management is embedded in mainstream risk management and decision-making processes across society (including the policies, programmes and systems of government).
  2. Adaptive capacity: the ability of a system to adjust to climate change (including climate variability and extremes) to moderate potential damages, to take advantage of opportunities, or to cope with the consequences. Note that among the skills, knowledge and understanding (e.g. of interdependencies) required for adaptive capacity is the capability to track and provide projections of climate and weather events: the existence and quality of monitoring/warning systems that indicate when a climate event and/or its effects are likely to take place, are taking place or have taken place, as well as providing timely warning when significant climate-sensitive thresholds are being approached.
  3. Effectiveness of actions: the relative effectiveness of past/current adaptive actions and options in terms of sustainably reducing the rate and magnitude of the impacts and enhancing adaptive capacity and resilience.
  4. Degree of flexibility preserved: the degree to which adaptive actions maintain or increase flexibility and future options for evolution in society's systems.
While it is important to design an evaluation process that is comprehensive and focused on the key areas of interest, there will always be a balance between the data that would be ideal to collect for a given purpose and what is pragmatically possible given the availability of data and resources.
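To make the four-component snapshot above concrete, the sketch below shows one way such indicator data might be structured and collapsed into component-level scores. The component names come from the Defra (2010) list; the individual indicators, their normalised scores and the unweighted averaging are purely illustrative assumptions, not part of the proposed approach.

```python
from statistics import mean

# Hypothetical snapshot: each Defra (2010) component holds a few
# illustrative indicators, each already normalised to a 0-1 score.
snapshot = {
    "level_of_embedding": {
        "climate risk in mainstream risk registers": 0.6,
        "adaptation referenced in sector policies": 0.4,
    },
    "adaptive_capacity": {
        "monitoring/early-warning system coverage": 0.7,
        "staff trained in climate risk assessment": 0.5,
    },
    "effectiveness_of_actions": {
        "reduction in properties at significant flood risk": 0.3,
    },
    "degree_of_flexibility_preserved": {
        "options kept open in long-lived infrastructure decisions": 0.5,
    },
}

# Collapse each component to an unweighted mean score in [0, 1].
# A real exercise would need agreed weights and data-quality checks.
component_scores = {
    component: mean(indicators.values())
    for component, indicators in snapshot.items()
}

for component, score in component_scores.items():
    print(f"{component}: {score:.2f}")
```

Even this toy structure illustrates the trade-off noted above: every additional indicator sharpens the picture but raises the cost of collecting and maintaining the data.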

In addition to clarifying the purpose of the evaluation it may also be useful to articulate principles underlying the work. WRI (Spearman and McGray, 2011) suggest three principles underpin effective M&E systems for adaptation interventions: design for learning; manage for results; and maintain flexibility in the face of uncertainty. The report emphasises the need to carefully articulate adaptation objectives when undertaking an evaluation, to clarify the basis for project design, and make transparent assumptions regarding, for example, the climatic, social and economic factors that may influence the project's ability to help vulnerable people thrive in a changing climate. Once this has been clarified and agreed it is then possible to go on to select indicators and build information systems that are able to track adaptation success.
1 It is also important to consider how outcomes are achieved, and the social return on investment.

2 Often learning is considered important, but little attention is given to exactly what should be learnt, by whom and when. To learn effectively from adaptation there is a need for structured opportunities to reflect on what is emerging, acknowledging that there is a need to learn both rapidly (in the short term) and slowly (over the longer term). For example, it might take 10-15 years to learn that a management practice implemented to reduce vulnerability to increasing water scarcity (e.g. planting trees) does not work well. We need quicker ways to check the assumptions we have made about what needs to change, and how it changes, to reach our adaptation goals: e.g. what are we assuming about farmers changing their practice in response to new guidance? Are they actually adopting new practices? If adoption rates are low, why? We also need to learn slowly, e.g. about the long-term effects of various stressors on mangroves over the next 20 years.


Read more in the Toolbox under the following category:

Tools for reflection and learning

This section is based on the UNEP PROVIA guidance document

Criteria checklist

1. You want to monitor and evaluate implemented adaptation actions.
2. The purpose of the evaluation is unclear.