Meta-Evaluation of Global Affairs Canada’s Decentralized Evaluations

March 2017

Table of Contents

Acknowledgments

The Global Affairs Canada Development Evaluations Division would like to thank all those that contributed to this evaluation, including Development Evaluation Working Group colleagues and others who provided support and advice throughout the evaluation process.

The evaluation was executed by an independent evaluation team from Itad Limited. The Development Evaluations Division provided oversight and management throughout the evaluation process.

David Heath
Head of Development Evaluation

List of Acronyms and Abbreviations

ADM
Assistant Deputy Minister
BASICS
Better Approaches to Service Provision through Increased Capacities in Sulawesi
CARTAC
Caribbean Technical Assistance Centre
CIDA
Canadian International Development Agency
DAC
Development Assistance Committee (of OECD)
DED
Development Evaluation Division
DFAIT
Department of Foreign Affairs and International Trade
DFAT
Department of Foreign Affairs and Trade (Australia)
DFATD
Department of Foreign Affairs, Trade and Development
DFID
Department for International Development (UK)
EDRMS
Enterprise Document and Records Management System
EGM
Europe, Middle East and Maghreb (Program Branch)
ESU
Evaluation Support Unit
GAC
Global Affairs Canada
GAVI
Global Alliance for Vaccines and Immunization
IADB
Inter-American Development Bank
IPDET
International Program for Development Evaluation Training
ISFP
International Statistical Fellowship Program
KFM
Partnerships for Development Innovation (Program Branch)
LoE
Level of Effort
M&E
Monitoring and Evaluation
MFM
Global Issues and Development (Program Branch)
MIS
Management Information Systems
MNCH
Maternal, Newborn and Child Health
NGM
Americas (Program Branch)
OECD
Organization for Economic Cooperation and Development
OECS
Organization of Eastern Caribbean States
OGM
Asia Pacific (Program Branch)
PCE
Development Evaluation Division (formerly DED)
PM
Programme Manager(s)
PO
Project Officers
PTL
Project Team Leader
QA
Quality Assurance
RBM
Results-Based Management
RCT
Randomized Controlled Trial
SNSS
Strengthening National Statistical Systems in Developing Countries
SPELS
Sustaining Peace and Enhancing Livelihoods in Southern Sudan
TB
Treasury Board, Government of Canada
ToR
Terms of Reference
UNICEF
United Nations Children’s Fund
USAID
United States Agency for International Development
WGM
Sub-Saharan Africa (Program Branch)

Executive Summary

Introduction

This report presents a review of decentralized evaluations over the period 2009–14. During this period, decentralized evaluations were managed by Global Affairs Canada’s Program Branches and were intended to support accountability, learning and decision making, as well as to lead to improvements in programming. Decentralized evaluations were commissioned and managed within Program Branches, alone or jointly with other donors, and focused on Branch-level projects or investments.

The purpose of this study is to provide Global Affairs Canada with an assessment of the quality of decentralized evaluations in order to develop recommended actions for Global Affairs Canada to improve the credibility, reliability, validity and use of evaluations, and to improve management information systems for storing and sharing evaluation knowledge.

Global Affairs Canada’s six Development Program Branches have full responsibility for commissioning and conducting decentralized evaluations. The role of PCE, and specifically its Evaluation Support Unit (ESU), has been to provide guidance, technical support and quality assurance of decentralized evaluations as an integral part of PCE’s oversight function. The Unit did not have any authority over the acceptance or approval of such deliverables as this is the role of the Program Branches.

The scope included examining the planning of decentralized evaluations; the quality of evaluation products (terms of reference (ToRs), work plans, reports and management responses); the effectiveness of support services; and the utility of decentralized evaluations.

Quality was determined by how well Global Affairs Canada’s decentralized evaluations met the OECD DAC quality standards for evaluation.Footnote 1 These are important as they represent an agreed international framework and were adopted by 32 bilateral donors and multilateral development agencies in 2010. They focus on the management and processes for conducting evaluations and were intended to strengthen the contribution of evaluation work to development outcomes, as well as to form the benchmark against which OECD DAC members are assessed in DAC Peer Reviews.

The study was undertaken by a team from Itad Ltd, a UK-based consulting firm, on behalf of PCE. The methodology involved six components: a quality review of 116 decentralized evaluations; an online survey of Global Affairs Canada Development Branch staff; a review of support services provided by PCE; in-depth case studies of eight decentralized evaluations; a review of the information management systems for evaluations at Global Affairs Canada and four peer agencies; and a review of the status of the recommendations from the previous 2008 meta-evaluation.

Findings

Quality of decentralized evaluations

Between 50 and 60 percent of the three major evaluation products (ToRs, work plans and evaluation reports) met OECD DAC quality standards, but few achieved a highly satisfactory standard.

There was some indication of improvement in quality over the review period, although this trend was not definitive. Neither the type of evaluation (formative or summative) nor the Branch conducting it substantially influenced quality, but projects with higher budgets were better evaluated. Evaluation team size and the size of the evaluation budget did not affect quality, although quality improved where the consultant team leader was also an evaluator.

The evaluation products performed unevenly against the different OECD DAC standards. They did well in most areas of evaluation planning and implementation, but fell short in documenting ethical standards, evaluability and methodology.

Although aspects of the management response reports prepared by the Branches were of reasonable quality, such as giving an immediate response to recommendations, the follow-up and dissemination aspects were of lower quality, as judged by the level of detail provided in these reports.

Good quality ToRs were associated with good quality work plans and good quality evaluation reports, suggesting that investing in a carefully written, high-standard ToR was worthwhile. There was no link between the quality of these products and the quality of the management response, as measured by the level of detail given in these responses.

The quality of evaluation reports was most closely related to four OECD DAC criteria: appropriate evaluation questions; robust methodology; logical and well-attributed findings; and clarity of analysis. Further analysis indicated that a sound methodology was the most important single factor that distinguished between good and poor quality ToRs, work plans and evaluations.

Quality of decentralized evaluation planning processes

Evaluations were generally planned well in advance, but broad evaluation purposes combined with too many evaluation questions made it difficult for consultants and their evaluation teams to produce high quality reports. Program managers in Branches came from a broad range of professional backgrounds that did not necessarily include evaluation, and evaluation budgets and levels of effort tended to be insufficient to match scope and quality expectations. At the same time, the ESU within PCE faced difficulties in ensuring quality because its mandate was advisory and its involvement was insufficient, or not well tailored, to improve the evaluation planning process and so contribute to better evaluation quality. In addition, ESU did not always take a situational approach to quality, leading to inappropriate application of quality standards, and its staffing was not well matched to the demand for its services. Providing advice in a timely manner was a key challenge.

Use of decentralized evaluations

The overriding use of decentralized evaluations was for operational purposes. Program Branch staff undertook such evaluations largely for accountability reasons to check on implementation progress and achievement of objectives. Mid-term evaluations were commissioned to improve project performance and final evaluations to assist with the design of a new phase of a project or to help decide whether to extend an existing operation.

Branch staff found decentralized evaluations more useful if they were timely; and use was also stronger if there was ownership, a clear purpose and expressed management interest. While poor evaluation quality could undermine evaluation use, good quality reports were not a guarantee of widespread dissemination and follow-through of management action plans.

Conclusions and recommendations were perceived to be the most useful parts of evaluations. Lessons were the least used, mainly because there was no comprehensive knowledge management system in Global Affairs Canada to facilitate broader learning beyond the individual intervention. Furthermore, very few decentralized evaluations were published on the Global Affairs Canada website, further limiting broader uptake and use and placing Global Affairs Canada behind its peer agencies in terms of transparency.

Review of Global Affairs Canada information systems

There was strong dissatisfaction among stakeholders with the current approach to storing and managing decentralized evaluations, and its ad hoc nature was found not to be fit for purpose. The lack of a centralized system for storing decentralized evaluations is curtailing their value to the wider organization. The challenge that Global Affairs Canada faces is not unique: other agencies either track evaluations by placing them on an external website or set up a dedicated evaluation database to provide a more comprehensive way to store, search and access evaluations. The latter requires investment not only in system hardware but also in incentives for the system to be used. For example, two UN agencies encourage uploading and use by applying a quality rating system, tracking evaluations planned and uploaded, and recognizing the best evaluations each year.

Implementation of 2008 meta-evaluation recommendations

The 2008 report recommended strengthening PCE and clarifying its function and responsibilities for decentralized evaluation work. Since then, with the merger of the Canadian International Development Agency (CIDA) into the Department of Foreign Affairs, Trade and Development (DFATD), no significant strengthening of staff in terms of filling established positions occurred over the review period, and there was considerable staff turnover. The 2005 Departmental Evaluation Policy has not been updated, and while the Treasury Board Policy on Evaluation in 2009 set out the requirements for the evaluation function across government, its application to decentralized evaluations has not been clarified. Very limited progress was made on the recommendations dealing with training, information management and communication.

Conclusions

The study concludes that the 2005 Departmental Evaluation Policy Framework, though a sound document at the time, did not reflect changes in international evaluation standards and practices, such as the introduction of the 2009 OECD DAC standards, nor the significant changes in the Government of Canada’s institutional environment for development assistance, such as the amalgamation of CIDA and DFAIT and new Treasury Board policies for evaluation. PCE’s role and status have not been strengthened to meet its role of helping to ensure that the effectiveness of development assistance is robustly measured.

Between 50 and 60 percent of decentralized evaluation products showed acceptable quality, although very few achieved a highly satisfactory standard. Over the period 2009–14, over 40 percent of projects examined by the review team (representing a program spend of $2.8 billion)Footnote 2 were not evaluated well, and hence their results and performance were not reliably assessed.

PCE’s role in supporting decentralized evaluation quality remained unclear. The ESU was not sufficiently resourced for the role it was expected to play, and while its support was viewed by users as valuable, ESU’s quality improvement work was not as effective as it could have been. ESU’s quality assurance function appeared to be optional, as Branch staff did not always request the Unit’s help and sometimes bypassed ESU processes. In addition, it was not clear who had the final say on the quality and approval of such evaluations.

The quality tools and templates offered by ESU were perceived by users as broadly useful, but they did not solve the issue of generally low evaluation capacity among Branch staff. The support strategy adopted by ESU was not able to resolve the tension between the need for better decentralized evaluation quality on the one hand and, on the other, the Branch-level need for more timely and better-used evaluations.

The system still in use by ESU to track, store and learn from decentralized evaluations was assessed as not effective: retrieving documents is difficult, and findings and lessons are not being aggregated.

During the period of the review, there was the perception that decentralized evaluations had less status and attention than corporate evaluations, even though they accounted for the bulk of evaluation spend. The body of knowledge being generated was substantial, and could have been more effectively used to inform Global Affairs Canada, the Government of Canada, the Canadian public and the international audience.

Recommendations

The decentralized evaluation system in Global Affairs Canada is intended to provide both operational guidance for individual projects and a source of wider learning for the organization. There is room for the quality of such evaluations to improve further, and for their planning, implementation, storage and use to be supported more strongly and effectively. Therefore the study recommends the following:

  1. Strengthening support for decentralized evaluations.
    1. Strengthen the role of the evaluation focal point in Branches by giving them clear accountability and oversight for the use of evaluations and for promoting knowledge sharing in their Branches.
    2. Clarify roles and responsibilities of all stakeholders (Branch staff, ESU, senior managers, the Development Evaluation Committee, etc.), ensuring that there is adequate support for the improvement of evaluation quality (i.e. that is integrated and comprehensive), and that this is reflected (and communicated) in an updated evaluation policy.
    3. Increase the training on evaluation skills for Branch staff and consider rotating staff with evaluation skills to region/country offices, particularly to Global Affairs Canada’s countries of focus.
    4. Provide sufficient support to PCE/ESU in terms of appropriate evaluation staff and budgetary resources.
  2. Strengthening the sharing and use of decentralized evaluations.
    1. Present a periodic statement, as a standing item at the Development Evaluation Committee, on the number, status and quality of decentralized evaluations planned, commissioned and completed, by Branch, theme, Departmental priority and budget.
    2. Develop a comprehensive knowledge management strategy including fora for sharing learning from decentralized evaluations, and the publication of decentralized evaluation reports in line with international practice.
    3. Build a stronger culture for evaluation knowledge sharing and use through better communication and use of different media (web, social, networks), as well as rewarding and recognizing high quality / good practice. Work towards achieving publication of all completed evaluations in the medium term.
  3. Strengthening planning and conduct of decentralized evaluations.
    1. Strengthen the role, capacities and resources for decentralized evaluation work at Branch level. Achieve this by developing and implementing a training strategy to provide basic evaluation skills and more practical guidance to Branches.
    2. Strengthen ESU’s role as a knowledge broker rather than focusing on QA.Footnote 3 To achieve this, consider the option of focusing the ESU on developing appropriate guidance and tools for decentralized evaluations, working more closely with Branches on selected evaluations, and developing better knowledge management systems. Build ESU capacity through staffing the unit with the right mix of evaluation skills and experience and organizational/contextual knowledge and expertise.
    3. Ensure that Branches plan decentralized evaluation on an annual basis with the guidance of ESU tools and staff.
  4. Strengthening information management of decentralized evaluations.
    1. In the short to medium term, strengthen accessibility to decentralized evaluations. For example, PCE could maintain a consolidated list on the intranet and develop functionality that updates staff on completed evaluations, management responses and the key findings. Explore the use of existing databases (e.g. CRAFT) to this end.
    2. In the long term, create a platform for staff and external stakeholders to access all decentralized evaluations.

Management Response

Overview

The Meta-Evaluation of Global Affairs Canada’s Decentralized Evaluations for the period of 2009-2014 was undertaken by Itad Ltd, a UK-based consulting firm, on behalf of PCE (the Development Evaluation Division). The purpose of the evaluation was to provide Global Affairs Canada with an assessment of the quality of decentralized evaluations, in order to develop recommended actions aiming to improve the credibility, reliability, validity and use of evaluations; and to improve management information systems for storing and sharing evaluation knowledge.

Based on the findings and conclusions and in order to support improved levels of planning, quality and use of decentralized evaluations, the report recommended strengthening four areas: 1) support for decentralized evaluations; 2) the sharing and use of decentralized evaluations; 3) planning and conduct of decentralized evaluations; and, 4) information management of decentralized evaluations. Of the twelve recommendations formulated in the report, all are accepted.

The Development Evaluation Division of Global Affairs Canada takes note of the performance gaps identified in the Meta-Evaluation. Many of these gaps underline the need for a renewed process and updated tools to support improvement in the quality and use of decentralized evaluations conducted by Global Affairs Canada’s Development Branches. PCE management welcomes these recommendations as an opportunity to review and improve the resources and role of PCE’s Evaluation Services Unit in providing guidance to Branch staff through the lifecycle of decentralized evaluation planning, to ensure that adequate support is provided to meet the demand for its services. PCE also recognizes the need to leverage Global Affairs Canada’s resources across multiple channels, including: learning and training; communications; knowledge generation and transfer; results and delivery; data and information management; and performance measurement.

The recommendations identified by this evaluation give branch management further impetus to update processes and leverage existing platforms in order to address persistent gaps currently affecting the planning, quality and use of a large proportion of decentralized evaluation products. As reflected in the commitments and action items proposed in the table below, PCE will lead some aspects of strengthening the processes related to decentralized evaluations and their dissemination, while the collaboration of the listed branches will also be essential to oversee and monitor the implementation of this management response.

Recommendations | Commitments | Actions | Responsibility Centre (Footnote 4) | Target Completion Date
Footnote 4: ‘Lead’ indicates the group responsible for implementing specific actions. ‘Consultation’ indicates groups that will be asked to provide input.
1. (i) Strengthen the role of the evaluation focal points within Branches by giving them clear responsibility and oversight for promoting knowledge sharing on evaluation in their Branches.

Accepted: a) PCE to lead updating of GAC Development Evaluation Policy (in consultation with ZIE and other relevant stakeholders) and obtain approval.

PCE will draft an updated GAC Development Evaluation Policy that defines the roles, responsibilities and areas of accountability of key stakeholders as well as those of the Head of Evaluation. This new policy will reflect the Head of Development Evaluation’s responsibility to approve decentralized evaluation Terms of Reference as well as final evaluation reports.

Development of the Policy will include consultations with key stakeholders across the department and alignment with the new TB Policy on Results; the Departmental Results Framework; Canadian open government commitments; as well as other RBM-related guidance.

PCE will present the updated policy at the Development Evaluation Committee (DEC) for their review and recommendation to the chair of the DEC for approval.

PCE will develop guidelines, directives and supporting tools as required to support the department in implementing the updated policy.

Lead: PCE

Consultation: PCC, DDR, ZIE, DEC

Presentation to DEC by January 2018

March 2018

Accepted: b) Branches (WGM, EGM, NGM, OGM, KFM, MFM) will identify Branch focal points and ensure they are sufficiently tasked/resourced to undertake their responsibilities. (It is expected that Branches will leverage existing expertise, such as Departmental Evaluation Working Group members and/or Performance Measurement Advisor Network members for this work – adjustments to workload or job requirements may be necessary based on the assessment of actual workload given additional tasks).

PCE will work with PCC to identify opportunities for collaboration by combining Branch RBM focal points and Evaluation focal point roles.

Branches will identify focal points who will be tasked with coordinating the planning of evaluations and will represent the Branch regarding evaluation-related tasks. At a minimum, responsibilities will be included in performance management agreements and identified in TeamInfo.

Branches will also be responsible for developing an annual plan of decentralized evaluations for the upcoming year. The plan will be submitted to PCE for approval and integrated into the departmental five-year evaluation plan.

Branches will assess the workload required to complete Branch focal points activities and resource accordingly.

Branches will ensure that a minimum of 3 full days of training in evaluation-related work per year for each Branch focal point is included in their Learning Plans. PCE will provide support on training and on developing a learning path for evaluation focal points.

Lead: WGM, EGM, NGM, OGM, KFM, MFM

Collaboration: PCE, PCC

April 2017

April 2017

April 2017

April 2017

Accepted: c) PCE will propose a platform (technological or otherwise) for liaison with Branch focal points.

PCE will meet regularly with Branch focal points, leveraging existing committees such as the Development Evaluation Working Group (DEWG) or the Performance Management Advisor Network (PMA Network).

PCE will identify existing technological or other platforms that will facilitate the sharing of evaluation related work with Branch focal points.

Lead: PCE

Collaboration: PCC’s PMA Network

Initial meeting: May 2017

July 2017

(ii) Clarify roles and responsibilities of all stakeholders (Branch staff, ESU, senior managers, the Development Evaluation Committee, etc.), ensuring that there is adequate support for the improvement of evaluation quality (i.e. that is integrated and comprehensive), and that this is reflected (and communicated) in an updated evaluation policy.

Accepted: Recommendation 1.ii will be addressed by the commitments and actions outlined in 1.i.
(iii) Increase the training on evaluation skills for Branch staff and consider rotating staff with evaluation skills to region/country offices, particularly to Global Affairs Canada’s countries of focus.

Accepted: a) PCE will implement the Horizontal Learning Strategy for Development Evaluation.

Develop initial training sessions and materials as per the Horizontal Learning Strategy (HLS).

PCE will collaborate with CFSI and PCC to develop or link evaluation related training into existing training (for example, RBM 3-day course and APP training session).

Explore opportunities for workshop training specific to project officers starting an evaluation.

Develop new tools and templates and post on Modus (minimum 2/year).

Identify initial learning opportunities to the Branch focal points as a standing item on regular meetings.

Lead: PCE

Collaboration: CFSI, PCC

April 2017

June 2017

April 2017

October 2017

April 2017

(iv) Ensure PCE/ESU has adequate budgetary resources and is staffed with evaluation experts.

Accepted: a) PCE will work towards ensuring that the Evaluation Services Unit is fully staffed.

Existing positions within PCE will be staffed.

Within the existing envelope, resources will be re-assigned to ESU, if necessary.

Workload will be tracked and assessed over a period of 6 months in order to more accurately estimate work requirements.

Lead: PCE

February 2017

February 2017

August 2017

2. (i) Present a periodic statement as a standing item on the number and status of decentralized evaluations planned, commissioned and completed by Branch, theme, and budget at the Development Evaluation Committee.

Accepted: a) Ensure Development Evaluation Committee (DEC) is aware of and able to comment on the status of decentralized evaluations.

An initial report on the status of decentralized evaluations following implementation of the new evaluation policy will be presented to DEC. It is expected that DEC will make recommendations based on this status report. The status report will be included as a standing item at DEC meetings.

Assess the availability of data for inclusion in status report updates. Note, this will be produced by ESU based on the data available at the time the report is required. Information on Branch plans will be included in the status report as they become available on an annual basis.

Lead: PCE

Collaboration: PCR, SID, PCC, SGCP

Initial report: January 2017

April 2017

(ii) Develop a comprehensive knowledge management strategy including fora for sharing learning from decentralized evaluations, and the publication of decentralized evaluation reports in line with international practice.

Accepted: a) Ensure decentralized evaluation reports are available and accessible to internal and external stakeholders.

Ensure publication of evaluation summaries on GAC’s website, as well as full reports for selected evaluations, as per commitments in GAC’s IATI Implementation Schedule.

Link evaluation-related guidance and tools to existing processes and guidelines (RBM, APP) as well as the work led by SII on a Knowledge Management Roadmap.

Enhance existing tools/mechanisms to track and store meta-data on decentralized evaluations. This may include leveraging MRT, GCDOCS or other databases that may evolve from Policy on Results implementation requirements.

Lead: PCE

Collaboration: LOD, PCR, PCC, SGGP, SII, PVD

December 2017

August 2017

August 2018

(iii) Build a stronger culture for evaluation knowledge sharing and use through better communication and use of different media (web, social, networks), as well as rewarding and recognizing high quality / good practice. Work towards achieving publication of all completed evaluations in the medium term.

Accepted: a) PCE to collaborate with PVA and other relevant stakeholders towards developing a joint strategy to facilitate the development, sharing and use of evaluation knowledge.

Accepted: b) PCE will also collaborate with LOD to ensure strategy conforms with corporate standards.

Explore opportunities to leverage existing decentralized evaluations by piloting unstructured data analysis in collaboration with PCR.

Use existing and new knowledge sharing mechanisms and products (e.g. ‘Au Courant’, Broadcast Messages, participation in knowledge management committees, etc.) to disseminate new knowledge from decentralized evaluations more effectively and in a format adapted to client needs and constraints, in collaboration with PVA/LOD.

In the longer-term, develop and ensure implementation of guidance for the dissemination of evaluations and results to partners and stakeholders.

Lead: PCE

Collaboration: PVA, LOD, PCR

March 2017

Initial strategy: July 2017

Accepted: c) Recognize and encourage quality evaluations

PCE to propose recognition or incentive options for encouraging quality improvement in decentralized evaluations

Lead: PCE

February 2018
3. (i) Strengthen the role, capacities and resources for decentralized evaluation work at Branch level. Achieve this by developing and implementing a training strategy to provide basic evaluation skills and more practical guidance to Branches.

Accepted: Recommendation 3.i will be addressed by the commitments and actions outlined in 1.iii.
(ii) Strengthen ESU’s role as a development evaluation knowledge broker rather than focusing on quality assurance. To achieve this, consider the option of focusing the ESU on developing appropriate guidance and tools for decentralized evaluations, working more closely with Branches on selected evaluations, and developing better knowledge management systems. Build ESU capacity through staffing the unit with the right mix of evaluation skills and experience and organizational/contextual knowledge and expertise.

Accepted: Multiple commitments above will address this requirement.
(iii) Ensure that Branches plan decentralized evaluations on an annual basis with the guidance of ESU tools and staff.

Accepted: a) Provide guidance to respective branches (WGM, EGM, NGM, OGM, KFM, and MFM) as they develop 5-year rolling evaluation plans (to be updated annually).

Branches and Branch focal points will leverage the existing risk-based planning tool developed by PCE for planning evaluations.

Branches will be responsible for developing an annual plan of decentralized evaluations for the upcoming year that outlines information about the timing, scope and budget of each evaluation. The plan will be submitted to PCE for approval and integrated into the departmental five-year evaluation plan.

Commitments outlined in Branch Evaluation Plans will be included as part of Branch ADMs’ Performance Management Agreements.

Lead: Branch focal points (and other Branch representatives as required)
ADMs of WGM, EGM, NGM, OGM, KFM, and MFM

Collaboration: PCE

Initial plan: November 2017

4. (i) In the short to medium term, strengthen accessibility to decentralized evaluations. For example, PCE could maintain a consolidated list on the intranet and develop functionality that updates staff on completed evaluations, management responses and the key findings. Explore the use of existing databases to this end.

(ii) In the long term, create a platform for staff and external stakeholders to access all decentralized evaluations.

Accepted: Based on corporate decisions and movement on databases and structures for capturing knowledge and data, PCE to collaborate with stakeholders to plan a strategy to store decentralized evaluation products and meta-data, and ensure that these products and the information contained within them are searchable.

Maintain an Excel spreadsheet with current decentralized evaluation products, including metadata, in preparation for its eventual inclusion on a technological platform.

Will build on actions and collaboration detailed in recommendations 2 (ii) and 2 (iii) to address this requirement.

Lead: PCE

Initial version: January 2017

Initial plan: October 2018

1 Introduction

This report forms the final deliverable of a meta-evaluationFootnote 5 conducted for Global Affairs Canada.Footnote 6 The report focuses on the quality and use of decentralized evaluationsFootnote 7. Quality was determined by how well Global Affairs Canada’s decentralized evaluations met the OECD DAC quality standards for evaluation.Footnote 8 These standards are important as they represent an agreed international framework and were adopted by 32 bilateral donors and multilateral development agencies in 2010. They focus on the management and processes for conducting evaluations and were intended to strengthen the contribution of evaluation work to development outcomes, as well as to form the benchmark against which OECD DAC members are assessed in DAC Peer Reviews.

The study was carried out over the period October 2015 to September 2016, with an inception phase from October to December 2015 and data collection from January to June 2016. It was undertaken by a five-person team from Itad Ltd, a UK-based consulting firm.

This final report includes eight sections: introduction (section 1), objectives and scope (section 2), evaluability assessment (section 3), methodology (section 4), findings (section 5), conclusions, lessons and recommendations (sections 6, 7 and 8).

1.1 Decentralized Evaluations at Global Affairs Canada

Decentralized evaluations are evaluations that are commissioned and managed by the Program Branches,Footnote 9 and are intended to support accountability, learning and decision making, and to lead to improvements in programming. They are conducted by private sector evaluators or evaluation firms that are commissioned through the use of Government of Canada procurement vehicles. The decision to conduct a decentralized evaluation, its scope, design and the specific issues addressed are based on the needs and requirements of managers within Program Branches and those of project/program partners. The Program Branches have full responsibility for commissioning, conducting and ensuring the use of decentralized evaluations.

Decentralized evaluations differ from corporate evaluations, which address broader thematic issues and are commissioned and managed directly by the Development Evaluation Division (formerly known as DED, now PCE) of Global Affairs Canada. Decentralized evaluations are nevertheless building blocks for the corporate evaluations conducted within the Policy Branch, as they are a key source of data, particularly in the assessment of program effectiveness.

PCE has a role to support the quality of decentralized evaluation. More specifically, the Evaluation Support Unit (ESU) within PCE, over the period of the review, provided guidance, technical support and quality assurance of decentralized evaluations as an integral part of PCE’s oversight function. The Unit comprises a small team who review evaluation deliverables, such as terms of reference (ToR), work plans and evaluation reports. It is a demand-led activity, depending on requests submitted by the Program Branches. The Unit has provided report templates and quality standards, based on OECD DAC principles, for use by those commissioning and authoring decentralized evaluations. With the exception of ToRs that require ESU approval before going to tender, the Unit does not have any authority over the acceptance or approval of such deliverables as this is the role of the Program Branches. The ESU is also not involved in assessing consultant bids on evaluations.

2 Objectives and Scope

2.1 Purpose

The purpose of this meta-evaluation of decentralized evaluations is to provide Global Affairs Canada with:

  1. An assessment of the quality of decentralized evaluations;
  2. A set of recommended actions for PCE to improve the quality of Global Affairs Canada tools, guidance and planning processes to increase the credibility, reliability, validity and use of evaluations; and
  3. Recommendations to improve management information systems (MIS) supporting the storing and sharing of evaluation knowledge.

The users of this meta-evaluation are PCE and the Program Branches.

The underlying rationale for the study is based on one of PCE’s stated roles. According to the Development Evaluation Policy (CIDA, 2005), the Development Evaluation Division “is responsible to report to the Evaluation Committee on the quality of decentralized evaluations by performing periodic quality assessment and meta-evaluations.”Footnote 10

A previous meta-evaluation exercise was conducted in 2008.Footnote 11 A review of its recommendations forms a further rationale for this current study.

Since then, new quality standards (based on the OECD DAC Quality Standards for Development Evaluation)Footnote 12 were formally introduced into the 2009 Corporate Standing Offers Arrangements for evaluation services, and these standards now provide a fresh framework for assessing evaluation quality.

This study provides a valuable opportunity to assess how far Global Affairs Canada’s decentralized evaluations have improved over the past seven years, how well they match the OECD DAC’s guidance, what barriers there are to quality, and how PCE can strengthen its support to Program Branches in this area.

Finally, with the formation of Global Affairs Canada (then DFATD) by amalgamating CIDA and DFAIT, the period under study has seen significant organizational and personnel changes compared to the pre-amalgamation era. Merging the functions of foreign affairs and trade with that of development assistance, and moving evaluation into a policy department, has created potential for synergies but has also necessitated the reframing of development work within this new structure. Incentives have changed, with greater emphasis on the scrutiny and value added of aid following the Federal Accountability Act. The change in political leadership in Canada in 2015 has also initiated a new policy direction for aid as well as for results measurement and accountability, and the implications for evaluation have yet to emerge fully.

2.2 Specific Objectives

The ToR require the meta-evaluation to address eight specific objectives:

  1. Assess the quality of the decentralized evaluations in relation to OECD DAC Quality Standards for Development Evaluation;
  2. Determine the quality aspects or factors in which Global Affairs Canada evaluation documents excel and where they fall short;
  3. Examine strengths and weaknesses of decentralized evaluation planning processes;
  4. Determine whether decentralized evaluations benefited from the support, expertise and tools developed by PCE;
  5. Examine whether the findings, conclusions, recommendations and lessons of decentralized evaluations have been used, for what purpose and where they were not used, why not;
  6. Determine the extent to which the then CIDA 2008 meta-evaluation recommendations were implemented;
  7. Review the existing MIS for storing and sharing decentralized evaluations and explore opportunities to improve or upgrade it;
  8. Recommend measures needed to enhance the quality of evaluations.

2.3 Evaluation Object and Scope

The ToRs cover a comprehensive set of four objects for this meta-evaluation. Together they provide a basis for assessing the entire lifecycle of decentralized evaluations from their planning to their eventual use. These are set out in Table 1.

Table 1: Evaluation object and scope
Evaluation object | Scope
Footnote 13: Report on the Inventory of Decentralized Evaluations – Summary, GAC, October 14, 2015.
Footnote 14: Cousins, B. and Leithwood, K., Current empirical research on evaluation utilization, Review of Educational Research, Vol. 56, 1986.
1. The planning process

Cover all stages in the formative phase of conducting a decentralized evaluation, from policies and procedures in Global Affairs Canada determining what, when and how many evaluations are required, to the administrative steps involved in commissioning, budgeting and launching an evaluation.

2. The quality of evaluation documents at ‘entry’ and ‘exit’

All decentralized evaluations (carried out through six Global Affairs Canada Branches) falling between April 1, 2009 and March 31, 2014. PCE has an inventory of 1,124 documents for 467 projects.Footnote 13 Only projects costing more than $2 million are included. The total disbursement value of these was $6.93 billion. They cover four types of deliverable (ToRs, work plan, evaluation report and management response). There are three main categories of evaluation: ‘formative’, ‘summative’ and ‘other’. There are also different management routes: some evaluations were managed by Global Affairs Canada through consultant standing offers (now expired), some by Program Branches in the field, and others jointly with other development partners using those partners’ procurement.
3. PCE services and tools

PCE’s capacities, systems and tools for supporting decentralized evaluations, with a focus on the ESU. These include:

  • a mapping of the processes and procedures that are in place for PCE/ESU to support decentralized evaluations
  • the scope, quality and use of the PCE/ESU guidance and tools that are available
  • the scope, quality and use of the training and support services that are available
  • the capacity of PCE/ESU staff to deliver support services to decentralized evaluations
4. Utility of evaluations and processes

The level of use of a sample of evaluation reports by various stakeholders as well as how fit for purpose the Global Affairs Canada MIS is for storing and accessing all evaluation deliverables in a way that users find helpful. Use is understood in three ways:

  1. The evaluation being used to inform specific management decisions
  2. The evaluation being used for learning
  3. The evaluation being used as an opportunity for reflection through involvement in the process.Footnote 14

Given the differing nature of these four objects, the study team designed separate methodologies and tools for each. These are described in Chapter 4. Evaluation utility is not only a result of the quality of the evaluation product, but depends on the institutional setting and incentives. The scope of this meta-evaluation therefore also includes an assessment of these factors.

Stakeholders: As part of the scope, the study recognizes the need to include various sets of stakeholders. These are well defined in the ToR, and include three main groups: the commissioners of decentralized evaluations; the executors of decentralized evaluations; and the users of this meta-evaluation. The work plan sets out how the study team would reach these different actors.Footnote 15

3 Evaluability Assessment

The work plan recognized and sought to mitigate potential constraints to the evaluation process and design. The study approach, discussed in detail in the work plan, sought to address limitations arising from the methodology, the practical and logistical constraints the team might face, the availability of data and people, and any conflicts of interest arising from the team’s past experience. The experience of the previous meta-evaluation conducted in 2008 also informed the work plan. For example, the 2008 study faced serious challenges in accurately identifying and retrieving evaluation documents. To overcome this, PCE spent over a year preparing an inventory of evaluation deliverables for this current study. The study team then chose a purposive sampling method to obtain as representative a coverage of the inventory as possible, and to replace cases where there was incomplete documentation. The data gathering strategy was designed to allow the team to engage with key stakeholders in different ways, and the methodology used multiple sources of evidence to mitigate potential gaps in evidence and establish more robust findings.

To avoid compromising the independence of the evaluation, Itad also assessed the team members’ past relationship with CIDA/Global Affairs Canada, and removed any areas of conflict of interest, such as where team members had previous involvement in evaluations. Data from interviews and documents has been held in a confidential and secure manner.

4 Methodology

4.1 Evaluation Questions

The ToR set out six main questions to be answered. These address: (i) the quality of decentralized evaluation documents against OECD DAC and Global Affairs Canada standards; (ii) where these documents exceed or fall short on these standards; (iii) the strengths and weaknesses of decentralized evaluation planning processes; (iv) the utility of the decentralized evaluation reports; (v) whether the 2008 meta-evaluation recommendations were implemented; and (vi) what opportunities exist to improve existing Global Affairs Canada information management systems.

The six questions are broken into 20 sub-questions and these in turn form the structure of Chapter 5 on findings. A detailed evaluation design matrix was prepared to show how the study would answer each of these sub-questions.

4.2 Methods

The scope of this assignment allowed the Itad team to conduct a comprehensive study of evaluation quality for Global Affairs Canada. While past meta-evaluations have concentrated on only selected aspects of determinants of quality, in this case the study has examined the quality of evaluation products, as well as the processes, institutional context, capacities and supporting systems that all play a role in determining quality. To achieve this, the meta-evaluation involved six components: (i) a desk-based quality review; (ii) an online staff survey; (iii) a review of PCE/ESU processes; (iv) a set of in-depth case studies; (v) a review of Global Affairs Canada’s information systems for evaluations; and (vi) a review of the 2008 meta-evaluation. The relationship between these components and Global Affairs Canada’s decentralized evaluation processes is shown in Figure 1. The study work plan, approved by PCE, describes the methodology in full.Footnote 16

Figure 1: Evaluation processes and study components

 

4.2.1 Desk-based quality review

The quality review covers ToRs, work plans, draft and final evaluation reports and management responses. The templates are based on PCE tools and OECD DAC standards. A four-point quality rating score was applied to each of 72 individual quality criteria and to 16 overall quality ratings. While a sample of 125 evaluations was initially selected,Footnote 17 the actual sample was 116.Footnote 18 Quantitative and qualitative analysis was performed on the review data with a view to exploring how representative the sample was, the characteristics of the evaluation documents, patterns of variation in quality against a range of independent variables, and associations in quality across the four deliverables.

4.2.2 Online survey

A web-based survey of Global Affairs Canada Branch staff gathered perceptions on the planning and use of decentralized evaluation and the utility of ESU support. This helped to identify issues for follow-up during the in-depth case studies and in interviews with senior management. Some 114 staff responded and usable results were obtained from 90 staff.

4.2.3 In-depth case studies

The case studies explored the barriers and enablers of good evaluation planning and identified how and why decentralized evaluations are used or not used. A purposive sample of 10 evaluations was chosen to reflect different levels of quality and use across the six Branches. Only eight could be completed, owing to difficulties in contacting relevant personnel; interviews were held with 25 people.

4.2.4 Review of PCE processes for supporting decentralized evaluations

The focus was around the work of the ESU, which provides both quality assurance (QA) throughout the evaluation process and technical advice on evaluation concepts and methods to Global Affairs Canada Branch staff. The review examined ESU staff capacities and the tools and processes used in ESU’s work. Evidence was drawn from several sources: the quality review, interviews with 23 staff and consultants, the in-depth case studies, and additional interviews with senior management and experienced Global Affairs Canada consultants.

4.2.5 Review of the Global Affairs Canada’s information management for evaluations

This activity consisted of examining existing Global Affairs Canada systems for storing and sharing evaluation knowledge and a comparative review of four other agencies to identify good practices.Footnote 19

4.2.6 Review of 2008 meta-evaluation

The recommendations from the previous meta-evaluation in 2008 were reviewed in the light of evidence emerging from the other study components. This allowed an assessment of how far things had changed over the review period, and whether recommendations from the 2008 study had been followed up.

4.3 Limitations

There were a number of limitations arising from the evaluability assessment and the meta-evaluation design of the study.

This study was required to define quality in terms of adherence to an international standard: i.e. those proposed by the OECD DAC. While these are comprehensive and internationally recognized, there may be other ways to assess quality that depend on the stakeholders’ perspective. Indeed the guidance document notes that the DAC standards should not exclude the use of other standards, and should not supplant specific guidance for particular types of evaluations.

In any assessment of factors affecting evaluation quality and use, some areas will remain hard to assess when conducting the work remotely or from secondary material. These include the sometimes unpredictable relationships between evaluation managers, the evaluators and those being evaluated, and situations where conflicts or biases occur between actors that may not be reflected in documents. While this was difficult to address in the quality review, the study team was able to uncover information around such sensitive issues in the case studies, and indeed these were very useful in building a richer appreciation of the challenges faced in the execution of decentralized evaluations.

The lack of a system in Global Affairs Canada to monitor the implementation of management responses, as well as the small number of such reports available and their format, resulted in a very weak proxy for use. Additionally, while assessing the management response tool was a key part of the study’s assessment of evaluation use, it did not capture all the formal and informal outcomes of evaluation and, on its own, was not a complete measure of utility (see further discussion in section 5.4). The study was able to mitigate this to some extent through the key interviews.

The study was not able to explore the OECD DAC ‘overarching’ standards around transparency, partnership, coordination or capacity development in detail in the review, since these criteria were not captured in the quality review.

Practical issues arose concerning access to documents and informants for the quality review and the case studies. Locating all the relevant reports proved challenging, and where documents were not found, replacements were made with the help of PCE. The smaller sample obtained for ToRs, work plans and management responses restricted coverage. Global Affairs Canada’s staff had in some cases been re-posted or retired, and it was therefore not always possible to find the most relevant individuals for interview. Many evaluations happened five years ago and interviewees found them hard to remember. Nevertheless, for the majority of case studies and management interviews, the appropriate staff were contacted and were able to recall relevant details.

The sample of respondents who completed the online survey was acceptable at 114, representing around 20 percent of the potential universe of over 500 staff invited to take part.Footnote 20 The profile of the respondents reflected fairly well the branches and the staffing categories. The quality review sample of 116 was sufficient for basic comparative analysis, but the use of more advanced correlation tools was limited by the sample size, and the study notes that the results from this work should be treated with caution.

5 Findings

This chapter presents the main findings from the meta-evaluation against each of the evaluation questions and sub-questions set out in section 4.1.

5.1 Quality of Decentralized Evaluations

5.1.1 Representativeness of the quality review sample

The sample of 116 evaluation reports reviewed for quality was distributed across the Branches in proportion to the number of evaluations in the inventory. The most represented Branches were Asia, Africa and Americas, with fewer examples from Global Issues and Development (MFM), Partnerships for Development Innovation (KFM) and Europe, Middle East and Maghreb (EGM). The majority of evaluations were either formative (46 percent, including mid-term evaluations) or summative (41 percent, including final project evaluations), and this maps well against the proportions in the inventory. Average project budgets were slightly larger in the sample reviewed, with a mean of $20 million, compared to a mean of $15 million in the inventory.Footnote 21 All Global Affairs Canada priority themes and OECD DAC sectors were covered by the sample. The most represented areas were related to democratic governance and economic development. The average evaluation budget, where the data were given, was $103,000. The distributions for sector, theme and evaluation budget cannot be compared to the overall inventory since these data were not captured there. However, from discussions with Global Affairs Canada, the coverage by theme, sector and evaluation cost would appear representative.Footnote 22

5.1.2 What was the quality of decentralized evaluation documents when compared against OECD DAC standards?

To answer this question, the study examined the single overall quality rating given to each evaluation product (Figure 2). The review found that 59 percent of 116 evaluation reports met OECD DAC/PCE standards (based on an overall quality rating of 3 or 4), while 52 percent of 81 ToRs and 52 percent of 58 work plans met such standards. Of 28 management responses, 79 percent also had a satisfactory quality rating, but the quality criteria used were substantially less challenging (relating to such questions as the degree of commitment to follow up on recommendations), while the sample of such reports was much smaller and therefore possibly less representative.

Although the majority of products were therefore rated as satisfactory for quality, only a small percentage were rated as highly satisfactory, so most products in this group still had deficiencies in some areas that might limit their reliability or usefulness. The strengths and weaknesses of each product are explored in the next sections.

Figure 2: Distribution of quality ratings across evaluation products

Text version
Distribution of quality ratings across evaluation products
Evaluation product | Highly unsatisfactory | Unsatisfactory | Satisfactory | Highly satisfactory
ToR | 13.6% | 34.6% | 45.7% | 6.2%
Work plan | 13.8% | 34.5% | 36.2% | 15.5%
Evaluation report | 7.2% | 33.3% | 50.5% | 9.0%
Management response | 14.3% | 7.1% | 42.9% | 35.7%

Factors explaining overall evaluation quality

The proportion of evaluation reports achieving an overall satisfactory quality standard varied year-on-year, but appeared to reflect a peak in quality in 2012 (Figure 3), although the data did not allow the analysis to test this definitively. In 2013 and 2014, for example, 71 percent (of 21 reports) and 100 percent (of seven reports) achieved satisfactory ratings. Overall, a simple trend line superimposed on the graph indicates a modest improvement in quality over the period.Footnote 23

Figure 3: Mean rating of evaluation reports’ quality over the period 2006–14Footnote 24

Ratings vary from 1 to 4 with a mean quality level of 2.5

Text version
Mean rating of evaluation reports’ quality over the period 2006–14
Year | 2009 | 2010 | 2011 | 2012 | 2013 | 2014
Mean rating | 2.4 | 3.0 | 3.7 | 3.9 | 3.2 | 3.1

There was no major difference between the quality of evaluation reports by type or by Branch (Figure 4 and Figure 5), although there were slightly higher quality ratings for those commissioned by the partnerships and the multilateral Branches than by the three regional Branches.

Figure 4: Distribution of evaluation report ratings by evaluation typesFootnote 25

Text version
Distribution of evaluation report ratings by evaluation types
Evaluation type | Highly unsatisfactory | Unsatisfactory | Satisfactory | Highly satisfactory | Average rating
Formative | 4 | 18 | 25 | 5 | 2.6
Summative | 4 | 14 | 23 | 4 | 2.6
Other | 0 | 5 | 9 | 1 | 2.7

Figure 5: Distribution of evaluation report quality by Branch

Text version
Distribution of evaluation report quality by Branch
Branch | Highly unsatisfactory | Unsatisfactory | Satisfactory | Highly satisfactory | Average rating
Africa | 2 | 10 | 15 | 0 | 2.4
Americas | 2 | 10 | 9 | 3 | 2.5
Asia | 3 | 9 | 17 | 2 | 2.5
Europe, Middle East and Maghreb | 0 | 3 | 5 | 1 | 2.8
Partnerships for Development Innovation | 1 | 3 | 10 | 3 | 2.9
Global Issues and Development | 0 | 1 | 0 | 1 | 3.0

There was a strong positive relationship between project cost and evaluation quality: evaluations of higher-budget projects tended to receive higher quality ratings (Table 2).

Table 2: Average project budget for each evaluation report by quality rating achieved
Overall quality rating | Average project budget ($) | No. of reports
Highly unsatisfactory | 8,857,543 | 8
Unsatisfactory | 14,968,602 | 35
Satisfactory | 21,205,345 | 53
Highly satisfactory | 46,001,917 | 8
Total | 20,064,039 | 104

There was also a moderate but still significant relationship between the budgeted cost of a project and the evaluation budget committed for that project.Footnote 26

There was no clear pattern of association between report quality and the size of the evaluation team in the 86 cases where data were available. There was a link, however, between the quality of the work plan and evaluation report and whether the team leader was also an evaluation specialist. Where the team leader was an evaluator, three quarters of work plans and evaluation reports were satisfactory; where the team leader was not, just a quarter of work plans and half of evaluation reports were rated as satisfactory.

For the 31 cases where information was available, the size of the evaluation budget did not show a clear link to either work plan or evaluation report quality.

5.2 OECD DAC Quality Aspects where Global Affairs Canada Evaluation Documents Excelled or Fell Short

First, the study drew together the evidence from the different tools used in the meta-evaluation and made a judgment about whether each of the OECD DAC quality standards was met. The assessment was partial because of the lack of evidence in some areas.

For DAC standards related to overarching considerations around evaluation, there is not enough evidence to comment on four of the criteria, since the quality review and case studies were not able to explore the issues of transparency, partnership, coordination or capacity development in sufficient detail. The standard on describing whether relevant ethical principles were followed in conducting the work (such as confidentiality, privacy, informed consent and respect for human rights), however, was met in very few of the evaluation documents (only 6 percent of evaluations).Footnote 27 Adherence to the quality control standard was mixed, being met in 61 percent of the ToRs but just 48 percent of the work plans.

For the DAC standards related to evaluation planning, the picture was reasonably good, with 7 of the 12 quality standards achieved in the majority of documents. The exceptions were evaluability, and approach and methodology, where the quality standard was met by only approximately a third of cases.

For DAC standards concerned with evaluation implementation and reporting, 9 of the 15 standards were met. Stronger areas of quality relate to standards on report clarity, intervention logic and stakeholder consultation. The documents performed less well around methodology and limitations.

Finally for quality standards related to evaluation use, the picture was weakest. While the management response tool generally met the quality standard that was applied, when the views expressed from interviews and the online survey were taken into account, the standard on usefulness was assessed as being partially met, and the standards on response and follow-up and dissemination as not being met.

The following section examines the quality of the four evaluation products against the different OECD DAC criteria.

5.2.1 Terms of reference

Quality was almost evenly split between satisfactory and unsatisfactory (52 percent versus 48 percent) across the 81 ToRs assessed, with 6 percent awarded a highly satisfactory rating. An analysis of performance over time suggested that the quality of ToRs had been increasing since 2009, with a general decrease in the proportion of ToRs rated overall as not meeting OECD DAC standards.

The four main sections of the review template indicated that ToRs generally have higher quality in relation to section 3, covering management and processes, and section 2, covering purpose, objectives and scope. Over 80 percent of ToRs were rated as satisfactory for the quality of their rationale or purpose of the evaluation, for defining specific objectives, and for defining required deliverables clearly.

ToRs were weaker in the quality areas covered by section 1, structure (layout and annexes) and section 4, related to overarching factors (gender, ethics, limitations). The weakest areas were related to ethics (which it should be noted was not included in ESU guidance), defining the expected limitations to the evaluation, identification of findings and recommendations from previous evaluations, and describing the context.

5.2.2 Work plans

For the 58 work plans assessed, quality was again almost evenly split, with 52 percent rated as satisfactory and 48 percent unsatisfactory. Work plans were most frequently satisfactory in describing the purpose and objectives of the planned evaluation, along with describing adequate management arrangements for the evaluation process. The weakest area was the specification of appropriate methods and tools, for which only 44 percent of work plans met OECD DAC standards.

Longitudinal data suggested that work plans improved significantly in 2013 and 2014 after a period of lower quality in 2012 and before. This may have represented the beginning of a positive trend, although there were too few cases to confirm this.

In terms of individual criteria, the assessment of work plans revealed a number of strengths. Over 70 percent were rated as satisfactory for the following criteria: good specification of objectives for the evaluation, defining an appropriate scope, ensuring an adequate level of stakeholder involvement, and describing an appropriate team and management process. Unsatisfactory quality criteria included an inadequate description of the context of the evaluation object, poorly specified data analysis methods, and not applying standards for ethics and gender. There also tended to be lower quality with regard to ensuring quality assurance processes and specifying appropriate data sampling.

5.2.3 Evaluation reports

The overall quality of the 116 evaluation reports assessed was slightly higher than that of ToRs and work plans, with 50 percent rated as satisfactory and 9 percent as highly satisfactory. This finding may relate to the greater scrutiny, level of effort and time applied to the final evaluation report compared to ToRs and work plans.

The four main sections of the review template indicated that the strongest quality areas were logical and relevant analysis leading to adequate findings, conclusions, recommendations, and lessons learned. The weaker quality areas related to the lack of detailed and appropriate methods, for which only 39 percent of reports were found to be of the required standard. This has implications for the quality of evidence used in the later stages of the evaluation process to develop robust findings.

Analysis of longitudinal data revealed a similar pattern to the observations about work plans, with an improvement in quality for 2013 and 2014 after a dip in 2012.

In terms of individual quality criteria, the assessment of evaluation reports found that over 70 percent of reports were rated as satisfactory with regard to identifying specific objectives, describing the evaluation object, assessing the quality of the monitoring and evaluation system, and developing detailed and robust findings. Quality was weaker with regard to integration of gender and ethics standards, incorporating evaluability assessments or presenting data from previous evaluations, and where there was a weak description of methods (or specified methods of limited appropriateness and/or significant limitations).

5.2.4 Management responses

Based on the availability of documents, a relatively small sample of 28 management responses was assessed for quality. Overall, where available, management responses were found to be strong with 79 percent meeting the desired quality standard. However, the format of the report was not a particularly effective tool for judging the application of OECD DAC criteria.Footnote 28

Parts of the management response were of higher quality, particularly in terms of giving an immediate response to recommendations (and whether to accept, decline or adapt them). But the commitments to specific actions, identification of responsible persons, and specification of target dates were somewhat lower, though still of good quality.

5.2.5 Associations between evaluation products and quality characteristics

In addition to the frequency analysis, the ratings data were used to undertake a correlation analysis across and within the four products.Footnote 29

First, a comparison between the overall quality ratings for four products found some significant associations as shown in Table 3.

Table 3: Strength of correlation between evaluation products
Relationship | Strength of correlation | Cases
Terms of reference and work plans | Strong - r2=0.617* | 52
Work plans and evaluation reports | Strong - r2=0.617* | 56
Terms of reference and evaluation reports | Moderate - r2=0.617* | 69
Management response and ToRs, work plan, and evaluation | None to weak - r2 from 0.15 to 0.34 | 23–28
* Correlation is significant at the 0.01 level (2-tailed).

The results indicated that there was a high probability of a link between the quality of ToRs, work plans and evaluation reports. If a good quality ToR was produced, it was very likely to lead to a good quality work plan, and a good quality work plan was very likely to lead to a good quality evaluation. There was a slightly less strong, but still significant, link between the quality of the ToR and the quality of the eventual evaluation report.

While the first three evaluation products showed an important degree of association, no significant correlation was found between the quality of the management responses and the quality of any of the other products. Hence, evaluation quality, in terms of adherence to an industry standard, did not influence the use of the evaluation findings or recommendations, as measured by the management response tool. This is worrying if it implies that evaluation use, in terms of, for example, adoption of recommendations, does not take into account the reliability of those recommendations.

Second, a correlation analysis of all the evaluation report criteria against the overall quality rating for the evaluation report was undertaken. This revealed that the overall quality of an evaluation report is most closely related to four criteria: high quality evaluation questions, robust and appropriate methodology, logical and well-attributed findings, and clarity of analysis.
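
The mechanics of this kind of pairwise correlation analysis can be sketched as follows. The data frame below uses hypothetical ratings and column names rather than the review's actual data set, and Spearman rank correlation is assumed here as a reasonable choice for ordinal 1–4 ratings.

    # A minimal sketch of pairwise correlation between overall product quality ratings.
    import pandas as pd

    # Hypothetical data: one row per evaluation, overall rating (1-4) per product.
    ratings = pd.DataFrame({
        "tor":         [3, 2, 4, 3, 1, 2, 3, 4],
        "work_plan":   [3, 2, 4, 3, 2, 2, 3, 4],
        "eval_report": [3, 2, 3, 4, 2, 1, 3, 4],
        "mgmt_resp":   [4, 3, 3, 3, 4, 2, 3, 4],
    })

    # Pairwise Spearman correlations; pandas computes them case-wise across the columns.
    print(ratings.corr(method="spearman").round(2))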

5.2.6 Principal components analysis

In addition to correlation analysis, a principal components analysis of the ratings data was undertaken in an attempt to identify the most important combination of criteria (or ‘components’) that were associated with high quality evaluation products. Overall, this analysis was handicapped by the small number of cases and the narrow rating scale, and hence revealed a complex set of associations in which a large number of components were produced.Footnote 30

Nevertheless, examining the relationships between the criteria contained in the ToR, work plans and evaluation reports suggested that the methodology criteria from across all three products (specifically methodological appropriateness and robustness from the evaluation report, data analysis methods in the ToR, and data collection and sampling in the work plan) had the greatest explanatory effect on the variance found in the ratings. This was quite a revealing finding and seemed to emphasize again the importance of having a robust methodology in all three products if the overall quality was to be high.

When the principal components analysis was re-run to explore the relationship between all the quality criteria found in just the evaluation report, the analysis found that just over half of the variance was explained by two components. The first component could be categorised as ‘methodology’ and accounted for 41 percent of variance. Six of the top 15 criteria were associated with this dimension (with strong loadings for methodological robustness, sources of evidence, limitations, and description of the design). The second component, accounting for 10 percent of the variance, was an unclear mixture of quality criteria, from recommendations to report style and structure, and scope.
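
For readers unfamiliar with the technique, the sketch below shows how a principal components analysis of per-criterion ratings might be run using scikit-learn. The data here are randomly generated placeholders, so the resulting variance shares will not match the 41 percent and 10 percent reported above.

    # A minimal sketch of principal components analysis on per-criterion ratings.
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(0)
    # Placeholder data: 116 reports rated 1-4 against 15 quality criteria.
    criteria_ratings = rng.integers(1, 5, size=(116, 15)).astype(float)

    scaled = StandardScaler().fit_transform(criteria_ratings)
    pca = PCA(n_components=2).fit(scaled)
    print(pca.explained_variance_ratio_)  # share of variance explained by each component
    print(pca.components_.round(2))       # criterion loadings on each component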

5.2.7 Qualitative analysis

The study also undertook a qualitative analysis of the five reviewers’ comments across the different evaluation products.Footnote 31 The analysis extracted characteristics of evaluation products that were most frequently associated with reports rated satisfactory or unsatisfactory. The results found that the descriptive terms associated with more highly rated evaluations included clarity, comprehensive coverage of appropriate details, sufficient team and budget, soundness of design and methods, and strong recommendations and conclusions. By comparison, features associated with less well-rated evaluations included weak context, insufficient management information, insufficient treatment of ethics and gender, inadequate data and analysis, and generic findings.
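
A very simple way to approximate this kind of comment analysis is a term-frequency comparison between comments on satisfactory and unsatisfactory products. The sketch below uses invented comment text purely to show the mechanics, not the reviewers' actual comments.

    # Illustrative only: most frequent terms in reviewer comments, by rating group.
    import re
    from collections import Counter

    comments = {
        "satisfactory":   ["clear methodology and strong recommendations",
                           "comprehensive coverage and sound design"],
        "unsatisfactory": ["weak context and generic findings",
                           "insufficient treatment of gender, inadequate data"],
    }

    for group, texts in comments.items():
        words = Counter(re.findall(r"[a-z]+", " ".join(texts).lower()))
        print(group, words.most_common(5))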

5.2.8 Summary of the quality review

  1. Between 50 and 60 percent of the three major evaluation products – ToRs, work plans and evaluation reports – were satisfactory when judged against the OECD DAC quality standards; few achieved the OECD DAC highly satisfactory standard. The study team did, however, highlight three examples of good practice.Footnote 32
  2. There was some indication of a modest improvement in quality over the period of the review though this was not definitive. The type of evaluation or the Branch conducting it did not influence quality substantially but higher budget projects were better evaluated. Evaluation team composition or size of evaluation budget did not affect quality, but if the team leader was also an evaluator, this improved quality.
  3. The evaluation products had a varied performance against different OECD DAC standards. They did well in most areas of evaluation planning and implementation, although they fell short in describing adherence to ethical standards, evaluability and methodology. Although management responses were of reasonable quality, the follow-up and dissemination aspects were poor.
  4. Good quality ToRs were associated with a good quality work plan and also with a quality evaluation report. There was no link between the quality of these products and the quality of the management response.
  5. The quality of an evaluation report was most closely related to four criteria: high quality evaluation questions, robust and appropriate methodology, logical and well-attributed findings, and clarity of analysis. Further analysis indicated that a sound methodology was the most important single factor associated with good overall quality of ToRs, work plans and evaluations.

5.3 Quality of Decentralized Evaluation Planning Processes

The conceptual framework in the work plan recognized that the evaluation planning process was embedded in the relationship between the evaluation commissioner, ESU and the evaluation team and their respective capacities, as well as the wider institutional environment in which the evaluation was being conducted. Evaluation quality and use depended to a large extent on this interplay.

Evidence for this section was drawn from the quality review, online survey, case studies, interviews with Global Affairs Canada staff in ESU and Branches (notes of these are kept on Itad’s confidential web file), and an analysis of the ESU tools and templates designed to support quality of decentralized evaluations.

5.3.1 Which quality aspects and factors related to the planning process were associated with overall evaluation report quality?

From the quality review, the analysis found a strong and significant correlation between ToR quality and work plan quality and also a strong and significant correlation between work plan quality and evaluation report quality (section 5.2.5). This was especially so where the work plan contained a good evaluability assessment, an appropriate team and clearly specified deliverables. There was also a moderate yet significant correlation between ToR quality and evaluation quality. This supports the logic that a sound ToR is highly likely to lead to a good work plan, and in turn to a good quality evaluation. It implies that for evaluation commissioners, investing in a carefully written and high standard ToR is worth doing.

A triangulation of different types of quantitative and qualitative analysis indicated that detailed and clear explanation of evaluation methods appropriate to the purpose, object and context of the evaluation was a key contributor to overall quality. Whether this was provided in the ToR or in the work plan, it was a critical dimension given that the specification of methodologies remained the weakest aspect.

The case studies indicated that better quality evaluations were generally planned well in advance, allowing for sufficient time to ensure a quality process and product (true in five out of eight case studies). Typically, evaluations were planned and budgeted for when a project was designed, ensuring resourcing and appropriate timing. In contrast, there were also cases where evaluations were delayed and could not serve their main purpose anymore, i.e. some of the decisions they were meant to inform had already been taken.

The composition and technical skills/experience of the evaluation team was also associated with the quality of the evaluation report, particularly when the teams mobilised had the right skills and expertise, and especially when the team leader had evaluation skills. The standing offerFootnote 33 was often seen as a helpful way of sourcing quality consultants. There was a concern with the process of selecting consultants located in the country where the evaluation took place, which was often done separately. In some cases, this led to dysfunctional teams or to teams whose members’ skills did not complement each other.

A weakness in the planning process of decentralized evaluations was the lack of a clear and specific purpose to the evaluation. In six out of eight case studies, the evaluation purpose was either too broad or unclear, or there was a second hidden purpose that undermined evaluation quality. It was found that broad evaluation purposes combined with too many evaluation questions made it difficult for evaluation teams to produce high quality reports.

Another weakness identified was the comparative lack of evaluation experience among some evaluation managers/program staff for branch-led evaluations. Such staff possessed a wide range of development experience, but many were new to the subject matter and had to manage the process with little assistance. The lack of evaluation skills was evident in the allocation of insufficient budgets for evaluations, in the planning of evaluations at inappropriate points in time, and in the development of ToRs with weak content, at times copying evaluation questions from templates without tailoring them to the specific evaluation. Some of this may also be due to time constraints, since evaluation management was just one of their many responsibilities.

A third weakness was that evaluation budgets and levels of effort tended to be insufficient to match scope and quality expectations (in five out of eight cases). With relatively broad purposes as outlined above, and limited budgets, some evaluations had no choice but to cut back on quality. Some interviews indicated that there was a lack of guidance in Global Affairs Canada on how to resource evaluations and that such guidance would be useful to ensure sufficient resourcing and better quality. When reviewing ToRs, ESU did not seem to pay much attention to the resourcing question either. Global Affairs Canada staff are also limited by the procurement vehicles and resulting processes that are made available through Public Services and Procurement Canada.

Finally, the study found that ESU involvement was often insufficient or not sufficiently tailored to improve the evaluation planning process and contribute to better evaluation quality (three out of five cases) given their limited mandate and role in decentralized evaluations. ESU capacity to support decentralized evaluation in a broader way was limited and very much focused on providing comments on evaluation products. Given the capacity challenges among program staff, a more hands-on involvement would have been welcomed by many evaluation managers. It was also mentioned that ESU did not take a situational approach to quality, leading to the inappropriate application of quality standards. From the perspective of evaluation commissioners, ESU involvement was sometimes perceived as bureaucratic rather than facilitating, and response times fell short of expectations, thereby limiting the utility of ESU support.

5.3.2 How did capacity considerations influence evaluation planning processes?

There were two main areas where capacity to conduct and support decentralized evaluations was relevant: within ESU and in the Program Branches.

While ESU played no management role in such evaluations, its function was to provide QA and technical assistance to ensure that evaluation processes and products were as credible and robust as possible. The ESU estimated that it engaged with 50–60 decentralized evaluations annually, and over the period of the review (2009–14) 467 evaluations with associated deliverables were documented.Footnote 34

Throughout the period of this evaluation, there were not enough staff in the ESU to meet the demand for its services based on current practices. Formally, ESU should have 3.5 full time staff.Footnote 35 While this structure had been achieved at certain points in the Unit’s history, ESU’s staff capacity was more diluted recently.Footnote 36 This was far from ideal given the workload. Study interviews and the web survey consistently indicated that stakeholders in both the Program Branches and PCE characterised ESU as ‘short of capacity,’ ‘overworked’ and having ‘too few resources’ to effectively deliver on its mandate, especially considering the expectation from evaluation commissioners that ESU provide a more situational and tailored approach to their work.

ESU also requires greater experience in conducting evaluations to support Branch staff effectively. While the web survey indicated that 63 percent of Branch staff felt the ESU’s technical capacity was high, key informant interviews revealed a more nuanced picture. One of the issues raised was a perceived lack of practical evaluation expertise in the ESU. This meant some ESU staff were less comfortable using their judgment and applying the QA templates flexibly, which fostered a tick-box approach. This was corroborated by views within the ESU.Footnote 37

Providing advice in a timely manner has been a key challenge for the ESU. Although staff were generally positive about the technical support provided by the ESU, the overall level of satisfaction with the Unit was low: of the 57 survey respondents who answered this question, only 40 percent were satisfied with the Unit’s support services. Both the survey and key informant interviews revealed fairly widespread frustration at the time it takes ESU to provide feedback. This may affect the usefulness of evaluations, given the importance of timeliness to assuring use.

Branch staff capacity for evaluation work was, of course, mixed, but overall less than ideal. In the online survey, respondents were asked if they felt they had the necessary knowledge and skills to manage a decentralized evaluation. Of the 88 who answered, just over half (55 percent) rated their knowledge as poor or non-existent, while 30 percent felt it was fairly good and just 14 percent felt fully competent.Footnote 38

ESU itself has also not had the capacity to provide sufficient evaluation training, which had once been considered a potential avenue for ESU to improve the quality and profile of evaluations across the department. Some training has been provided in the past, with good results, but this has not been systematic. Lack of resources has been an impediment to designing and rolling out more comprehensive evaluation training. This is not to say that ESU has been unable to support the skills development of Branch staff; it has sought to do this through its engagements around evaluations. However, this is a very resource intensive approach. There is a clear need within Global Affairs Canada for the ESU to play more of a role in building internal capacity. A number of informants spoke of how they would like the ESU to not be solely focused on QA and to take on the wider, more strategic role of advocating and building capacity for decentralized evaluation in Global Affairs Canada.

5.3.3 How did incentives and systems influence evaluation planning processes?

The provision of QA and advice was affected not only by capacity but also by systems and incentives.

The quality of ESU’s templates was generally good. The study reviewed the five templates developed by ESU for structuring its QA process. In general, the templates and tools were considered to be well-structured and comprehensive. In the case of the ToR template, there was also good evidence to suggest it was appreciated by Branch staff. While there were small areas for improvement for each of the templates, the major challenge was in how they were applied.

ESU did not take a situational approach to quality. While the ESU had a common set of standards that it applied across the evaluations it reviews, the scope and budget of these varied considerably. Some were project-level evaluations costing around $100,000, others more complex program evaluations with budgets of $400,000. While it is understandable that ESU should seek to apply a minimum set of evaluation quality standards, a common issue that emerged from both the interviews and web survey was the perception that ESU was not flexible enough in how it applied quality standards across evaluations of different sizes. For many, the issue was around ESU matching their expectations around quality to the resources available for the evaluation. Responses from the online survey illustrated this: “ESU are more familiar with large scale program evaluations. They need to be more aware of what’s possible in a smaller evaluation” and “[ESU’s] application of the templates is too rigid and didn’t take into account the resources and time that were often available for conducting decentralized evaluations.”

For many Program Branch staff, there was a perception that the multiple rounds of revision around an evaluation product were inefficient and lengthened the QA process.Footnote 39 Interviews revealed a number of examples of ToRs undergoing three to four rounds of iterations.Footnote 40 This iterative process was a source of frustration for many who, while wanting to assure quality, also needed to meet their operational needs.Footnote 41 One informant said: "The fact that the ToR must be perfect before accessing supply arrangements means they [ESU] are effectively gatekeepers for evaluations. This has caused programs to abandon the idea of doing evaluations for many projects." For example, one survey respondent commented that the ESU’s process for engaging with a decentralized evaluation was "cumbersome, onerous, and detached from what I needed to get out of it for the project" and that eventually, because of the time that was being spent on the process, they made "a program decision to abandon the evaluation." Another remarked that "the process is too long and rigorous and is not serving the operational needs of the program."

As well as causing frustration among the users of the ESU services, the multiple revisions around outputs also created a significant amount of work for the ESU. Given that ESU was de facto serving as a gatekeeper to evaluations, the process needed to be more efficient and bounded.

There seemed to be a mismatch of expectations between what the ESU thought the users of its services wanted and what they actually wanted. This was especially the case during the development of ToRs. ESU adopted a very hands-on approach because it wanted not only to provide QA but also to help build the evaluation capacity of Branch staff through the process. While this was in line with the Unit’s objective, which is to provide guidance in order to improve the quality of decentralized evaluations against OECD DAC standards, the approach was very time intensive, and often, when a project officer approached the ESU for support, they wanted to finalize the ToR as soon as possible. Project staff were often not entering into an engagement with the ESU for the explicit purpose of building their capacity; they had more pressing operational needs. The ESU has not had the resources to conduct evaluation training, so it should be commended for trying to integrate capacity building into its support services. However, trying to combine the functions of QA and capacity development was not practical and created a bottleneck in the system.

More widely, the study interviews and web survey revealed that some PCE and Branch staff perceived a lack of commitment from senior management to support evaluation work, particularly decentralized evaluations. Some 80 percent of surveyed staff reported perceiving a lack of such support or interest, especially for branch-led evaluation work. Furthermore, 72 percent indicated that senior management did not stress enough the importance of following up on and implementing decentralized evaluation recommendations. This finding should be tempered by the responses of some senior managers, who indicated in interviews that there was, in their view, a serious commitment to strengthening evaluation.

5.4 Use of Decentralized Evaluations

Evidence for this section is drawn from the quality review, the case studies, the online survey, and an analysis of the use of decentralized evaluations in corporate evaluations.

The survey found that uptake and use of decentralized evaluations was mixed (Figure 6), with 57 percent of the 74 staff who answered saying that they found them useful, and 21 percent saying not useful. Branch staff did say that they found decentralized evaluations more useful than other types of evaluation, such as program or centralized evaluations, largely because decentralized evaluations were more relevant to their immediate operational needs.

“Most of the time, we don’t learn much from program evaluations (that we did not already know). Project evaluations are far more useful (if they are done in a timely fashion)”

Figure 6: Staff survey responses to usefulness of decentralized evaluations

Text version
Staff survey responses to usefulness of decentralized evaluations
Not useful at all | Not very useful | Somewhat useful | Very useful | Don't know
7% | 14% | 26% | 31% | 23%

5.4.1 Which quality aspects or factors were associated with evaluation use?

From the quality review, it was difficult to associate quality criteria with use, because the management response was a weak proxy for use and the number of cases available for analysis was too small to allow a reliable assessment. All the same, the earlier correlation analysis has revealed that there was no obvious association between the quality of either ToRs, work plans or evaluation reports and management response qualityFootnote 42 (section 5.2.5).

From the case study evidence, several factors could be associated with greater evaluation use:

The main finding was that, given the lack of correlation between evaluation report quality and commitments in management responses, good quality reports were not a guarantee of high use.

5.4.2 Which elements of evaluations reports were used and which ones not?

The online survey asked staff which parts of the evaluation reports (findings, conclusions, recommendations or lessons) were most useful. For the 63 who answered, all of these elements were felt to be useful, but conclusions and recommendations were the most useful (over 80 percent said useful or very useful for these).

Lessons were the least used, mainly because there was no comprehensive knowledge management system in Global Affairs Canada to facilitate broader learning beyond the individual intervention, and few evaluations were published on the Global Affairs Canada website. Overall, use was mixed for specific programs, but virtually non-existent for the wider organization and the international development community.

In terms of the use of the management responses, 62 staff answered this question in the survey, with opinion evenly divided: 47 percent said not useful and 53 percent said useful. Some comments were critical of the lack of an institutional culture of integrating evaluations into programs, and the absence of any systematic follow-up on the management responses.

5.4.3 How were evaluations used and for what purpose?

The overriding use of decentralized evaluations was for operational purposes. Program Branch staff undertook such evaluations largely to check on the progress of implementation and the achievement of objectives. Mid-term evaluations were commissioned to improve implementation, and final evaluations to assist with the design of a new phase of a project or to make a decision about whether to extend an existing operation (Table 4). The case studies confirmed this, with two evaluations assisting with current implementation and two informing a new phase. Any learning therefore tended to be confined to the individual project and to those directly involved.

Table 4: How have you used decentralized evaluations in your work?
Option | Count | Percent
Programme design/planning (new phase of the same program) | 38 | 59
Improve the evaluated program (if ongoing at the time of the evaluation) | 31 | 48
Inform strategy and policy | 31 | 48
Inform funding decisions | 24 | 38
Program design/planning (different program) | 21 | 33
Other (please specify) | 10 | 16
I have never used an evaluation report as part of my work | 19 | 30
Total respondents: 64. Source: Online Survey of Program Staff, February 2016

Beyond this, corporate evaluations used decentralized evaluations as a standard source of data. A specific analysis of six recent studies concluded that, although such decentralized evaluations were routinely used, there was little serious assessment of the quality of the evidence they provided.

5.4.4 What was achieved when evaluations were used?

It was difficult to assess what was achieved when evaluations were used. This was very dependent on the nature of the evaluation recommendations and the level of uptake and use. In some cases, evaluations helped projects achieve more results, in other cases they helped achieve results more efficiently, and sometimes they helped make results more sustainable. Tracing the chain of causality between an evaluation and its use more thoroughly was beyond the time and resources available for this meta-evaluation.

A key factor hindering evaluation use highlighted in the case studies was the lack of a comprehensive knowledge management system in Global Affairs Canada. Where evaluations were used, use remained limited to the specific interventions. Learning was rarely shared with other programs, and no decentralized evaluations are published on Global Affairs Canada’s website to inform either internal or external audiences, thereby seriously limiting broader uptake and use. In this respect, the lack of investment in a comprehensive knowledge management system covering decentralized evaluations represents a significant divergence between Global Affairs Canada and the four peer agencies against which the study compared it (section 5.6).

5.5 Implementation of 2008 Meta-evaluation Recommendations

This section examines the extent to which the recommendations from the 2008 meta-evaluation were implemented and the underlying factors influencing Global Affairs Canada’s response.Footnote 44

5.5.1 2008 recommendations

Evidence from a variety of sources has been used to explore progress made or challenges faced by Global Affairs Canada or PCE in reference to each recommendation, including this study’s assessment of PCE/ESU services and tools, its review of Global Affairs Canada’s information systems, and the online survey.

Alongside asking stakeholders about the level of implementation of the 2008 recommendations, the study also explored the underlying factors that either have enabled or prevented recommendations being taken forward. For those recommendations that are corporate in nature, evidence from a variety of sources has been obtained to provide the corporate context within which PCE and ESU have been functioning.

Table 5 provides a review of the extent to which each of the five main recommendations from the 2008 meta-evaluation have been addressed. The 2008 report recommended strengthening PCE and clarifying its function and responsibilities for decentralized evaluation work. Over the period that this study covered, no significant strengthening of staff occurred, and there was staff turnover due to the rotation of ‘mobile’ staff out of the Unit.Footnote 45 The Departmental Evaluation Policy framework issued in 2005Footnote 46 has not been updated in the past 10 years, and while the Treasury Board Policy on Evaluation in 2009Footnote 47 clarified the evaluation function, its application to decentralized evaluations was not specifically addressed. Since then, the policy has been replaced by the Treasury Board Policy on Results in 2016, which will undoubtedly bring new requirements and changes. Very limited progress has been shown around the 2008 study’s recommendations dealing with training, information management and communication.

Table 5: Review of 2008 meta-evaluation recommendations and their implementation
2008 meta-evaluation proposed:
Corporate resources, processes and infrastructure to support the evaluation function would be placed under PCE control to prevent the loss of corporate memory and investments in the evaluation function.
Study evidence showed:
Position of PCE has yet to be fully recognized under the merged Global Affairs Canada. In terms of resources, PCE has the same budgetary and staffing position as before, with significant gaps in manpower. A number of staff have retired or been moved and this has affected corporate memory. The management and funding for decentralized evaluations remain under the control of branches and not PCE.

2008 meta-evaluation proposed:
Roles and responsibilities of PCE and Branches would be clarified recognizing that Program Branches have an operational and project focus in their evaluation work and reaffirming PCE’s leadership responsibility for strategic and program level evaluations.
Study evidence showed:
The Treasury Board 2009 Directive clarified PCE’s function. The 2005 CIDA policy has not been updated, though a draft was prepared in 2013 which could not then be implemented as a result of the merger. PCE was recognized as having overall responsibility for setting evaluation standards and undertaking corporate and meta-evaluations. However, the role of PCE and ESU in setting standards and controlling quality were unclear in practice.

2008 meta-evaluation proposed:
PCE would develop and offer appropriate training in the management and administration of evaluations for those involved in decentralized evaluations.
Study evidence showed:
Some limited training has been delivered, but in general this did not happen to a sufficient degree beyond QA provision of templates, specific guidance on drafting evaluation ToRs, and remote support of Branch evaluation managers.

2008 meta-evaluation proposed:
PCE would take the lead in identifying and addressing the inadequacies in the Agency’s corporate information systems with respect to capturing evaluation data.
Study evidence showed:
Limited progress over the study period, though a group is now planning the development of a corporate memory system for storing and retrieving evaluation documents. The design of this is still under way.

2008 meta-evaluation proposed:
PCE would improve its communication to the Agency and build awareness about the findings of the meta-evaluation and the proposed changes to PCE services and tools.
Study evidence showed:
Improved PCE tools, but given the merger there has been rather limited progress on communicating evaluation products and services to Global Affairs Canada.

5.6 Review of Global Affairs Canada Information Systems

5.6.1 What are the strengths and weaknesses of Global Affairs Canada’s MIS?

The issue of how best to store and communicate decentralized evaluations has been a long-standing challenge for Global Affairs Canada.Footnote 48 Both the 2004 and 2008 meta-evaluations recommended that better systems needed to be put in place to track decentralized evaluations and improve learning from them.Footnote 49 The Corporate Memory System used to be the central repository for project and program reports and evaluations. Its functionality was reduced over the years as: (i) more information was stored at Branch level rather than centrally;Footnote 50 and (ii) the enterprise document and records management system (EDRMS) was introduced and assumed many of its roles. The corporate memory system no longer exists.Footnote 51

While ESU are doing their best with the tools that they have available, Global Affairs Canada’s current approach to managing and storing evaluations is not fit for purpose. Global Affairs Canada does not have a coherent system for storing and making accessible decentralized evaluations. What exists is a set of ad hoc efforts to bring them together. These include:Footnote 52

Unsurprisingly, there is strong dissatisfaction with Global Affairs Canada’s current approach to storing and managing decentralized evaluations. The survey indicated that 92 percent of respondents were dissatisfied with the way that decentralized evaluation products were stored and shared, and 86 percent found it difficult to access evaluation products. This is a message which has been repeated through two Global Affairs Canada studies since 2007.Footnote 54

Despite the view among staff that decentralized evaluations should be made public, there are major challenges to realizing this within Global Affairs Canada. One of the simplest options would be to make high-quality reports available on the Global Affairs Canada external website, but key informants raised a number of challenges to doing this. The first practical issue relates to translation.Footnote 55 Second, there seems to have been a greater degree of caution within the Government of Canada than in other peer agencies towards making evaluations public.Footnote 56

The lack of a centralized system for storing decentralized evaluations is curtailing their value to the wider organization. Currently, it is virtually impossible for someone to find a decentralized evaluation unless they know which project to go to, or they contact ESU directly. For an organization of Global Affairs Canada’s size, and given the amount of resources that are invested in decentralized evaluations, this is a suboptimal approach. The learning from such evaluations is limited to the project or program that commissioned them, undermining evidence-based decision making across the wider organization.

5.6.2 How do other agencies address similar weaknesses?

To inform the study’s recommendations to Global Affairs Canada on how to improve its current approach to storing decentralized evaluations, a comparative assessment of how other agencies approach this issue was undertaken. Four agencies were reviewed: the United Kingdom’s Department for International Development (DFID), Australia’s Department of Foreign Affairs and Trade (DFAT), United Nations Women’s Agency (UN Women) and United Nations Children’s Emergency Fund (UNICEF).

Based on this comparative analysis, there are two broad approaches to how these agencies store and use decentralized evaluations:

  1. Managing a separate database for storing decentralized evaluations. This is the approach taken by UNICEF and UN Women.
  2. Integrating evaluation outputs into wider project management systems. This is the approach taken by DFID and DFAT.

A number of findings emerged from the comparative analysis:

First, the challenges that Global Affairs Canada is facing in creating a consolidated repository for decentralized evaluations are not unique. DFID and DFAT are also grappling with how best to make decentralized evaluations accessible to the wider organization. Both are hampered by the limitations of their existing project-level MIS and the organizational barriers that can often exist to modifying such core organizational data systems. As a way of side-stepping these challenges, both DFID and DFAT have opted to list all of their decentralized evaluations on their external website. While both recognize this is not an ideal solution and has limitations, it at least puts all decentralized evaluations in one place for staff and the public to access and use.

Second, setting up a dedicated evaluation database is the most comprehensive way to store evaluations, but it requires investment in more than system hardware. Both UNICEF and UN Women have opted to develop independent databases for their decentralized evaluations.Footnote 57 The databases are accessible for both internal and external stakeholders and include a range of evaluation outputs, including executive summaries, full reports and management responses.

Importantly, alongside investing in system hardware, both UNICEF and UN Women have instituted incentives for the systems to be used. One of the key challenges to creating a functioning evaluation database in a decentralized system is ensuring that evaluation products are uploaded. Both organizations have sought to address this through a range of strategies. Both have an externally contracted QA function that reviews all reports and assigns them a quality rating. This rating is then made public alongside the report. These quality scores are shared internally and used to raise standards. Alongside this, key performance indicators track practices such as the number of evaluations planned versus the number of evaluations uploaded. In UN Women, they also recognize the best evaluations of the year. In addition, both organizations have dedicated staff overseeing the database and the QA function. In these cases, there is clarity about who is responsible for the publication of evaluations, and organizational structures and processes largely support this role.

5.6.3 What other opportunities exist to improve Global Affairs Canada’s Management Information Systems?

Based on an analysis of Global Affairs Canada’s current approach to storing and managing decentralized evaluations and the comparative review of other partners’ approaches, the study has identified a number of actions. Recognizing resource constraints and some of the practical challenges to reform, the study has divided its proposal into three stages: immediate, medium term and, if resources are made available, longer term. These include: (i) in the short term, modifying search functions in the database platforms currently in use; (ii) in the medium term, placing evaluation reports on the intranet and regularly updating them; and (iii) in the longer term, creating a dedicated searchable platform for staff and external stakeholders to access decentralized evaluations and making decentralized evaluation reports public.
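
As a purely illustrative sketch of what a dedicated searchable platform might store, the Python fragment below defines a minimal metadata record per decentralized evaluation and a naive keyword search. The field names are assumptions made for illustration, not Global Affairs Canada’s actual schema.

    # A minimal sketch of an evaluation repository record and a naive title search.
    from dataclasses import dataclass, field
    from typing import Dict, List, Optional

    @dataclass
    class EvaluationRecord:
        title: str
        branch: str
        year: int
        evaluation_type: str                  # e.g. "formative" or "summative"
        quality_rating: Optional[int] = None  # 1-4, e.g. assigned through an external QA review
        documents: Dict[str, str] = field(default_factory=dict)  # product name -> location

    def search(records: List[EvaluationRecord], keyword: str) -> List[EvaluationRecord]:
        # A real platform would index full text and metadata; this matches titles only.
        return [r for r in records if keyword.lower() in r.title.lower()]

    records = [EvaluationRecord("Mid-term evaluation of a maternal health project", "Asia", 2013, "formative")]
    print([r.title for r in search(records, "health")])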

6 Conclusions

The following paragraphs summarize the findings in relation to the main evaluation questions:

  1. There is room to improve the quality of decentralized evaluations at Global Affairs Canada. Between 50 and 60 percent of decentralized evaluation products show acceptable quality, although very few achieve a highly satisfactory standard. There was some evidence that more recent evaluation products showed a slight improvement in quality, although this trend was not definitive. The type of evaluation (formative or summative) or the Branch conducting it did not influence quality substantially, but higher budget projects were better evaluated. Evaluation team composition or size of evaluation budget did not affect quality, but if the consultant team leader was also an evaluator this improved quality. Evaluation reports were assessed as fair by this review (with 59 percent of sampled evaluation reports meeting OECD DAC standards). If this assessment is accurate, then over the period 2009–14, 41 percent of projects – representing a program spend of $2.8 billion out of the $6.9 billion (Table 1) – may not have met satisfactory quality standards. The quality of ToRs and work plans was lower, with 52 percent meeting a satisfactory standard.
  2. In terms of which OECD DAC evaluation standards were being met and which were not, there was mixed performance. Standards that were met well included defining the objectives and rationale properly, and setting out sound management processes. Standards that were not met well included defining appropriate data collection and analysis methodologies, assessing evaluability and documenting if ethical standards were followed. Although management responses were of reasonable quality, as judged by the level of detail around response to recommendations, follow-up and dissemination aspects were poor beyond the immediate initiative.
  3. In terms of the quality of decentralized evaluation planning processes, there was good evidence that:
    1. The lack of systematic planning of decentralized evaluations across the Department was a contributing factor to several barriers to achieving high-quality evaluations including: clarity about the purpose of evaluations, ensuring adequate time for evaluation planning and conduct, ensuring appropriate time for engaging with technical experts in ESU, and ensuring buy-in from senior management.
    2. The quality of ToRs, work plans and evaluation reports was interrelated, and good quality evaluation planning was likely to produce good quality reports. The data also indicated that defining evaluation methods appropriate to the purpose, object and context of the evaluation was a key contributor to overall quality.
    3. PCE’s role and responsibilities for supporting decentralized evaluation quality were unclear. While it had a QA and support role, and its tools and templates were perceived by those who used them as broadly useful, following ESU advice appeared to be optional as Branch staff were not always requesting help. ESU’s largely advisory role meant that it had limited influence on signing off on ToRs before procurement, and was not able to influence the quality of the final report.
    4. ESU was not sufficiently resourced to meet its QA role and its work was not as effective as it could be. There was a disconnect between what ESU was expected to do (i.e. improve the quality of decentralized evaluations) and what ESU had control over. While ESU struggled to always provide timely advice or to apply standards that met the situational needs faced by the evaluation commissioners or the time frames and resources available, it was also the case that Program Branch staff needed greater technical support, and a better centralized information system would have improved lesson learning.
  4. In terms of decentralized evaluation use:
    1. Decentralized evaluations were mainly undertaken for operational needs, and where they were completed on time, they were deemed useful by Branch staff. However, their level of quality was not closely associated with their level of use, for example the adoption of recommendations may not always have reflected the reliability of the evidence base for those recommendations. Furthermore, the management response tool was not a sound means to gauge utility, as the format takes a rather “tick box” approach, and also was not available for the majority of decentralized evaluations.Footnote 58
    2. The competing demands, on the one hand for better decentralized evaluation quality and, on the other, for prompter timing and better use of evaluations at Branch level, were not being met by the Global Affairs Canada approach for QA. The requirement to assure the quality of every ToR, and to achieve this by a process of iteration and then final ESU approval before contracting, was not feasible given ESU staff capacity. It was also unrealistic to expect such intervention to have a significant influence on evaluation quality considering the much larger role that evaluation commissioners and the private sector teams they hire have in ensuring high-quality products.
    3. Decentralized evaluations were perceived by evaluation participants to have less status and attention within Global Affairs Canada than corporate evaluations, even though they accounted for the bulk of evaluation spend. The 50–60 decentralized evaluations conducted per year were not strategically chosen or prioritized, and this reduced their potential value to the organization and beyond. While routinely used in corporate evaluations, the quality of decentralized evaluation evidence will need to be sufficiently assessed in these studies.
  5. In terms of the implementation of the 2008 meta-evaluation recommendations:
    1. The 2008 report recommended strengthening PCE and clarifying its function and responsibilities for decentralized evaluation work. Since then, no significant strengthening of staff has occurred. Very limited progress has been shown around recommendations dealing with training, information management and communication.
  6. Review of Global Affairs Canada information systems:
    1. The current system used by ESU to track, store and learn from decentralized evaluations is not an effective proxy for sharing evaluation across the department. Retrieving documents is difficult and aggregating findings and lessons is not happening. As a result, Branch staff commissioning new evaluations cannot easily draw on past experience to design the evaluation or choose the best team. Furthermore, wider corporate knowledge learning is not being supported. Global Affairs Canada is significantly behind other peer agencies in its current approach to evaluation transparency, even though some other agencies also face difficulties in this regard.
    2. Finally, an overarching conclusion is that the Departmental Evaluation Policy needs updating (as noted in section 5.5). It has not been revised since 2005, and there have been significant institutional and wider policy changes since then. For example, the policy needs to incorporate Global Affairs Canada’s adherence to the OECD DAC evaluation standards as well as to other international standards of good practice. Internally, the shift over the period of this review towards an emphasis on accountability rather than learning has reduced opportunities for the evaluation function to build greater understanding of the impact of aid programs,Footnote 59 and this needs to be rebalanced. The 2013 merger of CIDA and the Department of Foreign Affairs and International Trade Canada (DFAIT) and the creation of Global Affairs Canada in 2015 have affected PCE’s role and status, while the restriction on staff recruitment has prevented PCE and ESU from building the necessary capacity to meet its QA and support role for decentralized evaluations.

7 Lessons

  1. Sound evaluation methodology is a key determinant of the quality of decentralized evaluation reports.
  2. Investing in a high quality ToR is likely to lead to a high quality evaluation report.
  3. Having quality assurance done by a unit that limits its involvement to assessing evaluation products against a quality standard is not sufficient to improve quality of evaluations; a more integrated and comprehensive approach that includes contextual awareness, improved capacity within Branches and building on lessons learned is more likely to yield an improvement of quality.
  4. While OECD DAC evaluation standards provide an internationally agreed and comprehensive basis for guiding decentralized evaluation work, their application needs to be sensitive to the needs at field level and the project’s life cycle.
  5. In an environment where there is no explicit requirement or oversight on how senior management should respond to decentralized evaluation findings or recommendations, there is unlikely to be a link between a high quality evaluation report and its effective use.

8 Recommendations

The decentralized evaluation system in Global Affairs Canada is intended to provide both operational guidance for individual projects and a source of wider learning for the organization. There is room for the quality of such evaluations to improve further, and for their planning, implementation, storage and use to be supported more strongly and effectively. Therefore this study recommends the following.

  1. Strengthening support for decentralized evaluations.
    1. Strengthen the role of the evaluation focal point within Branches by giving them clear responsibility and oversight for promoting knowledge sharing on evaluation in their Branches.
    2. Clarify roles and responsibilities of all stakeholders (Branch staff, ESU, senior managers, the Development Evaluation Committee, etc.), ensuring that there is adequate support for the improvement of evaluation quality (i.e. that is integrated and comprehensive), and that this is reflected (and communicated) in an updated evaluation policy.
    3. Increase the training on evaluation skills for Branch staff and consider rotating staff with evaluation skills to region/country offices, particularly to Global Affairs Canada’s countries of focus.
    4. Provide sufficient support to PCE/ESU in terms of appropriate evaluation staff and budgetary resources.
  2. Strengthening the sharing and use of evaluations.
    1. Present a periodic statement, as a standing item at the Development Evaluation Committee, on the number and status of decentralized evaluations planned, commissioned and completed, by Branch, theme and budget.
    2. Develop a comprehensive knowledge management strategy including fora for sharing learning from decentralized evaluations, and the publication of decentralized evaluation reports in line with international practice.
    3. Build a stronger culture for evaluation knowledge sharing and use through better communication and use of different media (web, social, networks), as well as rewarding and recognizing high quality / good practice. Work towards achieving publication of all completed evaluations in the medium term.
  3. Strengthening planning and conduct of decentralized evaluations.
    1. Strengthen the role, capacities and resources for decentralized evaluation work at Branch level. Achieve this by developing and implementing a training strategy to provide basic evaluation skills and more practical guidance to Branches.
    2. Strengthen ESU’s role as a knowledge broker rather than focusing on QA.Footnote 60 To achieve this, consider the option of focusing the ESU on developing appropriate guidance and tools for decentralized evaluations, working more closely with Branches on selected evaluations, and developing better knowledge management systems. Build ESU capacity by staffing the unit with the right mix of evaluation skills and experience and organizational/contextual knowledge and expertise.
    3. Ensure that Branches plan decentralized evaluations on an annual basis with the guidance of ESU tools and staff.
  4. Strengthening information management of decentralized evaluations.
    1. In the short to medium term, strengthen accessibility to decentralized evaluations. For example, PCE could maintain a consolidated list on the intranet and develop functionality that updates staff on completed evaluations, management responses and the key findings. Explore the use of existing databases (e.g. CRAFT) to this end.
    2. In the long term, create a platform for staff and external stakeholders to access all decentralized evaluations.

Footnotes

Footnote 1

The Organization for Economic Cooperation and Development, Development Assistance Committee (OECD DAC) standards are available at http://www.oecd.org/development/evaluation/qualitystandards.pdf. These are internationally applicable standards for development evaluation and outline broad elements of quality evaluation. These standards are designed to “guide evaluation practitioners in providing credible and useful evidence to strengthen accountability for development results or contribute to learning processes.”

Footnote 2

Unless stated otherwise all dollars are Canadian dollars.

Footnote 3

Evaluation units in similar agencies are all aiming to fill the gap between gathering sound evaluation knowledge and feeding the information needs of users. See: ‘Evaluation units as knowledge brokers: testing and calibrating an innovative framework’, Olejniczak K. et al., Evaluation, 2016, Vol. 22(2), 168–189, Sage.

Footnote 5

A meta-evaluation is “a systematic and formal evaluation of evaluations, evaluation process and evaluation use.” This means that our study looked across a set of decentralized evaluations and examined how they were done and made an assessment of their quality in order to draw general conclusions. We did not look at the results or the findings of the evaluations reviewed.

Footnote 6

In 2013 the Department of Foreign Affairs, Trade and Development (DFATD) was established as an amalgamation of the Department of Foreign Affairs and International Trade (DFAIT) and the Canadian International Development Agency (CIDA) in order to promote greater international policy coherence. In 2015, DFATD was renamed Global Affairs Canada.

Footnote 7

Decentralized evaluations are distinct from ‘corporate evaluations’. Corporate evaluations are led by the department’s development evaluation function (PCE) and focus on program-level assessment of relevance, effectiveness, efficiency and sustainability. Corporate evaluations are funded by PCE.

Footnote 8

The Organization for Economic Cooperation and Development, Development Assistance Committee (OECD DAC) standards are available at http://www.oecd.org/development/evaluation/qualitystandards.pdf.

Footnote 9

There are six such Branches, namely: EGM – Europe, Middle East and Maghreb Branch, MFM – Global Issues and Development Branch, NGM – Americas Branch, OGM – Asia Pacific Branch, KFM – Partnerships for Development Innovation Branch, WGM – Sub-Saharan Africa Branch.

Footnote 10

CIDA (2005) Development Evaluation Policy, Performance and Knowledge Management Branch.

Footnote 11

An Evaluation of the Quality of Decentralized (Branch-led) Evaluations: Final Report, CIDA, Hay Consulting, June 2008.

Footnote 12

http://www.oecd.org/development/evaluation/qualitystandards.pdf.

Footnote 15

Work Plan, Meta-Evaluation of Global Affairs Canada’s Decentralized Evaluations, Itad Ltd., January 2016.

Footnote 16

Op. cit.

Footnote 17

53 evaluations were selected purposively, being those cases with all four ‘deliverables’, together with 72 evaluations which had at least one other ‘deliverable’.

Footnote 18

116 evaluation reports, but only 81 terms of reference, 58 work plans and 28 management responses. This was due mainly to the difficulty PCE faced in locating all four deliverables. It is not certain, however, that all evaluations produced the full set of accompanying deliverables; if they did not, this is an important issue in terms of adherence to process.

Footnote 19

For the external partners, interviews were held with evaluation staff in the UK’s Department for International Development (DFID), Australia’s Department for Foreign Affairs and Trade (DFAT), United Nations Women Agency (UN Women) and United Nations Children’s Emergency Fund (UNICEF).

Footnote 20

Several follow-up emails were sent by PCE to encourage responses. In Itad’s experience, surveys of this kind often generate a similar response.

Footnote 21

However, the median values were similar: $9 million for the sample and $8 million for the inventory.

Footnote 22

Coverage was also fair for TORs and work plans, but less so for management responses, which were fewer in number.

Footnote 23

Although a simple regression analysis indicated the trend was not statistically significant.

Footnote 24

Based on the quality review results analysed by year.

Footnote 25

The ‘other’ category includes other types of review, such as organizational assessments.

Footnote 26

r² = 0.38, but the sample is small at 30 cases.

Footnote 27

This is not to say that the evaluations were unethical, only that they did not sufficiently document whether ethical principles were followed.

Footnote 28

The management response format used by Global Affairs Canada consists essentially of a table listing the evaluation recommendations and follow up actions agreed to, with timing and responsibility. The limitations of this assessment are addressed in sub-section 4.3 of this report.

Footnote 29

Unless stated otherwise all dollars are Canadian dollars.

Footnote 30

This was due to the small sample size for such an analysis. Principal components analysis normally requires much larger samples: a minimum of 200 cases is regarded as necessary, and as a rule of thumb a bare minimum of 10 observations per variable is needed to avoid computational difficulties.

Footnote 31

This was done using NVivo, a software package designed to help organize and analyze non-numerical or unstructured data. See http://www.qsrinternational.com.

Footnote 32

Three examples were rated as satisfactory or highly satisfactory for their TOR, Work Plan and Evaluation report: Micronutrient Initiative Evaluation, MFM, 2012; National Languages Project, Asia, 2013; Volunteer Cooperation Program, KFM, 2014.

Footnote 33

The standing offer was a procurement system managed by PCE that short-listed approved evaluation consultants. Under recent changes as a result of the DFAIT merger, the standing offer has been replaced by a broader Government of Canada Tasks and Solutions Professional Service method of supply.

Footnote 34

The inventory of evaluations prepared by PCE contained a total of 1,118 documents for 467 projects from six Branches. They cover evaluations for Global Affairs Canada operations with a value greater than $2 million. The total number of operations in the period is 1,195, of which 608 had a value of over $2 million. See: Report on the Inventory of Decentralized Evaluations – Summary, Global Affairs Canada, October 14, 2015.

Footnote 35

These are a Deputy Director contributing 50 percent of his/her time to the Unit and three full-time staff of varying seniority.

Footnote 36

Currently, the Unit has 50 percent of a Deputy Director’s time and one full-time senior professional (EC6). Six other PCE staff contribute small amounts of their time to take on specific activities.

Footnote 37

ESU staff reported that, while efforts were made to match staff to evaluations that reflected their level of experience, in reality staff shortages meant that staff were often allocated to assignments that ideally required more experience than they had.

Footnote 38

This covered both technical evaluation skills and practical knowledge around contracting, budgeting and hiring evaluators.

Footnote 39

ESU guidance states that around each evaluation output “there may be several iterations of the comment and revision process” before the final version is agreed. DFATD (2015), Explanatory Note on the Decentralized Evaluation Process, Development Evaluation Division.

Footnote 40

Key informant, sub-Sahara Africa program Branch.

Footnote 41

Key informants, sub-Sahara Africa and Americas Branch and online survey.

Footnote 42

The management response tool is a basic table with standard columns to be filled by those responsible for follow up. We judged quality by the level of detail in these columns.

Footnote 43

This was the case in the Micronutrient Initiative (M013403001 in MFM) and the Statistics Canada evaluations (A034091 in KFM).

Footnote 44

It was not possible to compare the findings from the 2008 study with this current study, since they used different methods. The 2008 quality review looked only at evaluation reports, and did not assess other deliverables such as TORs, work plans or management responses. It also did not use the OECD DAC criteria for evaluation quality.

Footnote 45

These were staff that served for a limited period (usually about 3 years) in one section before moving on to another posting.

Footnote 46

Evaluation Policy: Strengthening Development Cooperation Effectiveness through Informed Decision-Making and Organizational Learning, Performance and Knowledge Management Branch, CIDA, March 2005.

Footnote 47

Directive on the Evaluation Function, Treasury Board, Government of Canada, 2009.

Footnote 48

For example, see CIDA (2007) Background paper on strategy for dissemination of CIDA’s evaluation knowledge; and CIDA (2008) Developing a dissemination strategy for evaluation knowledge: analysis of the effectiveness of CIDA’s current dissemination practices and materials.

Footnote 49

CIDA (2007) Background paper on strategy for dissemination of CIDA’s evaluation knowledge.

Footnote 50

Ibid.

Footnote 51

CIDA (2008) Developing a dissemination strategy for evaluation knowledge: analysis of the effectiveness of CIDA’s current dissemination practices and materials.

Footnote 52

In addition to these specific efforts, all evaluation reports should also be stored in their project folder on EDRMS. Therefore, in theory, if someone knows that a project commissioned an evaluation, they can access it through the project folder. However, this is not ideal, as the system cannot easily be searched and EDRMS cannot be accessed by Global Affairs Canada staff overseas.

Footnote 53

2015/16–19/20 Rolling Five-Year Development Evaluation Work Plan.

Footnote 54

CIDA (2007) op. cit.

Footnote 55

In the Canadian government, all published reports need to be made available in English and French. This process is both time-consuming and costly, and as a result the Branches, which own the evaluations, may not have the funding or resources to take on this responsibility.

Footnote 56

For example, corporate evaluations, which are made publicly available, go through a lengthy review and sign-off process of up to two years before they are publicly disclosed. If this process were applied to decentralized evaluations, given the number undertaken each year, it is unlikely that the approval system could cope.

Footnote 57

These are called the Evaluation Global Tracking System and the Global Accountability and Tracking of Evaluation Use, respectively.

Footnote 58

Out of 488 decentralized evaluation reports identified in the PCE inventory, only 83 had a management response.

Footnote 59

Interviews with senior management, PCE staff and external consultants.

Footnote 60

Evaluation units in similar agencies are all aiming to fill the gap between gathering sound evaluation knowledge and feeding the information needs of users. See: ‘Evaluation units as knowledge brokers: testing and calibrating an innovative framework’, Olejniczak K. et al., Evaluation, 2016, Vol. 22(2), 168–189, Sage.
