3.5. Monitoring and Evaluation

Introduction

A well-designed monitoring and evaluation process provides information to program managers and implementers that is critical to judging the effectiveness of particular interventions so that modifications can be made to optimize project impact. The goal of a monitoring and evaluation system is to increase the density and quality of information flow to improve decision-making at all levels, from the field through managers to donors and other stakeholders. Since those changes will be most helpful during a project rather than after, monitoring and evaluation should be an ongoing feedback mechanism used throughout the project’s implementation.

The value chain approach uses facilitation as a means of implementation. Measuring change in facilitation projects can be difficult, however. One challenge is that project services are not provided directly to intended beneficiaries. Instead, project-facilitated changes are designed to reach intended beneficiaries through their relationships with other market actors. This can complicate the process of accurately measuring project outreach, identifying beneficiary populations and monitoring changes over time. Another challenge is the measurement of spillover effects (diffusion), which occur when firms not counted as part of project outreach imitate new techniques and business practices they observe among neighbors, friends and competitors. The following resources can guide the monitoring and evaluation of market facilitation projects:

Review basic information on monitoring and evaluation.

What Constitutes Good Performance Monitoring?

Performance monitoring consists of a number of related tasks. Chief among them is the selection of "key performance indicators" that allow managers to track project performance over time.

Monitoring key performance indicators alone does not provide sufficient information to assess project performance. It typically needs to be supplemented with other quantitative and qualitative data collection methods in order to understand the drivers behind the trends and results the indicators reveal. Useful data collection methods include key informant interviews, focus group discussions, small-scale targeted surveys, market scanning, secondary research and rapid assessments. By using a "tool box" of quantitative and qualitative methods that complement and reinforce one another, projects can "triangulate" to gain a fuller understanding of their effectiveness.

Performance monitoring, however, entails more than data collection. Data collection needs to be embedded within a "system," that is, a process for transforming data into useful information. That process involves a number of tasks that must be performed if the system is to operate efficiently and effectively, including the reporting, management, analysis, dissemination, verification, and use of data. A breakdown in any of these tasks will compromise the validity and usefulness of the performance monitoring system.

In developing a performance monitoring system, value chain projects can follow a set of widely validated best practices that includes matching system design to resources and technical capacities, training, participation, pilot testing, and oversight and monitoring.

Once the performance monitoring system has been finalized, the details should be captured in a series of Performance Indicator Reference Sheets (PIRS) for each key performance indicator. The PIRS is a summary resource that describes how the performance monitoring system is operationalized.
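As a rough illustration of what a PIRS captures, the sketch below lays out commonly documented reference-sheet fields as a simple data structure. The field names and sample values are illustrative assumptions, not an official template.

```python
# Hypothetical sketch of the fields a Performance Indicator Reference Sheet
# (PIRS) commonly documents for one key performance indicator.
# Field names and sample values are illustrative, not an official template.

example_pirs = {
    "indicator": "Number of smallholder farmers adopting improved practices",
    "definition": "Farmers applying at least one project-promoted practice "
                  "on their own plots during the reporting year",
    "unit_of_measure": "farmers (count)",
    "disaggregation": ["sex", "region", "value chain"],
    "data_source": "partner firm training and sales records",
    "collection_method": "quarterly partner reporting plus a small verification survey",
    "collection_frequency": "quarterly",
    "responsible_party": "project M&E officer",
    "baseline_value": 0,
    "targets_by_year": {2024: 2000, 2025: 5000},
    "known_limitations": "relies on partner records; risk of double counting",
}

# A reference sheet like this makes the monitoring system auditable:
# anyone reviewing the data can see how each figure was defined and collected.
for field, value in example_pirs.items():
    print(f"{field}: {value}")
```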

What Constitutes a Rigorous Impact Assessment?

Impact assessment rigor is determined by the following four criteria: 

  1. Internal validity is the extent to which the impact assessment establishes a credible counterfactual (see the illustrative sketch following this list). Internal validity is suspect when biases in the design or conduct of the impact assessment could have affected observed results, obscuring the true direction, magnitude, or certainty of the treatment effect. Selection bias is a primary threat; it occurs when there are systematic differences in observable (e.g., gender, education, climate, market access) and unobservable (e.g., ambition, risk orientation, entrepreneurial spirit) characteristics between the treatment and control groups.
  2. External validity is the extent to which the impact assessment findings are generalizable to other value chain projects.
  3. Construct validity is the extent to which the impact assessment design and data collection instruments accurately measure the project's causal model.
  4. Statistical conclusion validity means that the researchers have correctly applied statistical methods and identified the statistical strength/certainty of the results.
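
To make the idea of a credible counterfactual concrete, the sketch below works through a difference-in-differences calculation on hypothetical numbers. The figures, group labels, and the choice of a difference-in-differences design are illustrative assumptions, not a method prescribed by the text above.

```python
# Illustrative difference-in-differences sketch (hypothetical numbers).
# The treatment group's change over time is compared with the control
# group's change, so that trends affecting both groups are netted out.

# Average household income (hypothetical, USD) at baseline and follow-up.
treatment = {"baseline": 410.0, "followup": 520.0}  # project participants
control = {"baseline": 405.0, "followup": 455.0}    # comparison group

treatment_change = treatment["followup"] - treatment["baseline"]  # 110.0
control_change = control["followup"] - control["baseline"]        # 50.0

# The control group's change stands in for the counterfactual: what would
# likely have happened to participants without the project.
did_estimate = treatment_change - control_change  # 60.0

print(f"Treatment group change: {treatment_change:.1f}")
print(f"Control group change (counterfactual trend): {control_change:.1f}")
print(f"Difference-in-differences impact estimate: {did_estimate:.1f}")

# Caveat: if participants self-selected into the project (selection bias),
# the two groups may differ in unobserved ways and this estimate will not
# reflect the true treatment effect; randomized assignment or careful
# matching are common ways to strengthen internal validity.
```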

Impact assessment rigor further depends on a variety of other factors that need to be incorporated into the assessment design, implementation, and analysis, including triangulation, methodological transparency, sound data collection methods, and methodological appropriateness. 

For more on impact assessment methodologies, see the Impact Assessment Primer Series article #2, "Methodological Issues in Conducting Impact Assessments of Private Sector Development Programs," and Primer Series article #3, "Collecting and Using Data for Impact Assessment."

What Are the Steps in Implementing an Impact Assessment?

Conducting a good impact assessment of a value chain project involves the following steps (the steps assume two research rounds—a baseline and follow-up):

  1. Select the Project(s) to be Assessed.
  2. Conduct an Evaluability Assessment.
  3. Prepare a Research Plan.
  4. Contract and Staff the Impact Assessment.
  5. Carry out the Field Research and Analyze its Results.
  6. Disseminate the Impact Assessment Findings.

The Private Sector Development Impact Assessment Initiative (PSD-IAI) team conducted a series of impact assessments of four USAID private sector development projects to demonstrate and refine this approach to impact assessment.

Another impact assessment of the USAID-funded Development of a BDS Market in Rural Himalayas project was conducted in 2007.

The PSD-IAI developed, tested, and published guidelines for credible impact assessments, including an Impact Assessment Primer Series that explains many of the key concepts and operational steps summarized above:

  1. IA Primer Number 1: Assessing the Impact of New Generation Private Sector Development Programs
  2. IA Primer Number 2: Methodological Issues in Conducting Impact Assessments of Private Sector Development Programs
  3. IA Primer Number 3: Collecting and Using Data for Impact Assessment
  4. IA Primer Number 4: Developing a Causal Model for Private Sector Development Programs
  5. IA Primer Number 5: Causal Models as a Useful Program Management Tool: Case Study of PROFIT Zambia
  6. IA Primer Number 6: Planning for Cost Effective Evaluation with Evaluability Assessment
  7. IA Primer Number 7: Common Problems in Impact Assessment Research