A Reflection on Impact Evaluation and the Problem of Causality

October 27, 2021

CONTRIBUTORS

Henry Owoko

Monitoring, Evaluation & Research Officer


The concept of cause and effect plays an important role in our early years of development. Instinctively, a child will relate the action of touching a hot surface with being burnt. As we develop further, we draw on a growing list of such causal explanations to make sense of our observed environment, eventually gaining a better understanding and control of our surroundings. However, this concept, established early in life, is too basic to provide a foundation for scientific theories. Take, for example, the case of a child touching the hot burner of a cooker. The child considers touching the cooker the unique cause of being burnt, without considering the more intricate causal mechanism that has to be in place to produce the effect of burning. The inadequacy of this explanation becomes clear when the cooker is switched off or unplugged, and therefore not hot: touching it will not result in the expected effect of burning.

However rudimentary, these early experiences introduce us to the critical concept of causality, a concept that today takes center stage in determining whether development policies and programs work. The idea of causation and its definition sparks a fundamental debate among researchers. Despite the lack of consensus on a definition, like Ernst Mayr, I take the stance that a definition of causality, regardless of the field or subject of discussion, must contain three elements:

  1. An explanation of past event(s)
  2. Prediction of future events
  3. Interpretation of teleological (goal-directed) phenomena 

Impact evaluators and discussants of this topic agree that a single effect can be instigated by several causal mechanisms, within which a range of specific components contribute to the effect in question. These components are what we retrospectively refer to when explaining outcomes in causal attribution.

Given the complex natural environment of development programs and policies, characterized by many interacting mechanisms, inferring causality can be challenging. The task would be considerably simpler for impact evaluators if a checklist could be used to distinguish causal from non-causal relations. However, philosophers such as Hume and John Stuart Mill have raised considerable philosophical criticisms against the use of such inductive inferences. This notwithstanding, inductively oriented causal criteria remain commonplace because they offer clarity in an otherwise convoluted practice. Austin Bradford Hill proposed one such set of criteria: he considers strength of association, consistency, specificity, temporality, biological gradient, plausibility, coherence, experimental evidence, and analogy in trying to separate causal from non-causal associations.

P. W. Holland raises and addresses this challenge in what he calls "the fundamental problem of causal inference". While the intention is to observe the outcome of a given unit both with and without the treatment, it is impossible to do so: only one scenario can be observed at a time, either the unit receives the treatment or it does not. He argues that a scientific approach to overcoming this problem involves making an untestable homogeneity assumption; that is, if unit X is similar to unit Y at time A, the scientist can expose unit X to the treatment, measure the change in both units, and make an inference. His second argument, which he refers to as the statistical solution to the fundamental problem of causal inference, treats the counterfactual as missing data: since only one of the potential outcomes is observed for each unit, the unobserved outcomes are missing, so causal inference is in fact a missing data problem, and assignment mechanisms can be used to resolve it.
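To make Holland's "missing data" framing concrete, here is a minimal sketch in Python (the simulated data, effect size, and variable names are illustrative assumptions, not drawn from Holland's work). Each unit has two potential outcomes, but the assignment mechanism lets us observe only one of them; under random assignment, the difference in observed group means still recovers the average treatment effect.

```python
import numpy as np

# Minimal sketch of the "missing data" view of causal inference.
# All numbers below are illustrative assumptions, not from the article.
rng = np.random.default_rng(0)
n = 10_000

# Potential outcomes for every unit: y0 (untreated) and y1 (treated).
y0 = rng.normal(loc=50.0, scale=10.0, size=n)
y1 = y0 + 5.0                      # assume a constant treatment effect of +5
true_ate = np.mean(y1 - y0)        # knowable only because we simulated both

# Assignment mechanism: random assignment to treatment or control.
treated = rng.integers(0, 2, size=n).astype(bool)

# The fundamental problem: only one potential outcome is observed per unit;
# the other is "missing data".
y_observed = np.where(treated, y1, y0)

# Difference in observed group means estimates the average treatment effect.
estimated_ate = y_observed[treated].mean() - y_observed[~treated].mean()

print(f"True ATE:      {true_ate:.2f}")
print(f"Estimated ATE: {estimated_ate:.2f}")
```

Because assignment is random and therefore independent of the potential outcomes, the unobserved outcomes are missing at random, which is precisely why randomized assignment mechanisms sit at the heart of so many impact evaluation designs.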

Today, national governments, multinational corporations, and international agencies spend billions of dollars on investments targeted at (sustainable) development. These investments are channeled towards policy and programmatic interventions designed to improve outcomes and reach essential milestones in health, education, and other sectors. Despite the heavy investments, many development programs do not prove whether their interventions work; they do not measure whether they achieve the desired outcomes. Consequently, we often miss out on learning what works, an important lesson when redistributing resources. By leveraging impact evaluation methods anchored in the basic concepts of causality, development investors can make informed investment decisions and improve these programs.