CONTRIBUTORS
Doris Kwesiga Kambugyiro
Post-Doctoral Research Scientist
Let us assume the following hypothetical example:
An imaginary organization working in an African country set up a project called “The Mental Health Fight” (MHF) to prevent mental health conditions among youth aged 15–24 years by tackling substance abuse. The organization found that over 20% of young people in Africa today are dealing with anxiety and depression. Now in the project’s last year of operation, the organization wants to conduct an impact evaluation.
What is impact evaluation anyway?
Impact evaluation is a process that assesses whether an intervention (e.g., a project, program, policy, strategy, or service) has produced the intended changes, and whether there were any unintended changes. As a result, many impact evaluations focus on quantitative methods, looking at change or differences. Quantitative questions for measuring impact could include:
- “What is the average increase in income among program participants compared to a control group?”
- “By what percentage did the literacy rate improve in the targeted area after the intervention?”
- “How many individuals have successfully completed the training program and are now employed?”
- “What is the mean difference in health outcomes between the intervention and control groups?”
- “What is the cost per beneficiary achieved by the program?”
However, through impact evaluation, we want to know not only whether the intervention produced the intended changes, but also how and why it succeeded or failed. The great news is that much more can be learned through impact evaluations: they are not only learning and accountability tools but can also help determine whether to scale up an intervention to other places or beneficiaries.
When evaluating impact, “how” questions focus on the specific processes and activities that led to observed changes, while “why” questions go deeper into the underlying reasons and mechanisms behind the impact, aiming to understand the causal relationship between the intervention and the observed outcomes. Key “how” and “why” questions include:
- “How many beneficiaries participated in the program?” (process)
- “How did the program deliver its services?” (implementation)
- “Why did some participants experience greater positive outcomes than others?” (contextual factors)
- “Why did the program achieve its intended impact?” (mechanisms of change)
Getting this depth of information, and getting it accurately, requires qualitative methods. These focus on the “why, how and for whom”, rather than the “what and how much”.
The neglected qualitative angle
The not-so-great news is that qualitative methods in impact evaluations are often absent, or a last-minute squeeze-in: a quick-and-dirty job to tick the right boxes and interview a few token stakeholders for the appearance of a balanced evaluation. For instance, a review of 32 evaluations of HIV/AIDS projects found that qualitative research was integrated in a “simplistic, last minute and perfunctory” way. Additionally, Walker and others (2024) cite literature indicating that qualitative methods are sometimes dismissed as “unscientific” and therefore not good evidence.
Why is this happening, and what could the challenge be, considering the rich source of data that qualitative methods are? One well-known and easily observable explanation is the limited time allocated to impact evaluations by those who commission them. An organization may want an evaluation completed within one month, including tool development, data collection and report writing. This is inadequate, but it is often the case. As a result, the evaluation team applies the fastest and simplest approaches, mostly quick field surveys and observations. Indeed, Biradavolu pointed out here that qualitative methods were often not seen as critical in impact assessment despite their importance. Anecdotal field experience confirms the laxity around the qualitative aspects of impact evaluation.
Let us just include some qualitative methods…
This is probably also why many impact evaluations, when they do include a qualitative component, typically use one of three common methods: key informant interviews, in-depth interviews or focus group discussions. Oftentimes, no broader qualitative impact evaluation approach is used or indicated. The qualitative components are often lacking in innovation, despite the existence of multiple interesting ways to show how and why a project has had an impact, both positive and negative, intended and unintended. These include the Most Significant Change, Outcome Harvesting, Process Tracing, Contribution Analysis, Qualitative Comparative Analysis and many others. In other blogs, we can look at some of these and their practical application.
Related to the time constraint are the limited resources allocated to impact evaluation, which was sometimes not even planned for at the start of the intervention. These limitations make it harder to incorporate qualitative methods that require a lot of time and, in some cases, the participation of beneficiaries. Moreover, a number of them require a particular skillset that may not be readily available.
I am not suggesting that impact evaluations should be qualitative only. Not at all. Depending on the evaluation questions to be answered, quantitative and qualitative methods are often both needed in the same study, because they bring in different kinds of information and result in a more holistic picture.
What can we do differently?
Nevertheless, even when the qualitative aspects are included, are they given adequate attention? For instance, critical factors like gender can influence an intervention’s impact but may best be studied in depth through a qualitative approach. Let us use the scenario at the top of this blog as an example. It is known that gender (the social construction of roles) influences how women and men access services, make decisions and navigate other aspects of daily life. An evaluation of the MHF project to prevent mental health conditions among the youth would need to consider how gender influenced project outcomes. For example:
- Did both girls and boys who were project beneficiaries have similar positive or negative outcomes? Why was this the case?
- Where outcomes differed between girls and boys, what were the possible reasons?
- Did a similar number of girls and boys go through the entire intervention? Why or why not?
- Where girls had better outcomes than boys in one community but worse in another, what was the cause?
- What are the outcomes that were not intended by the project? Did these happen for both boys and girls?
Simply counting how many boys and girls were reached by MHF in each community is not enough to assess the project or to recommend, for example, that it be scaled up. A well-done qualitative approach will give deeper insights into both the successes and failures of MHF, especially across different genders, ages and contexts.
While I have only scratched the surface, my intention is to open up a conversation on how impact evaluations can be done more thoroughly, especially the qualitative aspects, which can be a gold mine. Suggestions include planning for evaluations right from the start of an intervention, allocating the required resources and ensuring the evaluation team has adequate capacity and knowledge. When an organization commissions or embarks on an evaluation, it should also appreciate the breadth of available methods and approaches and ensure some are tapped into. Once a qualitative component is included, let us move from the common to the uncommon, exploring stories, making the evaluation participatory and not doing the “quick and dirty” thing.