Decolonising Data Collection in Humanitarian Work
Abstract
Decolonising monitoring, evaluation, and learning (MEL) is increasingly recognised as essential to producing evidence that genuinely reflects local realities. In practice, however, many MEL processes continue to rely on standard tools, externally set indicators, and donor-driven timelines that leave little room for communities to shape what is measured or why. These routines can unintentionally marginalise local knowledge, narrow the types of insights that are captured, and reinforce unequal power dynamics between international actors and the people affected by their programmes.
Meaningful change does not require overhauling entire systems. It can begin with small, deliberate shifts: involving communities from the earliest stages, recognising local knowledge as expertise rather than an add-on, and enabling young people to contribute beyond data collection roles. These everyday choices make space for more honest conversations, richer insights, and findings that resonate with the contexts they come from. Bringing reflexivity, humility, and a critical awareness of power into MEL practice opens the door to evaluations that are not only methodologically strong but also more just, responsive, and grounded in lived experience.
Introduction
Humanitarian organisations collect enormous amounts of data. Every project cycle - needs assessments, baseline surveys, midline and endline evaluations, accountability calls - generates a constant flow of information, each round adding another layer to an ever-expanding archive. Over time, an uncomfortable reality becomes clear: data collection, even when conducted with the best of intentions, can easily reproduce the power imbalances that development work aims to address.
This observation is not new. Scholars and practitioners have long argued that humanitarian research carries traces of colonial legacies, particularly in who defines research questions, who collects the data, and whose knowledge is taken seriously (Chilisa & Bowman 2023; Hassnain 2023). Yet seeing certain dynamics unfold in real time - interviews passing through multiple layers of translation, enumerators rushing through community visits, or donors requesting indicators that have limited resonance with local partners - makes those structural critiques feel a bit more real.
This article reflects on how well-meaning standard data practices can unintentionally reinforce unequal power relations, and on how we might move toward locally grounded, community-led, and youth-inclusive approaches to data collection.
How data collection reproduces power imbalances
Whose lens defines the questions?
A major issue lies in who defines the questions. In many organisations, research begins not with communities but with project documents and donor expectations. Terms of Reference are frequently drafted by funders or headquarters staff and set the boundaries for the questions we eventually pose to participants (Middernacht et al. 2023). This can result in what some scholars call hidden transcripts - perspectives that remain unreported not because they lack value, but because they do not neatly fit into predefined framings or indicators (Middernacht et al. 2023). This mismatch becomes apparent in many routine details: indicators that measure “improved livelihoods” even when participants define success in entirely different terms, such as “reduced stress” or “feeling safe”; baseline tools written in formal English and translated on the spot into several local languages; evaluations that invite communities to comment on interventions they never expressed an interest in evaluating.
Traditional MEL often assumes the evaluator is a neutral expert, arriving from outside with technical skills and ‘objectivity.’ Yet neutrality is itself a culturally and historically loaded concept. Evaluators carry their own assumptions about what counts as evidence and what progress looks like, assumptions often shaped by Western academic training (Vang 2019; Hassnain 2023). When the evaluator is an outsider or a foreigner, these dynamics risk being magnified. It is not uncommon for community members to wonder, “Who are these people and why do they want to know all this?” This uncertainty is not necessarily resistance; sometimes it simply reflects a lack of shared context. But moments like these also reveal something more structural: when people are asked to engage with an evaluation framework designed elsewhere - one that may not fully reflect how they define problems, measure progress, or experience everyday challenges - it shows how easily humanitarian organisations can lean on external expertise in ways that overlook people’s lived realities. That quiet discomfort is itself an indicator of a potentially asymmetric dynamic.
Despite sincere efforts to conduct ethical and participatory research, humanitarian MEL can drift toward extractive tendencies. This may include repeatedly surveying communities while offering little feedback or clear information about potential incentives or benefits; relying heavily on external consultants rather than supporting locally anchored research teams; or producing reports with little visibility, written strictly for donors rather than for those who shared their time and contributed their experiences. When data collection visits are compressed to meet timelines or targets, the relationship becomes more transactional than collaborative (Quantson Davis 2023). Communities then risk being framed primarily as data providers, reinforcing a subtle hierarchy between “those who know” and “those who are known” (Quantson Davis 2023). In this way, the people at the centre of a project become sources of data rather than partners in knowledge creation.
Where youth fit in: the missing perspective
In many humanitarian contexts, young people represent the majority of the population. They are often the ones who carry out field data collection, serving as enumerators or community facilitators. They are also increasingly vocal about extractive and top-down development practices, particularly through youth-led or grassroots networks. Yet their influence over how data collection is designed or interpreted remains limited.
Young people experience the inequities in data collection from two vantage points: as data collectors, where they are frequently underpaid and overworked, and as participants, where typical survey formats rarely reflect their digital habits, communication styles, or aspirations.
A 20-year-old once told me, “You ask about livelihoods, but you’re missing the point. Nobody asks how tired we are or what we dream about.” The comment was simple, yet its sentiment captured a fundamental disconnect: our tools, while rigorous, might not be designed for or with young people. If MEL seeks to generate a fuller understanding of context, then involving youth has value beyond their role as data collectors. While it is not always practical to engage them in every stage of the research process, their proximity to community dynamics and lived experiences can offer perspectives that enrich both the design and interpretation of findings. Even modest forms of youth involvement - such as consulting them on the relevance of questions or inviting their reflections during analysis - can support more context-aware co-creation, and this potential should be kept in mind when planning future MEL activities.
Toward decolonised data collection: practical shifts
Decolonising MEL is not about discarding rigour or established methodologies. Instead, it involves being attentive to who defines, produces, and uses knowledge (Jordan & Hall 2023). Yet it is equally important to recognise that these ideals often sit uncomfortably within the real-world constraints of donor expectations, time pressures, and varying levels of community familiarity with research processes. Even so, concrete shifts are possible, if imperfect.
One powerful step is involving communities from the very beginning. Some organisations now begin evaluations by consulting participants before finalising methodologies - a simple but potentially transformative practice (Middernacht et al. 2023). This can take the form of community-focused inception workshops that ask whether the proposed research questions matter, or collaborative discussions to define what success means locally, even if those definitions differ from donor key performance indicators. In some settings, organisations make an effort to share the key elements of the Terms of Reference with communities in accessible ways - not for formal validation, but to ensure the evaluation approach aligns with their priorities. While this is not always straightforward, especially where participants may have limited literacy or little familiarity with evaluation processes, even brief consultations can lead to more relevant questions and more grounded insights.
Another necessary area of change involves incorporating Indigenous and local knowledge systems. Traditional MEL frameworks often privilege written, linear, academically framed forms of knowledge, yet communities may communicate understanding through metaphors, stories, rituals, or symbols (Eakins et al. 2023). Translating these into conventional evaluation outputs is not always straightforward, but in practice it can mean using participatory methods such as story circles, community maps, and visual prompts, or holding conversations in familiar community spaces rather than formal offices. Allowing participants to define key concepts before attempting to measure them often yields insights that no logframe would have captured. I once worked on a project where participants defined resilience as “being able to sleep peacefully, even when life is hard.” It was not in the project indicators - and it would have been difficult to quantify - yet it captured the form of impact that mattered most to participants.
A further shift involves questioning the evaluator’s own assumptions. Every evaluator brings biases - about what constitutes impact, which voices are credible, and what counts as rigorous evidence. Including a positionality statement in evaluation reports is a small but meaningful step toward transparency (Vang 2019), even if it can feel performative or insufficient on its own.
Power can also be rebalanced within MEL teams themselves. This means prioritising local researchers for lead roles, allowing communities to define what kinds of evaluators they want to work with, or involving community facilitators in analysis rather than limiting them to the data collection phase (Roholt et al. 2023). For youth specifically, this means creating career pathways instead of temporary enumerator roles, enabling youth to co-design youth-focused indicators, and supporting youth-led networks to conduct research on their own terms. These steps are not always easy to institutionalise, and sometimes face internal resistance, but they mark a shift in who holds knowledge and influence.
Finally, MEL needs to innovate beyond the traditional ‘clipboard’ mentality. Our tools still reflect the logic of a previous era: rigid surveys, linear data processes, and an overreliance on quantitative indicators (Picciotto 2023). More adaptive approaches already exist. Community radio can create effective feedback loops; stories, drawings, and photos can be meaningful data sources when culturally appropriate. Research can also take place through channels that youth already use, rather than forcing everything into a Kobo form.
Conclusion
Decolonising MEL is not a single shift but an ongoing process of reflection and adjustment. It does not require abandoning evidence or structure; rather, it encourages more intentionality in how knowledge is produced and used. Extractive tendencies often arise not because practitioners aim for them, but because of tight timelines, rigid frameworks, or inherited assumptions about what constitutes valid data.
But there are practical, doable shifts: co-designing questions, valuing local knowledge, hiring local researchers, integrating youth meaningfully, and letting communities interpret their own realities.
Ultimately, decolonising data collection is as much a relational effort as a methodological one. It asks us to show up differently: with humility, curiosity, and a willingness to let go of control. And honestly, the work becomes richer - and more human - when we do.
Bibliography
Arqaam. “Why Decolonisation Matters in M&E – Introducing Our New Power-Critical Toolkit.” Arqaam Monitoring & Evaluation, n.d., www.arqaam.org/2023/08/29/why-decolonisation-matters-in-me-introducing-our-new-power-critical-toolkit/.
Chilisa, Bagele, and Nicole R. Bowman. “The Why and How of the Decolonization Discourse.” Journal of MultiDisciplinary Evaluation, vol. 19, no. 44, 2023.
Eakins, Deanna, et al. Indigenous Evaluation Toolkit: An Actionable Guide for Organizations Serving American Indian/Alaska Native Communities. Seven Directions, University of Washington, 2023.
Hassnain, Haseeb. “Decolonizing Evaluation: Truth, Power, and the Global Evaluation Knowledge Base.” Journal of MultiDisciplinary Evaluation, vol. 19, no. 44, 2023, doi.org/10.56645/jmde.v19i44.803.
Jordan, Lauren S., and Julia N. Hall. “Framing Anticolonialism in Evaluation: Bridging Decolonizing Methodologies and Culturally Responsive Evaluation.” Journal of MultiDisciplinary Evaluation, vol. 19, no. 44, 2023.
Middernacht, Zoe, Abril Narros Lluch, and Willemijn de Iongh. Decolonising Monitoring and Evaluation: From Control to Learning. INTRAC, 2023.
Nadeau, Megan, et al. “Creating and Implementing an Indigenous Evaluation Framework Process with Minnesota Tribes.” Canadian Journal of Program Evaluation, vol. 38, no. 1, 2023, pp. 35–56.
Picciotto, Robert. “Evaluation Transformation Implies Its Decolonization.” Journal of MultiDisciplinary Evaluation, vol. 19, no. 44, 2023.
Quantson Davis, Rebekah. “Liberated or Recolonized: Making the Case for Embodied Evaluation in Peacebuilding.” Journal of MultiDisciplinary Evaluation, vol. 19, no. 44, 2023.
Roholt, Ross VeLure, Amy Fink, and Mohamed M. Ahmed. “The Learning Partner: Dialogic Approaches to Monitoring and Evaluation in International Development.” Knowledge Management for Development Journal, vol. 17, 2023.
Start Network. Community-Led Approaches to MEAL Research Grant: Report, 2023.
Tolmer, Zoë. “Decolonizing M&E: What Does African Monitoring and Evaluation Look Like?” UNDP Strategic Innovation, 2024.
Vang, Kaylee. “Exploring Social Justice-Oriented Evaluations.” Congressional Hunger Center, 2019.