We propose analytically and operationally flexible approaches that anticipate the need to adapt. We recognize that humanitarian and development evaluations are dynamic, complex, and often unpredictable; they require evaluation teams to redirect efforts and change plans in the midst of implementation, and we plan accordingly.
Our approach to conducting evaluations is based on clear communication focused on truly understanding and meeting your needs.
With every evaluation, KonTerra aims to fulfill the following objectives:
These dimensions are critical to upholding international standards and ensuring credibility. With every evaluation, we seek to identify specific measures that can enhance independence and impartiality, such as ensuring that team composition is balanced enough to provide perspective and specialized enough to provide the required technical expertise. Our teams abide by UNEG ethical standards and EHA guidelines, and conduct rigorous, evidence-based analysis.
We plan for communication and learning focused on maximizing the utility of the process. Where appropriate, we perform an evaluability check at the beginning of the inception phase. Early engagement with key stakeholders is critical for data discovery, data quality assessment, and data ecosystem mapping. We communicate efficiently and effectively to negotiate a consolidated set of evaluation questions and indicators for each evaluation.
We aim to ensure a useful evaluation process and product. This means that data are used and that interactions with stakeholders are managed well to maximize their value. When fieldwork is required, we identify opportunities to engage directly with affected people through national consultants and institutions. To achieve a user-friendly result, we consider the users, the purpose and scope of the evaluation, and the available resources; a principle of fit optimization guides the design and scoping process, as well as the methodological adaptations that may be required over the course of each evaluation.
We use multiple levels and types of triangulation: triangulation across different types of evidence, and triangulation of analysis based on evaluation criteria, questions, and objectives. We aim to consider different perspectives on operational performance. We assess data quality and evidence substantiation along with the overall data ecosystem that underpins the evidence, incorporating evaluability and scoping exercises as relevant. Triangulation among evaluation team members also encourages knowledge sharing between evaluators with different areas of expertise and on key issues such as gender.
We design methodologies sensitive to culture, gender, religion, race, nationality, and age. By establishing a base of evidence about who benefits (and who does not), we can better address the risks of perpetuating discriminatory structures and practices. Integrating gender equity and human rights into our overall methodological approach allows evaluation teams to demonstrate how effectively interventions are carried out.