Collections

Capacity Sharing as a Mechanism for Amplifying and Empowering Diverse Voices in International Development Partnerships

This presentation discusses some of the strategies and challenges associated with amplifying all voices in an international program evaluation involving a multi-national team. 


Wilella Burgess onto AEA 2024

Somalia ABE Evaluation Video

USAID funded a five-year accelerated basic education program in targeted Somali states, aimed at supporting Somalia's efforts to increase educational access for out-of-school children and youth. A multi-national team, led by Purdue's Evaluation and Learning Research Center in collaboration with the Resilient Africa Network and the Somali Research and Development Institute, examined the outcomes of this program, specifically: (1) student access, retention, safety, and learning outcomes; (2) effects of student, family, community, and program differences on the learning outcomes of diverse learners; and (3) lessons learned that can inform education programming.


Wilella Burgess onto AEA 2024

An interdisciplinary perspective on private sector engagement in cross-sector partnerships: The why, where, and how

Abstract

Private sector engagement (PSE) is increasingly acknowledged in both literature and practice as a necessary mechanism to sustainably address development challenges. Despite increased practitioner and academic interest in these partnerships, there have been negligible attempts to systematically investigate cross-sector partnerships to distill best practices from the multiple environments in which they are employed. This manuscript presents a robust review of the social science and business literatures on cross-sector partnerships, yielding an interdisciplinary, evidence-based framework detailing archetypes of three prominent partnership characteristics: purpose, context, and relationship enablers. This work integrates a wide range of best practices and values pertinent to businesses and society, enabling researchers, practitioners, and partnership managers to characterize and evaluate partnerships systematically. The introduced framework also enables partners to situate and evaluate their partnership activities to optimize outcomes for each partner and impact on the challenge at hand.


Laura a Warner onto AEA 2024

Partnerships in "Private-Eyes" Languages: A Research-Based Guide for Improving Partnership Success


Laura a Warner onto AEA 2024

Investigating the robustness and relevance of an evidence-based sense-making construct to bridge the research-practice gap in cross-sector partnerships

Abstract

Cross-sector partnerships (CSPs) are important for tackling development challenges across public, private, and non-profit sectors. Despite their growing prevalence as partnership models of choice for grand challenge efforts, there is little evidence-based understanding about the dominant features of these engagements. This makes it difficult to develop CSP engagement models that are useful across development problems and settings. We posit that CSPs are intrinsically cross-disciplinary endeavors and require collaboration models that enable interdisciplinary problem orientation and solution casting. To facilitate sense-making in partnership efforts, a CSP engagement model must therefore integrate perspectives on partnership from major disciplines and practitioner experiences. Using automated content analysis of peer-reviewed publications and manual content analysis of practitioner interviews, we explored the robustness and relevance of partnership capacity theory (PCT), an interdisciplinary CSP engagement model, as an evidence-based approach to CSP with best-practice grounding. We found PCT comprehensively characterizes collaborative CSP dynamics and offers a foundational view of CSP best practices.


Laura a Warner onto AEA 2024

"Bar ama Baro (Teach or Learn)": Somalia's Accelerated Quality Learning Program Final Evaluation Executive Summary


Laura a Warner onto AEA 2024

Investigating the robustness and relevance of an evidence-based sense-making construct to bridge the research-practice gap in cross-sector partnerships

Cross-sector partnerships (CSPs) are important for tackling development challenges across public, private, and non-profit sectors. Despite their growing prevalence as partnership models of choice for grand challenge efforts, there is little evidence-based understanding about the dominant features of these engagements. This makes it difficult to develop CSP engagement models that are useful across development problems and settings. We posit that CSPs are intrinsically cross-disciplinary endeavors and require collaboration models that enable interdisciplinary problem orientation and solution casting. To facilitate sense-making in partnership efforts, a CSP engagement model must therefore integrate perspectives on partnership from major disciplines and practitioner experiences. Using automated content analysis of peer-reviewed publications and manual content analysis of practitioner interviews, we explored the robustness and relevance of partnership capacity theory (PCT), an interdisciplinary CSP engagement model, as an evidence-based approach to CSP with best-practice grounding. We found PCT comprehensively characterizes collaborative CSP dynamics and offers a foundational view of CSP best practices.


Wilella Burgess onto AEA 2024

An interdisciplinary perspective on private sector engagement in cross-sector partnerships: The why, where, and how

Private sector engagement (PSE) is increasingly acknowledged in both literature and practice as a necessary mechanism to sustainably address development challenges. Despite increased practitioner and academic interest in these partnerships, there have been negligible attempts to systematically investigate cross-sector partnerships to distill best practices from the multiple environments in which they are employed. This manuscript presents a robust review of the social science and business literatures on cross-sector partnerships, yielding an interdisciplinary, evidence-based framework detailing archetypes of three prominent partnership characteristics: purpose, context, and relationship enablers. This work integrates a wide range of best practices and values pertinent to businesses and society, enabling researchers, practitioners, and partnership managers to characterize and evaluate partnerships systematically. The introduced framework also enables partners to situate and evaluate their partnership activities to optimize outcomes for each partner and impact on the challenge at hand.


Ann Bessenbacher onto AEA 2024

Capacity Sharing as a Mechanism for Amplifying and Empowering Diverse Voices in International Development Evaluation Partnerships


Ann Bessenbacher onto AEA 2024

Unlocking Collective Expertise: Using a Collaborative Evaluation Process to Strengthen the Evaluation Capacity of Clients

Unlocking Collective Expertise: Strengthening Program Evaluations through Collaboration

Hi, my name is Damilola Seyi-Oderinde. A little over a year ago, I joined the ELRC as a Research Associate. Before that, I worked as a professional counselor and lecturer, and I quickly learned that my previous skillset was insufficient for facilitating a collaborative program evaluation. I am sharing this blog post as an additional resource offering deeper insight into our process of using a collaborative evaluation approach to enhance my evaluation capacity and that of our clients.

STEM program evaluations often struggle to produce insights that drive practical decision-making. The root of this issue lies in evaluations conducted without the early involvement of key stakeholders, which leads to recommendations that go unused and reports that gather dust. A collaborative evaluation approach is essential to address this, emphasizing collective expertise and shared ownership throughout the evaluation process. As program evaluators, you understand that program success hinges on a robust evaluation framework that fosters continuous learning and adaptation. However, you may often face the challenge of being perceived as an outsider, which can create tension and undermine the effectiveness of the evaluation. This is where collaborative evaluation makes a significant difference. Actively involving stakeholders fosters a deeper understanding of the program’s context and elevates diverse perspectives, ensuring that evaluations are more inclusive and meaningful.

Facilitating program evaluation can be challenging due to technical complexity, rapidly evolving fields, and interdisciplinary demands. In our case, the key challenge was our client's limited or non-existent program evaluation knowledge. These obstacles necessitated an evaluation approach that could adapt to practical needs while providing our client with capacity-building opportunities.

How did we do it?

Four-Phase Collaborative Evaluation Model to enhance clients' capacity

To meet these challenges, we implemented a Four-Phase Collaborative Evaluation Model designed to be structured yet adaptable. The phases are as follows:

1. Program Definition and Documentation
In this initial phase, we engage key personnel and leadership to establish a shared understanding of the program's goals. Reviewing relevant documents and creating a Theory of Change provides a strong foundation for subsequent planning.

Key Ingredients:

i. Establishing mutual trust

- Communication & transparency

- Relationship building

ii. Learning posture

- Active listening

- Flexibility & responsiveness

2. Evaluation Planning
Here, we refine evaluation questions and success indicators, aligning them with stakeholder expectations. The focus is on building capacity and ensuring sustainability. This phase emphasizes real-time adjustments to address changing needs, fostering a collaborative problem-solving environment.

Key Ingredients:

i. Mutual understanding of objectives

- Alignment of expectations

- Prioritizing outcomes

ii. Flexibility to adapt and respond to partners’ needs

- Collaborative problem-solving

- Real-time adjustment

iii. Capacity building

- Knowledge transfer

- Sustainability focus

3. Data Gathering and Analysis
This phase involves collecting data from stakeholders and analyzing it to uncover initial insights. We prioritize shared ownership by incorporating stakeholders into the feedback process, which helps build consensus and supports continuous program improvement.

Key Ingredients:

i. Shared ownership and decision-making

- Equal stakeholder participation

- Consensus building

ii. Iterative feedback

- Continuous improvement

- Feedback loops

iii. Contextual sensitivity

- Sensitivity to social context

- Tailored approaches

4. Learning and Adaptation
In the final phase, we hold a meeting to present findings and facilitate knowledge exchange. The emphasis is on mutual learning and planning adaptations based on evaluation insights, with flexibility built in for ongoing adjustments.

Key Ingredients:

i. Reflexivity

- Self-awareness

- Positionality

ii. Mutual learning

- Knowledge exchange

- Shared reflection

iii. Facilitator neutrality

- Objective guidance

- Clear documentation & follow-up

Reflections and Key Takeaways of an Emerging Evaluator

Throughout this journey, it became evident to me that:

1. Enhancing your ability to balance expert knowledge with stakeholder input is crucial for an effective evaluation.

2. Co-developing evaluation plans empowers stakeholders and leads to outcomes that are more relevant and actionable.

3. Dynamic feedback loops greatly enhance the evaluation process, turning data collection and analysis into actionable insights that drive meaningful change.

The Power of Collaboration

Collaborative evaluation unlocks the full potential of collective expertise, resulting in stronger programs that are better equipped to adapt and evolve. By embracing collaboration at every stage, we can foster impactful, sustainable improvements that go beyond traditional evaluation practices.


Damilola R Seyi-Oderinde onto AEA 2024

Navigating complexity: Crafting relevant stories through pragmatism and adapting to the unexpected

Crafting compelling stories that communicate the objectives, methods, results, and recommendations of an evaluation in a way that is easy to understand and memorable is a crucial aspect of evaluation reporting. It often involves the use of real-life examples, anecdotes, and visuals to make the evaluation more relatable and impactful. Traditional research/evaluation approaches often fall short in capturing the complexities, nuances, and intricacies of programs. Complex programs with multiple components often present complex evaluation challenges.

This was the case with the Indiana STEM Teacher Residency project, which aimed to recruit and train culturally competent and highly qualified teachers in STEM subject matters for the diverse context of the Indianapolis Public Schools (IPS) district.

However, our team had to adopt a nimble and flexible approach to evaluating the project due to numerous unanticipated challenges, such as the pandemic, a low participant recruitment rate, personnel turnover, and the complexity of the context itself. Guided by Crane et al.'s (2018) principles of pragmatism, we utilized theoretical flexibility, methodological comprehensiveness, and operational practicality. Theoretical flexibility enabled us to utilize formative, summative, and developmental techniques, facilitating the use of participatory methodologies that allowed us to account for not only process and impact measures but also contextual factors. Methodological comprehensiveness involved the use of multiple methods in both the quantitative and qualitative realms, allowing us to examine different facets and levels of program implementation and to move beyond meta-narratives, making every voice count. Operational practicality facilitated our ability to communicate real-time, actionable feedback to stakeholders, account for real-life limitations and constraints, and manage the complexity of the project.

In summary, our evaluation effort blended diverse methodologies, arrived at a comprehensive understanding of program intricacies, and effectively communicated findings. It also aided in managing complexity, adapting to unforeseen challenges, and offering real-time feedback, turning obstacles into opportunities. This approach served as the linchpin that enabled us to weave our methodologies together and enhance our narrative's relevance and value to stakeholders and the broader educational community.


Damilola R Seyi-Oderinde onto AEA 2023

Lessons Learned Using Evaluation to Weave Stories... about the results of a large, interdisciplinary, multinational development project


Laura a Warner onto AEA 2023

Evaluating Institutional Change Efforts: Evaluators as Story Collectors & Story Analysts

Instruction Matters: Purdue Academic Course Transformation (IMPACT) is a faculty development program that aims to educate instructors about research-based teaching practices that enhance their ability to create student-centered learning environments.

Moreover, IMPACT is based on the understanding that instructor knowledge, implicit beliefs, and motivation for teaching and learning shape course design, implementation, and impact on student learning and success.

Instructors who are supported through peer communities and evidence-based resources will develop the motivation to transform their instruction through student-centered practices to achieve equity in student learning and success.

To access the ELRC's new logic model (2021) and theory of change as described above, please view the logic model in this post.

For IMPACT annual reports from 2013 to 2023, please click this link.


Alex J France onto AEA 2023

Lending Credibility to Your Evaluation Story with Data Quality and Assurance Approaches

Data quality and data assurance are essential components of all evaluation and research projects.  When working as a single evaluator or part of a unified team, ensuring common understanding of evaluation goals and approaches and fidelity in data collection methodologies can be simple.  However, these activities require greater thought and intentionality as teams get larger and more dispersed.  This poster presents findings and lessons learned from a large, multi-disciplinary, multi-national evaluation in Somalia.  The project included in-country partners and employed data collection enumerators from local study communities.  Language barriers combined with security challenges precluded visits by the US team to aid in enumerator training, observe data collection, or otherwise participate in in-country activities.  Thus, we needed to develop and enact strategies and tools that would minimize risks associated with miscommunication, misunderstanding, or misinterpretation.
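The poster itself is not reproduced here, but as a purely illustrative sketch of the kind of tool such a dispersed team might use, the following Python snippet runs basic automated quality checks on incoming enumerator data. All column names (enumerator_id, respondent_age, interview_minutes) and thresholds are hypothetical assumptions for illustration, not the project's actual instrument or pipeline.

```python
# Hypothetical sketch: automated quality checks for enumerator-collected
# survey data on a dispersed team. Column names and thresholds are
# illustrative assumptions, not the project's actual instrument.
import pandas as pd

def run_quality_checks(df: pd.DataFrame) -> dict:
    """Flag common field-data problems for remote review."""
    issues = {}
    # Exact duplicate records can indicate copied or fabricated interviews.
    issues["duplicate_rows"] = df[df.duplicated(keep=False)]
    # Out-of-range values suggest entry errors or misunderstood questions.
    issues["age_out_of_range"] = df[
        (df["respondent_age"] < 5) | (df["respondent_age"] > 25)
    ]
    # Implausibly short interviews can signal skipped protocol steps.
    issues["too_fast"] = df[df["interview_minutes"] < 10]
    # Per-enumerator counts help spot outliers worth a follow-up call.
    issues["per_enumerator_counts"] = df.groupby("enumerator_id").size()
    return issues

if __name__ == "__main__":
    data = pd.DataFrame({
        "enumerator_id": ["E01", "E01", "E02", "E03"],
        "respondent_age": [12, 12, 31, 16],
        "interview_minutes": [24, 24, 6, 30],
    })
    for name, result in run_quality_checks(data).items():
        print(f"--- {name} ---\n{result}\n")
```

Checks like these cannot replace enumerator training or observation, but they give a remote team an early, systematic signal when data collection drifts from protocol.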


Ann Bessenbacher onto AEA 2023

Educational Equity in Somalia: (Re)Shaping Evaluation to Inform Policy


Laura a Warner onto AEA 2022

Readiness, Willingness & Ability Assessment: Creating Opportunities to Foster Multi-Stakeholder Institutional Change

To help instigate institutional change focused on diversity, equity, and inclusion, we developed and piloted a new tool, grounded in organizational psychology and behavior change theory, that promotes a sound approach to understanding the inclination toward institutional change and the likelihood of using improved practices to instigate change effectively from multiple stakeholder points of view. The Readiness, Willingness, and Ability (RWA) tool is grounded in the Theory of Planned Behavior (TPB) (Ajzen, 1985) to examine individuals' perceptions of the degree to which their colleagues, leadership, and institution are positioned for change. The RWA domains were not designed to act as a psychometric assessment of TPB; instead, the tool is rooted in TPB and accommodates needs for practical administration and ease of understanding and application as an institutional evaluation tool. In this session, we will present our tool, findings from five institutions, and next steps for its use.
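The session abstract does not specify the instrument's items or scoring, but as a minimal sketch of how an RWA-style profile might be aggregated, assume Likert-scale responses tagged by domain (readiness, willingness, ability) and stakeholder level (colleagues, leadership, institution); the 1-5 scale, sample responses, and mean-based aggregation below are assumptions for illustration, not the published tool.

```python
# Hypothetical scoring sketch for an RWA-style assessment. The domains
# and stakeholder levels follow the session description; the 1-5 scale
# and mean aggregation are illustrative assumptions, not the actual tool.
from collections import defaultdict
from statistics import mean

# Each response: (domain, stakeholder_level, rating on a 1-5 Likert scale).
responses = [
    ("readiness", "colleagues", 4),
    ("readiness", "leadership", 3),
    ("willingness", "colleagues", 5),
    ("willingness", "institution", 2),
    ("ability", "leadership", 4),
    ("ability", "institution", 3),
]

# Average ratings within each (domain, level) cell to profile where an
# institution appears positioned for change and where it does not.
cells = defaultdict(list)
for domain, level, rating in responses:
    cells[(domain, level)].append(rating)

for (domain, level), ratings in sorted(cells.items()):
    print(f"{domain:>11} / {level:<11} mean = {mean(ratings):.1f}")
```

A profile like this makes low cells (for example, willingness at the institution level) easy to spot, matching the tool's stated aim of showing where colleagues, leadership, and the institution are, and are not, positioned for change.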


Lindley McDavid onto AEA 2022

A multi-national evaluation team led by the ELRC met in Nairobi, Kenya, to analyze Somali education data and formulate recommendations for education programming and education policy.


Laura a Warner onto AEA 2022

Impact-ful Redesign: Using R Markdown for Evaluative Reporting


Kyle Steven Habig onto AEA 2022