By Damilola R Seyi-Oderinde
Unlocking Collective Expertise: Strengthening Program Evaluations through Collaboration
Hi, my name is Damilola Seyi-Oderinde. A little over a year ago, I joined the ELRC as a Research Associate. Before that, I worked as a professional counselor and lecturer, and I quickly learned that my previous skillset was insufficient for facilitating a collaborative program evaluation. I am sharing this blog post as a resource for anyone who wants deeper insight into how we used a collaborative evaluation approach to build both my own evaluation capacity and that of our clients.
STEM program evaluations often struggle to produce insights that drive practical decision-making. The root of this issue lies in evaluations conducted without the early involvement of key stakeholders, which leads to recommendations that go unused and reports that gather dust. A collaborative evaluation approach is essential to address this, emphasizing collective expertise and shared ownership throughout the evaluation process. As program evaluators, you understand that program success hinges on a robust evaluation framework that fosters continuous learning and adaptation. However, you may often face the challenge of being perceived as an outsider, which can create tension and undermine the effectiveness of the evaluation. This is where collaborative evaluation makes a significant difference. Actively involving stakeholders fosters a deeper understanding of the program’s context and elevates diverse perspectives, ensuring that evaluations are more inclusive and meaningful.
Facilitating program evaluation can be challenging due to technical complexity, rapidly evolving fields, and interdisciplinary demands. In our case, the key challenge was that our client had little to no prior program evaluation knowledge. These obstacles necessitated an evaluation approach that could adapt to practical needs while providing our client with capacity-building opportunities.
How did we do it?
Four-Phase Collaborative Evaluation Model to enhance clients' capacity
To meet these challenges, we implemented a Four-Phase Collaborative Evaluation Model designed to be structured yet adaptable. The phases are as follows:
1. Program Definition and Documentation
In this initial phase, we engage key personnel and leadership to establish a shared understanding of the program’s goals. Reviewing relevant documents and creating a Theory of Change provides a strong foundation for subsequent planning.
Key Ingredients:
i. Establishing mutual trust
- Communication & transparency
- Relationship building
ii. Learning Posture
- Active listening
- Flexibility & Responsiveness
2. Evaluation Planning
Here, we refine evaluation questions and success indicators, aligning them with stakeholder expectations. The focus is on building capacity and ensuring sustainability. This phase emphasizes real-time adjustments to address changing needs, fostering a collaborative problem-solving environment.
Key Ingredients:
i. Mutual understanding of objectives
- Alignment of expectations
- Prioritizing Outcomes
ii. Flexibility to adapt and respond to partners’ needs
- Collaborative problem-solving
- Real-time Adjustment
iii. Capacity building
- Knowledge transfer
- Sustainability focus
3. Data Gathering and Analysis
This phase involves collecting data from stakeholders and analyzing it to uncover initial insights. We prioritize shared ownership by incorporating stakeholders into the feedback process, which helps build consensus and supports continuous program improvement.
Key Ingredients:
i. Shared ownership and Decision-making
- Equal Stakeholder participation
- Consensus building
ii. Iterative feedback
- Continuous improvement
- Feedback Loops
iii. Contextual sensitivity
- Sensitivity to social context
- Tailored Approaches
4. Learning and Adaptation
In the final phase, we convene a meeting to present findings and facilitate knowledge exchange. The emphasis is on mutual learning and planning adaptations based on evaluation insights, with flexibility built in for ongoing adjustments.
Key Ingredients:
i. Reflexivity
- Self-awareness
- Positionality
ii. Mutual Learning
- Knowledge exchange
- Shared reflection
iii. Facilitator Neutrality
- Objective guidance
- Clear Documentation & Follow-up
Reflections and Key Takeaways of an Emerging Evaluator
Throughout this journey, it became evident to me that:
1. Enhancing your ability to balance expert knowledge with stakeholder input is crucial for an effective evaluation.
2. Co-developing evaluation plans empowers stakeholders and leads to outcomes that are more relevant and actionable.
3. Dynamic feedback loops greatly enhance the evaluation process by turning data collection and analysis into actionable insights that drive meaningful change.
The Power of Collaboration
Collaborative evaluation unlocks the full potential of collective expertise, resulting in stronger programs that are better equipped to adapt and evolve. By embracing collaboration at every stage, we can foster impactful, sustainable improvements that go beyond traditional evaluation practices.
AEA 2024
The ELRC will be presenting at the American Evaluation Association 2024 conference in Portland, Oregon, Oct. 21-26. Please see the posts in this collection for more information on their presentations.