There’s nothing like a visiting international guest to get our sector talking and reflecting. This week we have been talking and thinking about evaluation, as Mark Cabaj from Canada has been in town igniting some fantastic discussion and debate.

Mark is President of the consulting company From Here to There and is an Associate of Tamarack – An Institute for Community Engagement.

Mark has had a remarkable career: he experienced first-hand the end of communism in Europe, working as an Investment Advisor in Poland’s Foreign Investment Agency, as the Foreign Assistance Coordinator for Grants in the Ministry of Privatisation, and as the Mission Coordinator for the creation of the United Nations Development Program’s first regional economic development initiative in Eastern Europe.

In Canada, Mark coordinated the Opportunities 2000 project, an initiative that won provincial, national and international awards for its multi-sector approach to poverty reduction (read the final evaluation report for the project [pdf]). He was also Vice President of the Tamarack Institute and the Executive Director of Vibrant Communities Canada.

Mark’s current focus is on developing practical ways to understand, plan and evaluate efforts to address complex issues, particularly through developmental evaluation, an approach to assessment that emphasises real-time feedback and learning in emerging, fast-moving environments.

For us at ten20, Mark’s visit has prompted a lot of thinking and a number of conversations, starting with the question: Why is evaluation increasingly on our minds? Our musings are that the interest in evaluation starts with the acknowledgement that we are not operating in isolation – and that no one individual or organisation can solve the complex social issues that we are committed to.

We know that we must align effort around shared goals if we are serious about making change happen. Committing to this work, we know through scar tissue, is not easy and requires us to rapidly adapt our strategy, practice and mindset. And this is where evaluation comes in, creating the need for rapid feedback loops to drive continuous improvement.

Of course, this is easier said than done. Along the way we have commissioned evaluation for the collective impact initiatives we support and also evaluation of our own impact as a philanthropic organisation.

Through this we have some observations about the role of the funder in evaluation. We have shared them with other funders and learned from them about their experiences. Here is a snapshot of some of our conversations.

1. DON’T only measure immediate end outcomes (e.g. reading levels at school entry). DO also measure the extent to which the community has built the capacity to own and drive the change (e.g. extent of parental engagement and broader community participation in lifting literacy). This is a valid ‘success’ in its own right and will ultimately drive sustainability and lasting impact.

2. DON’T outsource evaluation to an ‘expert’. DO consider funding external expertise but importantly also fund the capacity in community to own and drive evaluation that’s meaningful and connected into strategy, learning and continuous improvement.

3. DON’T as a funder dictate target outcomes against which you will measure progress. DO sit at the table with community and let them set the measures of their success.

4. DON’T use evaluation to lock in the traditional power dynamics of funder to grantee that encourage a success vs failure mentality. DO use evaluation to deepen relationships, understanding and learning as partners in change.

5. DON’T restrict evaluation to be about the effectiveness of the grantee only. DO undertake evaluation to interrogate your own funding practice, taking accountability for the role of the funder in the work.

6. DON’T commission evaluation in isolation, as this leads to multiple measurement frameworks, duplication, fragmentation, and an inability of the system to compare apples to apples. DO fund alongside others and jointly commission the development of shared measurement systems.
