Thursday, May 29, 2014

What next after monitoring and evaluation?

Opinion and Analysis

Monitoring and evaluation at the workplace needs to focus on learning and sharing best practice, celebrating successes and seeing how to do even better. Photo/FILE

By Mike Eldon, meldon@symphony.co.ke

Hands up all those who know what “M&E” stands for. Almost everyone in the public sector, and those in the development world, will know what this is.
Almost no one in the private sector will, though, and that included me until a few years ago, when I started interacting more closely with government bodies, NGOs and donors. Oh, and by the way, for those whose hands remained lowered, it stands for “monitoring and evaluation”.

It’s not that those in the business world don’t monitor or evaluate. On the contrary, they do a better job of both, and they are much more likely to take the process to its logical conclusion: they will do something about whatever is off track to try to ensure the desired results are still delivered on time, to budget and at the right quality. In the private sector they tend to call it performance management.
This is why I wish the government and development communities would use that term rather than M&E, which too often covers some of the necessary while falling far short of the sufficient.
There’s no chance of that rebranding though, since the existing term is too deeply entrenched. What a shame, for as I got to learn about how M&E has been working — and often not working — in the public sector I heard much about what a tough sell it is, even at a basic level.
The resistance to it has been for one obvious reason: no one likes to be monitored or evaluated. And no one relishes the transparency and accountability that accompany it.
Why? Almost everyone — ironically including the best performers — fears they will be found wanting, and that their bosses will hammer them for falling short.
It’s worse if they are ranked and the ranking is made public. What if they emerge near the bottom of the class? Such fears have sometimes led those being evaluated to set themselves unambitious targets that they are more likely to achieve.
M&E involves lots of work in deciding what data to collect, and then in gathering and processing it all. It also requires the discipline to record what is being monitored accurately and on time, imposing a heavy burden of time and effort.
Another challenge is that too much of what is being M&E’d is not meaningful enough. It’s what you might call safe and lazy M&E.
Take the training of 100 people involved in a project, intended to equip them to deliver its success, and assume they must attend a two-day workshop to do so. A safe measure is to ask whether the workshop took place and whether they all attended.
In M&E jargon, the percentage graduating constitutes an “output” measure, where the best that can be said is that something did happen, something that was easy to measure.
But what about the “so what?” of having attended the workshop? Did it develop the needed knowledge, skills and attitudes? It is only if the answer is positive that M&E professionals would say that an acceptable “outcome” has been achieved.
But there’s yet another step. For they may have acquired what it takes yet still fail to apply it to deliver the intended “impact”.
The more M&E systems evolve from assessing mere outputs to studying outcomes and impacts, the more challenging it is to come up with suitable measures of what has been achieved.
