Getting the definition of evaluation right is not simply a matter of putting it to a popularity vote.
The fact that so many don’t see a clear difference between evaluation and other pursuits (such as research, monitoring, audit, organization development, management consulting) doesn’t mean that there isn’t one.
The fundamental difference is that evaluation asks and answers questions about the quality, value, and/or importance of things (design, implementation, outputs, outcomes, impacts, the project/program/policy/etc as a whole, and so on).
That means asking not just what the results are, but how good they are – and on what basis we draw that conclusion.
If we’re not doing that, we’re not actually doing evaluation.
That has serious implications for our practice, and for how well we can convey the value our entire profession adds.
One piece of this is in evaluation-specific methodology (ESM) – the methodologies that are distinctive to evaluation. These are the ones that go directly after values.
Examples of evaluation-specific methodologies include:
- needs and values assessment
- merit determination methodologies (blending values with evidence about performance, e.g. with evaluative rubrics)
- importance weighting methodologies (both qualitative and quantitative)
- evaluative synthesis methodologies (combining evaluative ratings on multiple dimensions or components to come to overall conclusions)
- value-for-money analysis (not just standard cost-effectiveness analysis or SROI, but also, for example, strategies for handling VfM analysis that involves a lot of intangibles)
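To make the rubric, weighting, and synthesis ideas above a little more concrete, here is a minimal sketch in Python. Everything in it is a hypothetical illustration, not a prescribed method: the dimensions, rating labels, importance weights, and the "hurdle" rule (where a critical weakness cannot be averaged away by strengths elsewhere) are all assumptions made for the example.

```python
# Hypothetical sketch of evaluative synthesis: combine per-dimension
# ratings with importance weights, plus a non-compensatory hurdle rule.
# Dimensions, labels, and weights below are illustrative only.

RATING_SCALE = {"poor": 1, "adequate": 2, "good": 3, "excellent": 4}
LABELS = {v: k for k, v in RATING_SCALE.items()}

def synthesize(ratings, weights, critical=()):
    """Combine per-dimension evaluative ratings into one overall rating.

    ratings:  dict of dimension -> qualitative rating label
    weights:  dict of dimension -> importance weight
    critical: dimensions where a 'poor' rating caps the overall result
              at 'adequate', no matter how strong the other dimensions are
    """
    total_w = sum(weights.values())
    score = sum(RATING_SCALE[ratings[d]] * w for d, w in weights.items()) / total_w
    overall = round(score)
    # Non-compensatory rule: a critical failure cannot be offset elsewhere.
    if any(ratings[d] == "poor" for d in critical):
        overall = min(overall, RATING_SCALE["adequate"])
    return LABELS[overall]

ratings = {"reach": "excellent", "outcomes": "good", "equity": "adequate"}
weights = {"reach": 0.2, "outcomes": 0.5, "equity": 0.3}
print(synthesize(ratings, weights, critical=("equity",)))  # -> good
```

The point of the hurdle rule is that evaluative synthesis is not always a weighted average: some value judgments are non-compensatory, and making that explicit is exactly the kind of transparency that separates systematic synthesis from "I looked upon it and saw that it was good."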
I would not count the following as evaluation-specific: statistics or any of the standard research methods (interviews, observations, surveys, content analysis, or even causal inference methodologies).
We evaluators clearly draw on these and use them a lot, but they are not distinctive to evaluation because they are not specifically about the “values” piece.
In other words, you could use these (non-evaluative qualitative and quantitative research methods) and still NOT be doing evaluation.
But if you are using ESM (evaluation-specific methodology), you sure ARE evaluating, i.e. drawing conclusions about quality, value, or importance.
And in fact, if you don't use any ESM, you basically aren't doing genuine evaluation. Either you are skipping the evaluative conclusions piece entirely, or you are getting to it by logical leap (e.g. "I looked upon it and saw that it was good"). ESM is what allows us to get systematically and transparently from evidence about performance to evaluative conclusions, by weaving in the values ("how good is good") piece.
It’s true that several disciplines use evaluation-specific methodologies (e.g. industrial & organizational psychology uses cost-effectiveness analysis). That doesn’t make them “not evaluation-specific” any more than statistics becomes psychology just because psychologists use it.