Prioritising among competing options is a feature of so many processes that I end up facilitating. It's certainly a key question in some current work.
- Which of the wonderful new project ideas for a radical disruption of the food system should be worked up in a scoping or R&D phase by this fabulous social enterprise? [Strategic planning day, mid March]
- Which issues should go forward to the Citizens' Assembly in the experimental action-learning phase of NHSCitizen? [Citizens' Jury, early March]
- Which of the recommendations in a consultant's report should be the focus of a forthcoming workshop? [Planning meeting with a client team, mid February]
So there are a couple of things for the process designer to ponder here. One is the tools the group might use to help it discern its priorities (dotting? multi-criteria analysis? diamond ranking?). Another is the criterion (or indeed multiple criteria) they choose to apply.
Tools I like
So what are the tools that can help a group prioritise? There are a few that I like.
The first one (pros and cons) is a prelude to making a decision. Its role is to slow down the deliberative process so that people don't assume a 'position' too early. Nice and simple too: stick up some flip chart paper, one sheet for each of the options under consideration (write the options on so it's really clear). Divide each sheet into two columns, one for pros (or 'what I like' or 'strengths') and one for cons (aka 'what I don't like', 'weaknesses'). Give everyone pens and sticky notes, and then there's a bit of a free-for-all as people record their ambivalent views against as many of the options as they want. The group quickly gets a snapshot analysis of every option, without anyone being forced to declare themselves 'for' or 'against' an option too early in the process.
The second is diamond ranking. This is particularly good when there are lots of options, a large group and you want some conversation around the ranking. It's not so good if you need a continuous ranking of all the options, but great if you just need precision about the top and the bottom of the ranking: for example as a way of going from a long-list to a short-list.
And of course there's dotting, sometimes known as dotmocracy. I'm quite a stickler for not calling this 'dot voting' as voting is, to me, a decision-making process whereas I prefer to see dotting as a way of aiding and informing decision-making: it's the conversation around the snapshot which is important. The group decides what the result means. There's more here.
Most complex - and appealing, in my experience, to academics, engineers and those who want the prioritisation to be based as nearly as possible on objective analysis - is the use of multi-criteria tools. There are plenty of ways of doing this, using paper-and-pen or more technology-based approaches. This kind of analysis is common, for example, on interview panels and can be useful if the decision-makers need to demonstrate how they came to their decision.
My logical brain agrees that multi-criteria analysis should be a good way of analysing options - especially where traceability and transparency are important - but my heart absolutely sinks at the idea of facilitating it. I'd love to hear from facilitators who enjoy using this kind of approach so I can stretch my skills here. I see a number of difficulties with multi-criteria analysis:
- The time it takes to analyse all the options against the criteria. People flag. They get bored and the quality of attention dips. So perhaps it is an approach best used when there are only a small number of options or criteria.
- The complexity of the process can introduce inequalities. Some participants may have a keener appreciation of the significance of the relative weighting of the criteria. It feels like a process where a sharper logician stands more of a chance of getting the outcome they want than an equally legitimate stakeholder without a PhD.
- Not all criteria are equal. The process design choices have to include a (prior) decision about weighting criteria relative to one another (or not); decisions about how to translate judgements into points for qualitative criteria; agreement on sources of information for those criteria which have objective or quantitative content. That's a lot of process to agree as a group, before applying the tool itself.
- People game it or ignore it anyway, if it doesn't give the result they want.
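For readers who want to see the mechanics of weighting and scoring laid out, here is a minimal sketch of how a weighted multi-criteria tally might work. The options, criteria, weights and scores are all invented for illustration; a real group would need to agree each of them first, which is exactly the process overhead described above.

```python
# A minimal sketch of weighted multi-criteria scoring.
# Every option, criterion, weight and score here is invented for illustration.

# Weights the group would need to agree in advance (summing to 1.0).
criteria_weights = {
    "impact": 0.5,
    "feasibility": 0.3,
    "cost": 0.2,
}

# Judgements translated into points (say, 1-5), already averaged
# across participants for simplicity.
option_scores = {
    "Option A": {"impact": 4, "feasibility": 2, "cost": 3},
    "Option B": {"impact": 3, "feasibility": 5, "cost": 4},
}

def weighted_score(scores, weights):
    """Sum of (score x weight) across all criteria."""
    return sum(scores[criterion] * weight
               for criterion, weight in weights.items())

# Rank options from highest to lowest weighted score.
ranking = sorted(
    option_scores,
    key=lambda option: weighted_score(option_scores[option], criteria_weights),
    reverse=True,
)

for option in ranking:
    score = weighted_score(option_scores[option], criteria_weights)
    print(f"{option}: {score:.2f}")
```

Even this toy version shows why the weights matter so much: nudge "impact" up and Option A overtakes Option B, which is the kind of sensitivity a sharper logician can exploit.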
Where do the criteria come from?
A crucial question for the facilitator / process designer is where the criteria that the group applies come from. Do the criteria get handed to the group by some other group or person, ready-baked? Does the group begin with a blank sheet of paper and devise its own criteria? Or a bit of both?
Pre-cooking saves time (unless the group doesn't like or understand the criteria, or like or understand the people who came up with them, in which case it doesn't). Some groups hate starting with a blank sheet of paper. This is definitely a design choice that needs to be made with the group, even if the group's preference is for someone else to hand them a set of criteria to apply.
Coming up with criteria and being clear on how to apply them is actually harder than it looks. Sharing assumptions (and flushing out crossed purposes) about what the criteria mean is crucial. I have been in groups where it turned out that some people thought a high 'score' for a particular criterion was a good thing, and others thought it was a bad thing. Does everyone in the group agree that a particular criterion is a pass/fail test, or do some people see it as a consideration rather than a deal-breaker?
I've also been in groups where people have been invited to 'dot' their preferences, without clear agreement about what the prioritisation question is. So some people were putting their dots against "the most important" option, and others against "the options we should talk about this afternoon".
Prioritising = DEprioritising
In my experience, this is the thing groups find hardest to do. Mostly, people want to be nice to each other. They don't want to disrespect other people's ideas: at least, not in front of each other. They don't want the fight that they fear will come when they say: we aren't going to do this. So they merge ideas so as to retain them. They end up with as many priorities at the end of the process as they began with, just grouped under fewer headings. This may ease the pain in the short term, but if there really is a limit on how many options can be taken forward, their inability to deprioritise will come back to bite them.
Helping
How can a facilitator help in this situation? I think there's a responsibility to reflect back to the group where its deliberations have led, and invite the group to discuss how successful it has been in meeting its stated aim of deciding among competing options.