Identify criteria for assessing strategies.
- What criteria does the group regard as most important?
- Is cost a primary consideration?
- How effective is each strategy in mitigating the problem?
- To what extent is a strategy politically acceptable?
- What criteria—and whose—should be paramount in choosing between strategies?
Identify data needed to apply criteria to strategies.
- Are the value preferences of participants sufficient to allow the group to choose?
- Will detailed information be required in order for the evaluation process to be
considered valid by participants and the public at large?
- How will data be gathered and at what cost?
Apply the criteria to the strategies.
- Determine whether some criteria should be weighted more heavily than others and,
if so, what weights should be applied.
- Apply ‘sensitivity analysis’: modify the weights attached to particular criteria slightly to see whether the outcome of the evaluation changes.
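The weighting and sensitivity steps above can be sketched in a few lines of Python. The strategy names, criterion scores, and weights below are illustrative assumptions, not values from this guide; the point is only to show how a group's ranking can be re-checked under slightly different weights.

```python
# Illustrative sketch: weighted criteria scoring with a simple sensitivity
# analysis. All names, scores, and weights are hypothetical.

def weighted_score(scores, weights):
    """Weighted sum of one strategy's per-criterion scores."""
    return sum(s * w for s, w in zip(scores, weights))

def rank(strategies, weights):
    """Strategy names ordered from highest to lowest weighted score."""
    return sorted(strategies,
                  key=lambda name: weighted_score(strategies[name], weights),
                  reverse=True)

# Scores on three criteria (cost, effectiveness, political acceptability),
# each rated on a 1-5 scale by the group.
strategies = {
    "Strategy A": [4, 3, 5],
    "Strategy B": [2, 5, 4],
    "Strategy C": [5, 3, 3],
}

base_weights = [0.3, 0.5, 0.2]
base_ranking = rank(strategies, base_weights)
print(base_ranking)

# Sensitivity analysis: nudge each weight up and down slightly and report
# whether the ranking changes.
for i in range(len(base_weights)):
    for delta in (-0.05, 0.05):
        trial = base_weights[:]
        trial[i] += delta
        if rank(strategies, trial) != base_ranking:
            print(f"Ranking changes when weight {i} shifts by {delta:+.2f}")
```

If small weight changes flip the ranking, the group knows the result is fragile and that further deliberation on the criteria is warranted.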
Conflict arises over what criteria should be used.
The credibility of the chosen strategy depends in large part on the perceived validity of the process used to assess alternatives. Deliberation about criteria needs sufficient time: first, to discuss why one criterion might be more important than another; second, to consider whether some criteria can be combined into a new criterion; and third, to repeat the analysis with different sets of criteria, so the group can see what difference the choice of criteria makes.
There is subjectivity in group ratings.
Some assessment processes—such as rating on a 1–5 or 1–10 scale for each criterion—risk being highly subjective. When using rating systems of this sort, it’s helpful to assign an operational definition to each numerical rating so that the group clearly understands what a rating means. Some groups agree in advance that both the highest and lowest individual rating will be dropped from the analysis. It can also be useful to hold two rounds of rating: a first round followed by discussion and deliberation, and then a second, “final” rating.
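The “drop the highest and lowest rating” rule described above amounts to a trimmed average. The sketch below is a minimal illustration; the ratings are hypothetical, not data from this guide.

```python
# Illustrative sketch of the drop-the-extremes rating rule: discard one
# highest and one lowest individual rating, then average the rest.

def trimmed_average(ratings):
    """Average the ratings after dropping one highest and one lowest."""
    if len(ratings) < 3:
        raise ValueError("need at least three ratings to trim both extremes")
    trimmed = sorted(ratings)[1:-1]
    return sum(trimmed) / len(trimmed)

# Five participants rate one strategy on a single criterion (1-5 scale).
round_one = [5, 3, 4, 1, 4]
print(trimmed_average(round_one))  # drops the 5 and the 1, averages 3, 4, 4
```

Dropping the extremes reduces the influence of any single participant’s outlier rating on the group result.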
There is insufficient time and resources for analysis of strategies.
Limiting the analysis to 8–10 strategies is possible, but the basis for choosing the “top ten” must rest on a principle or set of criteria that commands strong agreement in the group (e.g., probable costs, ease of implementation, likelihood of immediate impact).
The fallacy of misplaced precision exists.
When a group voting procedure is used to include or omit particular strategies, it is easy to assume that small differences in the results have more practical significance than they really do. Generally, if it is difficult to draw a line between what’s included and what isn’t, the analysis is not likely to be credible. The analysis can sometimes be made more credible by re‑examining and adjusting the weights of the evaluative criteria, by further discussion of the criteria or the strategies, or by changing the voting procedure.
To identify the high priority reefs, the Hawaii Coral Reef Working Group (CRWG) used three primary criteria: biological value; degree of threat; and conservation viability. At a meeting of reef specialists, agency staff, and conservation groups, participants used the three criteria to rank 43 sites that The Nature Conservancy had identified as being biologically significant. Priority sites were first voted on by island groups, and then voted on again in a plenary group, where nine sites across the state were identified as top priority. At a subsequent meeting of the Coral Reef Working Group, the nine sites were again ranked in terms of four different criteria: readiness; urgency; cross‐LAS potential; and potential for effective management.