Collaborative Problem Solving

Evaluate Options

Evaluate strategies and choose between them using criteria the group selects.

Once the technical analysis of the strategies is complete and the potential impact of each has been assessed, the group must evaluate the options and choose the one that will most effectively address the issue at hand. To be regarded as credible, the choice must be based on criteria that the group itself selects.

The selection of criteria—measures such as effectiveness, cost, ease of implementation, community acceptability, staff availability, and partnership potential—is a critical group choice. Once criteria are fully developed, refined, and combined, they can be assigned different weights by the group’s participants to show their relative significance.

The criteria must be applied to the list of possible strategies in ways that reveal real differences between approaches. The group is likely to spend some time deliberating which criteria and which review process will be most credible in selecting a strategy that will be regarded as effective and sustainable.

The selection and application of criteria for choosing between strategies is an exercise in articulating the group’s values.

Identify criteria for assessing strategies.

  • What criteria does the group regard as most important?
  • Is cost a primary consideration?
  • How effective is each strategy in mitigating the problem?
  • To what extent is a strategy politically acceptable?
  • What criteria—and whose—should be paramount in choosing between strategies?

Identify data needed to apply criteria to strategies.

  • Are the value preferences of participants sufficient to allow the group to choose
    between strategies?
  • Will detailed information be required in order for the evaluation process to be
    considered valid by participants and the public at large?
  • How will data be gathered and at what cost?

Apply the criteria to the strategies.

  • Determine whether some criteria should be weighted more heavily than others and,
    if so, what weights should be applied.
  • Apply ‘sensitivity analysis’: modify the weights attached to particular criteria to
    see whether slight changes would alter the outcome of the evaluation.
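The weighting and sensitivity-analysis steps above can be sketched in code. This is a hypothetical illustration only: the criteria, weights, strategy names, and 1–5 ratings below are invented, and a real group would supply its own.

```python
# Invented criteria, weights, and group ratings for illustration.
criteria = ["effectiveness", "cost", "ease_of_implementation"]
weights = {"effectiveness": 0.5, "cost": 0.3, "ease_of_implementation": 0.2}

# Group ratings (1-5 scale) for each strategy on each criterion.
ratings = {
    "Strategy A": {"effectiveness": 4, "cost": 3, "ease_of_implementation": 5},
    "Strategy B": {"effectiveness": 5, "cost": 4, "ease_of_implementation": 2},
}

def weighted_score(strategy, weights):
    """Sum of (weight x rating) across all criteria."""
    return sum(weights[c] * ratings[strategy][c] for c in criteria)

def ranking(weights):
    """Strategies ordered from highest weighted score to lowest."""
    return sorted(ratings, key=lambda s: weighted_score(s, weights), reverse=True)

baseline = ranking(weights)
print("Baseline ranking:", baseline)

# Sensitivity analysis: nudge each weight up slightly (renormalizing so the
# weights still sum to 1) and check whether the ranking changes.
for c in criteria:
    adjusted = dict(weights)
    adjusted[c] += 0.1
    total = sum(adjusted.values())
    adjusted = {k: v / total for k, v in adjusted.items()}
    if ranking(adjusted) != baseline:
        print(f"Ranking is sensitive to the weight on {c!r}")
```

With these invented numbers, a small increase in the weight on ease of implementation reverses the ranking, which is exactly the kind of fragility a sensitivity check is meant to surface before the group commits to a choice.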

Conflict arises over what criteria should be used.
The credibility of the chosen strategy depends in large part on the perceived validity of the process used to assess alternatives. Deliberation about criteria needs sufficient time for three reasons. First, the group should discuss why one criterion might be more important than another. Second, criteria can sometimes be combined into a new criterion. Third, it may be useful to repeat the analysis several times with different criteria to show the group what difference the choice of criteria makes.

There is subjectivity in group ratings.
Some assessment processes—such as rating each criterion on a 1–5 or 1–10 scale—risk being highly subjective. When using rating systems of this sort, it helps to attach an operational definition to each numerical rating so that the group clearly understands what a given rating means. Some groups agree in advance that both the highest and lowest individual rating will be dropped from the analysis. It can also be useful to hold two rounds of rating: a first round, followed by discussion and deliberation, and then a second, “final” round.
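The convention of dropping the single highest and single lowest rating amounts to a trimmed mean. A minimal sketch, with invented ratings:

```python
def trimmed_mean(scores):
    """Average the ratings after dropping one highest and one lowest."""
    if len(scores) <= 2:
        raise ValueError("need at least 3 ratings to trim both extremes")
    trimmed = sorted(scores)[1:-1]
    return sum(trimmed) / len(trimmed)

# Hypothetical first-round ratings of one strategy on one criterion (1-5 scale),
# including one low outlier and one high outlier.
round_one = [1, 3, 3, 4, 5]
print(trimmed_mean(round_one))  # averages [3, 3, 4] after trimming
```

Trimming blunts the influence of a single enthusiastic or hostile rater, but it does not remove subjectivity; the operational definitions for each rating level remain the more important safeguard.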

There are insufficient time and resources for analysis of strategies.
Limiting the analysis to 8–10 strategies is possible, but the basis for choosing this short list must rest on a principle or set of criteria on which the group strongly agrees (e.g., probable costs, ease of implementation, likelihood of immediate impact).

The fallacy of misplaced precision exists.
When a group voting procedure is used to include or omit particular strategies, it is easy to assume that small differences in the results have more practical significance than they really do. Generally, if it is difficult to draw a line between what’s included and what isn’t, the analysis is not likely to be credible. The analysis can sometimes be made more credible by re-examining and adjusting the weights of evaluative criteria, by further discussion of the criteria or the strategies, or by changing the voting procedure.
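One practical guard against misplaced precision is to flag vote totals that sit close to the cut line before treating the cut as final. A sketch, with invented strategy labels, vote counts, and margin:

```python
# Invented vote totals for five candidate strategies.
votes = {"A": 12, "B": 11, "C": 10, "D": 9, "E": 4}
top_n = 3    # how many strategies make the short list
margin = 1   # totals within this distance of the cutoff are "too close to call"

ranked = sorted(votes, key=votes.get, reverse=True)
cutoff = votes[ranked[top_n - 1]]  # lowest total that still makes the cut

# Strategies near the cut line, whether in or out, deserve further discussion
# rather than mechanical inclusion or exclusion.
borderline = [s for s in votes if abs(votes[s] - cutoff) <= margin]
print("Selected:", ranked[:top_n])
print("Borderline (revisit before finalizing):", borderline)
```

Here the strategy just below the line trails the last included one by a single vote, a difference with little practical significance; surfacing that explicitly invites the group to re-discuss rather than let the tally decide.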

To identify the high-priority reefs, the Hawaii Coral Reef Working Group (CRWG) used three primary criteria: biological value; degree of threat; and conservation viability. At a meeting of reef specialists, agency staff, and conservation groups, participants used the three criteria to rank 43 sites that The Nature Conservancy had identified as biologically significant. Priority sites were first voted on by island groups, and then voted on again in a plenary group, where nine sites across the state were identified as top priority. At a subsequent meeting of the Coral Reef Working Group, the nine sites were again ranked in terms of four different criteria: readiness; urgency; cross-LAS potential; and potential for effective management.
