Evaluating Consensus Building Efforts: According To Whom? And Based On What?

This article originally appeared in the January 1999 issue of Consensus, a newspaper
published jointly by the Consensus Building Institute and the MIT-Harvard
Public Disputes Program.

For two decades,
specialists from a wide range of fields have
scrutinized the growing practice of public
dispute resolution at the most basic level,
trying to decide whether it is a “good
idea.” Many are also interested in finding
out how we could “do it better.”

These questions are certainly worth answering.
But after twenty years, attempts to evaluate
consensus building efforts in the public sector
have failed to produce agreement on even the
right criteria to use for evaluation.

Take, for example, the debate over attempts to
evaluate negotiated rulemaking efforts at the
federal level that reached its apex in the Duke
Law Journal
in 1997.

Assistant Professor Cary Coglianese of Harvard
University’s John F. Kennedy School of Government
carried out what most quantitatively inclined
political scientists would consider to be a
methodologically sound study. He reviewed a dozen
US Environmental Protection Agency (EPA)
negotiated rulemakings – known as
“reg negs” – in an attempt to determine
whether these efforts saved time or reduced
litigation costs compared to traditional EPA
rulemaking efforts.

Coglianese found that, on average, reg negs
had not saved time and that the rules produced
through the reg neg process ran a higher risk of
legal challenge than did EPA rules produced under
traditional procedures. Based on these findings,
Coglianese recommended that the federal
government stop encouraging the use of negotiated
rulemaking.

Philip Harter of the Washington-based
Mediation Consortium responded that Coglianese
had missed the point, that his methods were
faulty, and that, as a result, his findings and
conclusions were invalid. Harter, one of the
leading practitioners of regulatory negotiation,
argued that Coglianese had not looked closely
enough at the specific circumstances surrounding
each negotiation.

Qualitatively, Harter said, the EPA rules
produced through reg negs were not comparable to
most EPA regulations. EPA had chosen particularly
controversial issues for regulatory negotiation,
hoping that the reg neg process could resolve
conflicts that had stymied the traditional
rulemaking process.

Harter also pointed out that some of the reg
negs studied by Coglianese were not
representative of reg neg’s full potential
because they were not conducted according to the
process guidelines that dispute resolution
professionals had advocated since the early
1980s. In Harter’s view, Coglianese unfairly
condemned a procedure whose outcomes differ
significantly depending on how well it is
implemented.

Finally, Harter questioned Coglianese’s
evaluation criteria, not just his methods. Though saving
time and reducing litigation are valid goals,
other goals may be even more important: making it
possible for more stakeholders to participate
more directly in the policy making process;
taking advantage of stakeholders’ knowledge and
experience to create more effective rules that
meet stakeholder interests; and doing a better
job of taking scientific and technical
information into account.

Negotiated rulemaking is in its
“adolescence,” Harter argued, and will
continue to evolve as agencies and interest
groups gain experience and confidence in using it
to reach consensus on public policy issues.

Coglianese and Harter are both sophisticated
and skilled evaluators of consensus building
efforts. Their sharp disagreement raises several
issues relevant to the evaluation not only of
negotiated rulemaking, but of public consensus
building in general.

Evaluating what?

First, what should we be trying to evaluate?
There are few unambiguous indicators of a
“good” process or a “good”
outcome. Process management frequently requires
convenors, participants and neutrals to make
procedural trade-offs between in-depth
exploration of options and timely
decision-making.

At first glance, evaluating outcomes would
seem to be easier. Two basic criteria – fairness
and efficiency – are widely accepted as valid
measures of success, at least at a conceptual
level. But evaluating the “fairness” of
outcomes immediately raises the questions:
according to whom and as compared to what?

If stakeholders, neutrals and disinterested
evaluators don’t all agree, it’s probably because
they are using different standards of fairness,
such as accord with precedent, scientific or
technical merit and distributive justice. How
should these competing standards be weighed?
There is no easy answer.

Evaluating the efficiency of outcomes isn’t
any easier. Let’s say that stakeholders judge the
outcome to be “efficient” – in the
sense that all stakeholders believe that they
could not have gotten more without making at
least one other stakeholder worse off. We still
do not know whether these stakeholders could have
created more joint gains if they had more
information or were able to work together more
effectively.

Perhaps focusing on “real world”
comparisons will move us forward. Even if the
consensus building process was not unambiguously
fair or efficient in some absolute sense, maybe
it was fairer or more efficient than the
stakeholders’ next best alternative. But as the
Coglianese-Harter debate illustrates, the answer
to “as compared to what?” is rarely
clear-cut, even in settings such as regulatory
negotiations where there are well-established
decision-making procedures.

Given the thicket of conceptual and
methodological problems as well as the difficulty
of defending any one set of evaluation criteria,
it is no surprise that there is no agreement on
how to evaluate public dispute resolution or
consensus building efforts.

Nevertheless, some guidelines are worth
noting. First, evaluators should not only lay out
their evaluative criteria, but also explain why
they have chosen those particular criteria. For
example, if evaluators choose to assess the time
and cost of reaching agreement, they
should also examine other, less quantifiable
costs and benefits that have been claimed for the
process (such as the impact on relationships and the
level of organizational learning), or explain why
they have chosen not to examine them.

Second, evaluators need to acknowledge the
imperfections of the methods they use. Whether
their primary method is participant interviews,
review of written documentation or statistical
analysis of outcomes, evaluators must highlight
the limitations of the methods they have chosen.

For example, it is not enough to say that all
the process participants were interviewed using a
standard questionnaire. Differences in
the level of participants’ involvement in the
process, their recollection of events and their
satisfaction with the process and the outcome may
all color their responses to questionnaires and
should be acknowledged in the evaluators’
presentation of their methods.

On a more positive note, the wide range of
legitimate evaluative criteria presents a
tremendous opportunity – as well as a formidable
challenge – for would-be evaluators. To this
writer’s knowledge, there has not yet been a
peer-reviewed, published evaluation of a set of
public dispute resolution cases using:

• Multiple process and outcome criteria;

• Qualitative and quantitative indicators and methods well-tailored to those criteria; and

• A well-chosen control group of cases from the same arena that were resolved using traditional administrative, political and/or judicial methods.

The field would benefit greatly from such
studies. Practitioners and evaluators should work
together to assemble their findings in a way that
could improve both the theory and practice of
consensus building in the public sector.

If such studies already exist, Consensus would
like to know about them. If you have authored or
know of one, please let us know.

Author

David Fairman

David Fairman has facilitated consensus building and mediated resolution of complex public and organizational disputes on economic development and human service programs and projects, environmental and land use planning and regulation, and violent intergroup conflicts. Recent and current projects include facilitation of a back-channel political dialogue on options for a…
