AI, ADR and Anxiety

This post started as a response to Jen Reynolds’s comment about my Avatar Mediation post.  It has grown into this new post about AI generally, the growing anxiety about it and the state of the world, and how we can manage that anxiety.

AI Risks . . . and Potential Benefits

Jen wrote, “I hope that we don’t get so hung up on the anthropomorphic romanticizing of this technology that we forget the corporate overlords and others who benefit from extracting/mining data.”

I agree that there are serious and growing risks from the control and abuse of the gobs of data on the internet.  Business interests in market-oriented economies and governments in countries like Russia and China have tremendous power over the internet and AI.  Although there is some regulation of AI, I expect that we will live in a world where AI develops much faster than governments can regulate new threats.  That’s already the situation, and I expect it will get worse.

I think it’s also important to focus on the anthropomorphizing of AI, which I imagine will become seductive and eventually invisible, taken for granted.  We already can see pieces of this, which can readily be assembled into increasingly plausible avatars.  When we go onto websites, we often encounter bots with drawings of humans saying, “Hi, I’m [fill in the name].  How can I help you?”  Digital assistants like “Siri” and “Alexa” have become normal parts of many people’s lives.  Videos, including deepfakes, replicate humans or create characters that people relate to.  Social media has apps for that too.  Characters don’t need to be realistic likenesses of humans, as advertisements frequently use cartoons, which presumably are very effective in motivating sales.  AI is pretty crude now but will become increasingly sophisticated and integrated.

After a spooky “conversation” with a chatbot, tech columnist Kevin Roose worried that “the technology will learn how to influence human users, sometimes persuading them to act in destructive and harmful ways.”

So I assume that in some ways we will inhabit a brave new world where we should be concerned about both control of AI and influence by AI.

Of course, that’s the part of the glass that’s empty.  There’s a part that’s full too.  Historically, technology has produced amazing improvements in people’s lives, and AI probably will too.

Washington Post tech analyst Will Oremus described both parts of the glass in his article, “Lifesaver or job killer? Why AI tools like ChatGPT are so polarizing.”

For every success story in tech’s latest AI boom, there’s a nightmare scenario.

If you listen to its boosters, artificial intelligence is poised to revolutionize nearly every facet of life for the better.  A tide of new, cutting-edge tools is already demolishing language barriers, automating tedious tasks, detecting cancer and comforting the lonely.

A growing chorus of doomsayers, meanwhile, agrees AI is poised to revolutionize life – but for the worse.  It is absorbing and reflecting society’s worst biases, threatening the livelihoods of artists and white-collar workers, and perpetuating scams and disinformation, they say.

The latest wave of AI has the tech industry and its critics in a frenzy.  So-called generative AI tools such as ChatGPT, Replika and Stable Diffusion, which use specially trained software to create humanlike text, images, voices and videos, seem to be rapidly blurring the lines between human and machine, truth and fiction.

As sectors ranging from education to health care to insurance to marketing consider how AI might reshape their businesses, a crescendo of hype has given rise to wild hopes and desperate fears.  Fueling both is the sense that machines are getting too smart, too fast – and could someday slip beyond our control. “What nukes are to the physical world,” tech ethicist Tristan Harris recently proclaimed, “AI is to everything else.”

New York Times columnist Ezra Klein wrote that AI “changes everything.”  He said, “There is no more profound human bias than the expectation that tomorrow will be like today.  It is a powerful heuristic tool because it is almost always correct.  Tomorrow probably will be like today.  Next year probably will be like this year.  But cast your gaze 10 or 20 years out.  Typically, that has been possible in human history.  I don’t think it is now.”

Obviously, this uncertainty can create profound anxiety, especially considering the powerful effects of AI technology.

“The ADR Glass”

My speculations about possible avatar mediation questioned some assumptions and focused mostly on the empty part of the glass.  We would like to think that AI could not reproduce human skills such as communication and empathy that are necessary for good mediation.  Of course, AI can’t truly reproduce real human cognitions and emotions – but the simulations are likely to be increasingly good approximations.  And avatar mediation could reproduce the worst aspects of human ADR processes.

Many of us have been disappointed with aspects of human ADR that don’t fulfill our idealistic aspirations.  For example, some mediators are lousy listeners and press parties to settle as the mediators suggest.  Some businesses force employees and consumers to sign adhesion contracts with one-sided arbitration clauses.  Human biases inevitably color the processes, sometimes quite adversely.

People often focus on the empty part of the glass because of the loss aversion bias.  We generally pay more attention to potential problems than to potential benefits.

So it’s important to remember that every day, lots of people benefit from mediation, arbitration, and other dispute resolution processes, albeit imperfectly.  As the saying goes, the perfect is the enemy of the good.  ADR processes are imperfect but people often prefer them to the alternatives.  And sometimes, ADR practitioners do a damn good job.  If avatar mediation is developed, it too might be quite good at times.

So the human mediation glass is partly empty and partly full.  Presumably, the same will be true of machine mediation if it is developed.

Growing Anxiety

Just spitballin’ here, but anxiety about AI may be feeding into a more general anxiety in the US and probably elsewhere.

Political polarization has markedly increased, as manifested in contemporary culture wars.  Wikipedia defines them as “wedge issues in the United States includ[ing] abortion, homosexuality, transgender rights, pornography, multiculturalism, racism and other cultural conflicts based on values, morality, and lifestyle.”  People on both sides are anxious because the other side threatens their deeply held values.  “Civilians” in the culture wars are turned off by both sides.

The COVID pandemic radically upset normal life for about two years, and we are still feeling lingering effects.  Russia’s armed aggression against Ukraine disrupted the confidence, held since World War II, that major powers would not launch unprovoked wars.  The economy isn’t behaving “normally,” i.e., consistently with economic theory and experience in recent decades.  Etc., etc.  So there’s a lot of uncertainty about the future, which feeds our anxiety.

Dealing with Anxiety

New York Times columnist David Brooks wrote a column about what he called “The Self-Destructive Effects of Progressive Sadness.”  He wrote that a “well-established finding of social science research is that conservatives report being happier than liberals.”  Even if this is true as a generalization, committed political partisans on all sides may display the three maladaptive patterns he identified:  a “catastrophizing mentality,” “extreme sensitivity to harm,” and a “culture of denunciation.”  Indeed, people in all kinds of intense conflicts often display these patterns.

He suggested addressing these symptoms by focusing on what people actually can control.  “People who provide therapy to depressive people try to break the cycle of catastrophic thinking so they can more calmly locate and deal with the problems they actually have control over. … Just about everything researchers understand about resilience and mental well-being suggests that people who feel like they are the chief architects of their own life are ‘vastly better off than people whose default position is victimization, hurt and a sense that life simply happens to them.’”

Mr. Brooks believes that the “woke” era is winding down.  I think that conflicts about “wokeness” are symptoms of our culture wars, which I expect will continue to ramp up.  That is why I think his analysis and suggestions for dealing with sadness and anxiety are especially valuable.

Returning to focus on AI and ADR, it’s important to recognize our own reactions and fears, have as accurate and balanced an understanding of what’s happening as possible, acknowledge the uncertainties, and focus on what we can control.

This post was originally published by Indisputably

Author

John Lande

John Lande is the Isidor Loeb Professor Emeritus at the University of Missouri School of Law and former director of its LLM Program in Dispute Resolution.  He received his J.D. from Hastings College of Law and Ph.D. in sociology from the University of Wisconsin-Madison.  He began mediating professionally in 1982 in California.
