I’m so exhausted by the "the world is doomed because of AI" articles I've seen over the last few months. They're a dime a dozen, and they all seem to default to an all-or-nothing choice: abandon AI or die. So, when I saw one from The Conversation, AI is an existential threat – just not the way you think, I thought, "Okay, this might be different!" But nope, it was just another fear-inducing piece, grabbing at headlines without offering much substance. AI was an existential threat, in exactly the way I thought.
AI has risks, but I'm tired of the pointless back-and-forth between the "AI is the end of the world" camp saying we should bomb AI out of existence and the "AI will solve everything" crew who conveniently ignore all of the problems it causes and makes worse today.
In this article, I'm going to break down the worries they raised and offer up some down-to-earth ways to handle the ethical dilemmas of this ever-advancing tech. So, grab your favorite beverage, and let's dive into how we can make AI a force for good while keeping humanity intact.
The Existential Threat Argument
While some fear AI poses an apocalyptic threat to humanity, the author argues these catastrophic risks are exaggerated - AI lacks the capabilities for sci-fi doomsday scenarios. However, the author believes AI does present a more philosophical “existential threat” by eroding essential human abilities like judgment, critical thinking, and serendipity as more tasks become automated. So although AI won’t violently end the human species, the author warns its increasing use could “diminish” humanity by deskilling people across areas considered core to human existence and way of life.
I was disappointed. The critique fell flat for me, and it didn’t read substantially differently from other doomsday scenarios. Ever since the headline “AI Poses ‘Risk of Extinction’, Industry Leaders Warn”, articles like this seem to show up every few hours. Or take a trot over to LinkedIn, where unscrupulous folks talk constantly about how AI is changing everything and Utopia is Coming (soon). But this piece, and pieces like it, don’t do anything to move the conversation forward; instead they just continue to mire us in a pointless back and forth between two extremes.
[AI]…can degrade abilities and experiences that people consider essential to being human. - Nir Eisikovits
The Conversation/SciAm article does highlight an important potential harm of AI. It’s scary to think of a future where humanity is unable to make decisions without the support of AI. It’s also a fairly likely harm, as we see examples of this all around us today. How many people don’t know a single phone number, because their phone remembers them all? Or how about navigating a new city you’ve never visited without using Google Maps? Or getting the answer to that question that came up in casual conversation without asking Google? It’s reasonable to assume this pattern will repeat itself as AI’s capabilities expand.
Where I think the article fails is in assuming that nothing can be done. It offers up only two options - walk away from AI, thereby avoiding the issue altogether, or face annihilation. When framed like that, there is no choice. The only option is to walk away from AI.
It’s also completely unreasonable. We can’t walk away from AI, and AI isn’t going to turn us into an existential threat to ourselves. Neither of those outcomes is realistic. What the author should have done is articulate the problem, then talk about what to do about it and how to minimize and mitigate the harm. But the article didn’t.
So I will.
Embracing the Pragmatic Path
First, we acknowledge a very uncomfortable truth: no matter what we do, some people in the future will be unable to function without the support of artificial intelligence. We acknowledge the harm we know will be there, and that we can’t completely eradicate it.
Once we’ve acknowledged that we can’t save everyone, we need to start looking at the roots of the problem. The article above doesn’t say why the author believes this will happen, other than “AI can make decisions for us.” But if we dig a little deeper, the underlying harm in the statement “AI will make the decisions for us” is twofold: we won’t have a say in the final decision, AND we won’t have any insight into how a decision was made.
Let’s take each piece of this and break it down.
We Won’t Have a Say in the Final Decision
The anxiety in this statement comes from the idea that humans won’t have any input into the decisions, or that they’ll accept the AI’s recommendation without thinking critically about it. Over time, we lose the ability to make decisions without those recommendations, because we’re never actually the ones making the decision. Which leads into the second point …
We Won’t Have Insight Into How a Decision Was Made.
If we only ever see the outcome of the AI’s final decision or recommendation, we don’t know what it evaluated to arrive at that decision or recommendation. That lack of insight and accountability means we can’t learn how the AI decided in the first place, and we can’t evaluate whether it was making the decision or recommendation ethically.
These Aren’t New Problems
Despite the novel application to a doomsday scenario, these aren’t new issues; they are things we are dealing with today, right now. For example, when YouTube recommends new videos to you, do you know why they were recommended? Did you have the opportunity to reject the recommendation? You don’t - those recommendation algorithms are proprietary, and while you can make some guesses about why some of the recommendations are there, you don’t know why that one video about underwater basket weaving always shows up in your feed.
Looking at a more impactful example, many Applicant Tracking Systems (ATSs) come with the ability to automatically reject unqualified candidates (except in New York, but that’s a topic for another day). If you’ve ever gotten one of these rejections, you know there’s never any information detailing why you were rejected. You get a professionally appropriate rejection sentence (just one, usually). Or worse, you get screened out by the ATS and it doesn’t say anything at all, so you are left to guess what happened.
The rise of misinformation and disinformation shows that we already have a substantial problem with critical thinking. That’s not without consequence - 7 million people died from COVID-19, and a non-zero number of those individuals died because they didn’t apply critical thinking to what they were seeing and reading. That harm was directly preventable.
These are just a couple of examples, but I could list dozens more here. We pretend that these problems we see with AI are brand new and novel, but they aren’t. We are already living with them today.
Charting a Course of Action
We fall back on our AI Ethics Principles, and we use those to help minimize the harm we’ll experience as a society as more and more of our work can be handled by AI.
Ensuring there’s always a “Human in the Loop” will help mitigate the issue of decisions being made without our input. It gives us the ability to interact with the AI, and to clarify, correct, or even outright reject its decisions or recommendations. This also supports other principles, like AI as Human Augmentation instead of AI as Human Replacement, as well as Transparency and Accountability.
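To make that a bit more concrete, here's a minimal sketch (in Python) of what a human-in-the-loop gate could look like. Every name in it - the Recommendation class, get_ai_recommendation, the review prompt - is hypothetical and of my own invention, not pulled from any real system. The point is only that the AI proposes, and nothing takes effect until a person accepts, corrects, or outright rejects the proposal.

```python
from dataclasses import dataclass

@dataclass
class Recommendation:
    """A hypothetical AI output: what it suggests, and its stated reason."""
    action: str
    rationale: str

def get_ai_recommendation(case: str) -> Recommendation:
    # Stand-in for a real model call; entirely hypothetical.
    return Recommendation(action=f"approve {case}", rationale="matched prior approvals")

def human_in_the_loop(case: str) -> str:
    """The AI suggests; a person accepts, corrects, or rejects."""
    rec = get_ai_recommendation(case)
    print(f"AI suggests: {rec.action} (because: {rec.rationale})")
    choice = input("Accept, correct, or reject? [a/c/r] ").strip().lower()
    if choice == "a":
        return rec.action                              # human accepted the suggestion
    if choice == "c":
        return input("Enter the corrected action: ")   # human overrides the details
    return "escalated to full human review"            # human rejected it outright

if __name__ == "__main__":
    final = human_in_the_loop("loan application #42")
    print(f"Final decision (made by a person): {final}")
```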
We can’t solve the problem of folks not wanting to spend the time to think critically about the decisions and recommendations coming from AI. But we can help mitigate it by ensuring important AI decisions and recommendations have friction - that they can’t be made quickly, easily, and thoughtlessly. Alternatively, we could set a threshold: when a decision is too risky, we don’t use AI at all. This is the approach the EU AI Act takes (again, a topic for a future article).
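As a sketch of what that friction could look like in practice (hypothetical risk tiers and names of my own choosing, not the EU AI Act's actual categories): low-risk decisions can be automated, medium-risk ones can't take effect without an explicit human sign-off, and high-risk ones aren't delegated to AI at all.

```python
from enum import Enum

class Risk(Enum):
    LOW = 1       # e.g., suggesting a playlist
    MEDIUM = 2    # e.g., flagging a transaction for review
    HIGH = 3      # e.g., hiring, credit, or medical decisions

def decide(risk: Risk, ai_suggestion: str, human_approved: bool = False) -> str:
    if risk is Risk.HIGH:
        # Too consequential: the AI's suggestion is not used at all.
        return "requires a human decision; AI not used"
    if risk is Risk.MEDIUM and not human_approved:
        # Deliberate friction: the suggestion can't take effect on its own.
        return "pending explicit human sign-off"
    return ai_suggestion

print(decide(Risk.LOW, "recommend video"))                          # automated
print(decide(Risk.MEDIUM, "flag transaction"))                      # blocked until approved
print(decide(Risk.MEDIUM, "flag transaction", human_approved=True)) # proceeds with sign-off
print(decide(Risk.HIGH, "reject candidate"))                        # never automated
```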
An intelligent application of the Transparency and Accountability principles would help mitigate the harm of “black box decisions”, both today and in the doomsday scenarios. These principles exist to make sure we always know how an AI arrived at a decision, and that we hold it accountable for the bad decisions it makes.
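One way to picture that in code (again, a hypothetical sketch, not a real auditing framework): every decision gets written to an audit record that captures what the model saw, which model made the call, and the stated rationale, so the decision can be explained, questioned, and appealed later.

```python
import json
from datetime import datetime, timezone

def record_decision(model_version: str, inputs: dict, decision: str, rationale: str) -> dict:
    """Build an audit record so a decision can be explained and challenged later."""
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,   # which system made the call
        "inputs": inputs,                 # what it actually looked at
        "decision": decision,
        "rationale": rationale,           # the stated reason, in plain language
    }

audit_log = []
audit_log.append(record_decision(
    model_version="screener-v2.3",
    inputs={"years_experience": 4, "required_experience": 5},
    decision="rejected",
    rationale="candidate below the required years of experience",
))

# Anyone reviewing the outcome can see *why*, not just *what*.
print(json.dumps(audit_log[-1], indent=2))
```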
The Wrap Up
These are pretty straightforward requirements, and they make the “doomsday by losing our ability to decide” scenario seem a lot less scary. Not only do they help with our doomsday scenario, they have the added benefit of making things better for us right now, instead of only making things better for our hypothetical future.
We don't have to fall into the trap of all-or-nothing choices when it comes to AI. While there are legitimate risks, we can't let ourselves be swayed by extreme views that lead us nowhere. We must embrace a more pragmatic approach to AI ethics—one that acknowledges the challenges and focuses on finding real-world solutions.
Did you like what you read? Consider subscribing for more content from your friendly neighborhood AI ethicist.
Already subscribed? Share with a friend!