Is AI Really an Existential Threat to Humanity?

Google’s Blaise Agüera y Arcas on what’s wrong with the current debate about the technology.


Artificial intelligence, we have been told, is all but guaranteed to change everything. Often, it is foretold as bringing a series of woes: “extinction,” “doom,” a technology at risk of “killing us all.” US lawmakers have warned of potential “biological, chemical, cyber, or nuclear” perils associated with advanced AI models, and a study on “catastrophic risks” commissioned by the State Department urged the federal government to intervene and enact safeguards against the weaponization and uncontrolled use of this rapidly evolving technology. Employees at some of the main AI labs have made their safety concerns public, and experts in the field, including the so-called “godfathers of AI,” have argued that “mitigating the risk of extinction from AI” should be a global priority.

Advances in AI capabilities have heightened fears that certain jobs will be eliminated and that the technology will be misused to spread disinformation and interfere in elections. These developments have also stoked anxiety over a hypothetical future in which artificial general intelligence systems outperform humans and, in the worst case, exterminate humankind.

But the conversation around the disruptive potential of artificial intelligence, argues AI researcher Blaise Agüera y Arcas, CTO of Technology & Society at Google and author of Who Are We Now?, a data-driven book about human identity and behavior, shouldn’t be polarized between AI doomers and deniers. “Both perspectives are rooted in zero-sum,” he writes in the Guardian, “us-versus-them thinking.”

So how worried should we really be? I posed that question to Agüera y Arcas, who sat down with Mother Jones at the Aspen Ideas Festival last month to talk about the future of AI and how we should think about it.

Blaise Agüera y Arcas speaks at the Aspen Ideas Festival. Daniel Bayer/Aspen Ideas Festival

This conversation has been edited for length and clarity.

You work at a big tech company. Why did you feel compelled to study humanity, behavior, and identity?

My feeling about big AI models is that they are human intelligence; they’re not separate. There were a lot of people in the industry and in AI who thought that we would get to general-purpose, powerful AI through systems that were very good at a narrow task, like playing a really good game of chess. That turned out not to be the case. The way we finally got there is by literally modeling human interaction and content on the internet. The internet is obviously not a perfect mirror of us; it has many flaws. But it is basically humanity. It’s literally modeling humanity that yields general intelligence. That is both worrisome and reassuring. It’s reassuring that it’s not an alien. It’s all too familiar. And it’s worrisome because it inherits all of our flaws.

In an article you co-authored titled “The Illusion of AI’s Existential Risk,” you write that “harm and even massive death from misuse of (non-superintelligent) AI is a real possibility and extinction via superintelligent rogue AI is not an impossibility.” How worried should we be?

I’m an optimist, but also a worrier. My top two worries right now for humanity and for the planet are nuclear war and climate collapse. We don’t know if we’re dancing close to the edge of the cliff. One of my big frustrations with the whole AI existential risk conversation is that it’s so distracting from these things that are real and in front of us right now. More intelligence is actually what we need to address those very problems, not less.

The idea that somehow more intelligence is a threat feels to me like it comes, more than anything else, from our primate brains and their dominance hierarchies. We are the top dog now, but maybe AI will be the top dog. And I just think this is such bullshit.

AI is already so integral to computers, and it will become even more so in the coming years. I have a lot of concerns about democracy, disinformation, mass hacking, cyber warfare, and lots of other things. There’s no shortage of things to be concerned about. Very few of them strike me as potential species enders. They strike me as things that we really have to think about with respect to what kind of lifestyle we want, how we want to live, and what our values are.

The biggest problem now is not so much how we make AI models follow ethical injunctions as who gets to set those injunctions. What are the rules? And those are not so much AI problems as problems of democracy and governance. They’re deep, and we need to address them.

In that same article, you talk about AI’s disruptive dangers to society today, including the breakdown of social fabric and democracy. There are also concerns about the carbon footprint of developing and maintaining data centers, defamatory content and copyright infringement, and disruptions in journalism. What are the present dangers you see, and do the benefits outweigh the potential harms?

We’re imagining that we’ll be able to really draw a distinction between AI content and non-AI content, but I’m not sure that will be the case. In many cases, AI is going to be really helpful for people who don’t speak a language or who have sensory or cognitive deficits. As more and more of us begin to work with AI in various ways, I think drawing those distinctions is going to become really hard. It’s hard for me to imagine that the benefits are not really big. But I can also imagine conditions conspiring to make things work out poorly for us. We need to be distributing the gains that we’re getting from a lot of these technologies more broadly. And we need to be putting our money where our hearts are.

Is AI going to create new industries and jobs, or make existing ones obsolete and replaceable?

The labor question is really complex, and the jury is very much still out on how many jobs will be replaced, changed, improved, or created. We don’t know. But I’m not even sure that the terms of that debate are right. We wouldn’t be interested in a lot of these AI capabilities if they didn’t do stuff that is useful to us. But with capitalism configured the way it is, we are requiring that people do “economically useful” work, or they don’t eat. Something seems screwy about this.

If we’re entering an era of potentially such abundance that a lot of people don’t have to work, and yet the consequence is that a lot of people starve, something’s very wrong with the way we’ve set things up. Is that a problem with AI? Not really. But it’s certainly a problem that AI could bring about if the whole sociotechnical system is not changed. I don’t know that capitalism and labor as we’ve thought about them are sophisticated enough to deal with the world we’ll be living in in 40 years’ time.

There has been some reporting that paints the companies developing these technologies as divided between people who want to take the technology to its limit without much regard for potential consequences and those who are perhaps more sensitive to such concerns. Is that the reality of what you see in the industry?

Just like with other culture-war issues, there’s a kind of polarization taking place. And the two poles are weird. One of them I would call AI existential risk. The other I would call AI safety. And then there’s what I would almost call AI abolition, or the anti-AI movement, which often claims that AI is neither artificial nor intelligent, that it’s just a way to bolster capital at the expense of labor. It sounds almost religious, right? It’s either the rapture or the apocalypse. AI is real. It’s not just some kind of party trick or hype. I get quite frustrated by a lot of the way I see those concerns raised from both sides. It’s unfortunate, because a lot of the real issues with AI are so much more nuanced and require much more care in how they’re analyzed.

Current and former employees at AI development companies, including at Google, signed a letter calling for whistleblower protections so that they can publicly raise concerns about the potential risks of these technologies. Do you worry that there isn’t enough transparency in the development of AI? And should the public at large trust big companies and powerful individuals to rein it in?

No. Should people trust corporations to just make everything better for everybody? Of course not. I think that the intentions of the corporations have often not really been the determinant of whether things go well or badly. It’s often very difficult to tell what the long-term consequences are going to be of a thing.

Think about the internet, which was the last really big change. I think AI is a bigger change than the internet. If we’d had the same conversation about the internet in 1992, should we trust the companies that are building the computers, the wires, and later on the fiber? Should we trust that they have our interests at heart? How should we hold them to account? What laws should be passed? Even with everything we know now, what could we have told humans in 1992 to do? I’m not sure.

The internet was a mixed blessing. Some things probably should have been regulated differently. But none of the regulations we were thinking of at the time were the right ones. I think that a lot of our concerns at that time turned out to be the wrong concerns. I worry that we’re in a similar situation now. I’m not saying that I think we should not regulate AI. But when I look at the actual rules and policies being proposed, I have very low confidence that any of them will actually make life better for anybody in 10 years.
