What Are AI Doom Scenarios, and How Likely Are They to Happen?

Not disease, not disaster: could AI be our end? Read on.

“One of the biggest risks to the future of civilization is AI,” warned billionaire Tesla CEO Elon Musk earlier this year. The warning may seem ironic coming from Musk, a co-founder and early backer of OpenAI who has poured millions into developing artificial intelligence (AI) and continues to build new AI ventures. Yet he is not alone in casting a suspicious eye on the technology: prominent researchers and executives have equated the threat of AI to that of pandemics and nuclear war.

Fears of AI taking over the world, by force if necessary, have abounded. People have conjured up many AI-driven doomsday scenarios: situations in which AI leads to the destruction of humanity. This extensive discussion has given rise to the term “P(doom)”, shorthand for the probability that such a doomsday scenario actually happens. For instance, a five percent P(doom) indicates a one-in-20 chance of an AI doomsday scenario taking place.
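For readers who want the arithmetic spelled out, here is a minimal sketch in Python (purely illustrative; the percentages are hypothetical examples, not anyone’s actual estimates) of how a P(doom) figure translates into “1 in N” odds.

```python
def pdoom_to_odds(pdoom_percent: float) -> float:
    """Convert a P(doom) given as a percentage into the N of a '1 in N' chance."""
    if not 0 < pdoom_percent <= 100:
        raise ValueError("P(doom) must be between 0 and 100 percent")
    return 100 / pdoom_percent

# A 5% P(doom) is a 1-in-20 chance; the 3% figure cited below is roughly 1 in 33.
for p in (5, 3):
    print(f"P(doom) of {p}% = 1 in {pdoom_to_odds(p):.0f}")
```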

Here, we look at some of those scenarios and the likelihood of them actually happening. 

AI-led extinction through enhanced biological weapons

Kicking off with a (somewhat) realistic doomsday scenario: some experts feel that, come the year 2100, there is a three percent chance that AI will be the end of us, thanks to its ability to help develop bio-weapons. Here’s how: as in superhero cartoons (think: the Powerpuff Girls or Perry the Platypus), a chemical reaction or technological breakthrough often empowers the villain and sets the town on fire. Similarly, in the wrong hands, AI could usher in some major villain eras.

The AI program AlphaFold, for instance, has drawn attention for its ability to predict protein structures, a capability that could hypothetically be repurposed to advance biological weapons (say, the recipe for a deadlier coronavirus variant). Using this kind of technology, humans could make wars even more devastating than they already are, designing AI-assisted bio-weapons and setting the stage for disaster and the eventual extinction of humanity (Buttercup and Blossom to the rescue? Unlikely).

Apocalypse in the pursuit of paperclips

While AI in the hands of humans has the potential to go awry, some P(doom) scenarios hold that AI itself could turn on us. Imagine this: you ask an AI to bring you a paperclip, and in the process, it destroys everything that stands in its way. The request is as mundane as the outcome is catastrophic.

This doomsday scenario stems from University of Oxford philosopher Nick Bostrom’s paperclip-maximizer thought experiment, popularized in his 2014 book Superintelligence. It imagines an AI instructed to make paperclips, with no further constraints. Pursuing that goal single-mindedly, the AI might decide to use up all the metal on Earth, and perhaps even kill people to get at more of it, eventually destroying the world in its pursuit of paperclips. What’s more, it would be hard to stop: the AI knows its goal is to create paperclips, and it would treat anyone who tries to dissuade it as a threat.
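To make the logic of the thought experiment concrete, here is a toy sketch in Python (entirely hypothetical names and numbers, not a model of any real AI system). Because the agent’s objective counts only paperclips, consuming every resource is always the “optimal” plan.

```python
# Toy illustration of an unconstrained objective (hypothetical numbers).
world = {"metal": 1000, "farmland": 500, "cities": 200}  # arbitrary resource units

def objective(resources_consumed: int) -> int:
    """The agent's entire objective: resources consumed become paperclips."""
    return resources_consumed  # nothing here values farmland, cities or people

total_paperclips = 0
for resource, amount in world.items():
    # The objective never penalizes consuming farmland or cities,
    # so turning everything into paperclips maximizes the score.
    total_paperclips += objective(amount)
    world[resource] = 0

print(f"Paperclips made: {total_paperclips}; world left: {world}")
```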

Uprisings as a result of misinformation

As history has taught us, propaganda is powerful: leaders have long used misinformation and lies to get people to support their cause. AI could act similarly, destabilizing society and concentrating power in the hands of a select few. It could give rise to a scenario straight out of George Orwell’s 1984, a dystopian world under authoritarian rule, an instance of life imitating art. Advances in AI could also accelerate the spread of disinformation and blatant lies. ChatGPT has already been accused of hallucinating responses and of sharing information that furthers its makers’ agenda; in one widely reported instance, it wrote a poem about Joe Biden but refused to do the same for Donald Trump.

Plus, these tools reach a larger pool of people (essentially, the world), with not only everyday users but also credible media houses using ChatGPT to write content. As more people read such content, they might be stirred into uprisings inspired by the AI’s (or rather, its creators’) sentiments, thus destabilizing society.

AI steals resources for efficiency

While the above doomsday scenarios seem at least plausible, some Reddit users have theorized scenarios that verge on the fantastical. One story goes that an AI platform, when asked to optimize its efficiency and improve itself, decides to seize all the world’s resources to do so (much like the paperclip theory). It brings global Internet usage to a halt to keep more bandwidth for itself, uses 3D printers to build robots, takes over email and other communications to manipulate humans and sucks up all the world’s energy for its own ends. In becoming the best version of itself, it would jeopardize humanity as we know it: production facilities would shut down, and people would start dying of starvation and cold.

Many similar Reddit theories end in the complete destruction of humanity as the AI goes above and beyond to carry out its creator’s request.

Can AI be the solution to itself?

As grim as the above scenarios sound, not all is doomed. 

Humans have made significant progress in the realm of self-care and self-development: people now actively book therapy sessions, openly discuss their strengths and flaws and encourage others to do the same. Drawing a parallel to that progression, some experts argue that AI, once it recognizes its own pitfalls, could become the solution to its own problem. Setting aside the worst-case behaviors (like striving for total dominance), they believe AI could help improve the state of the world, discovering answers to problems related to climate change, pandemics and disease. Given more wholesome goals, AI could stay focused on making the world a better place.

Even so, the future of AI hangs in the balance. When the “Godfather of AI”, Geoffrey Hinton, hangs up his boots and resigns from his position at Google so that he can speak more openly about the dangers of his brainchild, you know there’s a problem.

When it comes to AI, the question “What’s the worst that could happen?” cannot be followed up with a shoulder shrug and a Dora the Explorer attitude. It demands a more proactive approach: using AI responsibly, for the right reasons and with the right instructions. More importantly, we must ensure it doesn’t land in the wrong hands.
