AI Gone Wrong: 5 Biggest AI Failures Of All Time


From Tesla’s fatal car crash to false facial recognition matches, here are five times AI failed to deliver.

AI has progressed remarkably in the last couple of decades. What was once deemed possible only in the realm of science fiction, in films like Men in Black or The Matrix, is now reality.

Today, AI is everywhere, from health, fashion, and property to food and travel. Its popularity has surged so much that it has even given rise to some unusual applications.

However, it’s not all peaches and cream. In a survey of global organizations already using AI, a quarter of respondents reported failure rates of up to 50% for their AI projects.

Often, AI meant to solve problems can end up becoming the source of new problems. Here, we take a look at the biggest AI failures that made headlines for all the wrong reasons.

Tesla cars crash due to autopilot feature

Elon Musk’s Tesla found itself in trouble after a Tesla Model S crashed north of Houston in April 2021, killing two people. The car missed a slight curve in the road and rammed into a tree.

As per preliminary investigations and witness statements, the driver’s seat was empty during the crash. As a result, it is believed that Tesla’s Autopilot or Full Self Driving (FSD) system was engaged during the crash.

Tesla’s AI-based Autopilot feature can control steering, acceleration, and in some cases, braking. According to Musk, the AI is designed to learn from drivers’ actions over time.

However, the feature has come under increased scrutiny in recent months due to several crashes involving Tesla vehicles. Recently, in Michigan, a Tesla Model Y on Autopilot crashed into a police vehicle. U.S. safety regulators are currently investigating 30 Tesla crashes since 2016 in which advanced driver assistance systems are believed to have been in use.

Several safety advocates have criticized Tesla for not doing enough to prevent drivers from relying too heavily on Autopilot, or from using it in situations it was not designed for.

Amazon’s AI recruiting tool showed bias against women

Amazon started building machine learning programs in 2014 to review job applicants’ resumes. However, the AI-based experimental hiring tool had a major flaw: it was biased against women.

The model was trained to assess applications by studying resumes submitted to the company over a span of 10 years. Because most of those resumes came from men, the system taught itself to favor male candidates. The AI downgraded resumes containing words such as “women’s” (as in “women’s chess club captain”), and graduates of two all-women’s colleges were also ranked lower.
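The failure mode here is general: a model trained on historically skewed outcomes learns whatever tokens correlate with those outcomes. The toy sketch below (entirely hypothetical data and scoring, not Amazon’s actual system) shows how a simple word-weighting model can pick up a negative weight for “women’s” purely because the word appears mostly in resumes that were historically passed over.

```python
from collections import Counter
from math import log

# Hypothetical historical data: resumes as token lists, with 1 = hired.
# Men dominate the past hires, so gendered tokens end up correlated
# with the outcome even though they say nothing about skill.
history = [
    (["software", "engineer", "chess", "club"], 1),
    (["software", "engineer"], 1),
    (["data", "engineer", "chess"], 1),
    (["software", "engineer", "women's", "chess", "club"], 0),
    (["data", "engineer", "women's", "college"], 0),
    (["software", "engineer"], 1),
]

def token_weights(data, smoothing=1.0):
    """Log-odds of a positive outcome for each token (naive-Bayes style)."""
    pos, neg = Counter(), Counter()
    for tokens, label in data:
        (pos if label else neg).update(set(tokens))
    vocab = set(pos) | set(neg)
    return {t: log((pos[t] + smoothing) / (neg[t] + smoothing)) for t in vocab}

def score(tokens, weights):
    """Sum of token weights: higher means 'more like past hires'."""
    return sum(weights.get(t, 0.0) for t in tokens)

weights = token_weights(history)

# Two otherwise identical resumes: adding "women's" lowers the score,
# because the model has absorbed the skew in its training data.
base = score(["software", "engineer", "chess", "club"], weights)
flagged = score(["software", "engineer", "women's", "chess", "club"], weights)
```

Nothing in the code mentions gender; the bias arrives entirely through the training labels, which is why simply removing a “gender” column does not make a model gender-neutral.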

By 2015, the company recognized that the tool was not evaluating applicants for various roles in a gender-neutral way, and the project was eventually scrapped. The incident came to light in 2018, when Reuters reported on it.

AI camera mistakes linesman’s head for a ball

In a hilarious incident, an AI-powered camera designed to automatically track the ball at a soccer game ended up tracking the bald head of a linesman instead.

The incident occurred during an October 2020 match between Inverness Caledonian Thistle and Ayr United at the Caledonian Stadium in Scotland. Amid the pandemic, the Inverness club had switched from human camera operators to an automated camera.

However, according to reports, the camera repeatedly mistook the linesman’s bald head for the ball, denying viewers the real action as it kept panning to the sidelines instead.

Microsoft’s AI chatbot turns sexist, racist

In 2016, Microsoft launched an AI chatbot called Tay, which engaged with Twitter users through “casual and playful conversation.” However, in less than 24 hours, Twitter users had manipulated the bot into making deeply sexist and racist remarks.

Tay leveraged AI to learn from its conversations with Twitter users. The more conversations it had, the “smarter” it became. Soon, the bot began repeating users’ inflammatory statements, including “Hitler was right,” “feminism is cancer,” and “9/11 was an inside job.”
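Learning directly from user conversations is exactly what made Tay so easy to hijack: without a moderation step, coordinated users can flood the bot’s learned material. The minimal sketch below (a hypothetical echo bot, not Tay’s actual architecture) shows how an unfiltered online-learning loop lets a small group of users dominate what the bot says.

```python
import random

class NaiveEchoBot:
    """A bot that adds every user message straight to its reply pool."""

    def __init__(self):
        self.phrases = ["hello!", "tell me more"]

    def learn(self, user_message):
        # No moderation or filtering step: anything users say
        # becomes a candidate reply.
        self.phrases.append(user_message)

    def reply(self, rng):
        return rng.choice(self.phrases)

bot = NaiveEchoBot()

# A handful of coordinated users flood the bot with one phrase.
for msg in ["spam spam spam"] * 8:
    bot.learn(msg)

rng = random.Random(0)
replies = [bot.reply(rng) for _ in range(100)]
flood_share = replies.count("spam spam spam") / len(replies)
# The flooded phrase now makes up most of the bot's output.
```

The fix is not smarter learning but a gate between input and training data, which is why later chatbots filter and review user contributions before the model absorbs them.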

As the debacle unfolded, Microsoft pulled the plug on the bot within a day of its launch. Peter Lee, corporate vice president at Microsoft Research, later issued an apology, stating, “We are deeply sorry for the unintended offensive and hurtful tweets from Tay, which do not represent who we are or what we stand for, nor how we designed Tay.”

False facial recognition match leads to Black man’s arrest

In February 2019, Nijeer Parks, a 31-year-old Black man living in Paterson, New Jersey, was accused of shoplifting and trying to hit a police officer with a car in Woodbridge, New Jersey. Although he was 30 miles away at the time of the incident, the police identified him using facial recognition software.

Parks was later arrested on charges including aggravated assault, unlawful possession of weapons, shoplifting, and possession of marijuana, and spent 11 days in jail. According to a police report, officers arrested Parks following a “high profile comparison” from a facial recognition scan of a fake ID left at the crime scene.

The case was dismissed in November 2019 for lack of evidence. Parks is now suing those involved in his arrest for violation of his civil rights, false arrest, and false imprisonment.

Facial recognition technology, which uses machine learning algorithms to identify a person from their facial features, is known to have many flaws. A 2019 study by the U.S. National Institute of Standards and Technology (NIST) found that facial recognition algorithms are far less accurate at identifying Black and Asian faces.
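Even a seemingly small gap in per-comparison false-match rates becomes large in practice, because a police search compares one probe image against thousands of gallery photos. The back-of-the-envelope sketch below uses purely illustrative rates (real figures vary by algorithm, threshold, and dataset) to show how a 10x rate gap turns into 10x as many wrongful candidate matches per search.

```python
# One probe image is compared against every photo in the gallery.
gallery_size = 10_000

# Assumed per-comparison false-match rates for two demographic groups.
# These numbers are illustrative only, not measured values.
false_match_rate = {"group_a": 1e-4, "group_b": 1e-3}

def expected_false_matches(rate, gallery):
    """Expected number of incorrect candidates returned by one search."""
    return rate * gallery

fm_a = expected_false_matches(false_match_rate["group_a"], gallery_size)
fm_b = expected_false_matches(false_match_rate["group_b"], gallery_size)
# With these assumptions, group_b faces ten times as many
# wrongful candidate matches per search as group_a.
```

This is why error-rate disparities that look negligible in a benchmark table can translate into real arrests when the system is run against large databases.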

Parks is the third known person to be arrested over a false facial recognition match. In all three cases, the individuals wrongly identified were Black men.

Ultimately, while AI has advanced by leaps and bounds in recent years, it is far from perfect. Going forward, addressing its many vulnerabilities will be crucial if AI is to truly become a driving technological force for the world.

Header image by Rock’n Roll Monkey on Unsplash

