From a fatal Tesla crash to false facial recognition matches, here are five times AI failed to deliver.
AI has progressed remarkably in the last couple of decades. What was once deemed possible only in the realm of fiction, in films such as Men in Black or The Matrix, is now a reality.
Today, from health, fashion, and property to food and travel, AI is pervasive across industries. Its popularity has surged so much that it has even spawned some unusual applications.
However, it’s not all peaches and cream. In one survey of global organizations already using AI, a quarter of respondents reported failure rates of up to 50% for their AI projects.
Often, AI meant to solve problems ends up creating new ones. Here, we take a look at the biggest AI failures that made headlines for all the wrong reasons.
Tesla cars crash due to Autopilot feature
Elon Musk’s Tesla found itself in trouble after a Model S crashed north of Houston in April 2021, killing two people. The car failed to negotiate a slight curve in the road and rammed into a tree.
According to preliminary investigations and witness statements, the driver’s seat was empty at the time of the crash, leading investigators to believe that Tesla’s Autopilot or Full Self-Driving (FSD) system was engaged.
“Two men dead after fiery crash in Tesla Model S,” reporter Matt Dougherty (@MattKHOU) tweeted on April 18, 2021. “‘[Investigators] are 100-percent certain that no one was in the driver seat driving that vehicle at the time of impact,’ Harris County Precinct 4 Constable Mark Herman said. ‘They are positive.’”
Tesla’s AI-based Autopilot feature can control steering, acceleration, and, in some cases, braking. According to Musk, the system is designed to learn from drivers’ actions over time.
However, the feature has come under increased scrutiny in recent months following several crashes involving Tesla vehicles. In Michigan, a Tesla Model Y on Autopilot recently crashed into a police vehicle. U.S. safety regulators are currently investigating 30 Tesla crashes since 2016 in which advanced driver-assistance systems are believed to have been in use.
Several safety advocates have criticized Tesla for not doing enough to prevent drivers from relying too heavily on Autopilot, or from using it in situations it is not designed for.
Amazon’s AI recruiting tool showed bias against women
In 2014, Amazon began building machine learning programs to review job applicants’ resumes. The experimental AI hiring tool, however, had a major flaw: it was biased against women.
The model was trained to assess applications by studying resumes submitted to the company over a 10-year period. Because most of those resumes came from men, the system taught itself to favor male candidates: it downgraded resumes containing the word “women’s” (as in “women’s chess club captain”) and ranked graduates of two all-women’s colleges lower.
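To see how this kind of bias can creep in, consider a minimal sketch in Python. The resumes, labels, and scikit-learn model below are invented for illustration and have nothing to do with Amazon’s actual system; the point is only that a classifier trained on skewed hiring history learns to penalize a gendered word on its own.

```python
# Illustrative sketch only: a toy resume screener trained on historically
# skewed hiring data. This is NOT Amazon's actual system; the data and
# labels below are invented for demonstration.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Tiny synthetic history: past hires (label 1) happen to lack the word
# "women's"; past rejections (label 0) happen to contain it.
resumes = [
    "chess club captain software engineer",          # hired
    "software engineer machine learning",            # hired
    "women's chess club captain software engineer",  # rejected
    "women's college graduate software engineer",    # rejected
]
labels = [1, 1, 0, 0]

vectorizer = CountVectorizer()  # tokenizes "women's" to the token "women"
X = vectorizer.fit_transform(resumes)
model = LogisticRegression().fit(X, labels)

# The model assigns a negative weight to "women" purely because of the
# skew in its training data, not because of any job-relevant signal.
weights = dict(zip(vectorizer.get_feature_names_out(), model.coef_[0]))
print(f"learned weight for 'women': {weights['women']:.2f}")  # negative
```

Nothing in the code singles out gender; the penalty emerges entirely from the historical imbalance in the training set, which is exactly why such bias is easy to introduce and hard to spot.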
By 2015, the company recognized that the tool was not evaluating applicants for various roles in a gender-neutral way, and the program was eventually scrapped. The incident came to light in 2018, when Reuters reported on it.
AI camera mistakes linesman’s head for a ball
In a hilarious incident, an AI-powered camera designed to automatically track the ball at a soccer game ended up tracking the bald head of a linesman instead.
The incident occurred during a match between Inverness Caledonian Thistle and Ayr United at the Caledonian Stadium in Scotland in October 2020. Amid the pandemic, the Inverness club had replaced its human camera operators with an automated camera.
However, according to reports, the camera repeatedly mistook the linesman’s bald head for the ball, denying viewers the real action as it zoomed in on the official instead.
As James Felton (@JimMFelton) quipped on Twitter: “I love it so much, the camera is like ‘ball ball ball bald head, there’s a bald head, zoom in on the bald head.’”
Microsoft’s AI chatbot turns sexist, racist
In 2016, Microsoft launched an AI chatbot called Tay, designed to engage Twitter users in “casual and playful conversation.” In less than 24 hours, however, Twitter users had manipulated the bot into making deeply sexist and racist remarks.
Tay used machine learning to learn from its conversations with Twitter users: the more conversations it had, the “smarter” it was supposed to become. Instead, the bot soon began repeating users’ inflammatory statements, including “Hitler was right,” “feminism is cancer,” and “9/11 was an inside job.”
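Why is this kind of bot so easy to poison? The toy sketch below is not Tay’s real architecture, but it illustrates the underlying failure mode: a bot that trusts and replays user input with no filtering will amplify whatever a coordinated group repeats most often.

```python
# A minimal sketch (not Tay's actual design) of a naive learn-from-users
# chatbot, showing why unfiltered feedback learning is trivially poisoned.
import random
from collections import Counter

class NaiveEchoBot:
    def __init__(self):
        self.phrases = Counter()

    def learn(self, user_message: str) -> None:
        # Every user message is trusted and added to the reply pool,
        # with no content filtering -- the core vulnerability.
        self.phrases[user_message] += 1

    def reply(self) -> str:
        if not self.phrases:
            return "hello!"
        # Replies are biased toward whatever users repeat most often,
        # so a coordinated group can steer the bot's output.
        top = self.phrases.most_common(3)
        return random.choice([phrase for phrase, _ in top])

bot = NaiveEchoBot()
for msg in ["nice weather", "offensive slogan", "offensive slogan"]:
    bot.learn(msg)
print(bot.reply())  # likely "offensive slogan": repetition wins
```

Real systems mitigate this with content filters, rate limits, and human review of what the model is allowed to learn from; Tay evidently lacked sufficient safeguards of this kind.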
As the debacle unfolded, Microsoft pulled the plug on the bot within a day of its launch. Peter Lee, Microsoft’s vice president of research, later issued an apology, stating, “We are deeply sorry for the unintended offensive and hurtful tweets from Tay, which do not represent who we are or what we stand for, nor how we designed Tay.”
False facial recognition match leads to Black man’s arrest
In February 2019, Nijeer Parks, a 31-year-old Black man living in Paterson, New Jersey, was accused of shoplifting and trying to hit a police officer with a car in Woodbridge, New Jersey. Although he was 30 miles away at the time of the incident, the police identified him using facial recognition software.
Parks was later arrested on charges including aggravated assault, unlawful possession of weapons, shoplifting, and possession of marijuana, and spent 11 days in jail. According to a police report, officers arrested him after a facial recognition scan of a fake ID left at the crime scene returned a “high profile comparison.”
The case was dismissed in November 2019 for lack of evidence. Parks is now suing those involved in his arrest for violation of his civil rights, false arrest, and false imprisonment.
Facial recognition technology, which uses machine learning algorithms to identify a person from their facial features, is known to have many flaws. In fact, a 2019 study found that facial recognition algorithms are “far less accurate” at identifying Black and Asian faces.
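To see where false matches come from, here is a schematic sketch of how such systems typically work. The embedding vectors and threshold below are made up for illustration; real systems derive embeddings from a deep neural network, but the match-above-threshold logic is the standard pattern.

```python
# Schematic sketch of typical facial recognition matching: faces are
# mapped to embedding vectors, and two faces are declared a "match"
# when their similarity exceeds a threshold. The vectors and threshold
# below are invented; real systems compute embeddings with a deep network.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

MATCH_THRESHOLD = 0.9  # hypothetical operating point

probe = np.array([0.80, 0.10, 0.55])    # embedding from the ID photo
suspect = np.array([0.79, 0.12, 0.54])  # embedding from a photo database

score = cosine_similarity(probe, suspect)
print(f"similarity={score:.3f}, match={score > MATCH_THRESHOLD}")
# If embeddings cluster more tightly for some demographic groups,
# unrelated people cross the threshold more often -- a false match.
```

The key point is that a “match” is only a similarity score crossing a tunable threshold, and if the underlying model is less discriminative for certain groups, false matches concentrate on exactly those groups.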
Parks is the third known person to be arrested over a false facial recognition match. In all three cases, the individuals wrongly identified were Black men.
Ultimately, while AI has grown by leaps and bounds in recent years, it is far from perfect. Going forward, addressing its many vulnerabilities will be crucial if it is to truly become a technological driving force for the world.
Header image by Rock’n Roll Monkey on Unsplash