Friendly neighborhood robots or harbingers of dystopia? The difference lies in ethics.
In the past decade, artificial intelligence (AI), once a mere figment of science fiction, has become a part of daily life. In fact, it has become so commonplace that one can find it sitting on a shelf at home or fitting snugly in the palm of a hand, in the form of virtual assistants like Apple’s Siri, Amazon’s Alexa and Google Assistant. But what exactly is AI? And how can we trust it to help manage our lives?
What is AI?
Artificial intelligence is a branch of computer science that simulates human intelligence. This means that a machine can be programmed to perform tasks that would otherwise require human involvement. This is achieved by combining appropriate hardware with the software programmed to control it. AI can be taught to “think” by being fed massive amounts of labeled information, known as training data. Training data helps the machine recognize patterns and learn to take actions based on this acquired “experience”. Through this process, AI can “learn” to perform tasks as simple as sorting emails or as complex as driving cars.
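That loop of labeled examples in, learned behavior out, can be sketched in a few lines. Here is a toy “email sorter” (all messages and labels below are made up for illustration) that counts how often each word appears in spam versus non-spam training examples, then labels a new message by which side its words favor:

```python
from collections import Counter

# Toy labeled training data: (message, label) pairs.
training_data = [
    ("win a free prize now", "spam"),
    ("claim your free money", "spam"),
    ("meeting notes attached", "ham"),
    ("lunch with the team tomorrow", "ham"),
]

# "Training": count word frequencies separately for each label.
counts = {"spam": Counter(), "ham": Counter()}
for message, label in training_data:
    counts[label].update(message.split())

def classify(message):
    """Label a new message by which class its words appeared in more often."""
    spam_score = sum(counts["spam"][word] for word in message.split())
    ham_score = sum(counts["ham"][word] for word in message.split())
    return "spam" if spam_score > ham_score else "ham"

print(classify("free prize inside"))       # words seen mostly in spam examples
print(classify("notes from the meeting"))  # words seen mostly in ham examples
```

Real systems use far larger datasets and more sophisticated statistics, but the principle is the same: the machine’s “experience” is nothing more than the patterns present in the examples it was given.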
As AI makes its way into more and more aspects of our lives, it raises concerns about what AI should and should not be able to do. This is where ethics come into play.
What are AI Ethics?
Ethics are a set of guidelines that dictate a code of conduct, separating right from wrong. These guidelines determine how artificial intelligence interacts with human society, taking into account aspects of morality, humanity and even culture.
What makes ethics so important with regard to AI?
Are ethics preventing our robots from turning evil and rising against humanity, as Hollywood movies have led us to believe? In a sense, yes.
One of the earliest mentions of ethical laws regarding intelligent machines comes from the science fiction writer Isaac Asimov. Asimov formulated the following laws to be followed by the robots of his fictional universe:

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given to it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
While Asimov’s “laws” are fictional, they still present a valid template to guide the ever-evolving path of artificial intelligence. They have framed many of the social and moral dilemmas that arise in artificial intelligence.
While machines cannot harbor malice on their own, AI is still susceptible to acquiring biases or being misused by humans with ill intent, such as hackers or terrorists. AI “learns” through the human-generated training data it is provided and is, therefore, influenced by any social and cultural biases present within that data. An AI system hampered by biased training or exploited for malicious ends does not serve humanity at large.
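The mechanism is easy to demonstrate. In the hypothetical sketch below (all data and names are invented), a word-counting resume screener is trained on historical decisions in which applicants from one school were mostly rejected regardless of skill; the model then inherits that pattern and rejects an equally qualified candidate purely because of the school token:

```python
from collections import Counter

# Hypothetical historical hiring data whose labels are skewed: applicants
# from "school_b" were mostly rejected in the past, regardless of skill.
training_data = [
    ("python experience school_a", "hire"),
    ("java experience school_a", "hire"),
    ("python experience school_b", "reject"),
    ("java experience school_b", "reject"),
]

# "Training": count word frequencies separately for each outcome.
counts = {"hire": Counter(), "reject": Counter()}
for resume, label in training_data:
    counts[label].update(resume.split())

def screen(resume):
    """Score a resume by which outcome its words were historically tied to."""
    hire_score = sum(counts["hire"][word] for word in resume.split())
    reject_score = sum(counts["reject"][word] for word in resume.split())
    return "hire" if hire_score > reject_score else "reject"

# Identical skills, different school: the bias in the labels decides.
print(screen("python experience school_a"))  # "hire"
print(screen("python experience school_b"))  # "reject"
```

The model did exactly what it was trained to do; the problem lies entirely in the data it learned from, which is why biased training data is a central concern of AI ethics.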
Technology that replaces manual human labor poses a threat to the livelihoods of those it displaces. This has the potential to widen wealth gaps and concentrate economic gains among those who own AI technology.
While AI’s ability to learn to make smart, complex decisions helps ease the drudgery of human life, it also poses the risk of developing a decision-making process that its creator cannot control. Such a machine would be too unpredictable to guarantee human safety and benefit.
With technology evolving at a rapid pace, machines are more lifelike than ever before. Sophia, a social humanoid robot manufactured by the Hong Kong-based company Hanson Robotics, runs on artificial intelligence, has participated in several high-profile interviews and is capable of holding complex conversations. Despite not being a natural human being, Sophia was granted Saudi Arabian citizenship on October 25, 2017, becoming the first robot to hold citizenship of any nation. This act of granting a robot citizenship, making it equal to a human citizen in the eyes of the law, has blurred the lines between man and machine, sparking debates around sentience, humanity and robot rights.
The realm of AI ethics is, therefore, a dynamic one that requires constant discussion and evolution to keep pace with the advances in technology.