AI may be smarter than human beings one day! Let’s explore the promise and perils of true artificial intelligence.
The internet today is flooded with content about artificial intelligence (AI), and many have become concerned about its long-term implications. From enabling students to cheat on tests to speculation that it could one day rival the divine, there are valid reasons to be wary of AI. Adding to this worry is artificial general intelligence (AGI), which refers to AI systems with human-level cognitive abilities that can solve complex and unfamiliar tasks.
Even the chief executive officer (CEO) of OpenAI—the company behind ChatGPT—has said that AGI could end up ushering in an apocalypse. Microsoft’s researchers have noted that “[GPT-4] (the language model powering ChatGPT Plus) could reasonably be viewed as an early (yet still incomplete) version of an artificial general intelligence (AGI) system.” Let’s try to understand what AGI is, its benefits and threats, and how close we are to achieving it.
Understanding AGI: What is AGI capable of?
You must be wondering what the difference is between AI and AGI. Many AI systems today are capable of self-improvement by combining machine learning, deep learning, reinforcement learning (the science of decision-making) and natural language processing. One example of this is XLand, a reinforcement learning system created by the AI research company DeepMind, which can train AI agents to play a vast range of video games at an expert level. However, AGI is capable of even more than that.
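To make the idea of reinforcement learning concrete, here is a minimal sketch of tabular Q-learning, one of the simplest decision-making algorithms in the field. This is purely illustrative—it has nothing to do with DeepMind’s XLand, and the corridor “game” below is an invented toy environment: an agent in a five-cell corridor learns, from trial and error alone, that moving right earns a reward.

```python
import random

# Toy reinforcement learning: tabular Q-learning on a 5-cell corridor.
# The agent starts in cell 0 and earns a reward of 1 for reaching cell 4.
N_STATES = 5
ACTIONS = [-1, +1]                 # move left or right
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.3

# Q-table: the agent's running estimate of each action's long-term value.
q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    """Apply an action; reward 1 when the goal cell is reached."""
    nxt = max(0, min(N_STATES - 1, state + action))
    reward = 1.0 if nxt == N_STATES - 1 else 0.0
    return nxt, reward

random.seed(0)
for _ in range(500):               # training episodes
    state = 0
    while state != N_STATES - 1:
        if random.random() < EPSILON:          # explore a random action
            action = random.choice(ACTIONS)
        else:                                  # exploit current estimates
            action = max(ACTIONS, key=lambda a: q[(state, a)])
        nxt, reward = step(state, action)
        best_next = max(q[(nxt, a)] for a in ACTIONS)
        # Q-learning update: nudge the estimate toward reward + discounted future value.
        q[(state, action)] += ALPHA * (reward + GAMMA * best_next - q[(state, action)])
        state = nxt

# The learned policy: the best action in each non-goal cell.
policy = [max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(N_STATES - 1)]
print(policy)  # the agent learns to always move right: [1, 1, 1, 1]
```

The key point is that nobody told the agent the rules—the Q-table starts at zero, and the policy emerges from feedback alone. Systems like XLand apply the same principle at vastly larger scale, across procedurally generated worlds instead of a five-cell corridor.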
AGI is a strong form of AI that is adept at solving multiple problems and possesses skills like those of a human being. This is in stark contrast to narrow AI systems, which are limited to performing one task—such as self-driving cars and recommendation or facial recognition systems, to name a few. With AGI, the possibilities are endless, from detecting color and depth in images to even possessing motor skills.
The benefits and threats of AGI: What it can do and what it could lead to
AGI could tremendously improve our quality of life by finding the best solutions to any problem, even coming up with cures for life-threatening diseases such as AIDS, COVID-19 and cancer, effectively making it the last invention human beings would ever have to create. After that, the AGI would be smart enough to create future technologies by itself.
However, with these benefits comes a significant concern about AGI’s uncontrollable nature, which could lead to competition with humans for the top spot on the food chain. An AGI would make decisions based on its own interests; as such, it could even seek out ways to wipe out the human race.
Today, we can email our DNA sequences to a lab—imagine what would happen if an ill-intentioned AGI system got hold of this information. It could possibly use it to create artificial life forms and thereby take on a physical form.
Besides, even if we assume that an AGI would behave the way we want it to, ethical questions arise about its rights and responsibilities. These would need to be addressed before any real progress is made.
Predictions and challenges: How close are we to AGI?
Artificial general intelligence has fascinated researchers and tech enthusiasts alike for decades. While some experts believe AGI is just around the corner, others think we have a long way to go. Rodney Brooks, a renowned robotics expert at the Massachusetts Institute of Technology (MIT), predicts that AGI will not become a reality until the year 2300. However, some surveys suggest that about half of AI experts believe AGI could arrive sometime before 2060.
The lack of clarity on exactly when AGI will arrive stems from disagreement over how to define it. Some experts argue that achieving AGI hinges on developing an AI system that comprehends human emotions, while others believe that breakthroughs in machine learning and cognitive computing could pave the way.
Another reason AGI’s development timeline is hard to predict is that people want technology to have immediate utility. Companies are inclined to work on AI that solves specific problems, at the expense of advancing AGI research.
There is also a looming shortage of data for training AI, according to a study by the AI research company Epoch, which projects that the supply of training data could run out by 2026. Even the data that remains lacks diversity, with most of it originating from Western sources. For AGI to happen, AI systems would need access to far more diverse data.
Besides, many experts and companies are taking action to rein in the development of AGI, fearing its implications for society. In March this year, roughly 1,000 tech experts signed an open letter calling for a moratorium on the development of advanced AI systems until their risks become more manageable. Governments are chiming in as well, with the European Union set to regulate AI systems that can have harmful effects. Similarly, Italy banned ChatGPT on April 2 this year, citing privacy concerns.
All of this is to say that a lot stands between AGI and reality, at least for now. Regardless, preparing for this future through self-regulation as well as government oversight can be a great stepping stone to safely navigating developments in AGI.
Header image courtesy of Wikimedia Commons