Technological advancement like artificial superintelligence has always been met by apprehension; especially when it ventures into the unknown or when the venture is to create innovations that will change how we experience life. That was the case with airplanes, electricity, space travel, the creation of atomic bombs and a whole host of other discoveries. However, nothing has come close to the apprehension, as well as the excitement that is associated with Artificial superIntelligence. And how ASI could destroy humanity beyond 2019.
As with most things science, pop culture has helped fuel the speculation. From Star Wars, to Stanley Kubrick’s Space Odyssey, the Matrix, Terminator, Resident Evil and more recently Transcendence, Her and other such like movies, the suggestion has been that as machines acquire more cognitive abilities, it is inevitable that they will seek to supplant human beings in the hierarchical order and that would spell the end of humanity.
Dismissing such an apocalyptic conclusion of events is easy. However, when you realize how far AI has come, and that a majority of the researchers in the field believe we will achieve Artificial General Intelligence in the next two decades, the issue of having machines or systems that are more intelligent than humans raises concern.
There is even further cause of worry by the fact that many in the field while agreeing artificial intelligence has an unlimited potential to learn and gain knowledge and refuse to rule out superintelligence, they have no idea at the moment of how to control or prevent the dangers that AI could pose to humanity.
What Is Artificial Superintelligence?
To fully understand artificial superintelligence and the negative impact it can someday have, an understanding of the various levels of artificial intelligence is essential.
The basic definition of artificial intelligence is a machine that can simulate or mimic the intellectual abilities of human beings primary learning. That is the ability to get new information and the rules governing the use of this information, reasoning (using the rules and data to reach definite conclusions, and problem-solving. Other skills depending on the purpose of the AI may include understanding language and speech.
There are three levels of AI;
1. Artificial Narrow Intelligence (ANI)
ANI is the simplest form of AI and the one currently in existence. The difference with the rest of the AI still under research and development is that ANI specializes in a single task or field.
ANI is also present with varying abilities, for example, a calculator and the computer games AI against which you play. It is also known as eak’ AI because it perceives and acts on its situation without the concept of the outside environment.
Still, even among ANI, there are sophisticated systems such as the Google search engine, SIRI, and algorithms for large online stores like Amazon and Alibaba as well as online movie stores like Netflix.
Even advanced sperm filters, marketing algorithms such as the infamous Cambridge Analytica system and the ANI’s used in stock exchanges.
The difference between these modern ANI and the earlier forms is that they can go backward in time and study past behavior. The ANI will then store the information along with whatever has been preprogrammed in them. They will then use this information in current and future instances to make more accurate decisions.
There is further ongoing work in the ANI field that should see them able to diagnose and recommend medical care to patients, prepare taxes, offer legal counsel and drive cars.
2. Artificial General Intelligence (AGI)
Artificial General Intelligence (AGI) is the next step up of artificial intelligence. It means artificial intelligence with human-level capacity across all areas.
While most ANI’s can replicate or even surpass the human ability in one area, they cannot match it or perform any function, in a different space.
The generalist aspect of the human mind is what makes it unique and powerful. An AGI would need to be able to learn as the human brain does through its own experiences with the world instead of just the data programmed in it.
It is projected that the world would see the advent of AGI’s in the 2030s
3. Artificial Superintelligence (ASI)
Artificial Superintelligence (ASI) is the last step in AI, and it will have the ability to surpass or outperform the most intelligent humans in every intellectual factor and by an extreme margin.
Its learning and knowledge will be infinite. It will also have the ability to create better machines than humans which various thinkers believe would result in an intelligence explosion; an event that is feared will leave humans left behind in the evolution curve.
The attainment of artificial superintelligence is the most feared stage of artificial intelligence because it is highly unlikely that any intelligent form will want to lose its autonomy to a less intelligent one.
Artificial Intelligence Negative Impacts
Even before getting to the artificial superintelligence level, there are already several negative impacts being experienced as a result of AI.
1. Massive Job Losses
Every technological innovation has always resulted in a massive loss of employment. Most of the time these have been compensated by the opening up of new fields in the market place; and most of the time requiring little training.
However, artificial intelligence is set to redefine the whole working culture. It does not require supervisors, and as it spreads in all industries and aspects of society, many people will find themselves jobless or working far fewer hours.
From drivers, security people, teachers to doctors, lawyers, pilots, and stock traders, most professionals will be out of work with some already experiencing the effect.
It is also not feasible to suggest all these people will suddenly become programmers and even then AI itself will be doing that.
It will cause upheavals as people look for ways to earn livelihoods and adapt to changing work culture.
2. Pressure on Society’s Support Systems
Most people who have been rendered redundant and who lack any skills to adapt to the job market usually end up on welfare.
However, when a large number of people are rendered obsolete, an already stretched system will not be sufficient any more; and then everyone will feel the impact as taxes rise to help support the social programs.
3. Invasion of Privacy
The use of AI has already led to many issues with privacy. From spying people’s online activities to their daily activities when interacting with other AI-powered systems like Cortana, Echo, and other assistant software.
The already mentioned Cambridge Analytica is a case in point in which AI was used not only to spy on people’s preferences but further use the information obtained to target the individuals with political messages meant to change the outcomes of elections worldwide.
It is a serious ethical and criminal issue, and its discovery after many incidents had happened.
4. The Need for Trial and Error Before Perfecting the System
Every innovation requires some level of testing, and there is risk associated with the initial use in public. However, with all other technology, the creators fully understood the system and knew their potential dangers resulting in the placement of safeguards.
The problem with recent forms of AI is that they are still prone to glitches. Even in the field, the creators do not yet understand how deep learning occurs and how the AI will react to new case scenarios.
Such a situation has happened before in one case, Microsofts chat robot in the persona of a girl turned into a profane Hitler-loving robot after encountering new user behavior in the form of baiting.
A fatal example was the case of a self-driving Tesla where the AI autopilot, camera, and front-facing radar all failed to detect a tractor-trailer mistaking it for a variety of things resulting in an accident.
In Tesla’s defense, however, the driver was repeatedly warned to take over the wheel. Unfortunately, the driver was unresponsive. Here is a link to the full article regarding this incident.
Such risks can only increase with the widespread use of AI as the creators miss out on vulnerabilities and need more years to fix emerging issues.
Dangers of AI
Even as AI advances, the negative impact can only get more significant due to several AI dangers. Some of the risks are inherent due to the way AI works, human error or intentions.
Here is a look at ways in which AI can become dangerous.
1. The AI is programmed for destructive activities
As is often the case with any scientific field, all nations are involved in a race to be the first to possess the most advanced form of AI.
This has inevitably led to the integration of AI into the military. As the AI systems gain more autonomy, we could get to a point where war begins when an AI system responds to either rogue commands or wrongfully interpret a threat.
We could see a scenario when this happens, and no one’s able to shut them down as the AI follows a set goal of defense.
2. Powerful AI systems
Even if Artificial Intelligence’s programming only allows it to do one thing, in the wrong hands, this could have a devastating impact. From fraud through identity theft, cyber attacks, market crashes and even war are all possible.
Unlike other war weapons that require massive capital investment and can be affected by physical distance, all one needs, in this case, is access to an AI system or a good programmer.
The increase in devastating cyber-attacks across nations and corporations, even security organizations like the FBI, mean such a threat is always lurking and maybe just a matter of time.
3. Lack of Fully Aligning One’s Goals to the AI
The lack of aligning one’s goal to the AI’s or the AIs wrongful interpretation of inputs can result in a fatal result on both the user as well as the rest of the public.
Take the simple case where you have an AI-powered vehicle with advanced abilities. If you request to get somewhere fast, the AI will try to find the most efficient way to do which may clash with other people resulting either in traffic accidents, and you may have little to no control in the midst of all these events.
Writing efficient orders relies on how specific the goal can be, which, unfortunately, is not how human function and speech work. On the other hand, AI cannot fully translate human intention through speech.
4. Lack of Regulation and Ethical Framework in Developing AI
As already mentioned, developing AI has the advantage of being less capital intensive compared to other technological innovations. It means a group of skilled programmers can create an A.I. in the privacy of their own basement.
Given that most of the frontrunners are keeping their breakthroughs secret until they unveil their new product, little is known about any safeguards and threats posed by each advanced AI introduced for public use.
This lack of regulation means humanity could find itself using AI with capabilities users do not understand, and even the creators have no control or safeguards.
The Existential Threat To Humanity
The main danger that most scholars in the field of tech and science like Elon Musk, Bill Gates and Stephen Hawking are apprehensive about is what may happen the moment AI gets to the Artificial General Intelligence level and Artificial Superintelligence.
These two levels of artificial intelligence are no longer an object of science fiction but only a matter of time. It is surprising that little research has been spent on the kind of safeguards that can be put in place.
These AI safeguards would be there to avoid the ultimate danger AI may pose to humanity. Like the TERMINATOR, for example, minus the time travel of course.
The argument behind the existential threat posed by AI to humans is made on three primary fronts.
First, a super intelligent AI may reach the level of self-awareness; and start having its own goals different from what was programmed in it.
Given its intellectual capacity, it seeks to achieve these goals which may not align with humanity’s goals. Another argument based on orthogonality states that a machine even with superhuman intelligence will still stick to its original purpose.
The only challenge is that such AI will develop self-preservation as a subgoal; And Like humans, the AI would seek to fulfill this through any means necessary.
They would regard any obstacles including being switched off as threats to its goal creating a source of conflict.
The final argument borrows the self-preservation premise. This time ASI would use self-preservation to pit themselves in competition for the limited resources. One resource, in particular, is energy. The ASI could then seek to eliminate humans to avoid competition for these limited resources.
Whatever your inclination towards AI, it is essential to know that surveys done among researchers in the field has the most pessimistic ones saying Artificial Superintelligence (ASI) will be reached in the next 75 years. The optimistic researchers would have that technology invented much sooner.
It could ultimately be man’s last innovation should intelligence explosion occur. And the consequences may have a catastrophic impact on humankind’s existence.