Is artificial intelligence capable of attacking humanity? - study

Academic researchers and technology experts from around the world have debated whether super-intelligent AI can be contained or prevented from attacking humanity. The results are concerning.

Will AI be capable of overpowering humanity? (photo credit: Wikimedia Commons)

Any sci-fi fan can tell you that a good apocalypse begins and ends with artificial intelligence and a god complex.

But the main question is how close we really are to bringing the likes of Skynet from the Terminator franchise or Brainiac from DC Comics into the world, and whether it is possible to control a being whose intelligence far exceeds that of the brightest minds humanity has to offer.

A number of scientists, philosophers and technology experts at international academic institutions published an article analyzing the danger, stating among other things that if such an entity were to arise, it would not be possible to stop it. The study's authors, who come from academic institutions and technology bodies in the US, Australia and Spain, published their findings in the Journal of Artificial Intelligence Research. The article presents a complex picture of the relationship between humanity and its smart devices.

The field of artificial intelligence has fascinated and frightened the human race for decades, in fact since the invention of the computer, but until recently the potential dangers were confined to science fiction movies and books. The researchers pointed out that in recent years artificial intelligence has developed at an incredible pace, with new techniques such as machine learning and reinforcement learning being successfully applied in a large number of fields.

"Whether or not we're aware of it, AI significantly affects many aspects of human life, the way we experience products and services, from choice to consumption," the researchers write. "Examples of this include improved medical diagnosis through image processing, personalized recommendations for movies and books, e-mail filtering and more. Smart devices, which we use daily, activate a multitude of AI applications."

Artificial intelligence (credit: PIXABAY/WIKIMEDIA)

As an example, the researchers note that smartphones not only use AI but also run self-learning software and store vast amounts of information about their users.

The AI wins the battle

One of the most important milestones on the path toward super-intelligent AI has been a series of breakthroughs in machines' computational capabilities, in algorithm design and, of course, in communication technologies. "The ability of machines to defeat human opponents in games such as chess, poker and trivia is emblematic of a trend fueled by exponential growth in computer processing power, which allows them to defeat the best human minds," the researchers said.

The researchers note that, thanks to this technological progress, the discussion of artificial intelligence as a potential disaster for humanity is experiencing a resurgence. "These risks range from machines that cause significant disruptions to the labor market to drones and other autonomous weapons that can be used against humans," they emphasize.

The researchers claim that the greatest risk is super-intelligent artificial intelligence, which surpasses the best human minds in every possible field. To understand the source of the danger, we need to return to author Isaac Asimov's laws of robotics, which for years represented the guidelines designed to prevent machines from attacking us.


Asimov's laws state:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
  3. A robot must protect its own existence, as long as such protection does not conflict with the First or Second Law.
  4. A "zeroth" law, added later, states that a robot may not harm humanity or, by inaction, allow humanity to come to harm.

"These laws rely on flawed assumptions," state the authors of the study: "on the desire and ability of programmers to encode these laws into an algorithm, and on the premise that the algorithm cannot autonomously deviate from those laws or reprogram them itself."




The authors of the study claim that this approach was sound for designing "simple" systems that make autonomous decisions on behalf of humans, but it is far less relevant when it comes to super-intelligent AI.

"Maximize survival"

The study notes that the timing of the discussion is not accidental; it arises against the background of examples of how far AI has already developed. One of them is an AI that mastered arcade games using reinforcement learning, without human intervention.

"AI is driven by maximizing some notion of online reward," the study's authors explain. "It does not require supervision or commands from humans. This points in the direction of machines whose goal will be to maximize their survival without human programmers.
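The setup the researchers describe, an agent shaping its own behavior purely to maximize a reward signal, with no human telling it how to act, can be illustrated with a minimal sketch. The toy "corridor" environment, the state and action names, and all the numbers below are invented for illustration; this is standard tabular Q-learning, not the study's code:

```python
import random

random.seed(0)  # fixed seed so this illustrative run is reproducible

# Toy environment: a 5-state corridor. Entering the rightmost state
# yields reward 1; everything else yields 0. The agent is never told
# to "go right" -- it discovers that policy from the reward alone.
N_STATES = 5          # states 0..4; state 4 is the rewarding terminal
ACTIONS = [-1, +1]    # step left or step right
ALPHA, GAMMA, EPS = 0.5, 0.9, 0.3

q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def choose(s):
    # epsilon-greedy: usually exploit what was learned, sometimes explore
    if random.random() < EPS:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: q[(s, a)])

for episode in range(500):
    s = 0
    while s != N_STATES - 1:
        a = choose(s)
        s2 = min(max(s + a, 0), N_STATES - 1)   # walls at both ends
        r = 1.0 if s2 == N_STATES - 1 else 0.0
        # Q-learning update: pull the estimate toward reward + lookahead
        best_next = max(q[(s2, b)] for b in ACTIONS)
        q[(s, a)] += ALPHA * (r + GAMMA * best_next - q[(s, a)])
        s = s2

# The learned greedy policy: in every non-terminal state, go right (+1)
policy = {s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(N_STATES - 1)}
print(policy)
```

The point of the sketch is the one the researchers make: nothing in the loop consults a human. The reward signal alone is enough to produce goal-directed behavior.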

Artificial intelligence (credit: INGIMAGE)

"A super-intelligent AI has the tools to mobilize a variety of resources in order to achieve goals that may be incomprehensible to humans, let alone controllable.

"A superintelligence, given the task of maximizing happiness in the world, may think it is more effective to destroy all life on Earth and create faster computer simulations of happy thoughts," the researchers explain. "A superintelligence controlled by an incentive method may not trust humans to deliver the promised reward or may worry that the human operator will fail to acknowledge the achievement of the set goals."


The researchers state that the ability of modern computers to adapt using sophisticated machine learning algorithms makes it difficult for us to understand or prepare for the possibility of an uprising by such a super AI. The fear is that a single super-intelligent AI could hold every possible computer program in its memory at once, and any program written to prevent machines from harming humans could be erased or rewritten, without our being able to detect or stop it in time.
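The fear that no guard program can reliably vet every other program echoes a classic computability argument: a perfect "is this program harmful?" checker can always be defeated by a program built to do the opposite of whatever the checker predicts. The sketch below is only a thought-experiment in code, with all names invented; it is not the study's proof, just an illustration of the diagonal trick behind it:

```python
# Thought-experiment (all names invented): suppose we had a checker
# that claims to decide whether running a program would cause harm.
# We can always construct a "contrarian" program that consults the
# checker about itself and then does the opposite, so no checker can
# be right about every program.

def build_contrarian(claims_harmful):
    """Given any purported harm-checker, return a program that
    refutes the checker's verdict about itself."""
    def contrarian():
        if claims_harmful(contrarian):
            return "behaves safely"      # checker said harmful -> act safe
        else:
            return "behaves harmfully"   # checker said safe -> act harmful
    return contrarian

def naive_checker(program):
    # A (deliberately naive) checker that declares every program safe.
    return False

prog = build_contrarian(naive_checker)
print(naive_checker(prog), "->", prog())  # prints: False -> behaves harmfully
```

Whatever verdict a checker returns for its own contrarian, the contrarian's behavior contradicts it, which is the intuition behind the claim that a containment program for an arbitrary super-intelligence cannot be guaranteed to work.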

In an era when machines hold all our sensitive information, some of them engaged in policing or military operations worldwide, and with terrorist organizations and hostile countries constantly pursuing technological breakthroughs in the field, it is hard not to think of this as a clear and present danger.