Renowned British-Canadian computer scientist Geoffrey Hinton, often referred to as the "Godfather of AI," issued stark warnings about the potential dangers of artificial intelligence after receiving the 2024 Nobel Prize in Physics for his pioneering work on machine learning and artificial neural networks. "When it comes to humanity's future, I'm not particularly optimistic," said Hinton, painting a grim picture of the path ahead, according to a report by The Independent.
Hinton's concerns center on the rapid advancement of AI technology, which he believes could soon surpass human intelligence and escape human control. "I suddenly changed my mind about whether these objects will be smarter than us. I think they are very close to it today and will be much smarter than us in the future... How are we going to survive that?" he explained, according to Popular Science.
"There is a 10-20 percent probability that within the next thirty years, Artificial Intelligence will cause the extinction of humanity," stated Hinton, as reported by CNN. He emphasized that this risk has increased due to the rapid development of AI technologies.
Having left Google last year to freely voice his concerns, Hinton stressed the need for government regulation and increased research into AI safety. "Just leaving it to the profit motive of large companies is not going to be sufficient to make sure they develop it safely," he warned, pointing out the limitations of relying on market forces alone, according to Capital. He added that "the only thing that can force those large companies to do more safety research is government regulations."
"We will become like three-year-old children," Hinton said, comparing the potential control of AI over humanity to the relationship between adults and children, illustrating the power imbalance that could emerge if AI systems become more intelligent than humans, according to Mail Online. "You look around and there are very few examples of something more intelligent being controlled by something less intelligent... That makes you wonder whether when this artificial intelligence becomes smarter than us, it will take control."
Despite acknowledging the immense benefits of AI in areas such as healthcare, Hinton remains apprehensive about its unchecked development. "We also have to worry about a number of possible negative consequences. In particular, the threat that these things could get out of control," he cautioned, according to Mirror. He warned that AI heightens the risk of cyberattacks, phishing scams, fake videos, and ongoing political interference.
John Hopfield, a 91-year-old emeritus professor at Princeton University who shared the 2024 Nobel Prize in Physics with Hinton, shares his apprehensions. "As a physicist, I am very worried about something that is uncontrolled, something I do not understand well enough to know what limits could be imposed on this technology," said Hopfield, according to Phys.org. He noted that scientists still do not fully understand how modern AI systems work.
"It is difficult to understand how we could prevent malicious actors from using it for negative purposes," Hinton stated, according to LIFO. In response to these concerns, he suggests that more resources should be dedicated to researching AI safety. "Investment in safety research needs to be increased 30-fold," he urged, emphasizing the need for a boost in efforts to mitigate potential risks, according to Popular Science. He emphasized the urgency for regulatory measures in AI development.
"I comfort myself with the normal excuse: if I hadn't done it, someone else would have," he said, reflecting on his contributions to AI, as mentioned in TIME Magazine.
This article was written in collaboration with generative AI company Alchemiq