After leaving Google to issue warnings about the risks of digital intelligence, Dr Geoffrey Hinton has been flooded with requests for speaking engagements
Just days after resigning from Google to raise awareness about the potential risks of digital intelligence, Dr Geoffrey Hinton, often referred to as the “godfather of artificial intelligence,” said he had received requests for help from Bernie Sanders, Elon Musk, and the White House. In 2018, Dr Hinton received the Turing Award, the highest accolade in computer science, for his contributions to “deep learning,” alongside Yann LeCun of Meta and Yoshua Bengio of the University of Montreal.
Hinton’s work on “deep learning” led to the development of the technology that now underpins the AI revolution. That technology grew out of his efforts to understand the human brain, work that ultimately convinced him that digital brains might eventually outperform biological ones. However, the London-born psychologist and computer scientist may not offer the advice the powerful are hoping to hear.
“Inevitably, the US government has numerous concerns regarding national security. However, I disagree with their perspective,” he told the Guardian. “For instance, I believe that the Defense Department thinks they are the only ones capable of handling these technologies, just as they think they are the only ones capable of using nuclear weapons.”
“I am a socialist,” Hinton added. “I do not think that private individuals or companies should own the media or the ‘means of computation.’”
While Hinton has been vocal about the risks of digital intelligence, he admits he is not well versed in policy matters and has no clear solution to offer. “I’m just someone who’s suddenly become aware that there’s a danger of something really bad happening,” he said. “I wish I had a nice solution, like: ‘Just stop burning carbon, and you’ll be OK.’ But I can’t see a simple solution like that.”
“Over the past five decades, my efforts have been focused on creating computer models that emulate the brain’s learning process, with the aim of gaining a deeper understanding of how the brain learns,” he said. “Recently, however, I’ve come to the realization that these large-scale computer models may actually surpass the brain’s capabilities.

“That realization raises concerns, and we must ask whether there is any course of action that addresses it. Unfortunately, I’m not very hopeful, because there are no known examples of less intelligent entities controlling more intelligent ones.

“To illustrate the magnitude of the issue, imagine an entity that is as much more intelligent than us as we are more intelligent than a frog. It’s easy to suggest that we simply avoid connecting such entities to the internet, but even if they only interact with us, they could potentially manipulate us into taking certain actions.”
The tools he helped create could also aid authoritarian regimes in undermining truth or manipulating elections. Considering these dangers, it is not difficult to see the scale of the challenge. Moreover, given that Americans cannot even agree on barring teenage boys from obtaining assault rifles, it is clear that addressing these threats will be difficult.
In the Uvalde shooting, in which 21 people, including children, were killed, over 200 police officers were reluctant to enter the building because the gunman had an assault rifle. Yet even after such incidents, there is no agreement to ban assault weapons, suggesting that a political system this fundamentally dysfunctional is inadequate to deal with these threats.