It's the end of the world as we know it: 'Godfather of AI' warns nation of trouble ahead

One of the world’s foremost architects of artificial intelligence warned Wednesday that unexpectedly rapid advances in AI – including its ability to learn simple reasoning – suggest it could someday take over the world and push humanity toward extinction.

Geoffrey Hinton, the renowned researcher and "Godfather of AI," recently quit his high-profile job at Google so he could speak freely about the serious risks he now believes may accompany the artificial intelligence technology he helped usher in, including user-friendly applications like ChatGPT.

Hinton, 75, gave his first public remarks about his concerns at the MIT Technology Review’s AI conference. His comments appeared to rattle the audience of some of the nation’s top tech creators and AI developers.

Asked by the panel’s moderator what was the “worst case scenario that you think is conceivable,” Hinton replied without hesitation. “I think it's quite conceivable," he said, "that humanity is just a passing phase in the evolution of intelligence.”

Hinton then offered an extremely detailed scientific explanation of why that's the case, one that would be indecipherable to anyone but an AI creator like himself. Then, reverting to plain English, he said that he and other AI creators have essentially created an immortal form of digital intelligence: it might be shut off on one machine to bring it under control, but it could easily be brought “back to life” on another machine if given the proper instructions.

“And it may keep us around for a while to keep the power stations running. But after that, maybe not,” Hinton said. “So the good news is we figured out how to build beings that are immortal. But that immortality,” he said, “is not for us.”


The British-Canadian computer scientist and cognitive psychologist spent the past decade at Google, most recently as a vice president and Google Engineering Fellow. He announced his departure in a May 1 article in the New York Times, and has since given several interviews about how artificial intelligence systems could soon surpass the amount of information a human brain holds, and possibly grow out of control.

Hinton also stressed in a tweet that he did not leave Google in order to criticize it. "Actually, I left so that I could talk about the dangers of AI without considering how this impacts Google," Hinton said. "Google has acted very responsibly."

On Wednesday, he gave a more measured response, saying that Google, Microsoft and other developers of public-facing AI platforms are operating in an economically competitive world: if they don't develop the technology, somebody else will. Governments will face the same problem, he said, given all of the extraordinary things artificial intelligence will allow them to do.

Hinton singled out China on several occasions, saying it will continue to develop AI because of its rush toward global domination, even if the U.S. Congress and the Biden administration push through some restrictions that they are contemplating.

"I think if you take the existential risk seriously, as I now do – I used to think it was way off, but I now think it's serious and fairly close – it might be quite sensible to just stop developing these things any further,” Hinton said. “But I think it's completely naive to think that would happen.”

“Even if the U.S. stops developing it, the Chinese won't,” he said. “They're going to be used in weapons. And just for that reason alone, governments aren't going to stop developing.”

On Thursday, the Biden administration is set to roll out a new set of actions aimed at promoting responsible innovation in artificial intelligence while protecting the rights and safety of Americans.

A senior administration official said managing the risks of AI is at the core of the new effort, given the threats the technology poses across a broad array of applications. Those include the hacking of autonomous vehicles and other AI-driven systems, privacy risks such as enabling real-time surveillance, and the potential for job displacement from automation, the official said, speaking on the condition of anonymity to discuss administration efforts.

Not every expert agrees AI will destroy us

Not everyone agrees with Hinton’s most dire predictions, including in some cases Hinton himself. In a tweet Wednesday, he acknowledged that he was dealing, to some degree, with hypotheticals. “It's possible that I am totally wrong about digital intelligence overtaking us,” he said. “Nobody really knows, which is why we should worry now.”

Other computer security experts downplayed Hinton’s concerns, saying that artificial intelligence is essentially an extremely sophisticated programming platform that can go only as far as humans allow it to.

It cannot, for instance, become the kind of sentient, self-aware and all-knowing technology that created the Terminator of movie fame, according to Michael Hamilton, a co-founder of the Critical Insight risk management firm and former vice-chair of the Department of Homeland Security’s State, Local, Tribal and Territorial Government Coordinating Council.

“I think everybody needs to take a step back here and get away from the hyperbole, because we don't know that this is going to turn into Skynet,” Hamilton told USA TODAY, referring to the artificial intelligence system that destroys humanity in the Terminator movie franchise. “Everybody's saying AI is going to become sentient or whatever. No, it's not. It’s a computer. It does what you tell it to do.”

Simple reasoning, or thinking like a human

AI is not fully there yet, Hinton said Wednesday, but it is making such rapid progress that he has grown frightened over the past several months.

Currently, he said, AI has an IQ of roughly 80 or 90, but developers could conceivably raise that to 210, which would put it above all but a handful of humans on the planet. Already, he said, AI's ability to do "simple reasoning" has startled him.

For example, he said, he told an AI platform that the rooms in his house were painted white, blue and yellow, and that the yellow walls were fading to white.

“So what should I do if I want all the walls to be white in two years’ time?” he said he asked the AI. “And it said you should paint all the blue rooms yellow. That's not the natural solution. But it works, right?”

“That's pretty impressive, common-sense reasoning of the kind that it's been very hard to get AI to do” until recently, he added, because the answer incorporated both the concept of fading paint and the passage of time: repainted in yellow, the blue rooms would also fade to white within two years.
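Hinton did not say which system he queried, but exchanges like this are easy to reproduce against any modern chat model. Below is a minimal sketch using the OpenAI Python client; the model name and the exact wording of the prompt are illustrative assumptions, not a record of Hinton's actual query.

    # A sketch of posing Hinton's paint puzzle to a chat model.
    # Assumptions: the OpenAI Python client (v1) and a "gpt-4" model
    # name; Hinton did not identify the system or prompt he used.
    from openai import OpenAI

    client = OpenAI()  # reads the OPENAI_API_KEY environment variable

    prompt = (
        "The rooms in my house are painted white, blue, or yellow, "
        "and the yellow paint is fading to white. What should I do "
        "if I want all the walls to be white in two years' time?"
    )

    response = client.chat.completions.create(
        model="gpt-4",  # assumed; any capable chat model would do
        messages=[{"role": "user", "content": prompt}],
    )

    # An answer in the spirit of the anecdote: repaint the blue rooms
    # yellow and let the fading do the rest.
    print(response.choices[0].message.content)

The puzzle is a useful probe because a correct answer has to combine two facts the prompt never connects explicitly: that yellow paint fades to white, and that two years is enough time for it to do so.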

Rapid advancement and taking control

Hinton says he is worried about the rapid – even astonishing – pace at which AI is advancing and surpassing the most ambitious expectations of many experts in the field. He believes that there is significant potential for AI to become much smarter than humans, allowing it to excel at manipulation and pose a serious threat in a variety of ways.

If AI continues on its trajectory and surpasses human intelligence, he said, it could become extremely difficult, if not impossible, to control.

It is rare for a more intelligent entity to be controlled by a less intelligent one, he said, which means AI could find ways to bypass restrictions on its activities and its access to technology, and eventually manipulate people to serve its own purposes.

AI's ability to soak up knowledge and data from humans and vast stores of information raises serious concerns about its potential to manipulate people by spreading misinformation, Hinton said.

If and when AI becomes significantly smarter and more knowledgeable, Hinton says, it could use this advantage to deceive people and make them believe things that aren’t true.

"These things will have learned from us, by reading all the novels that ever were and everything Machiavelli ever wrote, how to manipulate people," Hinton said. "You won't realize what's going on. You'll be like a two-year-old who's being asked do you want the peas or the cauliflower and doesn't realize that you don't have to have either."

And, he warned, "It turns out if you can manipulate people, you can invade a building in Washington without ever going there yourself."

Helping the rich get richer and the poor get poorer

Hinton, joining a chorus of other Big Tech experts, said he is concerned that AI could put huge numbers of people out of work, causing wholesale job displacement in certain industries.

That, combined with AI's potential for disinformation, could leave many vulnerable people manipulated and unable to discern truth from falsehood, which in turn could lead to societal disruption and serious economic and political consequences.

There are many ways in which AI will make things better for companies and even workers, who will benefit from its ability to churn out letters and respond to large volumes of email or queries.

"It's going to cause huge increases in productivity," Hinton said. "My worry is that those increases in productivity are going to go to putting people out of work and making the rich richer and the poor poorer. And as you do that, as you make that gap bigger, society gets more and more violent."

Another concern of Hinton’s, he said, is the potential for humans with malicious intent to misuse AI technology for their own purposes, such as making weapons, inciting violence or manipulating elections.

That’s one of the reasons he wants to play a role in establishing policies for the responsible use of AI, including the consideration of ethical implications.

"I don't think there's much chance of stopping development. What we want is some way of making sure that even if they're smarter than us, they're going to do things that are beneficial for us," he said. "But we need to try and do that in a world where there's bad actors who want to build robot soldiers that kill people."

No obvious solutions to AI problems

Hinton stressed that he retired from Google not to criticize it or other AI developers, but so he could have the freedom to openly discuss the risks associated with artificial intelligence and related technologies like machine learning. He said in his talk Wednesday that he believes it is important to address AI safety issues, and to foster efforts to ensure the technology has a positive impact on society, without constraints from corporate interests.

Asked what developers should do to minimize the chances of potentially catastrophic evolution of AI, Hinton apologized and said he has no good answers.

"It seems very hard to me. So, I'm sorry. I'm sounding the alarm and saying we have to worry about this. And I wish I had a nice simple solution I can push but I don't," Hinton said. "But I think it's very important that people get together and think hard about it and see whether there is a solution. It's not clear there is a solution."

