Artificial intelligence raises risk of extinction, experts say in new warning

Scientists and tech industry leaders, including high-level executives at Microsoft and Google, issued a new warning Tuesday about the perils that artificial intelligence poses to humankind.

“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war,” the statement said.

Sam Altman, CEO of ChatGPT maker OpenAI, and Geoffrey Hinton, a computer scientist known as the godfather of artificial intelligence, were among the hundreds of leading figures who signed the statement, which was posted on the Center for AI Safety's website.

The latest warning was intentionally succinct — just a single sentence — to encompass a broad coalition of scientists who might not agree on the most likely risks or the best solutions to prevent them, said Dan Hendrycks, executive director of the San Francisco-based nonprofit Center for AI Safety, which organized the move.

“There’s a variety of people from all top universities in various different fields who are concerned by this and think that this is a global priority,” Hendrycks said. “So we had to get people to sort of come out of the closet, so to speak, on this issue because many were sort of silently speaking among each other.”

He compared it to nuclear scientists in the 1930s warning people to be careful even though “we haven’t quite developed the bomb yet.”

“Nobody is saying that GPT-4 or ChatGPT today is causing these sorts of concerns,” Hendrycks said. “We’re trying to address these risks before they happen rather than try and address catastrophes after the fact.”

“Given our failure to heed the early warnings about climate change 35 years ago, it feels to me as if it would be smart to actually think this one through before it’s all a done deal,” he said by email Tuesday.

An academic who helped push for the letter said he used to be mocked for his concerns about AI existential risk, even as rapid advancements in machine-learning research over the past decade exceeded many people’s expectations.

David Krueger, an assistant computer science professor at the University of Cambridge, said some of the hesitation in speaking out is that scientists don’t want to be seen as suggesting AI “consciousness or AI doing something magic,” but he said AI systems don’t need to be self-aware or set their own goals to pose a threat to humanity.

“I’m not wedded to some particular kind of risk. I think there’s a lot of different ways for things to go badly,” Krueger said. “But I think the one that is historically the most controversial is risk of extinction, specifically by AI systems that get out of control.”

___

O'Brien reported from Providence, Rhode Island. AP Business Writers Frank Bajak in Boston and Kelvin Chan in London contributed.

