AI could lead to ‘global annihilation’, experts say

Dozens of AI industry leaders, academics and even some celebrities on Tuesday called for reducing the risk of global annihilation from artificial intelligence, arguing in a brief statement that the threat of extinction from AI must be treated as a global priority.

“Mitigating the risk of extinction from AI must be a global priority, along with other societal-scale risks such as pandemics and nuclear war,” says the statement published by the Center for AI Safety.

The statement was signed by leading industry officials, including OpenAI CEO Sam Altman; the so-called “godfather” of AI, Geoffrey Hinton; senior executives and researchers at Google DeepMind and Anthropic; Kevin Scott, chief technology officer at Microsoft; Bruce Schneier, the pioneer of Internet security and encryption; climate advocate Bill McKibben; and musician Grimes, among others.

The statement highlights broad concerns about the ultimate danger of unchecked artificial intelligence. AI experts said society is still a long way from developing the kind of artificial general intelligence that is the stuff of science fiction; today’s cutting-edge chatbots largely reproduce patterns from the training data they received and do not think for themselves.

Still, the flood of hype and investment in the AI industry has led to calls for regulation early in the AI era, before major setbacks occur.

The statement follows the viral success of OpenAI’s ChatGPT, which helped fuel an arms race in the tech industry towards artificial intelligence. In response, a growing number of lawmakers, advocacy groups and tech experts have raised alarms about the potential for a new crop of AI-powered chatbots to spread misinformation and destroy jobs.

Hinton, whose pioneering work helped shape today’s AI systems, previously told CNN that he decided to step down from his role at Google and speak out about the technology after “suddenly” realizing “that these things are getting smarter than we are.”

Dan Hendrycks, director of the Center for AI Safety, said in a tweet on Tuesday that the statement, first proposed by David Krueger, an AI professor at the University of Cambridge, does not preclude society from addressing other types of AI risk, such as algorithmic bias or misinformation.

Hendrycks likened Tuesday’s statement to atomic scientists “issuing warnings about the very technologies they’ve created.”

“Societies can manage multiple risks at the same time; it’s not ‘either/or’ but ‘yes/and’,” Hendrycks tweeted. “From a risk management perspective, just as it would be unwise to exclusively prioritize present harms, it would also be unwise to ignore them.”

Source: CNN Brasil
