AI CEOs, experts worried about the “risk of extinction” from AI

AI researchers, engineers, and CEOs have released a statement expressing concern that AI poses a danger serious enough to wipe out humanity, comparing the threat level to pandemics and nuclear war. “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war,” reads their 22-word statement.

The statement was published by the Center for AI Safety, a San Francisco-based non-profit, and co-signed by prominent figures such as OpenAI CEO Sam Altman, Google DeepMind CEO Demis Hassabis, as well as Geoffrey Hinton and Yoshua Bengio — two of the three AI researchers who won the 2018 Turing Award, sometimes referred to as the “Nobel Prize of computing.”
