
Scientists and Experts Signed a Statement Urging Governments to Prioritize the Mitigation of Risk of Extinction from Artificial Intelligence (AI)

Two weeks ago, 80 scientists who are experts on artificial intelligence (AI), along with more than 200 “other notable figures”, expressed their concern about the dangers of AI by signing a statement urging government officials to mitigate the risk of extinction from AI. As stated on the Center for AI Safety (CAIS) webpage:

“AI experts, journalists, policymakers, and the public are increasingly discussing a broad spectrum of important and urgent risks from AI. Even so, it can be difficult to voice concerns about some of advanced AI’s most severe risks.”

According to its website, CAIS exists “to ensure the safe development and deployment of AI”. The purpose of the statement is to “open up a discussion” about the risks that AI poses to the future of humanity and to “create common knowledge of the growing number of experts and public figures who also take some of advanced AI’s most severe risks seriously”. The statement reads:

“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”

This succinct statement from scientists, academics, corporate executives, engineers, and other concerned individuals serves as a dire warning to the public, and especially to governments and corporations that are unquestioningly embracing AI and related technologies.

The signatories include Ian Goodfellow, Principal Scientist at Google DeepMind; John Schulman, Co-Founder of OpenAI; Yi Zeng, Professor and Director of the Brain-inspired Cognitive AI Lab at the Institute of Automation; Aaron Xiang, Executive Chairman of the UN Sciences and Technology Organization; Sam Altman, CEO of OpenAI; and Geoffrey Hinton, Emeritus Professor of Computer Science at the University of Toronto, widely known as “the Godfather of AI”, among others.

In a report from the AI Index, an independent initiative at the Stanford Institute for Human-Centered Artificial Intelligence (HAI) formed by an interdisciplinary group of experts from academia and industry, 57% of computer scientists surveyed believe that current progress in this technology is “moving us toward AGI (artificial general intelligence)”, and 58% believe that concerns about AGI should be taken seriously.

This annual report “tracks, collates, distills, and visualizes data relating to artificial intelligence” so that governments and industry leaders can “take meaningful action to advance AI responsibly and ethically with humans in mind”. The report also states:

“These systems demonstrate capabilities in question answering, and the generation of text, image, and code unimagined a decade ago, and they outperform the state of the art on many benchmarks, old and new. However, they are prone to hallucination, routinely biased, and can be tricked into serving nefarious aims, highlighting the complicated ethical challenges associated with their deployment.”

Currently, there are no serious government safeguards to protect the public from the problems or potential harms of generative AI. Potentially harmful AI tools are being used or deployed recklessly by companies without sincere regard for public safety, gambling with people’s well-being and livelihoods.

Aside from massive layoffs, the lack of meaningful intervention against AI tools by governments and policymakers could “harm additional healthcare patients, undermine fact-based journalism, hasten the destruction of democracy, and lead to an unintended nuclear war”, according to the AI Index report.

References:

https://www.safe.ai/statement-on-ai-risk

https://www.commondreams.org/news/dangers-of-unregulated-artificial-intelligence

https://www.commondreams.org/news/artificial-intelligence-risks-nuclear-level-disaster