A consulting firm commissioned by the U.S. State Department published a report this week recommending the creation of a new government agency to mitigate the impending threat posed by artificial intelligence.
According to Gladstone AI, government involvement in the development of AI is needed to address "urgent and growing risks to national security" that could eventually amount to an "extinction-level threat to the human species."
Titled "An Action Plan to Increase the Safety and Security of Advanced AI," the 250-page report was commissioned by the State Department just prior to the release of ChatGPT, which was the first interaction many Americans had with publicly available artificial intelligence.
Gladstone AI observed the public's interaction with ChatGPT and drew several conclusions, particularly as it pertains to the next stage in AI evolution. AGI, or artificial general intelligence, is described as a "transformative technology with profound implications for democratic governance and global security" in the report.
AGI is an advanced AI system that can "outperform humans across all economic and strategically relevant domains, such as producing practical long-term plans that are likely to work under real world conditions."
The nightmare scenario the report envisions would be the "loss of control" — "a potential failure mode under which a future AI system could become so capable that it escapes all human effort to contain its impact."
Gladstone AI recommended that the State Department create a new federal agency not only to oversee AI research but also to limit the amount of computing power that can be used in any given AI system. The authors of the report contend that "frontier" companies will engage in reckless tactics to remain competitive.
"Frontier AI labs face an intense and immediate incentive to scale their AI systems as fast as they can. They do not face an immediate incentive to invest in safety or security measures that do not deliver direct economic benefits, even though some do out of genuine concern," the report said.
In May 2023, 300 signatories, including OpenAI CEO Sam Altman, signed a public statement alerting the public to the dangers of AI that read, "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war."
© 2024 Newsmax. All rights reserved.