Feds Address Advanced AI Dangers, Plan Independent Evaluations

With more and more industry figures warning about the dangers of runaway advanced AI constructs, the White House is getting in on the action.

Soon after last fall's debut of ChatGPT, the sentient-sounding chatbot from OpenAI, some experts and pundits began to sound warnings about safety and responsibility. Those warnings have increased lately, following one astounding AI breakthrough after another as AI leaders like Microsoft and Google duel to infuse advanced AI tech throughout their products and services.

But many are worried that the rapid advancement of generative AI, with no regulatory oversight, could lead to misuse, harm to society or even -- the most extreme skeptics claim -- the destruction of humanity. For example, just within the last 10 days or so, current or former executives from OpenAI, Microsoft and Google have joined a chorus of critics warning about the dangers of runaway AI (see the May 3 article, "Microsoft, Google, OpenAI Execs All Warn About AI Dangers").

They joined more than 27,000 others -- including prominent industry figures Elon Musk and Steve Wozniak -- who have warned about AI dangers by signing an open letter that calls on all AI labs to immediately pause for at least six months the training of AI systems more powerful than GPT-4, the latest large language model (LLM) from OpenAI.

Now the federal government is addressing the issue on a number of fronts, including independent evaluations of advanced AI systems.

For example, Vice President Kamala Harris is meeting today with the CEOs of Microsoft, Google, OpenAI and Anthropic (another AI leader). Reuters reported that the meeting invitation noted the "expectation that companies like yours must make sure their products are safe before making them available to the public."

Prior to that meeting, the White House published a fact sheet on other government initiatives to "promote responsible AI innovation that protects Americans' rights and safety," including:

  • New investments to power responsible American AI research and development (R&D). The National Science Foundation announced $140 million in funding to launch seven new National AI Research Institutes. They catalyze collaborative efforts across institutions of higher education, federal agencies, industry and others to pursue transformative AI advances that are ethical, trustworthy, responsible and serve the public good.
  • Public assessments of existing generative AI systems. Anthropic, Google, Hugging Face, Microsoft, NVIDIA, OpenAI and Stability AI will participate in a public evaluation of AI systems at DEFCON 31, a hacking conference to be held in Las Vegas in August. The evaluations will explore how respective AI models align with the principles and practices outlined in the Biden-Harris Administration's Blueprint for an AI Bill of Rights and AI Risk Management Framework. "This independent exercise will provide critical information to researchers and the public about the impacts of these models, and will enable AI companies and developers to take steps to fix issues found in those models," the fact sheet said. "Testing of AI models independent of government or the companies that have developed them is an important component in their effective evaluation."
  • Policies to ensure the U.S. government is leading by example on mitigating AI risks and harnessing AI opportunities. The Office of Management and Budget (OMB) will release draft policy guidance on the use of AI systems by the U.S. government for public comment, aiming to establish specific policies for federal departments and agencies to follow in order to ensure their development, procurement and use of AI systems centers on safeguarding the American people's rights and safety. "It will also empower agencies to responsibly leverage AI to advance their missions and strengthen their ability to equitably serve Americans -- and serve as a model for state and local governments, businesses and others to follow in their own procurement and use of AI," the fact sheet said. The draft guidance will be released this summer for public comment in order to incorporate input from advocates, civil society, industry, and other stakeholders before finalization.

Regarding today's meeting between Harris, other senior administration officials and the CEOs of AI-leading companies, the White House published a background press call on the new AI announcements. It quotes a senior administration official as saying about that meeting: "We aim to have frank discussions about the risks we see in current and near-term AI development. We're working with -- in this meeting, we're also aiming to underscore the importance of their role on mitigating risks and advancing responsible innovation, and we'll discuss how we can work together to protect the American people from the potential harms of AI so that they can reap the benefits of these new technologies."

About the Author

David Ramel is an editor and writer for Converge360.