Deepfake Audio Kicks Off Senate Hearing on AI Oversight
A U.S. Senate hearing exploring AI regulation and oversight, convened in part to address misuse such as audio, image, and video deepfakes, fittingly opened with an audio deepfake of AI-generated remarks.
Sen. Richard Blumenthal (D-Conn.) opened the "Oversight of AI: Rules for Artificial Intelligence" hearing with introductory remarks that were written by ChatGPT and delivered in what sounded like his own voice, courtesy of voice-cloning software, illustrating just one facet of the deepfake problem.
The fake opening remarks included: "Too often we have seen what happens when technology outpaces regulation. The unbridled exploitation of personal data, the proliferation of disinformation and the deepening of societal inequalities. We have seen how algorithmic biases can perpetuate discrimination and prejudice and how the lack of transparency can undermine public trust. This is not the future we want."
Blumenthal, chair of the Senate Judiciary Subcommittee on Privacy, Technology, and the Law, helmed the hearing, whose witnesses included Sam Altman, CEO of OpenAI; Christina Montgomery, chief privacy and trust officer of IBM; and Gary Marcus, professor emeritus of psychology and neural science at NYU.
Altman, perhaps the single person most responsible for fears of runaway advanced AI causing a multitude of potential problems (with the extermination of humanity at the high end), made news by actually advocating for licensing and testing requirements for the development and release of AI models above a certain threshold of capabilities.
He also said the U.S. should require companies to disclose the data used to train their AI models, something OpenAI itself stopped doing as it rushed, along with partner Microsoft, to monetize generative AI tech and gain a competitive edge on rivals such as cloud giant Google.
According to various news reports, some Altman quotes included:
- "I think if this technology goes wrong, it can go quite wrong, and we want to be vocal about that. We want to work with the government to prevent that from happening."
- "I do think some regulation would be quite wise on this topic. People need to know if they're talking to an AI, if content they're looking at might be generated or might not."
- "We might consider a combination of licensing and testing requirements for development and release of AI models above a threshold of capabilities."
- "When Photoshop came onto the scene a long time ago, for a while people were really quite fooled by Photoshopped images and then pretty quickly developed an understanding that images were Photoshopped. This will be like that, but on steroids."
- "As this technology advances, we understand that people are anxious about how it could change the way we live. We are too."
In conclusion, he said:
This is a remarkable time to be working on AI technology. Six months ago, no one had heard of ChatGPT. Now, ChatGPT is a household name, and people are benefiting from it in important ways.
We also understand that people are rightly anxious about AI technology. We take the risks of this technology very seriously and will continue to do so in the future. We believe that government and industry together can manage the risks so that we can all enjoy the tremendous potential.
Montgomery, meanwhile, emphasized the importance of trust and transparency in AI development and deployment, while highlighting IBM's principles of responsible stewardship, data rights and accountability.
She said in conclusion:
"Mr. Chairman, and members of the subcommittee, the era of AI cannot be another era of move fast and break things. But neither do we need a six-month pause -- these systems are within our control today, as are the solutions. What we need at this pivotal moment is clear, reasonable policy and sound guardrails. These guardrails should be matched with meaningful steps by the business community to do their part. This should be an issue where Congress and the business community work together to get this right for the American people. It's what they expect, and what they deserve."
Marcus, in his testimony, didn't hold back, even taking some shots at OpenAI with the company's CEO, Altman, sitting right there:
The big tech companies' preferred plan boils down to "trust us."
Why should we? The sums of money at stake are mind-boggling. And missions drift. OpenAI's original mission statement proclaimed "Our goal is to advance [AI] in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return."
Seven years later, they are largely beholden to Microsoft, embroiled in part in an epic battle of search engines that routinely make things up -- forcing Alphabet to rush out products and deemphasize safety. Humanity has taken a back seat.
OpenAI has also said, and I agree, "it's important that efforts like ours submit to independent audits before releasing new systems", but to my knowledge they have not yet submitted to such audits. They have also said "at some point, it may be important to get independent review before starting to train future systems." But again, they have not submitted to any such advance reviews so far.
A three-hour video of the hearing, along with downloadable PDFs of the witnesses' testimony, is available on the subcommittee's website.
David Ramel is an editor and writer for Converge360.