OpenAI suggests voluntary AI standards, not government mandates, to ensure AI safety

More and more policymakers are talking about government rules for emerging AI technology

The top lawyer for OpenAI, the company that developed ChatGPT, argued that the best way to regulate artificial intelligence is not to start with government-mandated rules and regulations but to allow the companies themselves to set standards that ensure AI is used safely and responsibly.

OpenAI General Counsel Jason Kwon made that argument during a Tuesday panel discussion in Washington, D.C., which was hosted by BSA/The Software Alliance, even as he acknowledged that AI is developing so quickly that it can often lead to unexpected results that companies quickly need to rein in.

Still, when asked what his message to policymakers was, Kwon recommended voluntary, industry-led standards for AI, an approach that companies in most industries tend to favor over government mandates.

The top lawyer at OpenAI, run by CEO Sam Altman, said this week that the company recommends voluntary industry standards to regulate AI, not government mandates. (Jason Redmond / AFP via Getty Images / File)

"I think it first starts with trying to get to some kind of standards. Those could start with industry standards," he said, adding that decisions could be made later about "whether or not to make those compulsory."

He said voluntary tests developed by the industry could be designed to catch unwanted discrimination or bias in AI systems – flaws that might allow generative AI to produce "toxic" outputs or lead to discriminatory results when companies deploy the technology.

"You might, for example, have a benchmark that runs a bunch of tests against these models [to] test for, does it generate toxic output when you really, really try very hard," he said. "What’s the percentage of times that that happens?"

While Kwon proposed voluntary standards, he admitted that OpenAI’s ChatGPT has led to some unexpected results. He said the company was aware of the possibility, for example, that its chatbot could be used to produce disinformation. But he indicated the company was surprised that one early "misuse" of the system was to produce "toxic outputs."

OpenAI General Counsel Jason Kwon, right, talks about the possibility of regulation of AI with BSA/The Software Alliance CEO Victoria Espinel on May 9, 2023, in Washington, D.C. (Screenshot / Fox News Digital)

While that might seem an obvious concern in hindsight, he said, the company learned that it always has to pay attention to how ChatGPT is being used.

placeholder"You can articulate these risks … but you also have to keep an open mind and paying attention to what is actually happening on the ground," he said.

Kwon said internal fixes to the company’s AI systems come from a process of exposing them to experts who try to get the systems to produce unwanted or "problematic" outputs; the company then designs safeguards to prevent those outputs.
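OpenAI did not describe its internal tooling in any detail. As a loose illustration of the red-team loop Kwon outlines, one could imagine logging the prompt/output pairs that expert testers flag and feeding them into a safeguard such as a filter; every name in the sketch below is a placeholder, not OpenAI's actual process.

```python
# Loose, hypothetical sketch of the red-team loop Kwon outlines: experts probe
# the system, flagged prompt/output pairs are logged, and the findings feed a
# safeguard. Every function here is a placeholder.

problematic_findings: list[dict] = []

def query_model(prompt: str) -> str:
    """Stand-in for the model under test."""
    return "Sample response."

def expert_flags_output(prompt: str, output: str) -> bool:
    """Stand-in for a human red-teamer judging an output problematic."""
    return False

def red_team_session(prompts: list[str]) -> None:
    """Log every prompt/output pair that a red-teamer flags."""
    for prompt in prompts:
        output = query_model(prompt)
        if expert_flags_output(prompt, output):
            problematic_findings.append({"prompt": prompt, "output": output})

def build_safeguard() -> set[str]:
    """Turn the findings into a crude safeguard: a set of blocked prompt patterns."""
    return {finding["prompt"] for finding in problematic_findings}
```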

While Kwon made it clear that OpenAI prefers industry-led standards, it’s not yet clear the company will get its way. Several federal agencies have indicated they are looking for ways to use their existing authority to regulate AI, and lawmakers in the House and Senate have been meeting with experts to learn more about AI and propose regulations that would likely have more teeth than any industry proposal.

More than 100 million people have used ChatGPT. (Jonathan Raa / NurPhoto via Getty Images / File)

Kwon acknowledged the need to give federal policymakers access to information on how these AI systems are built, but he still suggested that handing over this information might be a voluntary effort, not a mandatory one.

He said a "good first step" would be to "find ways in which policymakers and the government have as good information as possible."

"That might involve things like voluntary reporting of … the systems that are about to come online," he said.
