How do White House AI pledges stack up?

This week, the White House announced that it had received “voluntary commitments” from seven leading AI companies to manage the risks of artificial intelligence.

Getting the companies – Amazon, Anthropic, Google, Inflection, Meta, Microsoft and OpenAI – to agree on something is a step forward. They include bitter rivals with subtle but important differences in their approach to AI research and development.

For example, Meta is so keen to get its AI models into the hands of developers that it has made many of them open source, making their code available to anyone. Other labs, such as Anthropic, have taken a more cautious approach and released their technology in more limited ways.

But what exactly do these commitments mean? And are they likely to change much in the way AI companies operate because they are not supported by law?

Given the potential stakes of AI regulation, details matter. So let’s take a closer look at what has been agreed upon here and assess the likely impact.

Commitment 1: Companies commit to conducting internal and external security testing of their AI systems prior to release.

Each of these AI companies is security-testing their models before release – often referred to as “red-teaming”. On the one hand, this isn’t really a new commitment. And it’s a vague promise. It doesn’t go into much detail about what type of testing is needed, or who will do the testing.

In a statement accompanying the pledges, the White House said only that the AI ​​model would be tested “in part by independent experts” and would focus on AI risks “such as biosecurity and cyber security, as well as their broader societal implications”.

It’s a good idea to have AI companies publicly commit to continuing this type of testing, and encourage more transparency in the testing process. And there are certain types of AI risks – such as the danger that AI models could be used to develop bioweapons – that government and military officials are probably better suited than companies to evaluate.

I wish the AI ​​industry would agree on a standard set of security tests, such as the “autonomous replication” tests that the Alignment Research Center does on pre-release models from OpenAI and Anthropic. I would also like to see the federal government fund such testing, which can be costly and requires engineers with significant technical expertise. More to the point, many security trials are funded and controlled by companies, which raises obvious questions of conflict of interest.

Commitment 2: Companies commit to sharing information about managing AI risks across industry and with governments, civil society and academia.

This commitment is also a bit vague. Many of these companies are already publishing information about their AI models, usually in academic papers or corporate blog posts. Some of them, including OpenAI and Anthropic, also publish documents called “system boards” detailing the steps they have taken to make those models more secure.

But they have also sometimes hidden information, citing security concerns. When OpenAI released its latest AI model, GPT-4, this year, it broke with industry habit and chose not to disclose how much data it was trained on, or how large the model was (a metric known as “parameters”). It said it declined to release the information due to competition and security concerns. This is also the type of data technology companies that competitors like to stay away from.

Will AI companies be forced to disclose that kind of information under these new commitments? What if this accelerates an AI arms race?

I suspect that the White House’s goal is not to force companies to disclose their parameter calculations, but to encourage them to share information with each other about whether or not their models are at risk.

But even sharing that kind of information can be risky. If Google’s AI team prevented a new model from being used to develop a deadly bioweapon during pre-release testing, should it share that information outside of Google? Will this put bad actors at risk of getting ideas on how to get the same thing done with a less secure model?

Commitment 3: Companies commit to investing in cyber security and protection from insider threats to protect proprietary and unknown model loads.

This is pretty straight forward and undeniable from most of the AI ​​insiders I’ve spoken to. “Model weights” is a technical term for the mathematical instructions that allow an AI model to perform a task. If you were an agent of a foreign government (or rival company) who wanted to build their own version of ChatGPT or any other AI product, you would be stealing weight. And this is something AI companies have a keen interest in keeping tight control over.

Several issues of model weight leaking have already been publicised. For example, the weights of Meta’s original LLAMA language model were leaked to 4chan and other websites just days after the model was publicly released. Given the risks of more leaks – and other countries’ interest in stealing this technology from US companies – it seems like a good idea to ask AI companies to invest more in their security.

Commitment 4: Companies commit to facilitating the discovery and reporting of vulnerabilities in their AI systems by third parties.

I’m not quite sure what that means. Every AI company has discovered vulnerabilities in their models after releasing them, usually because users try to fiddle with the models or try to get past their guardrails (a practice known as “jailbreaking”) in ways the companies didn’t anticipate.

The White House commitment calls on companies to establish “robust reporting mechanisms” for these vulnerabilities, but it’s unclear what that might mean. An in-app feedback button, similar to the one that allows Facebook and Twitter users to report posts that break the rules? A bug bounty program, like the one OpenAI started this year to reward users who find bugs in their systems? anything else? We will have to wait for more details.

Commitment 5: Companies are committed to developing robust technical mechanisms to ensure that users are aware that AI-generated content, such as watermarking systems.

It’s an interesting idea, but leaves a lot of room for interpretation. So far, AI companies have struggled to develop tools that can tell whether people are viewing AI-generated content or not. There are good technical reasons for this, but it’s a real problem when people take AI-generated work as their own. (Ask any high school teacher.) And many of the tools that are currently promoted as being able to detect AI output can’t actually do so with any degree of accuracy.

I am not optimistic that this problem will be completely resolved. But I am glad that companies are committed to work on it.

Commitment 6: Companies commit to publicly disclosing the capabilities, limitations, and areas of appropriate and inappropriate use of their AI systems.

Another sensible sounding promise with plenty of scope. How often should companies report on the capabilities and limitations of their systems? How detailed should that information be? And given that many companies that build AI systems are later surprised by the capabilities of their systems, how well can they be expected to describe them?

Commitment 7: Companies commit to prioritizing research on the social risks posed by AI systems, including preventing harmful bias and discrimination and protecting privacy.

The commitment to “research priority” is about as vague as commitment gets. Still, I’m sure this commitment will be well received by many in the AI ​​ethics crowd, who want AI companies to prioritize short-term harm prevention like prejudice and discrimination, rather than doomsday concerns, as AI security folks do.

If you’re confused about the difference between “AI ethics” and “AI security,” know that there are two warring factions within the AI ​​research community, each thinking the other is focused on preventing the wrong kinds of harm.

Commitment 8: Companies are committed to developing and deploying advanced AI systems to solve society’s biggest challenges.

I don’t think many people would argue that advanced AI should No Used to tackle society’s biggest challenges. The White House lists “cancer prevention” and “climate change mitigation” as two areas on which it wants AI companies to focus their efforts, and on this it will get no disagreement from me.

What complicates this goal somewhat, however, is that in AI research, what appears trivial often has far more serious implications. Some of the techniques incorporated into DeepMind’s AlphaGo — an AI system trained to play the board game Go — proved useful in predicting the three-dimensional structures of proteins, a major discovery that spurred basic science research.

Overall, the White House’s deal with AI companies seems more symbolic than real. There is no enforcement mechanism in place to ensure that companies honor these commitments, and those reflect the precautions many AI companies are already taking.

Still, it’s a reasonable first step. And by agreeing to abide by these rules, it appears that AI companies have learned from the failures of past tech companies, which waited to side with the government until they ran into trouble. In Washington, at least when it comes to tech regulation, it pays to show up early.

Stay Connected With Us On Social Media Platforms For Instant Updates, Click Here To Join Us Facebook

For the latest news and updates, follow us on Google News

Read the original article here

Show More
Back to top button

disable ad blocker

please disable ad blocker