Artificial intelligence is a general, highly capable dual-use technology. It is therefore open to general, highly destructive misuse. For example, generative pre-trained transformer models – such as those that power chat agents like OpenAI’s ChatGPT, Google’s Bard and Baidu’s Ernie – can be used to spread misinformation in the form of written text. Generative vision models can create convincing “deepfake” photos. Realistic deepfake videos using actors as body doubles are also already here.

It’s now startlingly easy for anyone to pollute the informational environment. What was once a game for nation-states is now accessible to motivated small groups, even individuals. Maybe you think it’s not regulators’ job to prevent people from forming false beliefs. But setting aside epistemic concerns, the opportunities for misuse are even more troubling. An effectively unlimited quantity of child sexual abuse material can now be generated at near-zero marginal cost. Worse, such AI-generated material is immune to state-of-the-art detection and prevention strategies. Law enforcement has no plan, and technical experts are at a loss. Without meaningful, targeted regulation, things will get worse on all fronts.

The exact manner and relative seriousness of future misuse of large-scale, highly capable “foundation” or “base” models is difficult to predict. That is because it is difficult to predict how foundation models will improve. They will improve, though, and with those improvements will come more possibilities for, and an increased likelihood of, misuse.

Large AI developers in the United States and Europe currently have no specific regulatory responsibilities to mitigate these harms, but that is likely to change. This is no surprise. Even AI developers have an interest in preventing the misuse of their systems. Why would a company such as OpenAI want its technology used to generate child sexual abuse material? Does Google or Meta want to be embroiled in the next big political scandal? No, they want to sell ads.

However, present proposals for regulatory frameworks have an important gap. These frameworks do not include measures for holding AI developers accountable for the misuse of their systems once those systems leak – and they will leak.

What explains this gap? I think there are two related explanations. One is the idea that it’s somehow unfair to hold people accountable for things that happen as a result of others’ bad behaviour. This is a mistake, as the case of negligently leaving an unsecured, loaded gun in plain view illustrates.

I want to focus on the second explanation, which is the false but tempting idea that inevitability precludes responsibility. The idea goes like this: since model leaks are inevitable, AI developers cannot properly be regulated for the effects of those leaks. This, too, is a mistake. Inevitability does not entail a lack of regulatory responsibility.

Here is an instructive analogy from a different regulatory domain, one also aimed at improving safety. The nature of modern food production means that some amount of contamination in food is inevitable. Still, food manufacturers are required to meet minimum standards of reliability and safety. For example, the US Food and Drug Administration’s guidelines on food defect levels allow some amount of “foreign matter” to be present in various foodstuffs. Similarly, China’s Food Safety Law governs acceptable levels of contaminants in food.
What the existence of these regulations teaches us is that inevitability doesn’t preclude regulation. On the contrary, regulators’ recognition that food safety is a critically important component of a secure national infrastructure commits them to taking an especially hard line on safety standards.

Moreover, the basic structure of this regulatory regime is straightforward and is itself instructive. Food manufacturers and processors are required by regulators to meet various safety standards, with the specific content of those standards set by a transparent, scientifically rigorous procedure. Producers are encouraged to comply in two ways. First, they are subject to systematic, credible auditing designed to ensure compliance with the relevant standards. Second, when a food product causes harm via contamination, which is an inevitable outcome at scale, manufacturers are held liable for that harm if they can be shown to have failed to meet the relevant standard of care.

Policymakers interested in designing effective AI regulation can borrow this structure and adapt it. AI systems should be subject to evaluations designed to show that they meet specific, verifiable, scientific standards of safety. The precise nature of these evaluations will depend on the kind of model at issue. But one general approach to evaluation is both manageable and flexible: credible, third-party capability evaluations.

For large language models, we might want to evaluate whether a particular system is capable of generating disinformation at scale. For generative visual models, we might want to evaluate whether a particular system is capable of generating child sexual abuse material at scale. Other capabilities matter for AI safety, too.

The details will matter, but a capabilities-evaluation framework allows regulators to work at a high level of abstraction while remaining effective. If a new, dangerous capability appears on the scene, it is straightforward to add an evaluation for it to the standards. Technical work will be required to develop effective tests for these capabilities, but that work can and should be jointly funded by those who benefit from the regulation: AI developers and their customers.

Resistance to a regulatory regime with this structure will be deep and wide, but such resistance is the predictable attempt by economically motivated actors to capture the process pre-emptively. Public policy should be made by the public or their representatives, not by AI developers.

Nate Sharadin is a philosopher at the University of Hong Kong and a research fellow at the Center for AI Safety, where he works on AI alignment, regulation and the ethics of AI research.