Indiana Law Journal

Document Type

Article

Publication Date

6-2025

Publication Citation

100 Indiana Law Journal 1327

Abstract

Artificial intelligence (AI) model creators commonly attach restrictive terms of use to both their models and their outputs. These terms typically prohibit activities ranging from creating competing AI models to spreading disinformation. Often taken at face value, these terms are positioned by companies as key enforceable tools for preventing misuse, particularly in policy dialogues. The California AI Transparency Act even codifies this approach, mandating certain responsible use terms to accompany models.

But are these terms truly meaningful, or merely a mirage? There are myriad examples where these broad terms are regularly and repeatedly violated. Yet except for some account suspensions on platforms, no model creator has actually tried to enforce these terms with monetary penalties or injunctive relief. This is likely for good reason: We think that the legal enforceability of these licenses is questionable. We provide a systematic assessment of the enforceability of AI model terms of use and offer three contributions.

First, we pinpoint a key problem with these provisions: The artifacts that they protect, namely model weights and model outputs, are largely not copyrightable, making it unclear whether there is even anything to be licensed.

Second, we examine the problems this creates for other enforcement pathways. Recent doctrinal trends in copyright preemption may further undermine state-law claims, while other legal frameworks like the Digital Millennium Copyright Act (DMCA) and Computer Fraud and Abuse Act (CFAA) offer limited recourse. And anticompetitive provisions likely fare even worse than responsible use provisions.

Third, we provide recommendations to policymakers considering this private enforcement model. There are compelling reasons for many of these provisions to be unenforceable: They chill good faith research, constrain competition, and create quasi-copyright ownership where none should exist. There are, of course, downsides: Model creators have even fewer tools to prevent harmful misuse. But we think the better approach is for statutory provisions, not private fiat, to distinguish between good and bad uses of AI and restrict the latter. And, overall, policymakers should be cautious about taking these terms at face value before they have faced a legal litmus test.
