Indiana Law Journal

Document Type

Article

Publication Date

6-2025

Publication Citation

100 Indiana Law Journal 1527

Abstract

As AI becomes increasingly embedded in our daily lives, this Article explores one of its critical, yet overlooked, societal implications: the propensity of large language models (LLMs) to generate mainstream, standardized content, potentially narrowing their users’ worldviews.

Taking a close look at the technological underpinnings of LLMs, the analysis suggests that—due to the combination of human judgments, training datasets, and inherent features of the underlying technological paradigm—LLMs’ outputs are likely to be geared toward the popular and to project to their users concentrated, mainstream worldviews, sidelining a broader spectrum of perspectives. This Article explores the asymmetrical power relations between LLMs and humans, further suggesting that the constricted worldview projected through LLMs is likely to affect users’ perceptions and may yield a variety of systemic harms, from diminishing cultural diversity to undermining democratic discourse and burdening the formation of collective memory.

To address these challenges, this Article advocates a novel legal-policy response: incorporating multiplicity as a core principle in AI governance. Multiplicity implies exposing users, or at least alerting them, to the existence of multiple options, content, and narratives and encouraging them to seek additional information. This Article reviews the emerging AI governance landscape and explains why prevalent governance principles in the field, such as explainability or transparency, are insufficient to adequately address the “narrowing world” concerns and how embedding multiplicity into AI ethical and regulatory schemes could directly address these challenges. It further sketches ways to incorporate this principle into AI governance structures, concentrating on two non-exhaustive directions: multiplicity-by-design, namely embedding multiplicity-promoting features in the architecture of AI systems, and fostering diversity within the LLM market to facilitate users’ access to “Second (AI) Opinions.” Finally, it highlights the importance of promoting AI literacy among users to maintain broad and diverse perspectives in the LLM era. Altogether, the analysis concludes that incorporating a principle of multiplicity into AI governance will allow society to benefit from the integration of generative AI in our daily lives while preserving the richness and intricacies of the human experience.
