Indiana Law Journal

Document Type

Essay

Publication Date

Winter 2023

Publication Citation

99 Indiana Law Journal 317

Abstract

When an algorithm harms someone—say by discriminating against her, exposing her personal data, or buying her stock using inside information—who should pay? If that harm is criminal, who deserves punishment? In ordinary cases, when A harms B, the first step in the liability analysis turns on what sort of thing A is. If A is a natural phenomenon, like a typhoon or mudslide, B pays, and no one is punished. If A is a person, then A might be liable for damages and sanction. The trouble with algorithms is that neither paradigm fits. Algorithms are trainable artifacts with “off” switches, not natural phenomena. They are not people either, as a matter of law or metaphysics.

An appealing way out of this dilemma would start by complicating the standard A-harms-B scenario. It would recognize that a third party, C, usually lurks nearby when an algorithm causes harm, and that this third party is a person (legal or natural). By holding third parties vicariously accountable for what their algorithms do, the law could create efficient incentives for people who develop or deploy algorithms and secure just outcomes for victims.

The challenge is to find a model of vicarious liability that is up to the task. This Essay provides a set of criteria that any model of vicarious liability for algorithmic harms should satisfy. The criteria cover a range of desiderata: from ensuring good outcomes, to maximizing realistic prospects for implementation, to advancing programming values such as explainability. Though relatively few in number, the criteria are demanding. Most available models of vicarious liability fail them. Nonetheless, the Essay ends on an optimistic note. The shortcomings of the models considered below hold important lessons for uncovering a more promising alternative.
