Who’s Liable for AI Errors?

In March 2018, a self-driving Uber struck and killed a pedestrian in Tempe, Arizona. The vehicle was in autonomous mode, with a human safety operator behind the wheel whose job was to intervene if the AI failed. The operator was distracted, did not intervene, and was later charged with negligent homicide. Uber, the company that developed the AI, faced no criminal liability.

We are entering an era in which AI systems make decisions with real-world consequences yet lack intention, self-awareness, or moral agency. Our existing ethical and legal frameworks are unprepared for this shift. Who should we blame when machines make mistakes? The programmers? The company? The user? The AI itself?

To answer this, we need to distinguish narrow AI from general AI. Narrow AI, the kind behind self-driving cars, facial-recognition software, and tools like ChatGPT, is highly capable but limited to specific domains. These systems possess no human-like intelligence, free will, or consciousness. General AI, by contrast, refers to hypothetical machines that could think and reason consciously, perhaps even experience the world as humans do. While the latter remains theoretical, narrow AI is here, and it's becoming more competent every day.

Consider ChatGPT's performance on the US bar exam. In 2022, the GPT-3.5 model scored in the bottom 10% of test takers. Just a year later, GPT-4 placed in the top 10%. This illustrates how quickly narrow AIs are becoming proficient across a growing range of specific domains. According to a 2023 survey of 2,778 AI researchers, there is a 50% chance that AI will outperform humans at all tasks by around 2050.

The issue is that despite their increasing capabilities, narrow AIs cannot be held morally responsible for their decisions. In moral philosophy, responsibility hinges on two conditions: (1) the ability to choose an action freely, and (2) an understanding of the consequences of that action. These conditions were first articulated by Aristotle, and the idea remains influential today. Narrow AIs fail both. Even if conscious machines arise sometime in the future, today's AIs lack both the free will and the awareness of consequences that responsibility requires. Thus, we cannot hold them morally responsible.
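To make the two conditions concrete, here is a minimal sketch in Python. The `Agent` class, its fields, and the example values are hypothetical illustrations invented for this article, not part of any real framework:

```python
from dataclasses import dataclass

@dataclass
class Agent:
    """A hypothetical agent, scored on Aristotle's two conditions."""
    name: str
    acts_freely: bool          # condition 1: could it have chosen otherwise?
    grasps_consequences: bool  # condition 2: does it understand the outcome?

def morally_responsible(agent: Agent) -> bool:
    # Responsibility requires BOTH conditions; failing either exempts the agent.
    return agent.acts_freely and agent.grasps_consequences

human_driver = Agent("human supervisor", acts_freely=True, grasps_consequences=True)
narrow_ai = Agent("self-driving system", acts_freely=False, grasps_consequences=False)

print(morally_responsible(human_driver))  # True
print(morally_responsible(narrow_ai))     # False: fails both conditions
```

On this toy test, a narrow AI fails both conditions, which is precisely why the blame has to land somewhere else.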

So, who do we hold accountable when AI goes wrong?

Often, responsibility falls on human supervisors, like the distracted Uber safety operator. But this approach is flawed for at least three reasons. First, AI systems usually act faster than humans can intervene. Second, AI development and deployment involve large teams of researchers, data scientists, engineers, product managers, and executives. Should they all share the blame equally when something goes wrong? Third, AI models are often reused in new contexts; the original developers may have no control over how their systems are deployed elsewhere.

These challenges are compounded by the "black box" nature of many AI systems, especially those based on deep learning. Engineers may understand how a model was trained, yet be unable to explain why it arrived at a specific decision. This opacity makes it hard to assign responsibility even to the developers.
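Here is a minimal sketch of that opacity, using scikit-learn. The toy dataset, the model size, and the hidden rule are assumptions made up for demonstration: every trained weight can be inspected, yet no individual weight explains any particular prediction.

```python
# A minimal sketch of the "black box" problem (toy data, illustrative only).
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))            # 200 samples, 4 features
y = (X[:, 0] * X[:, 2] > 0).astype(int)  # a nonlinear rule the network must learn

model = MLPClassifier(hidden_layer_sizes=(16, 16), max_iter=2000, random_state=0)
model.fit(X, y)

# Full transparency at the parameter level...
n_params = sum(w.size for w in model.coefs_) + sum(b.size for b in model.intercepts_)
print(f"{n_params} trained weights, all individually inspectable")

# ...yet no single weight "explains" this particular decision:
print(model.predict(X[:1]))  # the prediction emerges from all weights jointly
```

Even with every parameter laid bare, the decision is a joint product of hundreds of weights, so there is no single place to point when asking why the model did what it did.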

As AI becomes more embedded in our daily lives, we must confront the question of who answers for systems that have causal power but neither consciousness nor free will. Holding humans accountable for what machines do is a stopgap, not a solution.
