Abstract
The rapid development of sophisticated artificial intelligence (“AI”)
tools in healthcare presents new possibilities for improving medical
treatment and general health. Currently, such AI tools can perform a
wide range of health-related tasks, from specialized autonomous systems
that diagnose diabetic retinopathy to general-use generative models like
ChatGPT that answer users’ health-related questions. On the other hand,
significant liability concerns arise as medical professionals and
consumers increasingly turn to AI for health information. This is
particularly true for black-box AI because while potentially enhancing
the AI’s capability and accuracy, these systems also operate without
transparency, making it difficult or even impossible to understand how
they arrive at a particular result.
The current liability framework is not fully equipped to address the unique challenges posed by black-box AI’s lack of transparency, leaving patients, consumers, healthcare providers, AI manufacturers, and policymakers unsure about who will be responsible for AI-caused medical injuries. Of course, the United States is not the only jurisdiction faced with a liability framework that is out of tune with the current realities of black-box AI technology in the health domain. The European Union has also been grappling with the challenges that black-box AI poses to traditional liability frameworks and recently proposed new liability Directives to overcome some of these challenges.
As the first to analyze and compare the liability frameworks governing medical injuries caused by black-box AI in the United States and European Union, this Article demystifies the structure and relevance of foreign law in this area to provide practical guidance to courts, litigators, and other stakeholders seeking to understand the application and limitations of current and newly proposed liability law in this domain. We reveal that remarkably similar principles will operate to govern liability for medical injuries caused by black-box AI and that, as a result, both jurisdictions face similar liability challenges. These similarities offer an opportunity for the United States to learn from the European Union’s newly developed approach to governing liability for AI-caused injuries. In particular, we identify four valuable lessons from the European Union’s approach. First, a broad approach to AI liability fails to provide solutions to some challenges posed by black-box AI in healthcare. Second, traditional concepts of human fault pose significant challenges in cases involving black-box AI. Third, product liability frameworks must consider the unique features of black-box AI. Fourth, evidentiary rules should address the difficulties that claimants will face in cases involving medical injuries caused by black-box AI.
| Original language | English (US) |
|---|---|
| Journal | Stanford Technology Law Review |
| Volume | 27 |
| Issue number | 1 |
| State | Published - Feb 6 2024 |
| Externally published | Yes |
Keywords
- Artificial Intelligence
- Medical Liability
- Product Liability
- Black-Box AI
- Comparative Law
- US Law
- EU Law
- Law and Technology
- AI Liability
- Health Law
- ChatGPT