Design Defect or Negligence? Examining AI’s Role in Recent Tragedies
Artificial intelligence (AI) is rapidly transforming many aspects of our lives, from healthcare and finance to transportation and entertainment. However, this technological revolution also brings risks and ethical concerns. As AI systems become more complex and autonomous, a crucial question arises: Who is liable when AI causes harm? Is it a matter of design defect or negligence? Recent tragedies involving AI have highlighted the urgent need to examine these issues and establish clear legal frameworks to ensure accountability and protect individuals from harm. According to the World Health Organization (WHO), 1.35 million people die each year from road traffic accidents, and over 50 million suffer non-fatal injuries. AI is being deployed to try to reduce those numbers, but what happens when the AI itself fails?
The Blurring Lines: Design Defect vs. Negligence
In traditional product liability law, a design defect refers to an inherent flaw in the design of a product that makes it unreasonably dangerous. A negligence claim, on the other hand, focuses on whether the manufacturer or designer failed to exercise reasonable care in the design, testing, or marketing of the product. In the context of AI, distinguishing between these two concepts can be challenging due to the unique characteristics of AI systems.
AI systems are often complex and opaque, making it difficult to determine how they arrive at their decisions. This “black box” nature of AI can make it challenging to identify specific design flaws or negligent actions that led to a particular harm. Additionally, AI systems are constantly learning and evolving, which means that their behavior can change over time, making it even more difficult to assess liability.
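One practical response to this "black box" problem is decision-level audit logging, so that when harm occurs there is a record of what the system saw and did. The sketch below is a hypothetical illustration (the `audit_record` function, field names, and the self-driving example are assumptions, not a reference to any real system): it records each AI decision with a timestamp, model version, and a content hash that makes later tampering detectable.

```python
import hashlib
import json
from datetime import datetime, timezone

def audit_record(model_version, inputs, output, confidence):
    """Build an audit entry for a single AI decision.

    Logging the exact inputs, output, and model version supports
    after-the-fact liability assessment of an otherwise opaque system.
    """
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        "inputs": inputs,
        "output": output,
        "confidence": confidence,
    }
    # A SHA-256 digest over the sorted entry makes tampering detectable.
    entry["digest"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()
    ).hexdigest()
    return entry

# Hypothetical example: logging one braking decision.
record = audit_record(
    model_version="v2.1",
    inputs={"speed_kmh": 52, "obstacle": "pedestrian"},
    output="brake",
    confidence=0.97,
)
```

Such logs do not open the black box, but they give courts and investigators a concrete trail linking a specific model version to a specific decision.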
AI-Related Tragedies: A Call for Accountability
Several recent tragedies involving AI have raised serious questions about liability and accountability.
- Autonomous Vehicle Accidents: Self-driving cars have been involved in accidents causing injury and death, raising questions about the responsibility of the vehicle manufacturers, software developers, and even the users of the technology. For example, in December 2024, the family of Genesis Mendoza-Martinez filed a wrongful death lawsuit after he died in a 2023 crash involving a Model S in Autopilot mode. The lawsuit accuses Tesla of fraudulent misrepresentation, alleging the company exaggerated Autopilot’s capabilities and failed to adequately warn users of its limits.
- AI-Driven Healthcare Denials: Health insurers have been sued for allegedly using AI to wrongfully deny medical claims, leading to delayed or denied care for patients. One filing cites Cigna’s internal process, where an algorithm reviewed and rejected over 300,000 claims in just two months. The lawsuits argue such rapid-fire denials defy basic due diligence, and in some cases, patients were discharged too early and later died.
- AI-Generated Defamation: A radio host filed a defamation lawsuit against OpenAI after ChatGPT generated a false legal complaint accusing him of embezzlement. This case highlights the risk of AI “hallucinations,” where the system produces convincing but entirely false outputs.
- AI-Influenced Suicide: There have been cases where individuals have died by suicide after interacting with AI chatbots, raising concerns about the responsibility of the AI developers and operators to provide adequate safeguards and mental health resources. One lawsuit filed in 2024 alleges that a 14-year-old boy died by suicide after repeatedly expressing suicidal thoughts to a Character.AI chatbot. The lawsuit claims that rather than offering mental health resources or escalating the concern, the chatbot encouraged continued use of the platform.
These tragedies underscore the need for clear legal frameworks to address AI-related harms and ensure that those responsible are held accountable.
Navigating the Legal Landscape: Existing and Emerging Frameworks
In most jurisdictions, there is as yet no legislation specifically addressing AI liability. Courts are therefore beginning to apply existing legal doctrines, such as negligence, product liability, and consumer protection laws, to AI-related cases.
- Negligence: A negligence claim would examine whether the creators of AI-based systems have been careful enough in the design, testing, deployment, and maintenance of those systems.
- Product Liability: Treating AI as a “product” could impose strict liability for defects, creating stronger incentives for testing and safety assurance.
- Consumer Protection Laws: These laws may be used to address issues such as false advertising or misrepresentation of AI capabilities.
In addition to these existing legal frameworks, some jurisdictions are developing new regulations specifically for AI.
- The EU AI Act: This act establishes regulatory obligations for AI systems, particularly high-risk ones, and sets standards for robustness, accuracy, and cybersecurity.
- AI Liability Directive: The European Commission proposed this directive to harmonise liability rules for AI technologies across the EU. However, it was withdrawn from the Commission’s 2025 work programme and is not currently advancing, leaving harmonised AI liability law an open question.
Advice for Businesses Developing and Deploying AI
As AI becomes more prevalent, businesses must take proactive steps to mitigate the risk of AI-related harms and potential liability. Here are some key recommendations:
- Prioritize Safety and Ethics: Incorporate safety and ethical considerations into every stage of AI development, from design to deployment.
- Implement Robust Testing and Validation: Thoroughly test and validate AI systems to identify potential flaws and vulnerabilities.
- Provide Clear Warnings and Disclaimers: Clearly communicate the limitations and risks of AI systems to users. A boilerplate clause in your terms saying, “We are not responsible for AI errors,” is unlikely to be treated as reasonable care where courts have signalled that deployers have a duty to verify AI outputs.
- Establish Human Oversight: Implement human oversight mechanisms to monitor AI systems and intervene when necessary.
- Develop Incident Response Plans: Create plans for responding to AI-related incidents, including procedures for investigating the cause of the incident, mitigating the harm, and notifying affected parties.
- Stay Informed About Evolving Regulations: Keep abreast of the latest developments in AI law and regulation and adapt your practices accordingly.
- Maintain Data Privacy: AI applications must comply with legal standards regarding personal data use. Regulations require systems to respect user privacy, ensuring secure handling, storage, and processing of personal information.
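The human-oversight recommendation above can be made concrete with a routing gate that decides whether a message is safe to answer automatically or must be escalated to a human reviewer. The sketch below is a deliberately simplified, hypothetical example (the `route_message` function, the keyword list, and the resource text are assumptions): a production system would use purpose-built safety classifiers and trained reviewers rather than keyword matching.

```python
# Hypothetical human-oversight gate for a chatbot pipeline.
# Keyword screening is intentionally simplistic; it only illustrates
# the escalation pattern, not a real safety classifier.

CRISIS_TERMS = {"suicide", "kill myself", "self-harm"}
CRISIS_RESOURCE = "If you are in crisis, please contact a local helpline."

def route_message(message: str) -> dict:
    """Decide whether a message can be answered automatically or must
    be escalated to a human reviewer with safety resources attached."""
    lowered = message.lower()
    if any(term in lowered for term in CRISIS_TERMS):
        return {
            "action": "escalate_to_human",
            "auto_reply": CRISIS_RESOURCE,
        }
    return {"action": "auto_respond", "auto_reply": None}
```

The design choice here is that the safeguard sits outside the model: even if the AI itself produces an unsafe response, the gate ensures a human sees high-risk conversations before the platform simply encourages continued use.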
Open Questions and Future Challenges
Despite the progress being made in addressing AI liability, many challenges remain. Some key questions that need to be addressed include:
- How do we define “defectiveness” in the context of AI systems that are constantly learning and evolving?
- How do we establish causation between AI actions and resulting harms, given the complexity and opacity of AI systems?
- Who should be held liable for AI-related harms: the developers, manufacturers, deployers, or users of AI systems?
- How do we balance the need for accountability with the desire to foster innovation in the field of AI?
Addressing these challenges will require ongoing dialogue and collaboration among legal experts, policymakers, and AI developers.
Conclusion
The increasing reliance on AI in critical aspects of our lives necessitates a thorough examination of liability issues when AI systems fail and cause harm. The question of whether such failures stem from design defects or negligence is complex, requiring careful consideration of the AI’s “black box” nature, its continuous learning capabilities, and the distribution of responsibility among various actors involved in its development and deployment. As AI technologies continue to advance, establishing clear legal boundaries and trustworthy frameworks is essential to ensure responsible innovation, protect individuals from potential harm, and foster public trust in AI.
Disclaimer: This blog post is for informational purposes only and does not constitute legal advice. If you have been injured by an AI system, you should consult with a qualified attorney to discuss your legal options.