22 Jun 2023

Who’s Responsible? Addressing Liability in the Age of AI

In the realm of modern technology, Artificial Intelligence (AI) has become an increasingly present force, reaching various segments of human activity, both private and commercial. However, like any innovation or human creation, AI is imperfect and carries inherent risks. It is susceptible to bias, error, security breaches, and a growing level of autonomy, all of which give rise to potential liability.

Consider the scenario of an autonomous vehicle causing an accident. Who should bear responsibility in such a case? Likewise, if an AI-powered medical diagnosis tool misdiagnoses a patient, who should be held accountable: the developer, the manufacturer, or the user? It therefore becomes imperative to explore the legal and ethical aspects of AI liability and the implications of its future development.

Regulatory challenges

Elements of non-contractual liability, such as identifying the responsible party, establishing tortious acts, proving causal links, and determining the degree of liability, present considerable challenges.

It is difficult to argue that AI intentionally caused damage or possessed a motive to commit a tortious act, at least for now. Moreover, the complexity of AI systems reduces the transparency of their decision-making processes, making it increasingly difficult to ascertain the causal link between an AI system's action and the resulting damage.

These challenges highlight the inadequacy of existing legal frameworks and liability standards in regulating AI liability. Consequently, a fresh regulatory response is required.

Legal solutions and regulatory responses

In terms of comparative law, solutions to non-contractual liability for damage caused by AI differ across jurisdictions and include the following:

  1. Contractual liability: AI developers and operators can be held liable for damages caused by an AI system under contractual terms. This can be achieved through explicit provisions in the contracts that govern the use of AI systems.
  2. Strict liability: Some legal frameworks propose a strict liability regime for damages caused by AI systems. This would mean that AI developers and operators are held strictly liable for damages caused by the system, irrespective of their negligence.
  3. Regulatory frameworks: Several countries have established regulatory frameworks for AI systems, mandating that developers and operators adhere to specific safety and security standards (the USA's National AI Initiative Act, China's New Generation Artificial Intelligence Development Plan, the UK's Centre for Data Ethics and Innovation). These frameworks can be used to establish legal liability for damages caused by AI systems.
  4. Insurance: AI-related risks can be mitigated through AI liability insurance policies. These policies would provide coverage for damages caused by AI systems and incentivize developers and operators to take reasonable care to ensure the safety and reliability of their systems.

EU’s regulatory response

On September 28, 2022, the European Commission unveiled its proposal for the AI Liability Directive (the Directive), a legal framework for establishing liability for damages caused by AI systems. The Directive is expected to introduce a risk-based approach to AI liability, where the level of liability corresponds to the risk associated with the AI system. Additionally, it suggests a strict liability regime for high-risk AI systems, holding developers and operators responsible for damages caused, regardless of negligence. The Directive thereby encourages developers and operators to adopt the measures necessary to ensure system safety and reliability. Its rules seek to maximize the benefits of AI while minimizing the associated risks. Ultimately, the Directive is set to shape the future of AI liability within the EU and beyond.

Ethical considerations

The issue of AI liability raises significant ethical considerations, including:

  1. Fairness and accountability: Assigning liability for AI-related accidents or errors may not always be fair or just, particularly when the responsible party is not immediately apparent. The debate also touches upon whether AI systems should be held accountable for their own actions and decisions, or whether the responsibility should lie with the humans who created, deployed, or used them.
  2. Transparency and trust: Ensuring the responsible and ethical deployment of AI systems requires transparency and confidence in their development and operation. This encompasses precise goal-setting, unbiased and representative training data, and explainable decision-making processes.
  3. Human values and dignity: Concerns emerge regarding AI systems' consistency with human values and dignity, especially when these systems make impactful decisions affecting people's lives. Aligning AI systems with ethical principles, such as respect for human rights, fairness, and social justice, can address this concern.
  4. Impact on employment: The replacement of human labor with AI remains an important topic of discussion. To mitigate potential adverse effects on jobs and livelihoods, the benefits of AI should be distributed fairly and equitably.
  5. Risk and safety: Finally, the development and deployment of AI systems should prioritize risk mitigation and protection. Rigorous testing, validation, and safeguards are essential to prevent harm or damage.

Future developments

There are reasonable expectations that the further development of AI will carry significant legal and regulatory implications. Here are some of the key areas to follow:

  1. Increased regulation: Advancements in AI technology and its growing autonomy may demand enhanced regulatory oversight to ensure the safety and reliability of these systems. This could lead to new laws and regulations governing AI development, testing, and deployment.
  2. Expanded liability for AI users: As AI systems become more autonomous, users may increasingly be expected to bear a share of liability for the actions and decisions these systems make. Training, education, and user interfaces that clearly communicate the capabilities and limitations of AI systems will be crucial in this regard.
  3. International cooperation on AI regulation: Since AI transcends national boundaries, cooperation among states may become essential to establish common standards and regulations for AI systems. This would ensure consistency in liability frameworks across jurisdictions and promote the safe and ethical use of AI technology.
  4. Liability insurance for AI systems: Some experts propose developing mandatory liability insurance programs for AI systems, similar to car insurance. Such programs would provide a mechanism to compensate victims of AI-related accidents or errors while encouraging developers and manufacturers to prioritize safety.

The rapid development of AI requires a timely and appropriate regulatory response to non-contractual liability for damage caused by AI, at the national, EU, and global levels. Legal solutions should strive for harmonization, efficiency, and regulatory coordination to maximize AI's benefits. While the current focus lies on the liability of developers and operators, the continued expansion of AI may also extend liability to users. And although it was previously a scenario confined to science fiction, the notion of holding AI itself accountable for its actions may no longer be far-fetched.


Authors: Luka Đurić, Janko Ignjatović

Image generated by Runway AI App