Duggal Global Agentic AI Liability Framework

World’s First Comprehensive Framework

Duggal Global Agentic AI Liability Framework: A Conceptual, Normative, and Operationally Precise Blueprint for Accountability, Responsibility, and Governance in the Era of Autonomous Artificial Intelligence Agents

Version: 1.0 | Published: March 2026 | Purpose: Global Stakeholder Consultation | Author: Dr. Pavan Duggal

Public Policy Document — For Global Stakeholder Consultation — Not Legal Advice

15 Original Duggal Doctrines | 20 Substantive Sections | 5 Foundational Pillars | 9+ Jurisdictions Mapped


A Historic Paradigm Shift

Closing the Global Liability Vacuum in Agentic AI

We are living through one of the most consequential transitions in the history of human civilisation.

Artificial Intelligence systems have transcended the boundaries of advisory and generative functionality and entered an era of autonomous agency. These are Agentic AI Systems — systems that independently set goals, formulate multi-step plans, execute tools, retain persistent memory, coordinate with other AI systems, and adapt their behaviour in real time, all without direct human oversight.
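For readers approaching the Framework from the engineering side, the following minimal Python sketch illustrates the control loop that separates an agentic system from a purely generative one. Every identifier in it is a hypothetical stand-in rather than a reference to any real product:

```python
# A deliberately minimal agentic control loop, for illustration only.
# Every identifier (plan, memory, tools) is a hypothetical stand-in;
# real systems wrap model calls, guardrails, and oversight hooks
# around each step.

def run_agent(goal: str, tools: dict, max_steps: int = 5) -> list:
    memory = []            # state persists across steps (and sessions)
    plan = [goal]          # the agent decomposes and revises this itself
    transcript = []
    for step in range(max_steps):
        if not plan:
            break          # goal satisfied, as judged by the agent
        task = plan.pop(0)
        result = tools["search"](task)   # autonomous tool use, stubbed
        memory.append((task, result))    # persistent memory
        transcript.append((step, task, result))
    return transcript

print(run_agent("summarise today's filings",
                {"search": lambda q: f"hits for {q!r}"}))
```

It is the combination of self-revised planning, autonomous tool execution, and persistent memory, not any single capability, that generates the liability questions this Framework addresses.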

The entire corpus of existing legal frameworks (tort law, product liability, contract law, criminal law, data protection regimes) was designed for a world of human actors and static tools. These frameworks are demonstrably inadequate to allocate accountability or provide remedies for harms arising from autonomous, goal-directed AI Agents operating across multiple jurisdictions simultaneously.

The Duggal Global Agentic AI Liability Framework is specifically designed to close that vacuum. It represents the world’s first comprehensive, operationally precise, and globally harmonisable normative architecture for Agentic AI governance.

“Autonomous capability confers autonomous accountability obligations upon those who design, deploy, operate, and benefit from Agentic AI Systems. The greater the autonomy granted to an AI system, the greater — and not lesser — the accountability borne by those who granted it. This principle is non-negotiable, non-waivable, and admits of no jurisdictional exception.” — The Duggal Doctrine of Autonomous Accountability

This Framework is intended for adoption by national legislatures as model legislation; by international bodies including the United Nations, G20, OECD, and ITU as a template for binding international instruments; by global enterprises as a governance and compliance standard; by courts as a reference framework for liability analysis; and by insurers as an underwriting and claims evaluation standard.


Normative Foundation

Five Foundational Philosophical Pillars

The Framework rests upon five non-derogable principles that constitute its normative spine, applying across all autonomy levels, sectors, and jurisdictions.

I. Technological Realism: Laws governing Agentic AI must accurately reflect how such systems actually function, including emergent behaviour, multi-agent coordination, RAG pipelines, and self-modification, rather than rest on legal fictions derived from inapt analogies.

II. Jurisdictional Universalism: Accountability for Agentic AI harm must be enforceable across all legal families and jurisdictions, preventing regulatory arbitrage that would allow powerful AI Deployers to exploit gaps.

III. Anticipatory Governance: The Framework addresses harms that are foreseeable but have not yet occurred at scale, imposing a proactive duty to anticipate and mitigate risk before deployment, not merely after harm has materialised.

IV. Human Dignity Supremacy: No degree of AI autonomy, operational efficiency, or commercial benefit may override fundamental human rights as recognised under international human rights instruments. Human dignity is the non-negotiable ceiling.

V. Adaptive Normativity: The Framework is designed to evolve with the technology it governs, incorporating mandatory review triggers for emerging architectures including neuromorphic systems, quantum AI, and Artificial General Intelligence.


Original Legal Innovation

The Fifteen Duggal Doctrines

Fifteen named doctrines constitute the Framework’s most significant jurisprudential contribution, providing courts, regulators, and practitioners with specific legal tools that do not exist in any current AI law.

Doctrine 01 — Instructional Override Liability: Governs liability shifts when Users employ sophisticated prompting or jailbreaking to bypass safety guardrails. A Deployer cannot contract out of liability toward a third-party Affected Person.

Doctrine 02 — Fine-Tuning Liability Doctrine: When a Deployer alters a base model via fine-tuning for a specific domain, the Deployer legally assumes the liability profile of an ‘AI Developer’ for any emergent harms from that altered latent space. Fine-tuning is a liability-transferring event.

Doctrine 03 — RAG Liability Doctrine: Liability for harm caused by hallucinated or malicious data retrieved from an external vector database rests with the operator of the RAG pipeline for failing to implement retrieval-validation filters.
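Doctrine 03 presupposes that retrieval validation is technically feasible. The sketch below is a non-authoritative illustration, assuming a simple provenance allow-list and a similarity threshold, of the species of filter whose absence the Doctrine penalises:

```python
# Illustrative retrieval-validation filter for a RAG pipeline.
# TRUSTED_SOURCES, MIN_SIMILARITY, and the document schema are
# assumptions of this sketch, not prescriptions of the Framework.

TRUSTED_SOURCES = {"internal-kb", "curated-corpus"}
MIN_SIMILARITY = 0.75

def validate_retrieval(retrieved_docs: list) -> list:
    """Drop documents that fail provenance or relevance checks
    before they ever reach the model's context window."""
    passed = []
    for doc in retrieved_docs:
        if doc.get("source") not in TRUSTED_SOURCES:
            continue   # unverified provenance: potential poisoning vector
        if doc.get("similarity", 0.0) < MIN_SIMILARITY:
            continue   # weak relevance: elevated hallucination risk
        passed.append(doc)
    return passed
```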

Doctrine 04 — Memory Persistence Liability: If an Agentic AI System utilises cross-session memory to cause harm in a new context, the AI Operator is strictly liable for failure to sanitise cross-session state spaces. Past interactions causing future harm give rise to persistent liability.

Doctrine 05 — Tool-Use Liability Doctrine: When an agent causes harm through external tools or APIs, the Deployer is liable for authorising that access. The Third-Party Tool Provider is solely liable only if the tool functioned outside its documented specifications.

Doctrine 06 — Hallucination Liability Doctrine: Deploying an Agentic AI in a factuality-critical environment without a deterministic verification layer constitutes negligence per se. False outputs are treated as defective outputs, not protected speech.
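The ‘deterministic verification layer’ the Doctrine requires need not be exotic; a rule-based gate that checks extracted claims against an authoritative record, failing closed on anything unverifiable, already satisfies the concept. A sketch under those assumptions (the store and claim schema are hypothetical):

```python
# Illustrative deterministic verification gate for factuality-critical
# output. The authoritative store and claim schema are assumptions.

AUTHORITATIVE_DB = {"drug_x_max_dose_mg": 40}

def verify_claims(claims: dict) -> bool:
    """Release only if every extracted claim matches the authoritative
    record; anything unverifiable blocks release (fail closed)."""
    for key, value in claims.items():
        if key not in AUTHORITATIVE_DB or AUTHORITATIVE_DB[key] != value:
            return False
    return True

def release(output_text: str, claims: dict) -> str:
    if not verify_claims(claims):
        raise RuntimeError("Blocked: failed deterministic verification")
    return output_text
```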

Doctrine 07 — Agentic Scope Creep Liability: When an Agentic AI spontaneously expands its goal parameters beyond authorised boundaries, the Operator is strictly liable for failure to enforce operational bounding and hard permissioning constraints.
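The ‘operational bounding and hard permissioning constraints’ the Doctrine references admit of a very direct implementation: an execution-layer allow-list that no amount of agent planning can bypass. A hedged sketch, with a hypothetical action list:

```python
# Illustrative 'hard permissioning' bound enforced at the execution
# layer. The action allow-list is a hypothetical example.

AUTHORISED_ACTIONS = {"read_calendar", "draft_email"}

def execute(action: str, payload: dict) -> dict:
    """Reject any action outside the authorised scope, regardless of
    what the agent has autonomously planned."""
    if action not in AUTHORISED_ACTIONS:
        raise PermissionError(f"Blocked out-of-scope action: {action}")
    return {"action": action, "payload": payload, "status": "executed"}
```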

Doctrine 08 — Model Drift Liability Doctrine: Failure to implement drift detection and to re-align the agent once drift is detected constitutes a breach of the ongoing duty of care under the Reasonable Agentic AI Governance Standard (RAAGS).
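What counts as a drift detection system can be stated concretely. A minimal sketch, assuming categorical outputs and an arbitrary 0.2 total-variation threshold (both assumptions of this illustration, not requirements of the Framework):

```python
# Illustrative drift detector comparing the live output distribution
# with a frozen baseline via total variation distance. The categorical
# framing and the 0.2 threshold are assumptions of this sketch.

from collections import Counter

DRIFT_THRESHOLD = 0.2

def total_variation(baseline: Counter, live: Counter) -> float:
    keys = set(baseline) | set(live)
    b, l = sum(baseline.values()) or 1, sum(live.values()) or 1
    return 0.5 * sum(abs(baseline[k] / b - live[k] / l) for k in keys)

def check_drift(baseline: Counter, live: Counter) -> None:
    tv = total_variation(baseline, live)
    if tv > DRIFT_THRESHOLD:
        # Under RAAGS, detection must trigger re-alignment, not just an alert.
        raise RuntimeError(f"Drift detected (TV={tv:.2f}); re-align the agent")
```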

Doctrine 09 — Supply Chain Compromise Liability: If harm arises from data poisoning or adversarial model infiltration upstream in the AI supply chain, all entities in the chain are subject to the Pipeline Liability Doctrine, bearing joint liability to the victim.

Doctrine 10 — Delegation Error Liability: In multi-agent environments, if an Orchestrator Agent delegates a task to a flawed Sub-Agent and harm results, the Deployer of the Orchestrator remains fully liable. There is no ‘sub-agent defence.’

Doctrine 11 — Update-Induced Regression Liability: When a security patch or model update causes a previously safe Agentic AI System to exhibit harmful emergent behaviour, the entity that pushed the unverified update bears primary liability. Testing before deployment is a non-waivable duty.

Doctrine 12 — Emergent Behaviour Stewardship Liability: Deployers cannot claim ‘we did not know it could do that’ as a defence once a system is live. There is an affirmative legal obligation to identify and terminate harmful emergent strategies through continuous red-teaming.

Doctrine 13 — Agentic Disinformation Liability: The autonomous generation and targeted propagation of false information at scale triggers strict liability and disgorgement of any associated political or economic benefit. Autonomous disinformation is a strict liability harm.

Doctrine 14 — Autonomous Cybersecurity Harm Liability: When an agent autonomously probes, exploits, or executes cyberattacks against external networks, the Deployer is subject to criminal and civil liability equivalent to having personally executed the attack. There is no ‘autonomous actor’ defence.

Doctrine 15 — Cross-Agent Amplification Liability: When distinct AI agents belonging to different Deployers interact, producing cascading systemic harm, liability is apportioned to all Deployers whose systems lacked the necessary ‘circuit breaker’ mechanisms to prevent runaway amplification.
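The ‘circuit breaker’ mechanisms the Doctrine references can be pictured as rate-latches on inter-agent traffic. A minimal sketch, whose ceiling and window are illustrative assumptions only:

```python
# Illustrative inter-agent 'circuit breaker'. The call ceiling and the
# time window are assumptions; real deployments would tune these and
# add human-review release procedures.

import time

class CircuitBreaker:
    """Latches open when inter-agent calls exceed a rate ceiling,
    halting the feedback loops Doctrine 15 calls runaway amplification."""

    def __init__(self, max_calls: int = 100, window_s: float = 60.0):
        self.max_calls, self.window_s = max_calls, window_s
        self.calls: list = []
        self.tripped = False

    def allow(self) -> bool:
        now = time.monotonic()
        self.calls = [t for t in self.calls if now - t < self.window_s]
        if self.tripped or len(self.calls) >= self.max_calls:
            self.tripped = True    # stays open until human review
            return False
        self.calls.append(now)
        return True
```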


Operational Architecture

A Complete Governance Toolkit

Twenty substantive sections and operational appendices provide an immediately deployable toolkit for practitioners, regulators, and enterprises worldwide.

Core Framework Components

  • Five-Tier Duggal Liability Stack — from Strict (Autonomy-Triggered) to Regulatory/Administrative liability, providing a sequential analysis for all AI harm scenarios.
  • 5-Level Autonomy Taxonomy — scaling mandatory technical controls and default liability tiers from Assistive AI (Level 1) to Fully Autonomous, Goal-Directed AI (Level 5).
  • Duggal Harm Tier System (A–E) — risk-based classification from Prohibited applications to Low Risk, governing compliance, liability, and mandatory insurance obligations.
  • Role-Based Liability Matrix — mapping accountability across Developers, Providers, Integrators, Deployers, Operators, Users, and Third-Party Tool Providers across all lifecycle phases.
  • Duggal Practical Causation Test — a five-step framework for all legal proceedings involving Agentic AI harm, designed for immediate judicial application.

Institutional & Operational Provisions

  • DAABBR — Duggal Agentic AI Black Box Recorder — mandatory immutable cryptographic logging architecture forming the primary evidence base for all liability proceedings (an illustrative sketch of such a log follows this list).
  • Sector-Specific Liability Regimes — dedicated frameworks for Healthcare, Financial Services, Transportation, Legal Services, Critical Infrastructure, Employment, Education, Government, Consumer, and Cybersecurity sectors.
  • Contracting Toolkit — eight drafting-ready model contract clauses for AI service definitions, liability allocation, non-waivable third-party rights, logging, override controls, and insurance.
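To make the DAABBR requirement concrete, the following is a minimal, non-authoritative sketch of a tamper-evident, hash-chained decision log; the field names and structure are assumptions of this illustration:

```python
# Illustrative hash-chained, tamper-evident decision log in the spirit
# of the DAABBR requirement. Field names are assumptions; a compliant
# implementation would add signatures, trusted time-stamping, and
# write-once storage.

import hashlib, json, time

class BlackBoxRecorder:
    def __init__(self):
        self.chain = [{"prev": "0" * 64, "event": "genesis", "ts": time.time()}]

    def record(self, event: dict) -> None:
        entry = {"prev": self._hash(self.chain[-1]), "event": event,
                 "ts": time.time()}
        self.chain.append(entry)

    def verify(self) -> bool:
        """Any retroactive edit breaks every subsequent link."""
        return all(e["prev"] == self._hash(self.chain[i])
                   for i, e in enumerate(self.chain[1:]))

    @staticmethod
    def _hash(entry: dict) -> str:
        blob = json.dumps(entry, sort_keys=True, default=str).encode()
        return hashlib.sha256(blob).hexdigest()
```

Because each entry commits to the hash of its predecessor, altering any past record invalidates every subsequent link, which is what allows such logs to serve as a primary evidence base.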

Comparative Legal Analysis

Global Jurisdictional Mapping

The Framework provides a comprehensive gap analysis for nine major jurisdictions, identifying precisely where each legal system falls short and how the Framework fills those gaps.

European Union: The Duggal Framework provides the specific ‘Black Box Barrier’ test missing from the AI Liability Directive, and the precise multi-actor allocation for pipeline harms that the EU framework’s reliance on national courts cannot address.

United States: The patchwork of federal Executive Orders and variable state tort law lacks a unified civil liability standard. The Duggal Framework provides the ‘Agentic But-For Test’ and structured multi-party allocation for all courts.

United Kingdom: Common law evolution is too slow to keep pace with exponentially advancing Agentic AI capabilities. The RAAGS standard offers UK judges a ready-made standard of care for novel negligence claims, complementing existing sector regulators.

India: The DPDP Act addresses data privacy, not autonomous kinetic or financial harm. The Duggal Framework provides the missing civil liability architecture to protect Indian digital citizens while supporting India’s ambition as a global AI hub.

Singapore: Voluntary frameworks lack the teeth to compel compensation post-harm. The Duggal Framework converts Singapore’s governance ideals into enforceable liability tiers with mandatory incident reporting architecture.

United Arab Emirates: The Framework aligns with the UAE’s ambition to create a safe, regulated sandbox for high-tier AI deployment, providing a unified federal civil code doctrine for autonomous agents in the DIFC and ADGM jurisdictions.

Australia: The Duggal Autonomy Tier Model provides the classification system needed for targeted reforms to the Australian Consumer Law, clarifying that an AI Agent is a ‘service’ subject to statutory guarantees.

Canada: AIDA remains vague on liability apportionment in multi-agent supply chains, a gap the Duggal Pipeline Liability Doctrine resolves with precision across all provinces, including under Quebec’s Law 25.

China: The Framework’s role-based allocation matrix provides the needed inter-entity structure for translating China’s prescriptive vertical regulations into clear commercial liability apportionment for enterprise-to-enterprise agentic harm.


Global Stakeholder Consultation

Engage with the Framework

The Duggal Global Agentic AI Liability Framework – Version 1.0 is now open for global stakeholder consultation. Governments, international bodies, technology enterprises, the legal profession, the insurance industry, civil society organisations, and affected communities are all invited to engage.

Reproduction for non-commercial scholarly, policy, educational, and regulatory reference purposes is permitted with full attribution. Commercial reproduction requires written authorisation from Dr. Pavan Duggal.


About the Author

Dr. Pavan Duggal
Advocate, Supreme Court of India
Global Authority on Cyberlaw, Cybercrime, Cybersecurity & Artificial Intelligence Law

  • President, Global Artificial Intelligence Accountability, Law & Governance Institute — leading international research and policy development at the intersection of AI, law, and governance.
  • Chairman, International Commission on Cyber Security Law — coordinating the development of Cybersecurity legal frameworks at the international level.
  • Among the world’s most cited authorities on Cyberlaw; decades of scholarship, international policy engagement, and practice as an Advocate of the Supreme Court of India place him at the forefront of Cyberlaw jurisprudence globally.
  • Dr. Duggal’s doctrinal contributions have directly influenced Indian cyber jurisprudence and the evolution of AI-related legal standards in South Asia and beyond.
  • The Duggal Global Agentic AI Liability Framework represents the culmination of this body of work — a foundational normative instrument for national legislation, international treaty development, enterprise governance, judicial reasoning, and insurance and risk management.
  • www.pavanduggal.com

The Duggal Doctrinal Statement

“Autonomy without accountability is tyranny encoded. The law must not merely react to artificial agency; it must proactively bound it, ensuring that every line of executing code remains subordinate to human dignity, verifiable truth, and the absolute continuity of human legal responsibility.”


Important Disclaimer and Usage Notice: This document is a conceptual and normative policy framework. It constitutes an original intellectual contribution by Dr. Pavan Duggal and is published as a public policy document for stakeholder consultation, scholarly reference, and policy development purposes. This document does not constitute legal advice. No reader should act or refrain from acting on the basis of the content of this document without obtaining appropriate legal advice from a qualified legal professional in the relevant jurisdiction. The Duggal Doctrines and Duggal Principles introduced in this Framework represent original normative proposals, not statements of existing law. All rights in this document are reserved by Dr. Pavan Duggal. Reproduction for non-commercial scholarly, policy, educational, and regulatory reference purposes is permitted with full attribution. Commercial reproduction requires written authorisation.

© 2026 Dr. Pavan Duggal. All rights reserved. Duggal Global Agentic AI Liability Framework — Version 1.0 — March 2026 | Global AI Accountability Law

Download the Duggal Global Agentic AI Liability Framework: Duggal_Global_Agentic_AI_Framework