AI Agents and Product Liability: Who Pays When the Agent Books the Wrong Flight?

I have been collecting agent-error cases over the past nine months, and the docket has now passed thirty filings. None of them have produced a published merits decision yet — most are at the pleading stage, several have been compelled to arbitration, two have settled — but a doctrinal shape is starting to emerge.

This post takes a stylized hypothetical (an AI agent that books the wrong flight) and walks through the legal theories. None of the concrete cases I have is (yet) a clean flight-booking case; they are variations on the theme: a calendar agent that double-booked a medical appointment, a financial-advice agent that made trades the user disputes authorizing, a real-estate agent that sent out an offer the user says was never meant to be contractual. The conceptual issues are the same.

The hypothetical

A consumer uses an AI agent product, sold by a SaaS company as a "general assistant." The user instructs: "Book me on the cheapest flight from Boston to San Jose this Friday afternoon." The agent books a $150 flight from BOS to SJO — the airport code for San José, Costa Rica — when the user meant SJC, San Jose, California. The user discovers this two hours later. The flight is non-refundable. The user demands the SaaS company reimburse the $150 plus consequential damages. The SaaS company refuses, citing its terms of service.

Where does the law go?

Theory 1: Contract

The default starting point. The SaaS company's terms of service almost certainly disclaim liability for AI agent errors and cap any liability at the subscription fee. If the terms are validly incorporated and not unconscionable, this is the answer. End of analysis.

What I am seeing in actual filings: terms-of-service defenses succeed about half the time. The unsuccessful cases tend to involve either (a) consumers who never directly accepted terms (terms-of-service inheritance through bundled apps), (b) jurisdictions with strong unconscionability traditions, or (c) damage exposure that materially exceeds the subscription fee in ways that produce a meaningful unconscionability argument.

Practical takeaway: terms of service are still doing most of the work. But they are not doing all of the work, and the cases where they fail are the cases worth thinking about.

Theory 2: Negligent design / failure to warn

The plaintiff argues that the agent product was negligently designed because it failed to confirm the destination when the user's input was ambiguous. Mapping an ambiguous city name to a single airport code is a foreseeable error pattern that a reasonably designed agent would have flagged for confirmation.

Three doctrinal hooks:

  1. Negligent design: the agent lacked a reasonable confirmation step for ambiguous, high-consequence instructions.
  2. Failure to warn: the vendor knew, or should have known, of recurring error patterns and did not disclose them to users.
  3. Negligent undertaking: having undertaken to perform a service for the user, the company owed a duty to perform it with reasonable care.

The interesting move is that none of these theories require treating the agent as a "product" in the strict-liability sense. They are negligence theories that apply to the company that designed and operated the agent.
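The negligent-design argument is concrete enough to sketch. Below is a minimal illustration in Python of the confirmation gate the plaintiff says was missing; the airport table and function names are hypothetical, not drawn from any real booking API:

```python
# Hypothetical sketch of a destination-confirmation gate.
# Airport data and names are illustrative only.

AIRPORTS = {
    "boston": ["BOS (Boston Logan, US)"],
    "san jose": ["SJC (San Jose, California, US)",
                 "SJO (San Jose, Costa Rica)"],
    "chicago": ["ORD (Chicago O'Hare, US)",
                "MDW (Chicago Midway, US)"],
}

def resolve_destination(user_text):
    """Return (airport, candidates); airport is None when ambiguous."""
    candidates = AIRPORTS.get(user_text.strip().lower(), [])
    if len(candidates) == 1:
        return candidates[0], candidates
    return None, candidates  # zero or multiple matches: do not book

def plan_booking(user_text):
    airport, candidates = resolve_destination(user_text)
    if airport is not None:
        return f"BOOK {airport}"
    if candidates:
        # Foreseeable ambiguity: surface it instead of guessing.
        return "CONFIRM which destination: " + "; ".join(candidates)
    return "ASK: destination not recognized"
```

An ambiguous city name produces a confirmation prompt rather than a booking; the design-defect claim is, in essence, that nothing like this check existed.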

Theory 3: Strict products liability

Whether AI agents are "products" for purposes of strict products liability under Restatement (Third) § 1 is genuinely unsettled. The traditional formulation requires a tangible product with a manufacturing or design defect. Software has historically been on the bubble; the case law splits geographically.

The first appellate decision treating an AI agent as a "product" for strict-liability purposes will be a watershed. I do not expect it imminently — the procedural posture of the cases I am tracking does not put any of them on a fast appellate path — but it is coming.

What courts will probably converge on: AI agents that operate within consumer-facing software products will be treated as components of those products, subject to the products-liability framework that applies to software generally in the relevant jurisdiction. So in California (where software is generally a product), the agent would be subject to strict liability; in many other states, only negligence theories would apply. This is not a satisfactory state of the law, but it is what we have.

Theory 4: Agency law

The plaintiff argues that the SaaS company is the "principal" of the AI agent, which is an "agent" in the legal sense, and that the company is liable for actions the agent takes within the scope of its granted authority.

This theory has more juice than I would have predicted a year ago. Several recent law-review articles have proposed an "AI as agent" doctrinal framework, and I am starting to see it cited at the pleading stage. The intuition is straightforward: if a human assistant booked the wrong flight at the consumer's instruction, agency-law principles would govern liability between the assistant and the agency that employs them. Why should AI be different?

The answer is that AI is, for now, very different: there is no two-way relationship of consent between agent and principal of the kind agency doctrine assumes. But the framework still has analytical utility, and several courts are reaching for it because the alternatives fit awkwardly. I expect the doctrine to develop quickly over the next two years.

Theory 5: Statutory consumer protection (UDAP)

Many states have unfair-and-deceptive-acts-and-practices statutes that authorize consumer claims for material misrepresentations or deceptive practices. Where an AI agent product is marketed in ways that overpromise capability, UDAP claims can supplement common-law theories.

The advantage to plaintiffs: lower burden of proof, often statutory damages, sometimes attorney's fees. The disadvantage: most UDAP statutes have been narrowed by recent state-supreme-court interpretations. UDAP is supplementary, not foundational.

What I am watching

Three things in the next twelve months:

  1. The first appellate decision in this space. The cases at the trial level are too procedurally tangled to be confidence-inspiring; we need an appellate court to say something on the merits.
  2. The federal Frontier AI Safety legislation that is moving through the Senate. Drafts include a section on AI agent disclosure obligations that, depending on final shape, could federalize part of this discussion.
  3. Insurance market response. Insurers writing E&O coverage with AI agent endorsements are setting underwriting expectations that are starting to function like a private regulatory regime. The standard endorsements being developed at the largest carriers are going to do a lot of work.

Practical guidance for vendors

For our defense-side clients building agent products, the recurring themes from the filings:

  1. Require a confirmation step before irreversible or high-cost actions, especially where user input is ambiguous.
  2. Obtain direct, documented acceptance of your terms of service; terms "inherited" through bundled apps are the ones failing.
  3. Keep liability caps in some proportion to foreseeable damage exposure, or expect unconscionability arguments.
  4. Market capability conservatively; overpromising is what converts a contract dispute into a UDAP claim.
  5. Watch the E&O endorsement language your insurer is developing; it is becoming a de facto design standard.

The doctrine is unsettled, but the operational implications are knowable. Build accordingly.
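
One knowable implication can be sketched as code. Several of the disputes above turn on whether the user authorized the action, so a minimal pattern (Python; all names hypothetical, not from any real product) is to gate irreversible actions behind explicit confirmation and keep a contemporaneous record of what the user approved:

```python
# Hypothetical sketch: gate high-consequence agent actions behind
# explicit confirmation and log the decision. Names are illustrative.
from dataclasses import dataclass, field
from datetime import datetime, timezone

IRREVERSIBLE = {"book_flight", "place_trade", "send_offer"}

@dataclass
class ActionGate:
    audit_log: list = field(default_factory=list)

    def execute(self, action, details, user_confirmed):
        # Record every attempt, confirmed or not: the log is the
        # evidence you will want when authorization is disputed.
        self.audit_log.append({
            "time": datetime.now(timezone.utc).isoformat(),
            "action": action,
            "details": details,
            "confirmed": user_confirmed,
        })
        if action in IRREVERSIBLE and not user_confirmed:
            return "BLOCKED: explicit user confirmation required"
        return f"EXECUTED: {action} ({details})"
```

The design choice worth noting: the log entry is written before the block/execute decision, so a refused action leaves the same paper trail as a completed one.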