We’ve been following Visa, Mastercard and American Express as, over recent months, they build payment systems for AI agents that can shop and complete transactions without a person clicking “buy” at checkout.
Mastercard and Santander completed Europe’s first live end-to-end AI agent payment inside a regulated banking framework in March this year. Visa expanded its Agentic Ready programme globally in April. American Express also launched a developer kit for agentic commerce and promised cover for mistaken purchases made through registered AI agents.
But Chargebacks911 says the payments infrastructure underpinning those transactions is not ready for the disputes that could follow. The company says banks and merchants are building the buying tools first and leaving the dispute systems behind.
Why Are These Systems Not Ready?
According to Mastercard’s 2025 State of Chargebacks report, using research from Datos Insights, global chargeback volume is forecast to grow 24% between 2025 and 2028, reaching 324 million transactions a year before the full effect of agentic commerce arrives.
The Consumer Bankers Association also said AI agents could overwhelm existing dispute systems if they make repeated mistakes such as ordering duplicate products or misunderstanding instructions.
The problem is that existing chargeback systems were built around a human making a payment decision. AI agents make purchases using permission given earlier, often without asking the customer again at the exact moment of payment.
Monica Eaton, founder and chief executive of Chargebacks911, said, “The card networks have built dispute frameworks over decades around one idea: the cardholder either did or did not authorise this transaction.
“In an agentic world, that question doesn’t have a clean answer. Who authorised it? The consumer gave the agent permission to act. Did that permission cover this specific transaction? That depends on what the agent was told, what it inferred, and whether the merchant can prove any of it. But most merchants cannot.”
What Happens When AI Shopping Behaviour Looks Suspicious?
Traditional fraud systems study human behaviour. Banks and merchants check things like shopping habits and transaction patterns to work out if a payment looks genuine. Chargebacks911 says AI agents create a different type of activity that can confuse those systems.
An AI agent does not browse or behave like a person. It acts methodically and often repeats actions in a highly structured way. That can look similar to automated fraud or bot traffic, even when the transaction is legitimate.
Chargebacks911 says this creates trouble for both sides of a dispute. Merchants may struggle to prove a purchase was authorised, while banks may struggle to separate genuine fraud from a valid AI transaction.
Donald Kossmann, chief technology officer at Chargebacks911, said, “The industry spent years training fraud and dispute systems to read human behavior. Agentic commerce doesn’t produce human behaviors; it produces something more consistent, more data-rich, and more auditable, but only if merchants have built the right infrastructure to capture it.
“The merchants who understand this early will have a structural advantage. Their dispute rates will fall, their recovery rates will rise and their evidence quality will be higher than anything legacy systems can produce.”
The company says merchants need detailed records of what an AI agent was allowed to do, what instructions it received and what actions it took during a purchase journey. Chargebacks911’s Unified Dispute Management System records permission trails, transaction activity and timestamped actions across the payment process.
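To make the idea concrete, here is a minimal sketch in Python of what such a permission trail and timestamped activity log might look like. This is an illustration only, not Chargebacks911’s actual schema or API; the field names (`mandate`, `allowed_categories`, `per_transaction_limit`) are assumptions.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical structures -- not Chargebacks911's actual schema.

@dataclass
class AgentMandate:
    """What the customer authorised the agent to do, recorded up front."""
    agent_id: str
    customer_id: str
    allowed_categories: set[str]   # e.g. {"groceries"}
    per_transaction_limit: float   # spend cap per purchase
    expires_at: datetime           # when the delegated authority lapses

@dataclass
class AuditTrail:
    """Timestamped record of every agent action, kept as dispute evidence."""
    events: list[dict] = field(default_factory=list)

    def log(self, action: str, **details) -> None:
        self.events.append({
            "at": datetime.now(timezone.utc).isoformat(),
            "action": action,
            **details,
        })

def authorise_purchase(mandate: AgentMandate, trail: AuditTrail,
                       category: str, amount: float) -> bool:
    """Check a proposed agent purchase against the mandate and log the decision."""
    now = datetime.now(timezone.utc)
    approved = (now < mandate.expires_at
                and category in mandate.allowed_categories
                and amount <= mandate.per_transaction_limit)
    trail.log("purchase_attempt", agent=mandate.agent_id,
              category=category, amount=amount, approved=approved)
    return approved
```

The point of the sketch is the pairing: the mandate records what the agent was told it could do, and every attempt, approved or not, lands in the timestamped trail, so a merchant can later show whether a disputed charge fell inside the delegated authority.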
Could Small Transaction Mistakes Become Expensive Disputes?
Chargebacks911 believes many future disputes will begin with customers checking their statements weeks after an AI agent completed a purchase.
Consumers may not recognise a transaction completed automatically through delegated authority. An AI agent could reorder something or buy the wrong product size, or even misread vague instructions from a user.
Eaton said, “Visa, Mastercard and Amex are doing exactly what they should be doing, enabling the front end of agentic commerce to function. The question nobody is asking loudly enough is what happens at the back end, when a consumer looks at their statement and doesn’t recognise a charge that their agent authorized three weeks ago. That dispute is coming and the merchants who have built the evidence trail will resolve it in minutes. The ones who haven’t will lose the revenue, pay the fee, and have no way to fight it.”
Chargebacks911 recommends three actions for merchants, banks and payment providers. The company says businesses should:
1. Record detailed permission settings when customers hand authority to AI agents
2. Log agent activity throughout the transaction process
3. Rewrite fraud rules built around human shopping behaviour
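The third step can be pictured with a toy example. A legacy velocity rule flags any burst of rapid, structured purchases as bot traffic; an agent-aware version lets the burst through when the transaction carries a verifiable delegated-authority reference. The threshold and the `mandate_id` field below are assumptions for illustration, not any card network’s specification.

```python
# Illustrative only: a simplified velocity rule adjusted for agent traffic.
# The threshold and "mandate_id" field are assumptions, not a network spec.

def flag_transaction(txn: dict, recent_count: int,
                     max_velocity: int = 5) -> str:
    """Classify a transaction under a fraud rule that knows about AI agents.

    recent_count is the number of purchases seen from this account in a
    short window. A legacy rule would flag any burst as bot traffic; here
    a declared, auditable agent purchase is allowed through instead.
    """
    if recent_count <= max_velocity:
        return "allow"              # normal human-speed activity
    if txn.get("mandate_id"):       # burst, but tied to delegated authority
        return "allow_with_audit"   # let it through, keep the evidence
    return "review"                 # undeclared burst: possible bot fraud
```

The design choice mirrors the article’s argument: the same methodical, repetitive pattern is either fraud or legitimate agentic commerce depending on whether an evidence trail exists, so the rule branches on the presence of that evidence rather than on the behaviour alone.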