AI systems are evolving from simply recommending products to shoppers to making purchases on their behalf automatically, creating a new reality for consumers.
Payment networks are experimenting with agentic commerce, where software buys on behalf of humans. Visa confirmed it has completed secure AI-initiated transactions with partners and expects 2026 to be a pivotal year for agent-driven commerce. Mastercard is building similar infrastructure through its Agent Pay initiative.
Consumer interest is high: a survey by PSE Consulting found that 85% of UK shoppers planning AI-assisted holiday shopping would trust the system to place orders and pay on their behalf. That trust is giving companies room to experiment, but it also creates a new set of questions about responsibility and consent.
Monica Eaton, Founder and CEO of Chargebacks911, said: “The card wasn’t stolen. The merchant didn’t make a mistake. The agent did exactly what it was told to do. But the customer still says, ‘I didn’t want that.’ That is a very different situation.”
What Happens When The Purchase Feels Wrong?
Agentic commerce introduces a scenario that is distinct from fraud or human error. An AI might renew a subscription automatically, reorder items no longer needed, select a cheaper brand than expected, or book travel that fits a calendar but not personal preference. Every transaction may follow the rules of authentication, yet leave the customer feeling the purchase is unwanted.
“The payments industry has always treated the click as the signal of intent,” Eaton explained. “Agentic commerce removes the click. So now we need a new way to prove intent when a human was not directly involved.”
Some platforms are experimenting with pricing models tied to AI-driven outcomes, rather than manual transactions. Eaton warned that this could lead to more purchases happening quietly in the background. “If agents start buying things quietly in the background, customers will see more charges they do not recognise or do not agree with. And when that happens, the first reaction is often a dispute.”
Who Is Responsible For AI-Made Decisions?
Donald Kossmann, Chief Technology Officer at Chargebacks911, highlighted the broader implications. “In an agentic commerce environment, purchases will increasingly be made by AI systems rather than people. These agents will compare options, select suppliers and execute transactions automatically. That will compress the buying journey, but it will also expand the dispute surface. When an AI places an order that is incorrect, unauthorised or simply unwanted, who is responsible: the consumer, the merchant or the agent?”
He added: “Investors worry that AI will replace specialist software. The bigger disruption may be what happens when machines start buying from one another. Private equity’s anxiety about AI’s impact on software is understandable. If intelligent agents can replicate specialist tools at a fraction of the cost, then the investment logic behind many niche SaaS businesses does look fragile. But the more immediate disruption may not be to software itself, but to trust.”
Kossmann argued that disputes will test the resilience of platforms and merchants. “As automated purchasing scales, fraud and chargebacks are likely to rise before the rules and standards catch up. That puts pressure not just on merchants, but on the platforms and lenders exposed to them.”
Can Trust Become A Competitive Advantage?
Kossmann said trust is the new differentiator in AI commerce. “The most defensible software businesses may not be those with the most features, but those with the strongest embedded payments and dispute performance. AI agents will naturally favour merchants and platforms with lower fraud rates, cleaner settlement histories and reliable refund processes. In other words, in an agentic economy, dispute performance becomes a distribution advantage. The real question for investors is not only whether AI will replace software, but which platforms can prove they are trusted enough for machines to buy from.”
Chargebacks911 recommends that merchants establish clear permissions for AI agents, enhance visibility of transactions, and maintain detailed evidence of AI actions. The technology may be ready, Eaton said, but the human trust element will determine whether agentic commerce is adopted without backlash. “Agentic commerce can work, but only if the industry keeps the customer’s intent at the centre of the transaction. If that link breaks, chargebacks become the safety valve.”
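In practice, those three recommendations (clear permissions, transaction visibility and an evidence trail) could take the shape of a permission profile that every agent purchase is checked against. The sketch below is illustrative, not any vendor's actual API: the class name, fields and thresholds are assumptions chosen to show the idea of enforcing spend limits and logging each decision as future dispute evidence.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AgentPermissions:
    """Hypothetical permission profile for an AI purchasing agent."""
    max_order_value: float               # hard ceiling per transaction
    allowed_categories: set[str]         # categories the agent may buy from
    requires_confirmation_above: float   # orders above this need human sign-off
    audit_log: list[dict] = field(default_factory=list)

    def authorise(self, category: str, amount: float) -> str:
        """Return 'approved', 'needs_human_confirmation' or 'declined',
        and record the decision for later dispute evidence."""
        if category not in self.allowed_categories or amount > self.max_order_value:
            decision = "declined"
        elif amount > self.requires_confirmation_above:
            decision = "needs_human_confirmation"
        else:
            decision = "approved"
        # Timestamped record of every agent action: the "detailed evidence"
        # a merchant could produce when a customer says "I didn't want that"
        self.audit_log.append({
            "time": datetime.now(timezone.utc).isoformat(),
            "category": category,
            "amount": amount,
            "decision": decision,
        })
        return decision

# Example: an agent allowed to buy groceries up to £100, human sign-off above £50
perms = AgentPermissions(max_order_value=100.0,
                         allowed_categories={"groceries"},
                         requires_confirmation_above=50.0)
print(perms.authorise("groceries", 30.0))   # approved
print(perms.authorise("travel", 20.0))      # declined
```

The point of the sketch is the audit log as much as the gate itself: every decision, including declines, leaves a record a merchant could present if a transaction is later disputed.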