Apple’s age verification rollout has already created some friction for users in certain markets. Do you think this points to a deeper issue in how age verification systems are typically designed and rolled out at a global level?
I think what this story shows is that age verification cannot be treated as a universal flow. The standards that feel intuitive in one market – like relying heavily on a driver’s license or credit card in the US – can create real friction in another.
If you want these systems to work, you have to design them around local reality: the documents people actually have, the ways they actually prove age, and the norms that actually exist in that country.
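Designing around local reality often starts with something as simple as a per-market configuration of which documents count as valid age proof. A minimal sketch, assuming purely illustrative market codes and document lists (these are not Apple's or any vendor's actual mappings):

```python
# Hypothetical per-market configuration of accepted age-proof documents.
# Market codes and document names are illustrative assumptions only.
ACCEPTED_PROOFS = {
    "US": ["drivers_license", "passport", "credit_card"],
    "DE": ["personalausweis", "passport"],
    "IN": ["aadhaar", "passport"],
}

# Fallback for markets without a tailored list.
DEFAULT_PROOFS = ["passport"]

def accepted_proofs(market: str) -> list[str]:
    """Return the age-proof documents accepted in a given market,
    falling back to a conservative default when none is configured."""
    return ACCEPTED_PROOFS.get(market, DEFAULT_PROOFS)
```

The point of the sketch is that localisation lives in data, not in code paths: adding a market means adding an entry, not rebuilding the flow.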
Why do you think so many age verification systems are still built around assumptions that don’t reflect how people actually prove their age in different countries? What risks does this create?
A big part of it comes down to how these systems are developed and scaled. Many are initially built around the dominant identity frameworks in a few key markets, and then expanded globally without fully adapting to local differences.
The challenge is that identity isn’t standardised. The documents people hold, the way they prove their age, and even their level of access to digital credentials can vary significantly from one country to another.
The risk is that you end up with systems that technically meet regulatory requirements but don’t work in practice for a large portion of users. That can lead to legitimate users being excluded, increased drop-off in verification flows, and ultimately a loss of trust in the platform itself.
Verification is often a frustrating, high-friction experience that can turn users away altogether. At what point does that friction stop being a conversion issue and start becoming a genuine access barrier for legitimate users?
Companies often talk about friction like it is just a conversion problem, but in flows like this it quickly becomes an access problem as well.
The moment a legitimate adult user reaches a dead end because the system has been designed around assumptions that don’t apply to them, that’s no longer just a UX challenge – it’s a failure.
The goal should be to protect minors without making eligible users feel locked out of services or settings they should reasonably be able to access.
When age verification systems don’t align with real-world user behaviour, what kind of downstream impact does that have on trust, usability, and overall product experience?
When systems don’t align with real-world behaviour, the impact tends to show up very quickly in both usability and trust.
From a user perspective, it creates confusion and frustration – people are being asked to verify themselves in ways that don’t feel intuitive or accessible to them. That often leads to drop-off, or users abandoning the process altogether.
Over time, that friction starts to erode trust. If users feel a system doesn’t reflect their reality or is unfairly blocking access, they’re less likely to engage with it in the future. Ultimately, it turns what should be a simple safeguard into a negative part of the overall product experience.
From your experience working across different markets, what does “good” age assurance look like in practice? Are there particular approaches or models that tend to work better than others?
Across different markets, the lesson is consistent: the best systems give users more than one or two valid paths.
That can mean broader document support, locally relevant credentials, and in some cases lower-friction options such as age estimation as part of the mix. It’s rarely about relying on a single method.
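That "mix of methods" idea can be sketched as a simple fallback chain: try the lowest-friction option first, and only fail once every path is exhausted. This is an illustrative sketch, not any vendor's actual flow; the method names and ordering are assumptions.

```python
from typing import Callable, Optional

def verify_age(methods: list[Callable[[], Optional[bool]]]) -> bool:
    """Try each verification method in turn. A method returns True/False
    on a definitive result, or None when it isn't usable for this user
    (e.g. they lack that document). Only fail once every path is tried."""
    for method in methods:
        result = method()
        if result is not None:
            return result
    # No usable path: the product should surface an alternative
    # (e.g. manual review) rather than a dead end.
    return False

# Illustrative methods, ordered lowest-friction first.
def age_estimation() -> Optional[bool]:
    return None  # e.g. the estimation model wasn't confident enough

def national_id() -> Optional[bool]:
    return None  # e.g. the user holds no such document

def passport() -> Optional[bool]:
    return True  # definitive pass
```

The design choice the sketch encodes is the one described above: no single method is a gatekeeper, so a user who can't satisfy one path isn't automatically excluded.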
Good age assurance is about balancing effectiveness with usability, making sure the system is robust enough to meet regulatory requirements, while still being accessible to the widest possible group of legitimate users.
As regulation continues to evolve and AI plays a bigger role in identity and age verification, how can companies strike the right balance between compliance, user experience, and maintaining trust?
As regulation evolves, there's a natural tendency for companies to focus heavily on compliance, but that can't come at the expense of user experience or trust. The challenge is treating all three as equally important, rather than as trade-offs against one another.
AI is starting to expand what’s possible in terms of age verification, particularly when it comes to offering lower-friction options like age estimation. But with that comes a responsibility to be transparent about how these systems work and how user data is handled.
The companies that get this right will be the ones that build flexibility into their approach, combining multiple verification methods, adapting to local requirements, and designing with the user in mind from the outset. That’s what ultimately creates systems that are not just compliant, but trusted and widely usable.