AI diagnostic tools have moved from pilot programmes to standard practice across NHS settings faster than most people outside the health system realise.
According to NHS England, AI systems that flag cancer risk from lung scans have been rolled out across 64 NHS trusts, funded by £21 million, helping speed up the analysis of X-ray and CT results. Pilots at Guy’s and St Thomas’ combine AI risk stratification with robotics for biopsies. In the US, the FDA launched real-time AI pilots in April 2026 for clinical trials with AstraZeneca and Amgen, designed to shorten trial durations by 20 to 40% through continuous data monitoring.
In the vast majority of these assessments, patients have not been told that an algorithm played any role in their care. Consent for data use is often only implied, and current NHS and FDA frameworks do not routinely require disclosure of AI involvement in clinical decisions. The clinician retains final authority, and the legal and regulatory position is largely that this is sufficient.
But “sufficient” and “right” are different questions. A growing body of research suggests that patients who are proactively informed about AI involvement show higher levels of trust and higher acceptance of AI-assisted diagnoses. Studies cited by patient advocacy groups put the uplift at 34% on trust and 43% on acceptance. At the same time, clinicians and regulators warn that mandatory disclosure for every algorithmic input could overwhelm care pathways without improving outcomes, and that the line between “AI involvement” and routine software use is harder to draw than it appears.
As NHS AI adoption accelerates and the FDA indicates a shift toward greater patient notification, the question of whether patients have a right to know when AI has influenced their diagnosis has moved beyond theory. The debate is live, actively contested and has no clean answer.
The Transparency Debate
The case for disclosure rests on two pillars: patient autonomy and error accountability.
AI diagnostic tools, however accurate on average, aren’t infallible. They can embed bias from training data, perform poorly on demographic groups underrepresented in those datasets, and fail in ways that a human clinician might catch precisely because the human would be uncertain where the algorithm was confident. A patient who knows AI was involved can ask questions, seek second opinions and make informed decisions about their own care. A patient who has no idea has none of those options.
The case against routine disclosure is more pragmatic. AI is already embedded across healthcare systems at the level of imaging software, triage tools and administrative systems. Disclosing every instance of algorithmic involvement would require a level of documentation and communication that current clinical workflows struggle to support, and could paradoxically dilute attention away from the high-stakes diagnostic decisions where disclosure actually matters.
Guidance from the General Medical Council (GMC) and Medical Protection currently targets cases where patient data is shared with third parties, rather than mandating disclosure for all AI-assisted decisions.
We put the question to a group of clinicians, ethicists, patient advocates and HealthTech founders to find out where they actually stand.
Our Experts:
- Professor Susan Shelmerdine, Consultant Radiologist, NHS and UCL
- Rahul Shivkumar, Co-Founder, Assured Health
- Marc Fernandez, Chief Strategy Officer, Neurologyca
- Matt Flenely, Head of Product and Marketing, Datactics
Professor Susan Shelmerdine, Consultant Radiologist, NHS and UCL
“From a legal standpoint, doctors are already obliged to provide information about all material risks of a treatment decision and must disclose any risk to which a reasonable person in the patient’s position would attach significance. This could be interpreted to mean that if AI has the potential to cause material risk of harm, it should be disclosed and consent obtained for its use. The rules already exist, but how they apply to AI could be made clearer for healthcare professionals more broadly.
“The public on the whole, based on national surveys, mostly support being informed and asked for their consent for AI to be used in their healthcare. Transparency and human oversight remain vital to public acceptance of AI in medical diagnosis and treatment, which also includes the use of their data for creating or developing future AI tools.
“The issue is that it is not always possible to selectively apply AI software to some health records and not others at present. Sometimes the person using the AI is not the person with the patient-facing relationship. I am a radiologist, and many hospitals use AI to assist in imaging evaluation. We do not always meet the patients, we do not always know what they told their specialist, and we certainly cannot remove the AI interpretation from selected imaging tests. So how to actually carry out the wishes from informed consent may be challenging, and in some situations currently it is a case of relaying information on ‘this is what happens here’. There is clearly room for development and refinement in our clinical pathways.”
Rahul Shivkumar, Co-Founder, Assured Health
“Patients should know when AI has influenced their diagnosis, but the more important point is how that information is communicated and whether it actually helps them understand what shaped the decision. In healthcare, clinicians already interpret complex tools and data into something a patient can engage with, so AI should be treated the same way, where the expectation is not just disclosure but clarity around its role in the process.
“A requirement to disclose AI involvement is a reasonable direction, but it risks becoming a formality if it is not paired with context. Simply stating that AI was used does not explain how much it influenced the outcome or how it interacted with clinical judgement, and the standard should be closer to how other complex inputs are explained today, where the goal is to make the decision-making process interpretable rather than just technically transparent.
“The larger risk sits with non-disclosure because it adds to a broader pattern where important decisions are already being made in parts of the system that patients do not see, whether that is diagnosis, access or billing, and over time that lack of visibility erodes trust. Disclosure on its own does not solve that, but it is a necessary step toward making these systems more understandable, which ultimately supports both patient engagement and the long-term adoption of AI in a way that holds up under scrutiny.”
Marc Fernandez, Chief Strategy Officer, Neurologyca
“The necessity of patient disclosure should depend on how the technology is used. AI can serve as a support tool providing pattern recognition to assist a clinician, or it can act as a decision-maker when automated triage or risk scoring occurs without meaningful human oversight. That distinction matters because disclosure becomes more important as AI moves closer to making decisions on its own.
“Patients rarely ask about the brand of an MRI machine because those tools have decades of standardised certification and clear accountability. AI lacks that level of institutional trust. The absence of a universal certification regime makes transparency important for patient confidence. Failing to disclose the influence of an algorithm when it is making automated decisions risks a public backlash that could slow adoption.
“The industry should focus on disclosing how these systems are validated and monitored, along with where accountability sits in the decision-making process. Eventually, AI will become part of the trusted infrastructure of care. Until then, legal frameworks should prioritise transparency for high-stakes autonomous decisions.”
Matt Flenely, Head of Product and Marketing, Datactics
“AI is poised to become a beneficial tool in clinical judgement with the potential to enable faster, better outcomes for citizens. But as we know, AI amplifies your data, not your intentions. Transparency is therefore essential to maintain trust. Doctors and health leaders should make sure patients are aware when AI has been involved in clinical processes including diagnosis and treatment, and ensure close alignment with principles already established in areas of GDPR requirements around automated processing.
“Introducing a legal requirement for disclosure within NHS trusts would be a key step in this, to ensure it is clear and not open to interpretation. This should go further than being a simple compliance checkbox. Rather, it should be embedded into broader reporting to set a standard for how AI involvement is explained to patients. This way, it will not become a burden to healthcare providers.
“There are risks on both sides. Disclosing AI involvement may lead to misunderstanding, media misrepresentation or the growth of scepticism, particularly in a society that does not always interpret data mindfully and responsibly. However, we cannot responsibly withhold data. A lack of transparency would inevitably erode patient trust, which could undermine faith in the AI system and risk losing all the benefits that AI is likely to bring, such as scalable decisions and recommendations for treatment that humans could not be expected to create.”
