For many years – more than most of us probably realise – artificial intelligence has quietly supported healthcare from behind the scenes, interpreting scans, spotting patterns and helping clinicians make faster, more accurate decisions. Until recently, though, most of us only used AI for quick searches and enquiries about medical issues, while real, human doctors remained our first port of call for reliable medical expertise.
But, as the technology grows more sophisticated, the role that AI can potentially play in medicine is expanding dramatically, and as such, a new question is emerging: should we start consulting AI for our first medical opinion, not just a second one?
It's a shift that could fundamentally reshape how patients access care, how doctors practise medicine and how trust is built in the healthcare system. As algorithms become increasingly capable of diagnosing everything from eye disease to skin cancer, many in the medical field are asking whether the traditional doctor-patient relationship is ready for a rethink – and whether patients themselves are prepared to rely on a machine before they ever see a human face.
How Is AI Being Used in Medicine?
Artificial intelligence is already playing a vital role in modern medicine, with applications that range from image analysis and diagnostics to personalised treatment recommendations and administrative support.
In radiology, AI systems can examine X-rays, CT scans and MRIs with speed and precision, often identifying abnormalities that might be missed by the human eye.
In oncology, AI is being used to predict cancer risk, assess tumour progression and help tailor treatments to the genetic profile of individual patients. Meanwhile, chat-based symptom checkers and virtual triage tools are increasingly being used to advise patients on whether they need urgent care or can safely stay home.
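To make the triage idea concrete, here's a deliberately minimal sketch of the kind of rule-based logic a symptom checker might apply. It's purely illustrative – real tools use far richer clinical models, and the symptom lists below are hypothetical:

```python
# Toy symptom-triage sketch. The symptom categories are invented for
# illustration and carry no clinical weight.

RED_FLAGS = {"chest pain", "difficulty breathing", "sudden weakness"}
SELF_CARE = {"mild headache", "runny nose", "sore throat"}

def triage(symptoms: set[str]) -> str:
    """Map reported symptoms to a coarse urgency level."""
    if symptoms & RED_FLAGS:
        return "urgent: seek immediate medical care"
    if symptoms - SELF_CARE:
        # Anything unrecognised is escalated to a human clinician.
        return "book a GP appointment"
    return "self-care at home; see a doctor if symptoms persist"

print(triage({"runny nose", "chest pain"}))  # urgent: seek immediate medical care
```

Production symptom checkers layer probabilistic models and clinical guidelines on top of logic like this, but the core pattern – map inputs to an urgency band, and escalate anything uncertain to a human – is the same.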
AI also helps streamline hospital workflows, from managing patient records to optimising operating theatre schedules – it’s an incredibly useful tool in medical logistics.
In drug development, machine learning accelerates the discovery of potential treatments by analysing vast datasets far more quickly than traditional methods allow. As the technology continues to evolve, its role is expanding beyond support to potential frontline decision-making – raising new possibilities, but also new challenges, for clinicians and patients alike.
Concerns Surrounding the Use of AI in Medicine and Healthcare
While the benefits of AI in healthcare are often celebrated (and rightfully so, for the most part), the growing presence of algorithms in diagnosis and treatment also raises a host of complex concerns.
One of the most pressing issues is accountability – when an AI makes an error, such as a misdiagnosis or an inappropriate recommendation, it's not always clear who is responsible. This lack of clarity challenges both legal frameworks and patient trust. If it were a real, human doctor, they would be the one to take responsibility, but when it's a program, a computer or an algorithm making the call, who's going to take the fall?
There’s also the question of transparency. Many AI systems, particularly those using deep learning, operate as “black boxes”, offering little insight into how a decision was reached. This can be troubling for both doctors and patients, who may struggle to question or validate the reasoning behind an AI-generated conclusion.
Bias is another major concern. If the data used to train these systems lacks diversity or reflects systemic inequalities, the AI may replicate or even amplify those biases, leading to disparities in care. Even the most advanced AI system is only as good as the quantity and quality of the data it's trained on. For example, diagnostic tools trained predominantly on images of light-skinned patients may underperform for people with darker skin.
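A toy example of why this matters: a model's overall accuracy can look healthy while hiding a serious gap for an underrepresented group. The numbers below are entirely invented, purely to show the value of per-group evaluation:

```python
# Hypothetical (true_label, predicted_label, group) results for a
# diagnostic model; all values are made up for illustration.
from collections import Counter

results = [
    (1, 1, "light"), (0, 0, "light"), (1, 1, "light"), (0, 0, "light"),
    (1, 0, "dark"),  (0, 0, "dark"),  (1, 0, "dark"),  (1, 1, "dark"),
]

correct, totals = Counter(), Counter()
for true, pred, group in results:
    totals[group] += 1
    correct[group] += (true == pred)

print(f"overall: {sum(correct.values()) / len(results):.0%}")  # 75%
for group in totals:
    print(f"{group}: {correct[group] / totals[group]:.0%}")
# light: 100%, dark: 50% -- a gap the overall figure hides
```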
On the other hand, there are also risks around over-reliance, both by patients and clinicians, which could lead to a deskilling of the medical workforce or the sidelining of human judgement. This concern isn't unique to medicine – it's being raised across AI sub-sectors – but it's a very real issue to consider.
Finally, questions of data privacy and consent loom large, especially as AI requires access to vast amounts of sensitive health information. Without robust safeguards, there’s a risk that patient data could be misused or inadequately protected. Together, these issues highlight the need for careful regulation, transparency and ongoing human oversight.
So, what does this mean for AI in medicine? Well, one issue in particular has been coming up more and more frequently of late: AI "doctors" being consulted for primary medical opinions, rather than for second opinions or the odd medical question. Is this a good thing, or is it setting a frightening precedent?
We gathered a group of experts from across the medical industry (as well as a few AI experts) to ask them this question – that is, should we go to AI doctors for first opinions instead of second opinions?
Here’s what they had to say.
Our Experts
- Dr Antonio Weiss: Author of AI Demystified
- Deborah Grayson: Pharmacist and Nutritional Therapist
- Dr Tom Oakely: CEO of Feedback Medical
- Antonio Espingardeiro: IEEE Member and Software and Robotics Expert
- James Morris: Chief Executive of The CSBR
- Dr Asia Ahmed: Digital Clinician at Medichecks
- Ayesha Iqbal: IEEE Senior Member and Engineering Trainer at the Advanced Manufacturing Training Centre
- Steve King: Founder and CEO of Dragonfly AI
- Dr Jim Ang: Expert in Human-Computer Interaction at the University of Kent’s School of Computing
- Pratik Maroo: Senior Vice President and Head of Healthcare and Life Sciences at Zensar
- Dr Alan Clarke: Dentist and Clinical Director at Paste Dental
- Claes Ruth: CFO at Livi
Dr Antonio Weiss, Author of AI Demystified
“Whilst the use cases for generative AI, as shown in my latest book AI Demystified, are both broad and hugely impressive, we must always remember a vital consideration: risk thresholds. AI models still hallucinate and get things wrong. And with liability, governance and legal frameworks still emerging, the risk of getting something wrong in healthcare is simply greater than in, say, customer service.
For now, genAI is a hugely powerful second opinion; but it should not yet be the first port of call in medicine and healthcare.”
Deborah Grayson, Pharmacist and Nutritional Therapist
“AI can have many benefits and can be useful if used correctly, but it is important to know its limitations, as it can easily be wrong.
Whilst it can be a good tool for getting some basic direction on how to proceed and when to see a doctor or health professional, it is not the same as having a consultation and the visual assessment that comes with a face-to-face appointment. I am sure many people have, at some time, Googled a headache and given themselves a diagnosis of a brain tumour – and this is the risk with AI, as the nuance of the whole picture is lost. The flipside is missing a key red flag: an important symptom, overlooked, could be the sign of a sinister condition.
From an in-clinic perspective, more tools are becoming available to aid clinicians within their appointments, and these can be really helpful for streamlining record keeping, supporting decision making and even assisting with differential diagnosis. It is important, however, that the final clinical decision is made by the clinician, and that responsibility for missed symptoms or misdiagnosis rests clearly with them. In my opinion, AI should simply be another tool in the toolbox, not the definitive decision maker.”
Dr Tom Oakely, CEO of Feedback Medical
“At the moment, the main constraint on AI in healthcare is a lack of imagination. Decision makers keep thinking about what we do now and how it can be done more effectively, rather than reimagining – from scratch – what patient pathways and systems could look like if we used AI.
The major limiting factors and costs in healthcare are clinicians, buildings and a culture of only engaging with healthcare for treatment rather than earlier prevention. AI and technology could enable an entirely different relationship between individuals and healthcare, and drastically reduce the role of clinicians to only the essential elements.
Right now, healthcare leaders are limiting the application of AI to transcribing meetings and reviewing test results – just trying to prop up the current system. Working from an ‘AI first’ principle, the nature of healthcare could be entirely turned on its head within the next decade, even with all the vested interests who try to stop any major reforms.”
Antonio Espingardeiro, IEEE Member and Software and Robotics Expert
“Telecare is an important aspect of how artificial intelligence (AI) will transform medicine. One of the first stages in hospitals is the triage of patients, which means we could chat to a bot when we're not feeling well, providing our symptoms. The AI system can then analyse large volumes of data and advise us to see a medical doctor at a particular hospital. This can forward patients to the right clinical service and reduce waiting times for consultation, which is still one of the biggest challenges today.
In this way, AI could streamline the process and make doctors more efficient, especially considering that there is often a mismatch between the time of examinations and the actual appointment with the GP. The more complex the scenario, the more tests are needed, increasing diagnostic time. However, if AI misdiagnoses a condition due to incorrect data, the consequences could be serious and accountability may fall on developers.”
James Morris, Chief Executive of The CSBR
“International evidence suggests that AI has the potential to be transformative in health care in terms of improving productivity, saving money and improving patient outcomes.
One key recommendation from our recent discussion paper is to mandate the adoption of AI tools for administrative support, including transcription and appointment-scheduling optimisation.
UK pilots and international studies show such tools can reduce documentation burden – cutting keystrokes by up to 67% and saving 3–4 minutes per 10-minute consultation – thereby enabling clinicians to dedicate more time to patient care.
To make AI effective in clinical settings, there needs to be a focus on education and training. AI literacy and digital skills training should be an essential part of GP education and ongoing professional development. This is critical to building patient trust. The NHS must also develop transparent communication frameworks for explaining AI systems to both clinicians and patients, with clear human oversight mechanisms.”
Dr Asia Ahmed, Digital Clinician at Medichecks
Ayesha Iqbal, IEEE Senior Member and Engineering Trainer at the Advanced Manufacturing Training Centre
“The emergence of AI in healthcare has completely reshaped the way we diagnose, treat and monitor patients. However, the adoption of AI in healthcare is facing some challenges. These include the complexity of AI systems, a lack of technology awareness, shortages of skilled AI professionals, gaps in regulatory guidelines and a lack of trust.
Therefore, it is crucial to establish ethical guidelines and standards, ensure data privacy and security, offer trialability and educate patients so that trust can be developed. At that point, widespread adoption of AI in healthcare can be realised.”
Steve King, Founder and CEO of Dragonfly AI
“One of the areas I’m most concerned about, while also deeply fascinated by, is medicine. I’m both excited and a little uneasy. The idea that AI can help us extend human life, or solve in days what’s taken researchers decades, is nothing short of revolutionary. I believe we’re on the brink of a major transformation in both medicine and the human condition.
But the challenge is: we’re not ready for it. Our societal systems – whether environmental, economic, or social – aren’t built for people living 200 or 300 years. Yet AI is accelerating us toward that possibility, and fast.
In many ways, AI is advancing in medicine almost as rapidly as it is in marketing. But the stakes are far higher. We’re not just talking about optimising ads – we’re talking about mental health, longevity, quality of life, and the sustainability of our environment. These are areas where the consequences are profound, and the ethical questions are complex.”
Dr Jim Ang, Expert in Human-Computer Interaction at the University of Kent’s School of Computing
“A lot of people are already using AI to look up medical information or even try to self-diagnose. Whilst it might be convenient and seem helpful, it is risky. Current AI tools can still “hallucinate”; they sometimes give wrong information but sound confident, which can be misleading or even dangerous.
There is promising research underway to develop AI that can reason more effectively and draw on a validated medical knowledge base, but we are not quite there yet. That said, I believe AI could be useful for supporting doctors, by speeding up admin tasks, summarising notes, or potentially helping with triage. Ultimately, AI should support, not replace, human clinicians. Accountability must rest with the healthcare professionals and institutions who choose to use these tools, ensuring patient safety remains the priority.”
Pratik Maroo, Senior Vice President and Head of Healthcare and Life Sciences at Zensar
“Among both the public and healthcare staff, there is recognition that AI could be beneficial for patient care and administrative purposes. However, there are also hesitations about AI’s implementation, with the public and staff extremely sceptical about AI replacing the role of human doctors. There is a demand for human interaction, and a belief that AI should always be overseen by a human, to avoid technology making the wrong diagnosis and to ensure data is safe and secure.
Individuals overwhelmingly believe that doctors should not be replaced by AI but should instead use it to assist them and save time. Addressing issues related to the human element of care and ensuring the accuracy of AI-assisted decision-making will be crucial for successful implementation.
Regulation will also be a hurdle to consider: UK GDPR governs all processing of personal data, including by AI. However, AI challenges data protection law by requiring massive training datasets and processing them in ways traditional safeguards weren’t designed for. This has direct implications for personal data security and data subject rights. The UK government has announced plans to introduce AI legislation in 2025 to address these risks, and all deployments of AI in healthcare will need to comply with evolving regulation.”
Dr Alan Clarke, Dentist and Clinical Director at Paste Dental
“Patients want information about their oral health and dentists want to be as transparent as possible. However, there’s a fine line between presenting a diagnosis and treatment options, and ‘selling’ a service. None of us went into dentistry to become salespeople, we want to give great health outcomes.
By presenting the information on screen with all the issues highlighted in colour, we and our patients become co-diagnosticians, and they completely understand our treatment recommendations.
The core benefit of the Second Opinion software we use lies in its ability to improve patients’ health outcomes. By detecting potential issues earlier and more accurately, the software supports preventive care, which reduces the need for more invasive and costly procedures down the line.”
Claes Ruth, CFO at Livi
“Automated administration and better communication are making it easier for healthcare professionals to deliver best-in-class personalised medicine. Our AI programs have been running for almost a year now and have been shown to reduce the time spent on administration by 40 percent (since 2022), with up to 18,000 patient appointments transcribed every week.
Generative AI is already demonstrating how it can streamline and simplify some of the complex and time-consuming administrative and back-end clinical processes, thereby reducing complexity and alleviating workplace pressures for clinicians. Much of healthcare is unstructured data in the form of clinical notes and medical records, which AI can analyse in vast volumes. The current generation of AI tools and large language models (LLMs) is adept at understanding medical notes, conversations and specific situations, then outputting the data in the most appropriate format. This might include transcribing doctors’ notes, pre-filling referral letters and performing administrative tasks more efficiently.”
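As a rough illustration of the workflow Ruth describes, here's a minimal sketch that asks a general-purpose LLM to turn a raw consultation transcript into structured notes. The use of OpenAI's chat completions API, the model name and the prompt wording are illustrative assumptions, not a description of Livi's actual pipeline:

```python
# Hypothetical sketch of LLM-assisted clinical note-taking. This is not
# Livi's system; the model choice and prompt are illustrative only.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

transcript = (
    "Doctor: What brings you in today? "
    "Patient: I've had a dry cough for two weeks and feel more tired than usual."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model choice
    messages=[
        {
            "role": "system",
            "content": "Summarise this consultation as clinical notes with "
                       "sections: Presenting Complaint, History, Plan.",
        },
        {"role": "user", "content": transcript},
    ],
)
print(response.choices[0].message.content)
```

Any real deployment would add safeguards this sketch omits: clinician review of every generated note, audit logging, and data-protection controls appropriate to health records.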