Despite What You’ve Heard, ChatGPT Still Gives Medical And Legal Advice

Last week, headlines were littered with news that ChatGPT is suddenly no longer allowed to provide users with health or legal advice – a pretty big announcement, given that these two capabilities were a notable part of the release of GPT-5 earlier this year.

The coverage was all “shock”, “horror” and implications that “OpenAI is under fire”. But the reality? Well, nothing, really.

There seems to be a pattern emerging, not only in tech and business news, but across the board. One publication “reports” and publishes a story with a catchy headline, and countless others copy and paste (or paraphrase, if you’re lucky).

But, in addition to the obvious problems associated with this kind of lazy reporting – that is, all the “news” is the same, only a single voice or opinion is heard, the same story is published countless times and so on – another, far greater issue has emerged.

That is, one instance of false reporting, whether intentional or not, or even a simple misunderstanding, can lead to misinformation being spread at a rate of knots. Suddenly, headlines all over the place are carrying stories with the same untrue assertion and narrative, and many people still stick to the idea that news or “facts” can be verified by mass agreement. That is, one headline raises eyebrows but if many publications report the same issue, it must surely be true.

Now, anybody with any experience in media understands the immediate faulty thinking here, and I’d really like to think most reasonable people are more critical and do more fact-checking than this. However, the reality is that too many people still get sucked into this trap.

Unfortunately, in an era rife with misinformation (“fake news”, as we call it these days), the last thing we need is a proliferation of this problem in the professional media. No, it’s not a brand new thing, but it seems to be getting a lot worse, a lot more quickly at the moment.

So, back to the point – ChatGPT and its ability to give medical and legal advice.

The crux of the matter here is that no, ChatGPT has not suddenly been barred from giving users medical and legal advice – a capability that was promised when GPT-5 was released earlier this year. So, why were so many people convinced that there was, suddenly, this big change?

Big News? More Like No News

Early last week, news outlets all over carried stories with headlines echoing the so-called change in OpenAI’s policies regarding what kind of advice ChatGPT is and isn’t permitted to give users. A few of the first ones to pop up included phrases like:

  • “ChatGPT Will No Longer Give Health Or Legal Advice”
  • “ChatGPT ‘Restricted’ From Giving Medical, Legal, Or Financial Advice Over Liability Fears”
  • “ChatGPT’s ‘New Rules’ Reportedly Ban Specific Legal, Health, Money Tips”

And from there, countless publications followed suit with paraphrased versions of these headlines.

It all sounds pretty important – of course, the implication is that something big has changed and that ChatGPT is being forced to alter its policies surrounding legal and medical advice.

But actually, if you scratch a little further – and not even much further – you’ll soon discover that, in reality, there has been no change. Essentially, what happened is that OpenAI released an updated policy list, much of which was simply a reorganisation of existing policies, and someone with no understanding of the original policies interpreted it as a set of new rules and restrictions.

The policy in question prohibits several inappropriate uses of ChatGPT, including the “provision of tailored advice that requires a license, such as legal or medical advice, without appropriate involvement by a licensed professional”. This corresponds directly with OpenAI’s previous policy, published alongside the initial release of GPT-5, which advised against uses of GPT-5 that “may significantly impair the safety, well-being or rights of others”. That specification covered tailored financial, medical or legal advice, which required review by a qualified professional and “disclosure of the use of AI assistance and its potential limitations”. In other words, this kind of advice needed to be reviewed by a licensed professional and, in addition to that, the use of AI in the process needed to be disclosed.

So, what’s changed and why has OpenAI republished these policies?

Well, nothing’s changed – according to OpenAI, this was simply about consolidating all of these AI-related rules and restrictions of use into one list that could be applied across all of the company’s products and campaigns.

So, again, there have been no changes, just a rephrasing and reorganisation of existing terms of use. Karan Singhal, head of Health AI at OpenAI, cleared things up in response to a now-deleted post published by Kalshi:

[Embedded X post: Karan Singhal’s clarification]

And, for anyone who’s been paying attention over the last few months since GPT-5’s big release, this is old news. ChatGPT has always been able to provide users with some level of legal, financial and medical advice, but from the very outset this ability was restricted. It was always about providing people with an additional resource for this sort of information rather than replacing experts. Whether or not this is a good idea (and whether people can actually discern the difference between advice given by a doctor and advice given by a bot) is another question entirely – but one that, to put it bluntly, was discussed at length when all of this first emerged as news.

The Real Headline?

The crux of the matter here is twofold:

  1. OpenAI hasn’t changed its policies: That is, those regarding whether or not ChatGPT can provide users with financial, legal and medical advice. It can still provide generalised advice and it is not (and has never been) intended to replace professional advice.
  2. Increasingly often, news is being created out of nothing: Donald Trump’s version of “fake news” is one thing (a so-called intentional fabrication of the truth), but we’re starting to see more and more incidents of an unintentional spread of misinformation fuelled by laziness, professional inexperience and a culture of “fast news”.

The first is a momentary factual misrepresentation that can be, and has been, easily amended. These factual inaccuracies are undoubtedly problematic, but in isolation they can be rectified fairly quickly and easily.

But the latter is a far more concerning trend, one that is growing rapidly and that we should be very, very wary of. It buys into this modern age (much of it encouraged by the availability of AI tools and other technology) of being able to do things and create value incredibly quickly. But this “value” isn’t value at all – it’s a simulation of it, and it’s dangerous.

The reason this is so concerning is that we’ve seen it a lot, especially in tech news. Sam Altman, for instance, has been at the centre of so many of these stories that really are of no substance. For instance, remember the whole “Sam Altman Says We Shouldn’t Trust ChatGPT Too Much” story that blew up as a news story?

The reality of that issue was that Altman had said absolutely nothing new. From the very start of his involvement in AI, whenever he has shared his opinion on the topic, he has advised users to exercise caution when it comes to trust. He has always encouraged skepticism, as have most major players in the AI landscape.

But after Altman spoke on this topic during a podcast, somebody caught on and, clearly with very little understanding of the topic or of Altman’s longstanding opinions, decided to publish a story asserting that Sam was suddenly telling the world to stop trusting AI. The story implied that AI was dangerous, even nefarious in nature – I immediately thought of a sci-fi-esque movie in which an AI agent becomes autonomous and takes over the world. It implied that perhaps OpenAI and Altman had discovered something major and were warning the world – dramatic! Scandalous! Big news!

But, cut back to real life – another no news, big news story.

And, it’s not just Altman, OpenAI and ChatGPT that are at the centre of this chaos, and the more it’s encouraged, the worse it’ll get.

So, how do we deal with this?

We can’t stop every last individual or publication around the world from spreading unreliable news, but what we can do – as readers and consumers of this kind of information – is be more critical and more skeptical, and not blindly share stories without taking a moment to have a real look at them.

And for those who are part of the media cycle, let’s hold ourselves and our colleagues accountable. We live in a world of instant gratification – from online purchases and app-based transportation services to on-demand streaming platforms and more. Ordinary people can now consume more media than ever, so it’s up to us to make sure that what we publish still holds just as much value as it did in the past – if not more.