One of the most prominent issues to arise around AI and the daily use of chatbots is reliability. Are AI chatbots providing us with information we should trust? How do we know whether their information is reliable?
Somewhat disconcertingly, Sam Altman, CEO of OpenAI, has asserted in no uncertain terms that a little too much trust is being placed in AI chatbots, including ChatGPT.
In Altman’s words, “people trust ChatGPT a little too much.”
Coming from an AI industry leader, it's no surprise that this statement, and the broader conversation around it, has raised a few eyebrows. If we were already a little worried about the reliability of AI, is this an indication that it's not just a valid concern but an actual issue? Is AI completely unreliable?
A Whole Lot of Nothing
As much as Altman has become known for making the odd controversial statement and stirring the pot here and there, this particular situation seems to have been blown a little out of proportion.
AI news is always flooded with panic and concern around trust, reliability and larger existential worries, and to be perfectly frank, these comments, made by Altman on OpenAI's recently launched podcast, are nothing new. He isn't breaking news or telling us something we didn't already know. In fact, it's something both he and other experts have been saying from the get-go.
Trust has become a bit of a buzzword in the AI sphere, but the idea of AI hallucinating and providing inaccurate information is old news. It's not something that's been hidden by experts or industry professionals, so why are people surprised now that Altman is simply emphasising one of the known faults of the technology?
We'll say it exactly as it is: a whole lot of panic over nothing new. And even though Altman has done his fair share of poking the bear, so to speak, this isn't one of those times.
So, Do We Put Too Much Trust In ChatGPT?
Now, just because this isn't breaking news doesn't mean it's not an issue we should be aware of – if anything, the fact that Altman and other experts keep raising it is an indication that we're not taking it seriously enough.
Yes, the technology is incredible, and it can do a lot more than we could possibly have dreamed of in the past. And the content it produces sounds professional and very believable. However, just because it sounds good doesn't mean it's actually accurate or should be trusted. We're well aware that AI hallucinates, and the issue with hallucinations is that it's incredibly difficult to tell the difference between solid, trustworthy information and an outright hallucination.
So, the advice from experts is: be sceptical.
Don't blindly trust ChatGPT or any other AI chatbot with the information it provides. Always question what you're being fed and do some fact-checking, the same way you would with a human source.
As Altman says, AI actually “should be the tech that you don’t trust that much”, rather than the tech that you place all of your trust in, without question.
Furthermore, a little more scepticism will also help chip away at our growing dependence on AI, which itself seems to be becoming an increasingly concerning trend.
Moral of the story: don't panic, the house isn't on fire. Just practise some healthy scepticism, don't blindly trust AI, and you'll be fine.