On November 30, 2022, OpenAI announced: “We are excited to introduce ChatGPT to get users’ feedback and learn about its strengths and weaknesses. During the research preview, usage of ChatGPT is free. Try it now…”
Since then, that free research preview has grown into one of the most talked-about pieces of software in decades. This week ChatGPT turns three, a moment when praise and criticism sit side by side as the “toddler years” of generative AI continue.
OpenAI first positioned ChatGPT as a sibling to InstructGPT, designed to follow directions and respond in a natural tone. Millions of people found uses within minutes, from homework help to writing workplace emails. Firms across many sectors now use the chatbot to speed up processes and guide decisions. Precisely says the promise depends on good data in the same way a young child needs the right care to grow.
Security teams have noticed behaviour that feels less cute. Yubico says the arrival of generative AI marked the moment phishing scams became far slicker and more convincing.
How Big Has Usage Become?
Precisely says the platform has 700 million weekly active users worldwide. That is a huge audience for a tool that did not exist three years ago.
People ask it questions that once went through a search engine or a colleague. It has also entered classrooms and offices, changing how tasks get done.
ChatGPT’s sudden reach means mistakes and misinformation can spread fast if the data behind an answer is wrong. Precisely points to the need for high quality inputs to keep trust.
Tendü Yoğurtçu, PhD, chief technology officer at Precisely, comments on the urgent need to address data quality across training and inference in generative AI software:
“In many ways, the generative AI movement is entering a formative stage. This is a period defined by rapid progress, intense exploration, and the need for clear guardrails. As with any early adoption phase, outcomes depend on the quality of inputs. The same applies to generative AI. Its performance and long-term value rely on the integrity of its data across both training and inference.
“We are seeing clear and meaningful advances from generative AI and agentic AI. At the same time, inaccurate, inconsistent, or incomplete data creates more than technical flaws. It leads to real-world consequences that influence business decisions, customer experiences, and societal outcomes, from missed opportunities to inequities in lending or healthcare. AI reflects the data behind it. If that foundation is weak, every insight, prediction, and recommendation is at risk.
“As the technology evolves, data integrity will become even more central to AI maturity. This includes data quality, integration, governance, and the enrichment required to build essential context, including the use of location intelligence. Together, these capabilities strengthen the trustworthiness of AI systems. They allow organisations to scale AI with confidence and deliver results that support growth, efficiency, and innovation.”
What Fears Surround Its Use?
Yubico calls this period a “golden era” for criminals who send fake emails and texts. They can now craft convincing messages that read as if a real company wrote them. Tailored spear-phishing has become easier to produce and harder to spot.
Security specialists say threat actors can now copy writing styles with little skill needed. This creates new problems for companies trying to keep staff safe online.
OpenAI built the chatbot to reject harmful requests, but attackers trick systems or mix lies with truth to get around those barriers. That makes safe design a constant job.
Policymakers and businesses look at three years of progress and see both gains and dangers. Tools that make work faster also give criminals new tricks.
The toddler comparison from Precisely feels accurate. A three-year-old can speak and learn fast but can also make a mess without guidance. ChatGPT at three shows promise and trouble in the same breath.
Niall McConachie, regional director (UK & Ireland) at Yubico, says the anniversary should serve as a wake-up call, prompting a serious rethink of how we approach identity security:
“ChatGPT’s third birthday isn’t just a tech milestone; it marks the democratisation of cybercrime like phishing globally. We’re no longer just dealing with poor grammar and clumsy scams – we’re now facing automated, adaptive threats that blur the lines of what humans can detect between real and AI. Attackers are now using GenAI to automatically write malicious code at scale, and generate convincing phishing sites that evolve faster than traditional cyber defences can keep up with. GenAI can now replicate the tone, urgency and context of a colleague, friend or brand, and can do so at scale. As we reflect on three years of ChatGPT, we must acknowledge a critical shift – the human line of defence has been breached and it can no longer be our primary safeguard.
“The public clearly senses this shift. Recent research shows that 81 percent* of people are now concerned about AI threatening the security of their personal or business accounts, which is a 20 percent increase from last year. That’s not just a data point, it signals a growing crisis of confidence. Yet many individuals and businesses are still relying on insecure passwords and one-time codes, even though these are easily bypassed by AI-generated phishing attacks or behaviour mimicry. If we continue to depend on these outdated methods, we will fall behind.
“The only meaningful defence in this new era of GenAI-driven crime is phishing-resistant multi-factor authentication (MFA) tools like passkeys. Hardware security keys offer exactly that and provide immunity from even the most advanced AI-powered scams. This is because they require something you have (a physical key), something you know (a PIN) and something you are (physical touch of the key to gain access to accounts). If an AI can deceive a person, but can’t trick the protocol, that’s where our protection must begin.”