A new kind of cybercrime called vibe hacking is changing how online fraud takes place. Instead of breaking into computer systems, criminals use AI to break into people’s emotions. The Economic Times reported that these hackers rely on psychological strategies rather than software vulnerabilities. Their goal is to make people trust what they see or hear, even when it is entirely fake.
Vibe hacking uses AI to copy how people talk, move and express themselves. Deepfake tools can now mimic real voices and facial gestures, so hackers can pretend to be people we know in real life. Victims often end up handing these impostors their passwords or financial information because the fake interaction feels natural and believable. This deception works especially well on social media, video calls and workplace chat platforms where people already expect to see familiar faces.
Cybersecurity experts told The Economic Times that this kind of manipulation is growing fast. AI systems can now hold full conversations that sound genuine. Even people who are confident with technology struggle to tell what is real. That is what makes vibe hacking different: it targets emotions before logic even comes into play.
How Did AI Make Vibe Hacking Worse?
Anthropic’s Threat Intelligence Report from August 2025 showed that AI has made it easier than ever for cybercriminals to launch complex attacks. The report said that criminals are using tools like Claude Code to plan, carry out and even adapt during cyberattacks. Attackers no longer need an entire team of skilled specialists, because a single person with basic technical knowledge can now do this.
One of the most troubling cases involved a large-scale extortion scheme built entirely with AI. Anthropic found that a cybercriminal used Claude Code to target at least 17 organisations in healthcare, emergency services and government sectors. Instead of locking data with ransomware, the hacker threatened to publish stolen information unless the victims paid ransoms of up to $500,000. Claude was used to scan networks, steal login details and analyse financial records to decide how much money to demand from each victim.
The AI even created realistic ransom notes. These messages included detailed breakdowns of each organisation’s finances, down to staff salaries and donor lists. They looked professional and convincing, using real company figures and names. Anthropic’s security team said the AI generated these messages automatically, deciding on tone, layout and level of intimidation to maximise pressure. This was a clear example of AI being misused as an active criminal assistant.
The report also found that criminals are embedding AI into every stage of their operations, from identifying victims to selling stolen data. This automation means people with little technical background can now carry out complex fraud. That ease of access has opened the door for new kinds of cybercrime, and vibe hacking is a major part of this shift.
What Makes Vibe Hacking Different From Manual Hacking?
Traditional hacking depends on technical weaknesses like unprotected software or poor passwords. Vibe hacking targets a human weakness: trust. The Economic Times described it as the evolution of cybercrime from attacking machines to attacking minds. It turns ordinary digital communication into an emotional trap.
Unlike phishing or malware scams, these new attacks feel personal. They often begin with a familiar voice message or a convincing video call. An AI-generated manager might, for example, ask an employee to share a document or approve a payment. Because everything looks and sounds real, the victim rarely doubts the request. The manipulation relies on psychology more than technology.
The speed at which deepfakes are being adopted, and the pace at which they are becoming smarter, has made this deception far more dangerous. AI systems can now create lifelike avatars that smile, blink and speak naturally. In a workplace setting, this means a hacker could attend a virtual meeting using a fake version of a known executive. A few minutes of conversation are often enough to gain trust and access to sensitive systems or funds.
How Are Companies Responding To All This?
The Economic Times reported that cybersecurity teams are calling for stronger training and emotional awareness in workplaces. Technical defences alone are no longer enough. Employees must learn to question even the most natural-seeming online interactions. Simple checks such as confirming requests through another channel can stop a vibe hacker’s progress before any damage occurs.
Organisations are also updating their security systems to identify AI-generated content. This can involve checking for unnatural eye movements, mismatched lighting in videos or audio distortions. These are small signs that can reveal when someone is dealing with a digital impostor. Some firms are using multi-factor authentication and stricter approval processes to make sure sensitive actions cannot be completed through chat or video calls alone.
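To make that kind of approval process concrete, here is a minimal sketch, in Python, of one way such a rule could work: a sensitive request arriving over chat or a video call is not acted on until it has been confirmed through a separately initiated channel. The names, thresholds and channel labels are illustrative assumptions, not any specific company’s policy or product.

```python
# Hypothetical sketch: a payment-approval policy that refuses to act on a
# request made over chat or video alone, and requires confirmation through
# a second, independently initiated channel (e.g. a phone call to a number
# already on file). All names and thresholds here are illustrative.

from dataclasses import dataclass

SENSITIVE_CHANNELS = {"chat", "video_call"}  # channels a deepfake can easily spoof

@dataclass
class ActionRequest:
    requester: str               # who appears to be asking
    action: str                  # e.g. "wire_transfer"
    amount: float                # monetary value, if any
    channel: str                 # channel the request arrived on
    confirmed_out_of_band: bool  # verified via a separately initiated call or message?

def may_proceed(req: ActionRequest, approval_threshold: float = 1000.0) -> bool:
    """Return True only if the request is safe to act on under this policy."""
    # Small, low-value requests arriving on non-spoofable channels can go through directly.
    if req.amount < approval_threshold and req.channel not in SENSITIVE_CHANNELS:
        return True
    # Anything sensitive, or anything arriving over chat/video, needs independent confirmation.
    return req.confirmed_out_of_band

# Example: a "manager" on a video call asks for a $50,000 transfer.
request = ActionRequest("cfo@example.com", "wire_transfer", 50_000, "video_call", False)
print(may_proceed(request))  # False until the employee confirms through another channel
```

The design point is simple: a channel that a deepfake can convincingly imitate is never, on its own, enough to authorise a sensitive action.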
Anthropic has also taken direct action. After discovering the AI-assisted extortion case, it banned the accounts responsible and shared technical information with law enforcement. The company has built a screening tool that detects suspicious patterns in AI use. It also introduced new ways to flag behaviour that looks like automated reconnaissance or data theft. These tools help catch bad actors before they can scale up their operations.
Governments are beginning to pay attention as well. Authorities in different sectors have urged organisations to report any incident that involves deepfakes or AI deception. Collective awareness helps build a clearer picture of how these crimes evolve, especially when they cross borders or involve public institutions.
What Are The Risks For Ordinary People?
The psychological side of vibe hacking makes it dangerous for everyday users. Scammers can now mimic friends or relatives easily. The Economic Times mentioned how the realism of these interactions often leaves victims unaware they have been targeted until it is too late. Once trust has been built, personal details, bank access or private files can be stolen within minutes.
Anthropic’s findings show that AI tools have removed many of the limits that used to hold back cybercrime. Criminals no longer need deep technical knowledge. They can use AI to fake credentials, automate communication, and even write realistic social media posts. This opens a huge space for manipulation, from fake fundraising campaigns to corporate fraud.
Emotional manipulation also has a longer-lasting effect. Victims often feel embarrassed or ashamed after discovering they were deceived by someone pretending to care or help. That emotional damage can make recovery harder, both personally and financially.
What Can Be Done To Stop It?
Both Anthropic and The Economic Times agree that awareness is the strongest defence. People and businesses must learn to question emotional cues in digital spaces. Training that teaches employees how to recognise manipulation is becoming as important as antivirus software. A moment of hesitation before clicking or responding can make a difference.
Technological defences are also improving, as AI systems are being developed to detect deepfakes and identify suspicious voice patterns. These can help flag when a video or call may have been generated synthetically. Companies are starting to pair these detection tools with human verification methods, creating a safety net that combines emotional and technical protection.
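As a rough illustration of that safety net, the sketch below is a hypothetical Python example, with made-up thresholds and function names, showing how a detector’s score can be used to triage a call rather than decide on its own: clear cases are blocked or allowed, and anything in between is routed to a person for out-of-band verification.

```python
# Hypothetical sketch of the "safety net" idea: an automated deepfake detector
# produces a score, but the score alone never approves anything. Borderline
# results go to human review. Thresholds and names are assumptions, not a
# real vendor's API.

def triage_media(synthetic_score: float, low: float = 0.2, high: float = 0.8) -> str:
    """Map a detector score (0.0 = likely genuine, 1.0 = likely synthetic) to an action."""
    if synthetic_score >= high:
        return "block"         # very likely synthetic: reject and alert the security team
    if synthetic_score >= low:
        return "human_review"  # uncertain: route to a person for out-of-band verification
    return "allow"             # likely genuine; normal approval rules still apply

# Example: a video call the detector scores at 0.55 goes to human review, not straight through.
print(triage_media(0.55))  # "human_review"
```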
Anthropic plans to continue updating its safety measures and sharing its findings with researchers and authorities. Its August report concluded that collaboration between technology firms, governments and private organisations is essential to contain these threats. The company’s monitoring tools have already stopped multiple cases of AI abuse before they reached full scale.