How AI Is Assisting Cybercriminals

In recent months, artificial intelligence has significantly altered the landscape of cybercrime, much as it has altered many other industries. According to cybersecurity firm BlueVoyant, attackers are increasingly using generative AI tools to produce malicious code and craft highly convincing phishing emails. Looking ahead, we can anticipate the rise of AI-driven malware, sophisticated document forgery, and disinformation campaigns, among other threats.


This year has seen a surge in the use of AI across nearly all sectors as companies strive to harness its benefits for their operations. Cybercrime is no different, with AI becoming a key tool for malicious actors. Understanding how these technologies are being exploited can aid in developing better defenses against them.


Are Stolen ChatGPT Credentials Being Sold?


Credentials for ChatGPT have become a valuable commodity on the dark web, where stolen logins are traded just like those for other online services. Cybercriminals typically harvest these login details, including email addresses and passwords, with information-stealing malware designed to extract sensitive data from vulnerable devices. Such malware is particularly effective when users run outdated operating systems or have disabled automatic security protections.


Many individuals register for OpenAI services using their corporate email addresses, and BlueVoyant’s threat intelligence has noted that these credentials tend to fetch a higher price on the dark web compared to those linked to personal emails.


What Is WormGPT and How Is It Used?


While ChatGPT includes safeguards designed to prevent misuse for illegal purposes, other AI tools carry no such restrictions. WormGPT, for instance, is marketed as a tool for security professionals to test malware and strengthen their defenses. Although its creators disapprove of illicit use, nothing in the tool prevents it from being turned to malicious ends. BlueVoyant has observed a variant of WormGPT sold on the dark web as a subscription service; it can write harmful code in a range of programming languages to steal cookies and other sensitive information from users’ devices.


WormGPT can also support phishing schemes by generating convincing, well-written phishing messages that make the fraud harder to detect. In addition, it can identify legitimate services that lend themselves to abuse, such as SMS messaging services that can be used for large-scale phishing campaigns.


What Future Threats Might AI Bring?


BlueVoyant’s threat intelligence predicts that new cyber threats will emerge as AI continues to evolve. AI-enhanced malware may become adept at stealing sensitive data and evading antivirus software by making autonomous decisions on the infected machine, reducing how often attackers need to communicate with their malware and improving their chances of staying undetected.


AI is also likely to advance document forgery. As more transactions are conducted online using images of identity documents, the need for robust document verification grows. AI tools will make it easier for criminals to forge documents that can pass online verification checks, potentially enabling fraud such as opening bank accounts under a false identity to launder illicitly obtained money.


Similarly, AI stands to make disinformation campaigns more effective. Generative tools can produce and spread false information fluently in multiple languages, making it appear more credible to the audiences it targets.


Will AI Lead to Job Losses in Cybersecurity?


There is a common concern that AI will render human labor obsolete across many industries. According to BlueVoyant’s experts, however, that is not what is happening in cybercrime. On the contrary, the shortage of skilled IT professionals has increased demand for people who understand generative AI and are willing to apply those skills to illegal activity.


As businesses around the world integrate AI tools into their workflows, the criminal side of AI will see a corresponding rise in activity. Security teams must brace for an increase in AI-driven threats, including more advanced phishing attacks, malware, and large-scale credential theft. This could, in turn, drive higher demand for cybersecurity professionals.