
New AI Tool Empowers Cybercriminals to Launch Complex Cyberattacks: WormGPT

  • Writer: Sadananda Sahoo
  • Jul 27, 2023
  • 2 min read

Given how popular generative artificial intelligence (AI) is right now, it may come as no surprise that the technology has been repurposed by bad actors for their own gain, opening new avenues for faster, larger-scale cybercrime.


A new generative AI cybercrime tool dubbed WormGPT is being advertised on darknet forums as a way for adversaries to carry out sophisticated phishing and business email compromise (BEC) attacks, according to findings from SlashNext.


Security researcher Daniel Kelley remarked, "This tool presents itself as a blackhat alternative to GPT models, designed specifically for malicious activities." Cybercriminals can use such technology to automatically generate highly convincing fake emails tailored to the recipient, increasing the attack's chances of success.


The software's creator referred to it as the "biggest enemy of the well-known ChatGPT" that "lets you do all sorts of illegal stuff." It is reportedly built on GPT-J, an open-source language model developed by EleutherAI.
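SlashNext has not published WormGPT's code, so nothing here describes how the tool itself works. The snippet below is only a minimal, benign sketch of how freely available the underlying GPT-J checkpoint is via the Hugging Face transformers library (the model ID and generation settings are illustrative assumptions); the point is how low the barrier is to building on open models, not what WormGPT does with them.

```python
# Illustrative only: loading EleutherAI's publicly available GPT-J model with
# the Hugging Face transformers library. This is NOT WormGPT; it simply shows
# how accessible the open-source base model is.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "EleutherAI/gpt-j-6B"  # public checkpoint on the Hugging Face Hub

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)  # ~24 GB of weights in full precision

prompt = "Large language models can draft routine business emails, for example:"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=40, do_sample=True, top_p=0.9)

print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```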


Tools like WormGPT could be a potent weapon in the hands of bad actors, especially as OpenAI and Google work to harden ChatGPT and Bard against the misuse of large language models (LLMs) for crafting convincing phishing emails and producing malicious code.


According to a report released this week by Check Point, "Bard's anti-abuse restrictors in the domain of cybersecurity are significantly lower compared to those of ChatGPT." As a result, it is considerably easier to generate malicious content using Bard's capabilities.

Advanced Cyber Attacks


The Israeli cybersecurity company also revealed how cybercriminals are working around ChatGPT's restrictions by abusing its API, as well as trading stolen premium accounts and selling brute-force software that uses massive lists of email addresses and passwords to break into ChatGPT accounts.


WormGPT's lack of ethical guardrails highlights the danger posed by generative AI, allowing even inexperienced hackers to mount attacks quickly and at scale without the technical resources such attacks would otherwise require.

 
 
 
