Security
BadGPTs: how cybercriminals abuse AI chatbots

BadGPTs: how cybercriminals abuse AI chatbots

In the ongoing evolution of technology, AI models such as ChatGPT are a striking example. These useful tools hold great promise for the future. Unfortunately, these possibilities also come with a dark side, where cybercriminals exploit the power of AI models for phishing and other forms of cyberattacks. More and more malicious chatbots are popping up on the shadowy paths of the internet, with names like “BadGPT” and “FraudGPT”.

What are BadGPTs?

BadGPTs are chatbots that use the same artificial intelligence as OpenAI’s ChatGPT. While most use ChatGPT to enhance texts and emails, hackers deploy manipulated versions of these AI chatbots to amplify phishing emails. They use chatbots, some even freely available on the open internet, to create fake websites, spread malware and modify messages to impersonate trusted entities, such as executives.

Some of the most notorious BadGPTs around at the moment:

  • WolfGPT
  • DarkBARD
  • FraudGPT
  • WormGPT

How do these malicious chatbots work?

Most dark web hacking tools use open-source AI models, such as Meta’s Llama 2, or “jailbroken” models from vendors such as OpenAI and Anthropic to power their services. Jailbroken models are hijacked using techniques such as “prompt injection” to bypass their built-in security controls. Although companies such as Anthropic are actively fighting jailbreak attacks, the threat remains.

The consequences of Ai abuse

Everyone probably saw it coming. Powerful technology will always be misused. And that’s a shame. Because that’s how trust in the digital age gets another blow.

We previously wrote a blog about trust issues in the digital world: https://safe-connect.com/trust-issues-in-the-digital-landscape/

Ai can be abused. This is obvious. We have already seen it in the case where explicit photos of Taylor Swift, among others, were generated. Fortunately, the famous artist filed a case and everyone is aware that this is a crime.

You can read the full article here: https://www.theguardian.com/technology/2024/jan/30/taylor-swift-ai-deepfake-nonconsensual-sexual-images-bill

But this unique incident defies any imagination:

Earlier this year, an employee of a multinational company in Hong Kong paid as much as $25.5 million to a cybercriminal posing as the company’s chief financial officer during an AI-generated deepfake conference call. In other words, these were live deepfakes talking in real time. The duped employee was thus convinced that this was indeed true and wrote the money over This incident illustrates how AI chatbots, such as BadGPT, pose a threat to companies and individuals.

The full article can be found here: https://edition.cnn.com/2024/02/04/asia/deepfake-cfo-scam-hong-kong-intl-hnk/index.html  

Protecting yourself with Cybersecurity Awareness Training

Malware and phishing emails written by generative AI are particularly difficult to detect because they are designed to evade detection. The only way to protect your organisation against them is by raising human awareness around digital danger.

You do that with Cybersecurity Awareness Training. With this training, your colleagues learn to recognise dangers and take the right action. That way, you counter Phishing and Deepfakes.