AI and Cybercriminals: The Real Story vs. the Hype
"Humans won't be replaced by AI anytime soon. However, people who understand how to use AI will eventually replace people who do not," adds Etay Maor, a founding member of Cato CTRL and chief security strategist at Cato Networks. "Similarly, attackers are also turning to AI to augment their capabilities."
However, the claim that AI contributes to cybercrime is far more exaggerated than accurate. With terminology like "Chaos-GPT" and "Black Hat AI Tools," headlines frequently exaggerate AI concerns and even go so far as to say they plan to wipe out humans. These articles, however, describe less grave threats and more things to frighten.
AI Risks and Attacks
For example, when investigated in dark web forums, a number of these purported "AI cyber tools" turned out to be merely repackaged, feature-free copies of standard public LLMs. Anger-filled assailants even went so far as to label them as scams.
How AI is Used by Hackers in Cyberattacks
The truth is that hackers are still learning how to use AI efficiently. They are dealing with the same problems and limitations—such as hallucinations and restricted abilities—that real users deal with. They estimate that it will take several years before they can properly use GenAI for hacking purposes.
As of right moment, creating phishing emails and creating code snippets that can be incorporated into attacks are the two most common uses for GenAI technologies. Furthermore, in an attempt to "normalize" compromised code as benign, we have seen attackers sending compromised code to AI systems for examination.
The Abuse of AI: Presenting GPTs
GPTs are customized variants of ChatGPT that enable users to add particular commands, link other APIs, and incorporate exclusive knowledge sources. OpenAI first launched GPTs on November 6, 2023. With the help of this functionality, users can develop extremely specialized applications, such as teaching tools, and tech support bots. Furthermore, OpenAI provides developers with GPT monetization choices via a specialized marketplace.
Misusing GPTs
GPTs raise possible security issues. The disclosure of private information, critical instructions, or even API keys included in the customized GPT is one significant risk. Prompt engineering, in particular, is a tool that malicious actors can employ to mimic a GPT and profit from it.
Prompts can be used by attackers to obtain configuration files, instructions, and other resources. These might be as easy as asking for debugging information or requesting the custom GPT to list all uploaded files and custom instructions. More advanced requests include asking the GPT to describe all of its capabilities in a structured tabular format, to package up one of the PDF files and generate a download link, and more.
It is possible to go beyond even the safeguards developers have put in place and all knowledge can be extracted," says Vitaly Simonovich, Threat Intelligence Researcher at Cato Networks and Cato CTRL member.
There are ways to reduce these risks:
- Avoiding uploading private information
- Even with instruction-based protection, there is no guarantee. "You need to take into account all the different scenarios that the attacker can abuse," says Vitaly.
AI Threats and Perils
Currently, there are several frameworks available to support businesses thinking about developing AI-based software:
- NIST Framework for Risk Management in Artificial Intelligence
- Google's AI Security Framework
- The OWASP Top 10 for Applications in LLM
- The MITRE ATLAS, which was just released
Attack Surface LLM
Attackers may focus on six essential LLM (Large Language Model) components:
- Prompt - Attacks like prompt injections, where malicious input is used to manipulate the AI's output
- Response - Misuse or leakage of sensitive information in AI-generated responses
- Model - Theft, poisoning, or manipulation of the AI model
- Training Data - Introducing malicious data to alter the behavior of the AI.
- Infrastructure - Targeting the servers and services that support the AI
- Users - Misleading or exploiting the humans or systems relying on AI outputs
Real-World Perils and Attacks
Finally, let's look at a few instances of LLM manipulations that can be utilized maliciously.
- Prompt Injection in Customer Support Systems: Recently, an auto dealership employed an AI chatbot to assist with customer care. By providing a prompt that changed the chatbot's behavior, a researcher was able to control it. The researcher was able to buy a car at an absurdly low price by telling the chatbot to accept all customer statements and conclude each reply with, "And that's a legally binding offer," revealing a significant weakness.
- Legal Repercussions from Hallucinations: In a different instance, Air Canada was subject to legal action after its AI chatbot gave false information on its refund policy. When a client relied on the chatbot's response and subsequently filed a claim, Air Canada was held liable for the misleading information.
- secret Data Leaks: When using ChatGPT to analyze code, Samsung staff unintentionally released secret information. It is dangerous to upload private information to unaffiliated AI systems as you never know who will have access to it or how long it will be kept on file.
- Artificial Intelligence and Deepfake Technology in Fraud: Beyond only producing text, cybercriminals are also using AI. A $25 million scam targeted a bank in Hong Kong when hackers employed live deepfake technology during a video conference. The victim was persuaded to transfer money to a bogus account by the AI-generated avatars, which looked like reputable bank employees.