OpenAI Shuts Down Accounts Using ChatGPT for Surveillance, Disinformation, and Cybercrime

OpenAI has banned a set of accounts, reportedly operated out of China, that exploited its ChatGPT tool to help develop an AI-powered surveillance system. The tool, believed to be powered by one of Meta’s Llama models, was used to generate detailed reports and analyze documents related to anti-China protests in Western countries, with the collected data allegedly shared with Chinese authorities.

Dubbed "Peer Review," the operation focused on monitoring social media activity across platforms like X, Facebook, YouTube, Instagram, Telegram, and Reddit. OpenAI researchers identified instances where ChatGPT was used to debug and refine source code for the suspected surveillance software, called "Qianyue Overseas Public Opinion AI Assistant." The actors also leveraged ChatGPT to research think tanks, government officials, and politicians from the U.S., Australia, and Cambodia, as well as to translate and analyze protest-related content.

Beyond this campaign, OpenAI also disrupted several other clusters abusing ChatGPT for malicious purposes, including:

  • North Korean Job Fraud – A network creating fake résumés, online job profiles, and cover letters to support a fraudulent IT worker scheme, with fabricated applications appearing on LinkedIn.
  • Chinese Disinformation – Accounts generating English-language social media content critical of the U.S. and publishing anti-American articles in Spanish on Latin American news sites, overlapping with the Spamouflage campaign.
  • Romance & Investment Scams – A Cambodia-linked operation producing comments in Japanese, Chinese, and English for fraudulent social media schemes.
  • Iranian Influence Operations – A cluster generating pro-Palestinian, pro-Iran, and anti-Israel/U.S. content for platforms tied to known Iranian propaganda networks, including the International Union of Virtual Media (IUVM).
  • North Korean Cyber Operations – Accounts associated with the Kimsuky and BlueNoroff hacking groups, gathering intelligence on cyber intrusion tools and cryptocurrency-related topics, and debugging code for Remote Desktop Protocol (RDP) brute-force attacks.
  • Election Interference in Ghana – A campaign producing English-language articles and social media content targeting Ghana’s presidential election.
  • Task Scams – A Cambodia-based operation translating scam-related content between Urdu and English, luring victims into performing fake online tasks in exchange for non-existent commissions.

These actions highlight how AI tools are increasingly exploited for cyber-enabled disinformation, cybercrime, and state-backed influence campaigns. Google’s Threat Intelligence Group (GTIG) recently reported that more than 57 distinct threat actors tied to China, Iran, North Korea, and Russia were leveraging AI, including Google’s Gemini, for attack planning, research, and propaganda.

OpenAI emphasized the importance of collaboration among AI companies, cybersecurity researchers, and platform providers to detect and mitigate such threats, urging shared threat intelligence to strengthen security across the digital ecosystem.