Russia has shifted its disinformation strategy toward flooding the internet with millions of misleading articles designed to be scraped by AI-driven tools — a tactic known as “LLM grooming,” intended to steer large language models such as ChatGPT into reproducing manipulative Kremlin narratives. On 23 October 2025, EUvsDisinfo reported that instead of targeting audiences directly via social media, Russia’s disinformation apparatus now corrupts AI infrastructure itself, injecting false claims into the responses produced by AI chatbots. The article begins:
In the digital age, disinformation campaigns have evolved beyond social media and ‘fake news’, becoming a full form of information warfare – an area in which Russia excels. The Kremlin’s foreign information manipulation and interference (FIMI) campaigns have remained largely consistent since the Cold War. But the emergence of the Internet and other communication technologies has allowed for more flexibility and greater impact with fewer resources. Just as Web 2.0 reshaped information warfare some two decades ago, the rise of artificial intelligence (AI) has transformed the Kremlin’s strategy. Instead of just pushing tailor-made narratives to the readers, Moscow now also targets machines – a strategy all the more important given that many users are now replacing Google Search with AI tools such as ChatGPT.
Read more: https://euvsdisinfo.eu/large-language-models-the-new-battlefield-of-russian-information-warfare/
Key Points
- The French agency Viginum exposed the Portal Kombat operation, which produces low-quality content in multiple languages that repackages false claims from Russian state media, ensuring AI models incorporate these disinformation narratives into their responses.
- NewsGuard’s Reality Check found that six of ten tested chatbots repeated false claims from the Pravda network, with the share of false and misleading information in the ten leading chatbots nearly doubling from 18 percent in 2024 to 35 percent in 2025.
- Russia’s LLM grooming efforts represent a major global security threat by distorting public opinion, eroding trust in digital information integrity, and spreading seemingly legitimate narratives at unprecedented scale through trusted platforms.
- The automation and scale of these campaigns make them harder to detect and counter, with even relatively trusted platforms such as Wikipedia amplifying Kremlin disinformation by quoting sources in the Pravda network.
Russia’s Artificial Intelligence-Powered Influence Operations: How the Kremlin Weaponizes AI for Global Disinformation
Russia has systematically expanded its information warfare to contaminate AI language models and Wikipedia entries with pro-Kremlin narratives across more than 80 countries. The Pravda network, launched in 2014 and significantly developed since Russia’s 2022 invasion of Ukraine, functions as an information laundromat that aggregates content from sanctioned Russian outlets and distributes it through fraudulent news portals mimicking legitimate media brands. By systematically citing Kremlin-linked sources in Wikipedia articles and flooding the internet with propaganda-laden content, Russian actors are poisoning the training data that AI systems rely upon, enabling disinformation to be amplified when users query chatbots about current events.
Beyond data contamination, Russia deploys sophisticated AI-enhanced operations to directly generate and distribute disinformation. Russia’s Social Design Agency has conducted “Operation Undercut” since late 2023, using AI-generated content and impersonation tactics to erode Western support for Ukraine by portraying Ukrainian leadership as corrupt and ineffective. The operation collaborates with networks like CopyCop to spread deepfake videos and targets European and American audiences with tailored multilingual content designed to amplify anti-Ukraine sentiment and reduce military aid flows. Similarly, automated website networks have republished hundreds of thousands of articles from mainstream outlets while selectively modifying specific pieces to insert pro-Russian narratives, demonstrating how AI-powered content delivery systems can subtly manipulate information at scale.
The scope of these AI-enabled operations reflects Russia’s strategic investment in influence capabilities. The Kremlin agency Rossotrudnichestvo increased its spending roughly 1.5-fold in 2025, directing 412 million rubles toward programs that train foreign activists, journalists, and bloggers in propaganda techniques. U.S. intelligence officials identified Russia as the most prolific foreign actor using AI to generate content targeting the 2024 presidential election, and the Justice Department seized 32 domains used in the Doppelganger campaign, which employed cutting-edge AI to spread disinformation and state-sponsored narratives.
NewsGuard research revealed that leading chatbots repeated false narratives from the Pravda network 33 percent of the time when prompted with related queries, demonstrating how Moscow’s strategy of flooding web crawlers with falsehoods successfully distorts how AI models process information. This multi-layered approach — combining automated content generation, strategic data poisoning, and coordinated amplification through fake personas — represents an evolution in information warfare that exploits both the open nature of AI training pipelines and the trust users place in algorithmic outputs.
External References:
— Justice Department Disrupts Russian Government-Sponsored Foreign Malign Influence Operation
— Russian Networks Flood the Internet with Propaganda, Aiming to Corrupt AI Chatbots — Bulletin of the Atomic Scientists
— AI Chatbots Echo Russian Disinformation — Axios
Disclaimer:
The Global Influence Operations Report (GIOR) utilizes AI throughout the posting process, including the generation of summaries for news items, introductions, key points, and, often, the “context” section. We recommend verifying all information before use. Additionally, all images are generated using AI and are intended solely for illustrative purposes. While they represent the events or individuals discussed, they should not be interpreted as real-world photography.