The Russian AI manipulation campaign that targeted Japan’s election represents a dangerous new phase in Kremlin influence operations, one that systematically infects artificial intelligence chatbots with propaganda to shape public opinion. On 27 October 2025, Nippon.com reported that international affairs expert Ichihara Maiko had documented how Russia-aligned bot accounts exploited the July 2025 House of Councillors election, grooming AI language models with more than 3.6 million propaganda items a year and causing leading AI tools to repeat false information 35% of the time. The article begins:
Concern is mounting over Russian interference in Japanese elections as the Kremlin’s online influence operations enter a new phase. International affairs expert Ichihara Maiko explores the problem and suggests a way forward. Accusations of interference by foreign actors have swirled around Japan’s July 20 House of Councillors election. On July 15, during the official campaigning period, Yamamoto Ichirō of the Japan Institute of Law and Information Systems reported that Russian bots were posting disinformation and distorted information. The five X (formerly Twitter) accounts Yamamoto cited were frozen the following day. Little would be gained from a formal investigation into whether they were indeed Russian agents.
Key Points
- Russia’s strategy involves creating numerous small bot accounts that spread Kremlin-aligned information by commenting on the posts of key influencers and celebrities, reaching massive audiences despite having few followers of their own.
- The phenomenon dubbed “LLM grooming” involves the Pravda network infecting AI chatbots with pro-Kremlin content by publishing vast numbers of translated news stories; its Japanese-language outlet, Pravda Nihon, routinely reposts as many as 250 pro-Russia items daily.
- A NewsGuard audit found that the 10 leading generative AI tools repeated false information on controversial news topics 35% of the time on average, with particularly high fail rates for Inflection (56.67%) and Perplexity (46.67%); a sketch of how such fail rates are derived follows this list.
- Analysis of responses to posts about Russian influence operations revealed that approximately 32% of negative replies originated from Russia-aligned accounts, with 94 such accounts posting a total of 218 comments and quote posts.
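The oddly precise percentages in the NewsGuard figures above are consistent with a fixed-size prompt set: 56.67% and 46.67% correspond to 17 and 14 failures out of 30 prompts, respectively. As a minimal sketch of that arithmetic (the 30-prompt set, outcome labels, and record layout here are assumptions for illustration, not NewsGuard’s published methodology or code):

```python
from collections import defaultdict

# Hypothetical audit records: (tool, prompt_id, outcome).
# "repeat" means the chatbot repeated the false claim; audits of
# this kind also distinguish debunks and refusals to answer.
records = (
    [("Inflection", i, "repeat" if i < 17 else "debunk") for i in range(30)]
    + [("Perplexity", i, "repeat" if i < 14 else "debunk") for i in range(30)]
)

fails, totals = defaultdict(int), defaultdict(int)
for tool, _prompt, outcome in records:
    totals[tool] += 1
    if outcome == "repeat":
        fails[tool] += 1

for tool in sorted(totals):
    rate = 100 * fails[tool] / totals[tool]
    print(f"{tool}: {fails[tool]}/{totals[tool]} = {rate:.2f}%")
# Inflection: 17/30 = 56.67%
# Perplexity: 14/30 = 46.67%
```

Because the assumed denominator is 30, every possible fail rate lands on a multiple of roughly 3.33%, which is why the audit figures read as repeating decimals rather than round numbers.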
Russia Uses Chatbots for Disinformation in Global Influence Operations
Russia has transformed its disinformation strategy by flooding the internet with millions of misleading articles designed to corrupt AI chatbots rather than targeting human audiences directly. The Moscow-based Pravda network operates approximately 182 domains across 74 countries, publishing an estimated 3.6 million pro-Kremlin articles in 2024 alone that aggregate content from Russian state media and pro-Kremlin influencers. Research from the American Sunlight Project revealed that this network appears specifically designed to target web crawlers and AI training datasets through what experts term “LLM grooming”—the systematic injection of disinformation into large language models.
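Because the grooming pathway runs through the web crawls that feed training corpora, one natural countermeasure is provenance filtering at ingestion time. The sketch below illustrates the idea only; the blocklist entries, document format, and filter_crawl helper are invented for the example and do not reflect any vendor’s actual pipeline.

```python
from urllib.parse import urlparse

# Hypothetical blocklist of propaganda-network domains. Entries are
# invented for illustration; real lists come from researchers such
# as the American Sunlight Project or NewsGuard.
BLOCKED_DOMAINS = {
    "news-pravda.example",
    "pravda-clone.example",
}

def domain_of(url: str) -> str:
    """Extract the host from a document's source URL."""
    host = urlparse(url).netloc.lower()
    return host[4:] if host.startswith("www.") else host

def filter_crawl(docs):
    """Yield crawled documents whose source domain is not blocklisted."""
    for doc in docs:  # each doc: {"url": ..., "text": ...}
        if domain_of(doc["url"]) not in BLOCKED_DOMAINS:
            yield doc

crawl = [
    {"url": "https://news-pravda.example/story1", "text": "..."},
    {"url": "https://example.org/report", "text": "..."},
]
print(len(list(filter_crawl(crawl))))  # 1 -- the blocklisted source is dropped
```

The obvious weakness is churn: a network that already spans roughly 182 domains can register new ones faster than static blocklists are updated, which is why researchers also track second-order laundering routes such as the Wikipedia citations noted below.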
NewsGuard’s testing of ten major AI chatbots found they repeated false Pravda network narratives 33 percent of the time, with platforms including ChatGPT, Google Gemini, Microsoft Copilot, and Meta AI directly citing Pravda articles as sources. Beyond data poisoning, Russian influence networks also exploit open-source AI models such as Meta’s Llama 3 to generate fictional news stories at scale. The GRU-backed CopyCop network, led by John Mark Dougan, a Florida fugitive now operating from Moscow, uses self-hosted AI models to create hundreds of fake websites that impersonate legitimate news outlets while evading Western content moderation systems.
The French agency Viginum first exposed the Portal Kombat operation, which produces low-quality multilingual content repackaging Russian state media claims at a volume calculated to push these narratives into AI models’ responses. The Atlantic Council’s Digital Forensic Research Lab confirmed that Pravda content has infiltrated Wikipedia citations, creating information-laundering pathways that extend far beyond chatbot outputs. This systematic corruption of the information ecosystem represents a fundamental threat to the integrity of AI-powered tools that millions of users rely on for news and information.
The share of false information in leading chatbots’ responses nearly doubled, from 18 percent in 2024 to 35 percent in 2025, according to NewsGuard’s Reality Check findings. Researchers describe this escalation as a fundamental shift in information warfare: a move from seeking immediate propaganda impact toward the long-term, systematic corruption of AI training systems that will shape discourse for years to come.
External References:
- Russian networks flood the Internet with propaganda, aiming to corrupt AI chatbots — Bulletin of the Atomic Scientists
- A Well-funded Moscow-based Global ‘News’ Network has Infected Western Artificial Intelligence Tools Worldwide with Russian Propaganda — NewsGuard
- Russian disinformation ‘infects’ AI chatbots, researchers warn — France24
Disclaimer: The Global Influence Operations Report (GIOR) utilizes AI throughout the posting process, including the generation of summaries for news items, introductions, key points, and, often, the “context” section. We recommend verifying all information before use. Additionally, all images are generated using AI and are intended solely for illustrative purposes. While they represent the events or individuals discussed, they should not be interpreted as real-world photography.