The UK Foreign Affairs Committee heard testimony about a Russian disinformation network operating automated accounts on a massive scale. On 18 November 2025, the Committee heard Nina Jankowicz of American Sunlight Project testify that her organization identified 1,100 likely automated accounts in 2024 that posted hundreds of times daily, with more than 800 accounts still active posting 11.1 million times in the last year on issues including Gaza, Ukraine, and the cost of living in both the US and the UK. The testimony begins:
Right now, in terms of foreign information manipulation and interference, as well as broader online influence campaigns coming from adversaries, we are seeing a back to basics from state actors as a result of tech platforms’ retreat from content moderation. I will give you one example. In 2024, ASP—my organisation—identified 1,100 likely automated accounts that posted hundreds of times a day and repeatedly retweeted overt Russian propaganda within 60 seconds of it posting. We looked back at that network right before this evidence session to see what was going on with it, and right now, we are seeing that more than 800 of these accounts are still active. To give you an idea of the volume, they have posted more than 11.1 million times in the last year, on issues ranging from the war in Gaza to the war in Ukraine, to the cost of living and housing crisis not only in the US but in the UK as well.
Read more: https://policymogul.com/committee-publication/22585/18-november-2025
Key Points
- The American Sunlight Project identified 1,100 likely automated accounts in 2024, posting hundreds of times daily and retweeting overt Russian propaganda within 60 seconds. More than 800 of these accounts are still active, posting 11.1 million times in the last year on Gaza, Ukraine, and the cost-of-living crisis in the US and UK.
- Jankowicz testified that Pravda network, a collection of several hundred pro-Russian content aggregation sites, is pumping out at least 3.6 million articles annually to groom large language models, with testing showing the biggest proprietary models spitting out Russian propaganda when asked about Ukraine events.
- Jankowicz stated that the US has unilaterally disarmed in the fight against FIMI, with the Global Engagement Centre dismantled, the Office of the Director of National Intelligence’s Foreign Malign Influence centre cut back, the FBI’s foreign influence taskforce gutted, and CISA’s misinformation work eliminated due to budget cuts.
- The committee heard that Russia spends approximately $1.5 billion annually on propaganda outside its borders, while China spends $8 billion to $10 billion, compared to the entire OECD spending less than $500 million in 2023, with USAID cuts reducing that to $300 million.
Russia’s Automated Disinformation: AI-Enhanced Bot Farms and Global Influence Operations
RT and Federal Security Service operatives developed the Meliorator system to generate fictitious online personas at industrial scale, creating profiles purporting to represent Americans or Europeans that amplify pro-Kremlin messaging through coordinated networks. Moscow’s bot farms employ web crawlers to manufacture seemingly authentic biographical details while purchasing U.S.-based domain infrastructure to mask Russian origins and deceive platform authentication systems. The Pravda network operations target more than 80 countries worldwide by posing as authoritative sources to infiltrate AI training data and Wikipedia articles, functioning as an information laundromat that legitimizes disinformation.
Automation enables unprecedented message velocity that overwhelms traditional content moderation. During Poland’s September drone incursion, approximately 200,000 social media messages spread Russian narratives within hours, with experts tracking 200 to 300 mentions per minute blaming Ukraine or NATO for Russian provocations. These coordinated attacks synchronize with military actions to maximize psychological impact. Research indicates people correctly identify AI bots in political discussions only 42 percent of the time, while automated bot traffic constituted 51 percent of all web traffic in 2024—the first time in a decade, surpassing human activity.
Geographic deployment reveals strategic targeting across vulnerable democracies. The Czech Republic experienced tens of thousands of translated messages from sanctioned Russian websites flowing into domestic ecosystems, with disinformation sites producing more daily articles than major legitimate media houses. Investigation revealed operators face fines up to 50 million crowns or eight years imprisonment, yet Czech authorities demonstrated insufficient political will to intervene before parliamentary elections. Beyond Europe, Russian agents embedded in Burkina Faso’s intelligence service assist the junta in monitoring opponents and training propagandists while AI-generated endorsements from celebrities create cult followings for authoritarian leaders, demonstrating how Moscow adapts automation tactics to exploit local political vulnerabilities.
External References:
• The bear and the bot farm: Countering Russian hybrid warfare in Africa — ECFR
• Justice Department Leads Efforts to Disrupt Russian Social Media Bot Farm — U.S. DOJ
• The architecture of lies: Bot farms are running the disinformation war — Help Net Security
Disclaimer: The Global Influence Operations Report (GIOR) utilizes AI throughout the posting process, including the generation of summaries for news items, introductions, key points, and, often, the “context” section. We recommend verifying all information before use. Additionally, all images are generated using AI and are intended solely for illustrative purposes. While they represent the events or individuals discussed, they should not be interpreted as real-world photography.