Russia | November 21, 2025, 6:46 am

UK Committee Hears Evidence of Russian Automated Disinformation

The UK Foreign Affairs Committee heard testimony about a Russian disinformation network operating automated accounts on a massive scale. On 18 November 2025, the Committee heard Nina Jankowicz of the American Sunlight Project testify that her organization identified 1,100 likely automated accounts in 2024 that posted hundreds of times daily. More than 800 of those accounts remain active and have posted 11.1 million times in the last year on issues including Gaza, Ukraine, and the cost of living in both the US and the UK. The testimony begins:

Right now, in terms of foreign information manipulation and interference, as well as broader online influence campaigns coming from adversaries, we are seeing a back to basics from state actors as a result of tech platforms’ retreat from content moderation. I will give you one example. In 2024, ASP—my organisation—identified 1,100 likely automated accounts that posted hundreds of times a day and repeatedly retweeted overt Russian propaganda within 60 seconds of it posting. We looked back at that network right before this evidence session to see what was going on with it, and right now, we are seeing that more than 800 of these accounts are still active. To give you an idea of the volume, they have posted more than 11.1 million times in the last year, on issues ranging from the war in Gaza to the war in Ukraine, to the cost of living and housing crisis not only in the US but in the UK as well.

Read more: https://policymogul.com/committee-publication/22585/18-november-2025

Key Points

  • The American Sunlight Project identified 1,100 likely automated accounts in 2024, posting hundreds of times daily and retweeting overt Russian propaganda within 60 seconds. More than 800 of these accounts are still active, posting 11.1 million times in the last year on Gaza, Ukraine, and the cost-of-living crisis in the US and UK.
  • Jankowicz testified that the Pravda network, a collection of several hundred pro-Russian content-aggregation sites, is pumping out at least 3.6 million articles annually to groom large language models, with testing showing the biggest proprietary models reproducing Russian propaganda when asked about events in Ukraine.
  • Jankowicz stated that the US has unilaterally disarmed in the fight against FIMI, with the Global Engagement Center dismantled, the Office of the Director of National Intelligence’s Foreign Malign Influence Center cut back, the FBI’s foreign influence task force gutted, and CISA’s misinformation work eliminated due to budget cuts.
  • The Committee heard that Russia spends approximately $1.5 billion annually on propaganda outside its borders, while China spends $8 billion to $10 billion; by comparison, the entire OECD spent less than $500 million in 2023, with USAID cuts reducing that to $300 million.

Russia’s Automated Disinformation: AI-Enhanced Bot Farms and Global Influence Operations

RT and Federal Security Service operatives developed the Meliorator system to generate fictitious online personas at industrial scale, creating profiles purporting to represent Americans or Europeans that amplify pro-Kremlin messaging through coordinated networks. Moscow’s bot farms employ web crawlers to manufacture seemingly authentic biographical details while purchasing U.S.-based domain infrastructure to mask Russian origins and deceive platform authentication systems. The Pravda network targets more than 80 countries worldwide by posing as authoritative sources to infiltrate AI training data and Wikipedia articles, functioning as an information laundromat that legitimizes disinformation.
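
The indicators described in the testimony, accounts that post hundreds of times a day and retweet propaganda within 60 seconds, lend themselves to simple cadence-based screening. The Python sketch below is a minimal illustration of that heuristic; the `Account` record, field names, and all thresholds are assumptions for demonstration, not the American Sunlight Project's actual methodology.

```python
from dataclasses import dataclass
from datetime import timedelta

# Hypothetical activity record; field names and thresholds are illustrative
# assumptions, not ASP's actual detection criteria.
@dataclass
class Account:
    handle: str
    posts_last_24h: int
    retweet_latencies: list[timedelta]  # delay between source post and retweet

POSTS_PER_DAY_THRESHOLD = 200         # "posted hundreds of times a day"
FAST_RETWEET = timedelta(seconds=60)  # "within 60 seconds of it posting"
FAST_RETWEET_RATIO = 0.5              # assumed share of fast retweets that triggers a flag

def likely_automated(account: Account) -> bool:
    """Flag accounts whose cadence matches the indicators described in the testimony."""
    if account.posts_last_24h >= POSTS_PER_DAY_THRESHOLD:
        return True
    if account.retweet_latencies:
        fast = sum(1 for d in account.retweet_latencies if d <= FAST_RETWEET)
        if fast / len(account.retweet_latencies) >= FAST_RETWEET_RATIO:
            return True
    return False

# Example: an account posting 340 times a day with near-instant retweets is flagged.
suspect = Account("example_user", 340, [timedelta(seconds=12), timedelta(seconds=45)])
print(likely_automated(suspect))  # True
```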

Automation enables unprecedented message velocity that overwhelms traditional content moderation. During Poland’s September drone incursion, approximately 200,000 social media messages spread Russian narratives within hours, with experts tracking 200 to 300 mentions per minute blaming Ukraine or NATO for Russian provocations. These coordinated attacks synchronize with military actions to maximize psychological impact. Research indicates people correctly identify AI bots in political discussions only 42 percent of the time, while automated bot traffic constituted 51 percent of all web traffic in 2024, surpassing human activity for the first time in a decade.
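
The cited surge rates, roughly 200 to 300 mentions per minute, suggest why simple sliding-window rate monitoring is a common first line of defense against coordinated bursts. The sketch below is a minimal illustration under that assumption; the class name, window size, and alert threshold are hypothetical, not a documented tool.

```python
from collections import deque
from datetime import datetime, timedelta

class BurstDetector:
    """Sliding-window rate monitor: alert when matching mentions exceed a per-window threshold.

    The 200-per-minute default mirrors the rate experts reported during the
    Poland drone incursion; the implementation itself is an illustrative sketch.
    """

    def __init__(self, window: timedelta = timedelta(minutes=1), threshold: int = 200):
        self.window = window
        self.threshold = threshold
        self.timestamps: deque[datetime] = deque()

    def observe(self, ts: datetime) -> bool:
        """Record one matching mention; return True when the window rate crosses the threshold."""
        self.timestamps.append(ts)
        # Evict mentions that have fallen outside the window.
        while self.timestamps and ts - self.timestamps[0] > self.window:
            self.timestamps.popleft()
        return len(self.timestamps) >= self.threshold

# Example: 250 mentions arriving over 50 seconds trips the detector.
det = BurstDetector()
start = datetime(2025, 9, 10, 12, 0)
alerts = [det.observe(start + timedelta(seconds=i * 0.2)) for i in range(250)]
print(any(alerts))  # True
```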

Geographic deployment reveals strategic targeting across vulnerable democracies. The Czech Republic experienced tens of thousands of translated messages from sanctioned Russian websites flowing into domestic ecosystems, with disinformation sites producing more daily articles than major legitimate media houses. Investigation revealed that operators face fines of up to 50 million crowns or up to eight years’ imprisonment, yet Czech authorities demonstrated insufficient political will to intervene before parliamentary elections. Beyond Europe, Russian agents embedded in Burkina Faso’s intelligence service assist the junta in monitoring opponents and training propagandists, while AI-generated celebrity endorsements build cult followings for authoritarian leaders, demonstrating how Moscow adapts automation tactics to exploit local political vulnerabilities.

External References:

The bear and the bot farm: Countering Russian hybrid warfare in Africa — ECFR
Justice Department Leads Efforts to Disrupt Russian Social Media Bot Farm — U.S. DOJ
The architecture of lies: Bot farms are running the disinformation war — Help Net Security

Disclaimer: The Global Influence Operations Report (GIOR) utilizes AI throughout the posting process, including the generation of summaries for news items, introductions, key points, and, often, the “context” section. We recommend verifying all information before use. Additionally, all images are generated using AI and are intended solely for illustrative purposes. While they represent the events or individuals discussed, they should not be interpreted as real-world photography.