menu-close
RussiaOctober 23 2025, 9:03 am

Russia Uses “LLM Grooming’ to Inject Disinformation Into AI Chatbots

Rus­sia has shift­ed its dis­in­for­ma­tion strat­e­gy to flood­ing the inter­net with mil­lions of mis­lead­ing arti­cles designed to be scraped by AI-dri­ven tools, engag­ing in LLM groom­ing to train large lan­guage mod­els like Chat­G­PT to repro­duce manip­u­la­tive Krem­lin nar­ra­tives. On 23 Octo­ber 2025, EUvs­Dis­in­fo report­ed that instead of tar­get­ing audi­ences direct­ly via social media, Rus­si­a’s dis­in­for­ma­tion appa­ra­tus now cor­rupts AI infra­struc­ture by inject­ing false claims into respons­es pro­duced by AI chat­bots. The arti­cle begins:

In the dig­i­tal age, dis­in­for­ma­tion cam­paigns have evolved beyond social media and ‘fake news’, becom­ing a full form of infor­ma­tion war­fare – an area in which Rus­sia excels. The Krem­lin’s for­eign infor­ma­tion manip­u­la­tion and inter­fer­ence (FIMI) cam­paigns have remained large­ly con­sis­tent since the Cold War. But the emer­gence of the Inter­net and oth­er com­mu­ni­ca­tion tech­nolo­gies have allowed for more flex­i­bil­i­ty and greater impact with few­er resources. Just as the Web 2.0 reshaped infor­ma­tion war­fare some two decades ago, the rise of arti­fi­cial intel­li­gence (AI) has trans­formed the Krem­lin’s strat­e­gy. Instead of just push­ing tai­lor-made nar­ra­tives to the read­ers, Moscow now also tar­gets machines – a strat­e­gy all the more impor­tant giv­en that many users are now replac­ing Google Search for AI tools such as ChatGPT.

Read more: https://euvsdisinfo.eu/large-language-models-the-new-battlefield-of-russian-information-warfare/

Key Points

  • French agency Vig­inum exposed the Por­tal Kom­bat oper­a­tion pro­duc­ing low-qual­i­ty con­tent in var­i­ous lan­guages that repack­ages false claims from Russ­ian state media to ensure AI mod­els incor­po­rate these dis­in­for­ma­tion nar­ra­tives into their responses.
  • News­Guard Real­i­ty Check found that six out of ten test­ed chat­bots repeat­ed false claims from the Prav­da net­work, with the share of false and mis­lead­ing infor­ma­tion in 10 lead­ing chat­bots near­ly dou­bling from 18 per­cent in 2024 to 35 per­cent in 2025.
  • Rus­si­a’s LLM groom­ing efforts rep­re­sent a major glob­al secu­ri­ty threat by dis­tort­ing pub­lic opin­ion, erod­ing trust in dig­i­tal infor­ma­tion integri­ty, and spread­ing seem­ing­ly legit­i­mate nar­ra­tives at unprece­dent­ed scale through trust­ed platforms.
  • The automa­tion and scale of these cam­paigns make them hard­er to detect and counter, with even rel­a­tive­ly trust­ed plat­forms such as Wikipedia ampli­fy­ing Krem­lin dis­in­for­ma­tion by quot­ing sources in the Prav­da network.

Russia’s Artificial Intelligence-Powered Influence Operations: How the Kremlin Weaponizes AI for Global Disinformation

Rus­sia has sys­tem­at­i­cal­ly expand­ed its infor­ma­tion war­fare to con­t­a­m­i­nate AI lan­guage mod­els and Wikipedia entries with pro-Krem­lin nar­ra­tives across more than 80 coun­tries. The Prav­da net­work, launched in 2014 and sig­nif­i­cant­ly devel­oped since Rus­si­a’s 2022 inva­sion of Ukraine, func­tions as an infor­ma­tion laun­dro­mat that aggre­gates con­tent from sanc­tioned Russ­ian out­lets and dis­trib­utes it through fraud­u­lent news por­tals mim­ic­k­ing legit­i­mate media brands. By sys­tem­at­i­cal­ly cit­ing Krem­lin-linked sources in Wikipedia arti­cles and flood­ing the inter­net with pro­pa­gan­da-laden con­tent, Russ­ian actors are poi­son­ing the train­ing data that AI sys­tems rely upon, enabling dis­in­for­ma­tion to be ampli­fied when users query chat­bots about cur­rent events.

Beyond data con­t­a­m­i­na­tion, Rus­sia deploys sophis­ti­cat­ed AI-enhanced oper­a­tions to direct­ly gen­er­ate and dis­trib­ute dis­in­for­ma­tion. Rus­si­a’s Social Design Agency has con­duct­ed “Oper­a­tion Under­cut” since late 2023, using AI-gen­er­at­ed con­tent and imper­son­ation tac­tics to erode West­ern sup­port for Ukraine by por­tray­ing Ukrain­ian lead­er­ship as cor­rupt and inef­fec­tive. The oper­a­tion col­lab­o­rates with net­works like Copy­Cop to spread deep­fake videos and tar­gets Euro­pean and Amer­i­can audi­ences with tai­lored mul­ti­lin­gual con­tent designed to ampli­fy anti-Ukraine sen­ti­ment and reduce mil­i­tary aid flows. Sim­i­lar­ly, auto­mat­ed web­site net­works have repub­lished hun­dreds of thou­sands of arti­cles from main­stream out­lets while selec­tive­ly mod­i­fy­ing spe­cif­ic pieces to insert pro-Russ­ian nar­ra­tives, demon­strat­ing how AI-pow­ered con­tent deliv­ery sys­tems can sub­tly manip­u­late infor­ma­tion at scale.

The scope of these AI-enabled oper­a­tions reflects Rus­si­a’s strate­gic invest­ment in influ­ence capa­bil­i­ties. The Krem­lin agency Rossotrud­nich­est­vo increased spend­ing by 1.5 times in 2025, direct­ing 412 mil­lion rubles toward pro­grams that train for­eign activists, jour­nal­ists, and blog­gers in pro­pa­gan­da tech­niques. U.S. intel­li­gence offi­cials iden­ti­fied Rus­sia as the most pro­lif­ic for­eign actor using AI to gen­er­ate con­tent tar­get­ing the 2024 pres­i­den­tial elec­tion, with the Jus­tice Depart­ment seiz­ing 32 domains used in the Dop­pel­ganger cam­paign that employed cut­ting-edge AI to spread dis­in­for­ma­tion and state-spon­sored narratives.

News­Guard research revealed that lead­ing chat­bots repeat­ed false nar­ra­tives from the Prav­da net­work 33 per­cent of the time when prompt­ed with relat­ed queries, demon­strat­ing how Moscow’s strat­e­gy of flood­ing web crawlers with false­hoods suc­cess­ful­ly dis­torts how AI mod­els process infor­ma­tion. This mul­ti-lay­ered approach—combining auto­mat­ed con­tent gen­er­a­tion, strate­gic data poi­son­ing, and coor­di­nat­ed ampli­fi­ca­tion through fake personas—represents an evo­lu­tion in infor­ma­tion war­fare that exploits both the open nature of AI train­ing sys­tems and the trust users place in algo­rith­mic outputs.

Exter­nal References:
Jus­tice Depart­ment Dis­rupts Russ­ian Gov­ern­ment-Spon­sored For­eign Malign Influ­ence Operation
Russ­ian Net­works Flood the Inter­net with Pro­pa­gan­da, Aim­ing to Cor­rupt AI Chat­bots — Bul­letin of the Atom­ic Scientists
AI Chat­bots Echo Russ­ian Dis­in­for­ma­tion — Axios

 Dis­claimer:
The Glob­al Influ­ence Oper­a­tions Report (GIOR) uti­lizes AI through­out the post­ing process, includ­ing the gen­er­a­tion of sum­maries for news items, intro­duc­tions, key points, and, often, the “con­text” sec­tion. We rec­om­mend ver­i­fy­ing all infor­ma­tion before use. Addi­tion­al­ly, all images are gen­er­at­ed using AI and are intend­ed sole­ly for illus­tra­tive pur­pos­es. While they rep­re­sent the events or indi­vid­u­als dis­cussed, they should not be inter­pret­ed as real-world photography.