menu-close
RussiaOctober 29 2025, 8:04 am

Russian AI Manipulation in Japan Election Exposed

The Russ­ian AI manip­u­la­tion cam­paign that tar­getied Japan’s elec­tion rep­re­sents a dan­ger­ous new phase in Krem­lin influ­ence oper­a­tions that sys­tem­at­i­cal­ly infects arti­fi­cial intel­li­gence chat­bots with pro­pa­gan­da to shape pub­lic opin­ion. On 27 Octo­ber 2025, Nippon.com report­ed that inter­na­tion­al affairs expert Ichi­hara Maiko doc­u­ment­ed how Rus­sia-aligned bot accounts exploit­ed the July 2025 House of Coun­cil­lors elec­tion by groom­ing AI lan­guage mod­els with more than 3.6 mil­lion pro­pa­gan­da items annu­al­ly, caus­ing lead­ing AI tools to repeat false infor­ma­tion 35% of the time. The arti­cle begins:

Con­cern is mount­ing over Russ­ian inter­fer­ence in Japan­ese elec­tions as the Krem­lin’s online influ­ence oper­a­tions enter a new phase. Inter­na­tion­al affairs expert Ichi­hara Maiko explores the prob­lem and sug­gests a way for­ward. Accu­sa­tions of inter­fer­ence by for­eign actors have swirled around Japan’s July 20 House of Coun­cil­lors elec­tion. On July 15, dur­ing the offi­cial cam­paign­ing peri­od, Yamamo­to Ichirō of the Japan Insti­tute of Law and Infor­ma­tion Sys­tems report­ed that Russ­ian bots were post­ing dis­in­for­ma­tion and dis­tort­ing infor­ma­tion. The five X (for­mer­ly Twit­ter) accounts Yamamo­to cit­ed were frozen the fol­low­ing day. There is lit­tle to be gained from any for­mal inves­ti­ga­tion on whether they are Russ­ian agents indeed.

Read more: https://www.nippon.com/en/in-depth/d01170/japan%E2%80%99s-upper-house-election-reveals-how-russian-influence-operations-infecting-ai-with-.html

Key Points

  • Rus­si­a’s strat­e­gy involves cre­at­ing numer­ous small bot accounts that prop­a­gate Krem­lin-aligned infor­ma­tion via com­ments post­ed to the accounts of key influ­encers and celebri­ties to reach mas­sive audi­ences even with min­i­mal followers.
  • The phe­nom­e­non dubbed LLM groom­ing involves pro-Krem­lin con­tent infect­ing AI chat­bots through the Prav­da net­work pub­lish­ing vast num­bers of trans­lat­ed news sto­ries, with Prav­da Nihon rou­tine­ly repost­ing as many as 250 pro-Rus­sia items daily.
  • A News­Guard audit found that the 10 lead­ing gen­er­a­tive AI tools repeat­ed false infor­ma­tion on con­tro­ver­sial news top­ics 35% of the time on aver­age, with par­tic­u­lar­ly high fail rates for Inflec­tion at 56.67% and Per­plex­i­ty at 46.67%.
  • Analy­sis of respons­es to posts about Russ­ian influ­ence oper­a­tions revealed that approx­i­mate­ly 32% of neg­a­tive replies orig­i­nat­ed from Rus­sia-aligned accounts, with 94 such accounts respond­ing with a total of 218 com­ments and quote posts.

Russia Uses Chatbots for Disinformation in Global Influence Operations

Rus­sia has trans­formed its dis­in­for­ma­tion strat­e­gy by flood­ing the inter­net with mil­lions of mis­lead­ing arti­cles designed to cor­rupt AI chat­bots rather than tar­get­ing human audi­ences direct­ly. The Moscow-based Prav­da net­work oper­ates approx­i­mate­ly 182 domains across 74 coun­tries, pub­lish­ing an esti­mat­ed 3.6 mil­lion pro-Krem­lin arti­cles in 2024 alone that aggre­gate con­tent from Russ­ian state media and pro-Krem­lin influ­encers. Research from the Amer­i­can Sun­light Project revealed that this net­work appears specif­i­cal­ly designed to tar­get web crawlers and AI train­ing datasets through what experts term “LLM grooming”—the sys­tem­at­ic injec­tion of dis­in­for­ma­tion into large lan­guage models.

News­Guard’s test­ing of ten major AI chat­bots found they repeat­ed false Prav­da net­work nar­ra­tives 33 per­cent of the time, with plat­forms includ­ing Chat­G­PT, Google Gem­i­ni, Microsoft Copi­lot, and Meta AI direct­ly cit­ing Prav­da arti­cles as sources. Beyond data poi­son­ing oper­a­tions, Russ­ian influ­ence net­works exploit open-source AI mod­els like Meta’s Lla­ma 3 to gen­er­ate fic­tion­al news sto­ries at scale. The GRU-backed Copy­Cop net­work, led by Flori­da fugi­tive John Mark Dougan oper­at­ing from Moscow, uses self-host­ed AI mod­els to cre­ate hun­dreds of fake web­sites imper­son­at­ing legit­i­mate news out­lets while avoid­ing West­ern con­tent mod­er­a­tion systems.

French agency Vig­inum first exposed the Por­tal Kom­bat oper­a­tion pro­duc­ing low-qual­i­ty mul­ti­lin­gual con­tent that repack­ages Russ­ian state media claims, ensur­ing AI mod­els incor­po­rate these nar­ra­tives into respons­es. The Atlantic Coun­cil’s Dig­i­tal Foren­sic Research Lab con­firmed that Prav­da con­tent has infil­trat­ed Wikipedia cita­tions, cre­at­ing infor­ma­tion laun­der­ing path­ways that extend far beyond chat­bot out­puts. This sys­tem­at­ic approach to cor­rupt­ing infor­ma­tion ecosys­tems rep­re­sents a fun­da­men­tal threat to the integri­ty of AI-pow­ered tools that mil­lions of users rely upon for news and information.

The share of false infor­ma­tion in lead­ing chat­bots near­ly dou­bled from 18 per­cent in 2024 to 35 per­cent in 2025, accord­ing to News­Guard’s Real­i­ty Check find­ings. This esca­la­tion rep­re­sents what researchers describe as a fun­da­men­tal shift in infor­ma­tion warfare—moving from imme­di­ate pro­pa­gan­da impact toward long-term sys­tem­at­ic cor­rup­tion of AI train­ing sys­tems that will shape dis­course for years to come.

Exter­nal References:

Dis­claimer: The Glob­al Influ­ence Oper­a­tions Report (GIOR) uti­lizes AI through­out the post­ing process, includ­ing the gen­er­a­tion of sum­maries for news items, intro­duc­tions, key points, and, often, the “con­text” sec­tion. We rec­om­mend ver­i­fy­ing all infor­ma­tion before use. Addi­tion­al­ly, all images are gen­er­at­ed using AI and are intend­ed sole­ly for illus­tra­tive pur­pos­es. While they rep­re­sent the events or indi­vid­u­als dis­cussed, they should not be inter­pret­ed as real-world photography.