Russia | May 12, 2025, 4:50 am

Malicious Russian Bot Farm Poisons AI Systems with Australian Election Propaganda

Malicious Russian AI influence operations have been exposed in the lead-up to the Australian federal election. On May 2, 2025, ABC News revealed that a pro-Russian influence network called Pravda Australia has been publishing hundreds of propaganda articles daily in what experts describe as a sophisticated attempt to train AI chatbots on Kremlin narratives and increase division among Australians. The article begins:

A pro-Russian influence operation has been targeting Australia in the lead-up to this weekend's federal election, the ABC can reveal, attempting to "poison" AI chatbots with propaganda. Pravda Australia presents itself as a news site, but analysts allege it's part of an ongoing plan to retrain Western chatbots such as ChatGPT, Google's Gemini and Microsoft's Copilot on "the Russian perspective" and increase division amongst Australians in the long-term. It's one of roughly 180 largely automated websites in the global Pravda Network allegedly designed to "launder" disinformation and pro-Kremlin propaganda for AI models to consume and repeat back to Western users.

Read more: https://www.abc.net.au/news/2025-05-03/pro-russian-push-to-poison-ai-chatbots-in-australia/105239644

Key Points

  • Pravda Australia significantly increased its output to 155 stories daily from mid-March, just before the election was called.
  • NewsGuard testing found that 16 percent of AI chatbot responses amplified false narratives when prompted with Australia-related disinformation.
  • Kremlin propagandist John Dougan confirmed the strategy in January, boasting that his websites had "infected approximately 35 percent of worldwide artificial intelligence."
  • Intelligence experts say the operation shows limited human engagement but represents Russia's long-term approach to information warfare against Western democracies.

Chatbots have emerged as powerful tools in the landscape of influence operations. Recent developments, such as Taiwan's use of the Auntie Meiyu chatbot to counter Chinese disinformation and Ukraine's deployment of Telegram chatbots to identify pro-Russian agitators, illustrate their dual potential for both defense and manipulation. Russian disinformation networks have increasingly targeted major AI chatbots, successfully flooding them with pro-Kremlin narratives and distorting the information presented to users. This tactic exploits a vulnerability of large language models: they can be manipulated through coordinated campaigns that seed misleading content online for the models to ingest and repeat.

During elections, leading chatbots have been found to provide inaccurate or misleading information about voting, raising concerns about the impact on voter confidence and turnout. The issue of trust is further complicated by how chatbots disclose their nonhuman identity: while transparency can reduce trust and engagement in high-stakes contexts, it may foster more positive responses when chatbots are unable to resolve user issues. As both state and non-state actors refine their use of automated conversational agents, the global information environment faces mounting challenges in maintaining trust, accuracy, and the integrity of public discourse.

External References:

  1. Exclusive: Russian disinformation floods AI chatbots, study finds

  2. Chatbot info on U.S. elections is inaccurate, misleading and could keep voters from polls, report finds

  3. Trust me, I'm a bot – repercussions of chatbot disclosure in different service contexts

Disclaimer

The Global Influence Operations Report (GIOR) employs AI throughout the posting process, including generating summaries of news items, the introduction, key points, and often the "context" section. We recommend verifying all information before use. Additionally, images are AI-generated and intended solely for illustrative purposes. While they represent the events or individuals discussed, they should not be interpreted as real-world photography.