Russia | September 25, 2025, 3:25 am

Russian Troll Farm Uses Meta’s Llama 3 AI for Disinformation Operations

A Russian troll farm powered by Meta’s Llama 3 AI technology is creating hundreds of fake websites to spread pro-Kremlin propaganda, according to new research. On 18 September 2025, Cybernews reported that the CopyCop disinformation network, led by Florida fugitive and former deputy sheriff John Mark Dougan, uses Meta’s open-source AI models to generate fictional news stories targeting Western audiences with anti-Ukrainian narratives. The article begins:

CopyCop, a Russian influence network discovered early last year, uses Meta’s Llama 3 to create hundreds of fake websites and serve up pro-Kremlin political commentary on them. The mastermind? A former deputy sheriff from Florida. US citizen and fugitive John Mark Dougan fled to Russia in 2016 after gaining political asylum in Moscow. He soon began working as a disinformation purveyor supported by the Kremlin. According to Insikt Group, Recorded Future’s threat research division, Dougan is more specifically a tool for the Main Directorate of the General Staff of the Armed Forces of the Russian Federation (GRU), publishing content prepared by the Moscow-based Center for Geopolitical Expertise (CGE). The AI-powered CopyCop network plagiarizes mainstream media content, turns it into politically biased propaganda, and automatically spreads it around using inauthentic media outlets in the United States, the United Kingdom, or France.

Read more: https://cybernews.com/news/copycop-russia-fsb-fake-news-network/

Key Points

  • Former Florida deputy sheriff John Mark Dougan leads the GRU-backed CopyCop disinformation network from Moscow
  • The network uses self-hosted Meta Llama 3 models to avoid Western AI censorship and generate pro-Russian content
  • At least 200 fake websites have been created impersonating news outlets and political organizations across multiple countries
  • The operation spreads false claims about Ukrainian President Zelensky misappropriating US funds, along with other anti-Ukrainian narratives

John Mark Dougan: Russia’s AI-Targeted Influence Operations Run by a Former US Marine

John Mark Dougan represents a significant evolution in Russian influence operations: a former U.S. Marine and law enforcement officer who has become a key amplifier of Russian disinformation from his base in Moscow. Dougan’s DC Weekly and associated fake news networks spread false claims that Ukrainian President Zelensky used American aid money for luxury purchases, and these fabricated stories directly influenced U.S. Congressional debates about military support for Ukraine. His operation demonstrates how Russian propaganda platforms masquerade as Washington-based publications while actually operating from Moscow servers to target American audiences with carefully crafted disinformation.

The scope of Dougan’s influence network extends far beyond individual stories, encompassing what researchers describe as a sophisticated ecosystem designed to systematically poison AI systems with pro-Kremlin narratives. Dougan has boasted that his websites infected approximately 35 percent of the world’s artificial intelligence systems, illustrating a strategic shift toward targeting AI chatbots rather than just human audiences. His background as a former Marine who fled to Russia in 2016 after facing criminal charges related to massive doxxing campaigns against public officials shows how Moscow has successfully recruited disaffected former American officials for sophisticated propaganda operations that now target both traditional media consumption and AI training datasets.

Dougan’s operations reveal the tactical evolution of Russian information warfare toward what experts describe as long-term AI manipulation strategies. The Pravda Network operates roughly 180 largely automated websites designed to launder disinformation for AI models to consume and repeat back to Western users, marking a fundamental shift from immediate propaganda impact toward the systematic corruption of information systems that will shape future discourse. His use of artificial intelligence tools to generate content, combined with his knowledge of American media conventions and local concerns, enables disinformation that appears credible to both human and AI audiences while advancing Moscow’s strategic objectives of undermining democratic processes and sowing division within Western societies.


Disclaimer

The Global Influence Operations Report (GIOR) employs AI throughout the posting process, including generating summaries of news items, the introduction, key points, and often the “context” section. We recommend verifying all information before use. Additionally, images are AI-generated and intended solely for illustrative purposes. While they represent the events or individuals discussed, they should not be interpreted as real-world photography.