Russia | September 15, 2025, 5:48 am

Moscow Uses Deepfake Technology to Target Lindsey Graham

Russian intelligence agencies have begun employing deepfake technology to fabricate evidence aimed at discrediting foreign politicians. The Robert Lansing Institute reported on September 9, 2025, that Moscow is using the Turkish newspaper Aydınlık to target US Senator Lindsey Graham with fabricated videos. The operation aims to undermine American support for Ukraine ahead of Senate elections by fabricating footage of supposed negotiations between Ukrainian officials and Graham. The article begins:

Russian intelligence agencies responsible for influence operations have begun to employ artificial intelligence, particularly deepfake technology, to fabricate “evidence” aimed at discrediting foreign politicians. One recent example appeared in the Turkish newspaper Aydınlık, known for its sensationalist leanings. The outlet published a video purporting to show Andriy Yermak, the head of Ukraine’s presidential office, Ukraine’s commander-in-chief Oleksandr Syrsky, and U.S. Senator Lindsey Graham. Even Aydınlık acknowledged that the authenticity of the footage could not be confirmed.

Read more: https://lansinginstitute.org/2025/09/09/deepfake-diplomacy-how-moscow-uses-turkish-media-to-target-lindsey-graham/

Key Points

  • The fabricated video bears hallmarks of Russian disinformation, including the absence of metadata, Russian-language interface elements, unnatural facial movements, and the delayed eye-mouth synchronization typical of AI manipulation (a first-pass metadata check is sketched after this list).
  • Aydınlık is closely linked to the Vatan Party of Doğu Perinçek, a long-time advocate of “Eurasianism” who has appeared on Russian media platforms and participated in nationalist forums in Moscow.
  • This continues a pattern from 2023, when Russian state media circulated a manipulated video of Graham’s meeting with Zelensky, splicing his words to suggest he said “the Russians are dying — the best money we’ve ever spent”.
  • The journalist behind the piece, Yiğit Saner, attended the International Russophile Movement Congress in Moscow in 2024 and later visited occupied Mariupol; his reporting is frequently amplified by Russian state media.
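
The metadata point lends itself to a quick illustration. Below is a minimal Python sketch of the kind of first-pass triage an analyst might run, using ffprobe (from FFmpeg, assumed installed) to dump a clip's container metadata and flag stripped tags. The file name is hypothetical, and absent metadata is only a weak signal, since ordinary platform re-encoding also removes tags; this is a screening step, not proof of manipulation.

    import json
    import subprocess

    def probe_metadata(path: str) -> dict:
        """Return the container/stream metadata ffprobe reports for a video."""
        result = subprocess.run(
            ["ffprobe", "-v", "quiet", "-print_format", "json",
             "-show_format", "-show_streams", path],
            capture_output=True, text=True, check=True,
        )
        return json.loads(result.stdout)

    def metadata_red_flags(path: str) -> list[str]:
        """Collect weak signals: a container stripped of tags or creation time."""
        info = probe_metadata(path)
        tags = info.get("format", {}).get("tags", {})
        flags = []
        if not tags:
            flags.append("container carries no metadata tags at all")
        elif "creation_time" not in tags:
            flags.append("no creation_time tag in the container")
        return flags

    if __name__ == "__main__":
        # "suspect_clip.mp4" is a placeholder file name for illustration.
        for flag in metadata_red_flags("suspect_clip.mp4"):
            print("red flag:", flag)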

Deepfakes and Influence Operations: Rising Tool of Disinformation

Deepfake technology has become a flexible weapon in global influence operations, undermining public trust in institutions and reshaping political discourse. Russian operators recently spoofed Western outlets with AI-generated articles and synthetic celebrity voice-overs to influence audiences around major international events. A coordinated strategy known as Operation Undercut used fabricated video and audio deepfakes to erode Western support for Ukraine by mimicking credible news sources.

In Asia, manipulated clips were circulated in the Philippine electoral context to sway voter sentiment. Even European politicians were duped by a staged video call impersonating Navalny’s chief-of-staff, illustrating how synthetic media can infiltrate diplomacy. Beyond these cases, researchers warn that AI-enabled influence operations are becoming scalable across elections worldwide, with models capable of generating persuasive text, images, and videos tailored to local contexts.

Campaigns in India already demonstrate how millions of deepfakes can flood social platforms to distort political discourse, while U.S. analysts point to the risk of foreign actors weaponizing such tools in the 2024 election cycle. Together, these developments reveal how authoritarian and opportunistic actors are embedding synthetic media into disinformation arsenals to fracture democratic processes, polarize societies, and undermine media credibility.

Disclaimer

The Global Influence Operations Report (GIOR) employs AI throughout the posting process, including generating summaries of news items, the introduction, key points, and often the “context” section. We recommend verifying all information before use. Additionally, images are AI-generated and intended solely for illustrative purposes. While they represent the events or individuals discussed, they should not be interpreted as real-world photography.