
How To Spot State-Backed Trolls On Reddit

January 5th, 2022 15:05

US media reported last month that a consortium of British, American, and European researchers has created a tool called TrollMagnifier to uncover state-backed trolls on Reddit, a social media website. According to a Wired report:

Bootinbull [the name of a Reddit user, ed.] continued adding to Reddit, alternately posting cute pictures of dogs alongside debating China and Russia’s future role in the world. But Bootinbull wasn’t real—at least not in our traditional understanding. They were a Russian troll account, likely paid by the state to try and upend conventional online discourse and push the country’s talking points to the masses. The account is one of 1,248 identified by researchers from a consortium of British, American, and European universities as Russian-sponsored trolls, operating on the world wide web. The academics identified the accounts as questionable after tracking the behaviors of 335 users identified by Reddit as trolls back in 2017. Reddit continue to track spambots and trolls, processing 7.9 million reports of “content manipulation” in the second quarter of 2021. “These are accounts that are controlled by actual people,” says Gianluca Stringhini, assistant professor at Boston University, and one of the researchers who identified the troll accounts using a tool they call TrollMagnifier. The tool is an artificial intelligence model trained on the behavior of known Russian troll accounts, and it purports to be able to identify new, still uncovered troll accounts active on Reddit.

Read the rest here.

The Wired report refers to an academic paper published in December. According to the paper’s abstract:

Growing evidence points to recurring influence campaigns on social media, often sponsored by state actors aiming to manipulate public opinion on sensitive political topics. Typically, campaigns are performed through instrumented accounts, known as troll accounts; despite their prominence, however, little work has been done to detect these accounts in the wild. In this paper, we present TROLLMAGNIFIER, a detection system for troll accounts. Our key observation, based on analysis of known Russian-sponsored troll accounts identified by Reddit, is that they show loose coordination, often interacting with each other to further specific narratives. Therefore, troll accounts controlled by the same actor often show similarities that can be leveraged for detection. TROLLMAGNIFIER learns the typical behavior of known troll accounts and identifies more that behave similarly. We train TROLLMAGNIFIER on a set of 335 known troll accounts and run it on a large dataset of Reddit accounts. Our system identifies 1,248 potential troll accounts; we then provide a multi-faceted analysis to corroborate the correctness of our classification. In particular, 66% of the detected accounts show signs of being instrumented by malicious actors (e.g., they were created on the same exact day as a known troll, they have since been suspended by Reddit, etc.). They also discuss similar topics as the known troll accounts and exhibit temporal synchronization in their activity. Overall, we show that using TROLLMAGNIFIER, one can grow the initial knowledge of potential trolls provided by Reddit by over 300%.
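The abstract's core idea—learn the behavior of known troll accounts, then flag other accounts that behave similarly—can be illustrated with a toy sketch. The paper describes TROLLMAGNIFIER's actual model in detail; the version below is only a loose, hypothetical illustration of similarity-based detection, and the feature names, values, and threshold are invented for the example, not taken from the paper:

```python
import math

def distance(a, b):
    """Euclidean distance between two behavioral feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

# Hypothetical per-account features, each scaled to [0, 1]:
# [activity level, share of political posts, share of replies to known trolls]
known_trolls = {
    "troll_a": [0.9, 0.8, 0.6],
    "troll_b": [0.8, 0.7, 0.5],
}
candidates = {
    "user_1": [0.85, 0.75, 0.55],  # behaves much like the known trolls
    "user_2": [0.10, 0.05, 0.00],  # an ordinary account
}

THRESHOLD = 0.2  # hypothetical cutoff for "behaves similarly"

# Flag any candidate whose behavior is close to at least one known troll.
flagged = [
    name for name, feats in candidates.items()
    if any(distance(feats, t) <= THRESHOLD for t in known_trolls.values())
]
print(flagged)  # ['user_1']
```

In this toy setup, a seed set of labeled accounts (analogous to the 335 Reddit-identified trolls) expands into a larger set of candidates (analogous to the 1,248 detected accounts)—which is how a seed list can grow by over 300%, as the abstract reports.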

Read the full paper here.

The GIOR has extensively covered influence operations by state-backed online trolls.