
Better Safe Than Sorry: an Adversarial Approach to improve Social Bot Detection

The arms race between spambots and spambot detectors unfolds in cycles (or generations): a new wave of spambots is created (and new spam is spread), new spambot filters are derived, and old spambots mutate (or evolve) into new species. Recently, with the diffusion of adversarial learning, a new practice is emerging: deliberately manipulating target samples in order to build stronger detection models. Here, we manipulate generations of Twitter social bots to obtain, and study, their possible future evolutions, with the aim of eventually deriving more effective detection techniques. In detail, we propose and experiment with a novel genetic algorithm for the synthesis of online accounts. The algorithm allows the creation of synthetic, evolved versions of current state-of-the-art social bots. Results demonstrate that the synthetic bots indeed evade current detection techniques. However, they also provide all the elements needed to improve those techniques, making a proactive approach to the design of social bot detection systems possible.
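To make the abstract's idea concrete, the following is a minimal, hypothetical sketch of how a genetic algorithm can evolve synthetic account behaviours against a detector. It is not the algorithm from the paper: the action alphabet, the `detector_score` stand-in classifier, the fitness definition, and all parameters are illustrative assumptions only.

```python
import random

# Hypothetical encoding of an account's behaviour as a string of actions:
# T = tweet, R = retweet, L = reply (illustrative only, not the paper's encoding).
ACTIONS = "TRL"
GENOME_LEN = 20
POP_SIZE = 50
GENERATIONS = 100
MUTATION_RATE = 0.05


def detector_score(genome):
    """Stand-in detector: flags accounts whose behaviour is highly repetitive.

    Returns a value in [0, 1]; higher means 'more bot-like'. A real setup
    would plug in an actual social bot classifier here.
    """
    repeats = sum(1 for a, b in zip(genome, genome[1:]) if a == b)
    return repeats / (len(genome) - 1)


def fitness(genome):
    # Evading the detector means minimising its score, so fitness is the complement.
    return 1.0 - detector_score(genome)


def crossover(parent_a, parent_b):
    # Single-point crossover between two behaviour strings.
    cut = random.randint(1, GENOME_LEN - 1)
    return parent_a[:cut] + parent_b[cut:]


def mutate(genome):
    # Each action is independently replaced with a random one at MUTATION_RATE.
    return "".join(
        random.choice(ACTIONS) if random.random() < MUTATION_RATE else gene
        for gene in genome
    )


def evolve():
    # Start from a random population of synthetic behaviours.
    population = [
        "".join(random.choice(ACTIONS) for _ in range(GENOME_LEN))
        for _ in range(POP_SIZE)
    ]
    for _ in range(GENERATIONS):
        population.sort(key=fitness, reverse=True)
        survivors = population[: POP_SIZE // 2]  # truncation selection
        offspring = [
            mutate(crossover(random.choice(survivors), random.choice(survivors)))
            for _ in range(POP_SIZE - len(survivors))
        ]
        population = survivors + offspring
    return max(population, key=fitness)


if __name__ == "__main__":
    best = evolve()
    print("evolved behaviour:", best, "detector score:", detector_score(best))
```

The evolved behaviours that slip past the detector can then be fed back as additional training or analysis material, which is the proactive detection-improvement loop the abstract describes.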

Proceedings of the 11th International ACM Conference on Web Science (WebSci'19), Boston, USA, 2019

External authors: Angelo Spognardi (Sapienza University of Rome), Stefano Tognazzi (IMT School for Advanced Studies Lucca)
IIT authors:

Type: Conference proceedings contribution
Discipline area: Computer Science & Engineering

File: Cresci, 2019, Better Safe Than Sorry - an Adversarial Approach to improve Social Bot Detection.pdf

Activity: Social Media Analysis