Maurizio Tesconi specializes in the study of social networks, with a particular focus on fakes and on applications in the world of intelligence.
They have only existed for 10-15 years, yet today it would be difficult to imagine everyday life without being able to consult them. For billions of people around the world, social networks are the first places where they can share something new in their lives or engage in heated discussions on current events.
The importance of the phenomenon has not escaped researchers, who have been studying the behavior of people on platforms for some time. Among these are the computer scientists of the Cyber Intelligence research unit of the Institute of Informatics and Telematics, who have been analyzing data from the web and social networks for years.
“I started studying social media around 2010. The topic fascinated me immediately,” recalls Maurizio Tesconi, head of the research unit. Among the first projects stemming from this interest is Social Trend, a web application which, starting from social network data, draws up a popularity ranking of the most prominent accounts (journalists, actors, newspapers, parties, etc.) and monitors its evolution over time.
From emergency management to intelligence
Later, the researchers thought of exploiting the large amount of information that can be obtained from social networks to manage emergency situations. “The idea behind the Social sensing project is to use people as social sensors in the event of a calamity or natural disaster,” explains Tesconi. Thus, for example, geolocated tweets that come from a city struck by an earthquake, if properly gathered and processed, can provide useful information for those who need to provide assistance. And they can do it much faster than traditional channels, for example by reporting the most affected areas or the presence of injured people. The experiments have demonstrated the effectiveness of this type of system.
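The social-sensor idea can be illustrated with a minimal sketch (the function names, coordinates, keywords and tweet structure below are illustrative assumptions, not the project's actual code): keep only tweets that are geolocated near the event and mention relevant terms.

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometers between two (lat, lon) points."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    dlat, dlon = lat2 - lat1, lon2 - lon1
    a = sin(dlat / 2) ** 2 + cos(lat1) * cos(lat2) * sin(dlon / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

def filter_social_sensors(tweets, epicenter, radius_km=50,
                          keywords=("earthquake", "terremoto", "injured")):
    """Keep geolocated tweets close to the event that mention relevant keywords."""
    hits = []
    for t in tweets:
        if t.get("coords") is None:  # skip tweets without geolocation
            continue
        lat, lon = t["coords"]
        near = haversine_km(lat, lon, *epicenter) <= radius_km
        relevant = any(k in t["text"].lower() for k in keywords)
        if near and relevant:
            hits.append(t)
    return hits

# Hypothetical sample data: an epicenter in central Italy and three tweets.
tweets = [
    {"text": "Strong earthquake, a building collapsed!", "coords": (42.70, 13.25)},
    {"text": "Nice weather in Rome today", "coords": (41.90, 12.49)},
    {"text": "Terremoto! People injured near the square", "coords": (42.72, 13.30)},
]
epicenter = (42.70, 13.24)
relevant = filter_social_sensors(tweets, epicenter)
```

A real pipeline would of course pull live data from a platform's streaming API; the sketch only shows the filtering step described above.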
Starting from these encouraging results, Tesconi and colleagues decided to apply the same method to the world of intelligence. After all, a terrorist attack is in many respects comparable to a calamity: when a bomb explodes, it is necessary to understand as quickly as possible whether there are injured people and where they are, in order to help them.
From the studies in the field of intelligence, a collaboration with the Ministry of the Interior and the State Police was also established, lasting several years. The joint laboratory, called Craim, saw CNR researchers committed to creating useful tools for investigations, from facial recognition techniques to social network analysis.
Bots – on social media and in finance
“Still remaining in the field of intelligence, we then started dealing with bots,” continues Tesconi. Bots are automated accounts programmed with a specific purpose. On social networks these are generally fake profiles behind which software pretends to be a person, sometimes carrying out malicious activities such as scams or the manipulation of online discussions.
“We started by studying the so-called fake followers, very simple social bots whose purpose is to artificially inflate the number of followers of aspiring influencers.” The goal was to develop a program capable of automatically identifying fake accounts.
During the experiments, the computer scientists ran into the difficulty of finding examples of fake users, which are essential for training the machine-learning algorithms that perform the recognition.
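To give an idea of what such a detector looks at, here is a toy sketch, not the unit's actual system: fake followers tend to follow many accounts while being followed by few, have empty profiles and almost no activity. The features and thresholds below are illustrative assumptions.

```python
def extract_features(profile):
    """Turn a profile dict into a few numeric signals used in the toy rule."""
    follower_ratio = profile["followers"] / max(profile["following"], 1)
    has_bio = 1.0 if profile["has_bio"] else 0.0
    tweets_per_day = profile["tweets"] / max(profile["account_age_days"], 1)
    return follower_ratio, has_bio, tweets_per_day

def looks_fake(profile, ratio_threshold=0.01, activity_threshold=0.05):
    """Flag a profile when at least two suspicious signals fire.

    In practice these thresholds would be learned from labeled data
    (which is exactly why the researchers needed examples of fake users).
    """
    ratio, has_bio, activity = extract_features(profile)
    signals = [
        ratio < ratio_threshold,      # follows many, followed by few
        has_bio == 0.0,               # empty profile description
        activity < activity_threshold # almost never tweets
    ]
    return sum(signals) >= 2

# Hypothetical examples: a bought follower vs. an ordinary active user.
fake = {"followers": 3, "following": 2000, "has_bio": False,
        "tweets": 1, "account_age_days": 400}
genuine = {"followers": 500, "following": 400, "has_bio": True,
           "tweets": 3000, "account_age_days": 1000}
```

A trained classifier replaces the hand-set thresholds with ones fitted on labeled examples, but the feature-extraction step is conceptually the same.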
“So one day we even found ourselves buying fake followers,” he smiles. Over the years the studies have continued and the techniques have become increasingly refined. The range of action of the detection algorithms developed by the Cyber Intelligence research unit has also expanded, and is no longer limited to self-styled influencers. “We also have a research line on finance and one on virtual currencies, which are affected by bots and fake accounts much more than you might imagine,” explains Tesconi.
Within the same context, researchers are pursuing a research line that aims to improve the distinction between bots, humans and trolls.
“Because bots are getting more and more sophisticated, and distinguishing them from flesh-and-blood humans is increasingly complicated. In recent months we have been experimenting with a new technique that checks the content of the sentences tweeted by the various profiles and, through an analysis of the text, expresses an opinion on the authenticity of the profile.”
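One simple text-based signal, shown here as a sketch and not as the unit's actual technique, is self-similarity: automated accounts often post near-duplicate, templated messages, so the average similarity between an account's own tweets can hint at automation. The similarity measure and sample tweets below are illustrative assumptions.

```python
def jaccard(a, b):
    """Jaccard similarity between the word sets of two texts (0.0 to 1.0)."""
    sa, sb = set(a.lower().split()), set(b.lower().split())
    return len(sa & sb) / len(sa | sb) if sa | sb else 0.0

def self_similarity(tweets):
    """Mean pairwise Jaccard similarity of an account's tweets."""
    pairs = [(i, j) for i in range(len(tweets)) for j in range(i + 1, len(tweets))]
    if not pairs:
        return 0.0
    return sum(jaccard(tweets[i], tweets[j]) for i, j in pairs) / len(pairs)

# Hypothetical accounts: a templated spam bot vs. a varied human timeline.
bot_tweets = [
    "buy cheap followers now",
    "buy cheap followers today",
    "buy cheap followers here",
]
human_tweets = [
    "morning run in the park",
    "lasagna for dinner tonight",
    "great match yesterday",
]
```

Modern detectors go well beyond this, using learned language models over the text, but the intuition of comparing what a profile writes against how people actually write is the same.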
The peculiarity of this latest study lies in the use of deepfake technology, which relies on massive computing power and immense amounts of data.
Disinformation and fake news
In addition to fake accounts, as we well know, there is fake content. The Cyber Intelligence research unit has also been involved in the study of disinformation for some time now. One of the latest studies on the subject was published in the prestigious journal PLOS ONE and focuses on the 2019 European elections. The scientists followed various types of accounts and studied the content they posted, taking care to distinguish between trustworthy sources of information and those of dubious reliability.
By reconstructing the network of interactions, they realized that sites promoting disinformation often remain rather self-referential. In other words, bad information circulates less than we might think. With the same method, Tesconi and his team are now focusing on the coronavirus, in order to reveal the most retweeted content on the subject during the most agitated moments of the emergency. “Finally, again regarding Covid-19, we have just discovered a group of suspicious users who used keywords related to the emergency to amplify promotional messages and prompt other users to visit product sales channels that have nothing to do with the virus emergency.”
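The self-referentiality finding can be made concrete with a small sketch (the edge list, account names and metric below are hypothetical, not the published study's method): represent retweets as directed edges and measure what fraction of a group's interactions stay inside the group.

```python
def self_referentiality(edges, group):
    """Fraction of interactions originating in `group` that stay inside it.

    `edges` is a list of (retweeter, source) pairs; a value close to 1.0
    means the group mostly amplifies itself rather than reaching outside.
    """
    outgoing = [(u, v) for u, v in edges if u in group]
    if not outgoing:
        return 0.0
    internal = sum(1 for u, v in outgoing if v in group)
    return internal / len(outgoing)

# Hypothetical retweet network: two disinformation accounts (d1, d2)
# and two mainstream accounts (m1, m2).
edges = [
    ("d1", "d2"),  # disinfo retweets disinfo
    ("d2", "d1"),  # disinfo retweets disinfo
    ("d1", "m1"),  # one interaction escapes the cluster
    ("m1", "m2"),
    ("m2", "m1"),
]
disinfo_score = self_referentiality(edges, {"d1", "d2"})
```

Here two of the three interactions started by the disinformation cluster stay within it, matching the article's observation that such sites mostly talk among themselves.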