How to understand the misdeeds of disinformation?

Disinformation is a scourge that relies on human bias

As we established in our article on the impact of fake news on the economy, information manipulation is built around three distinct notions that interact to give fake news its viral character. To distinguish and understand them, it is necessary to look at two main characteristics: the authenticity of the information and the motivations of the person sharing it (a minimal sketch after the list makes these two axes explicit):

  • Misinformation is false information spread by an agent who thinks it is authentic.
  • Disinformation is the deliberate sharing of false information to cause harm and/or profit.
  • Malinformation is the dissemination of authentic information to harm another.
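
To make the taxonomy concrete, here is a minimal sketch in Python (ours, not from the cited article) that maps the two characteristics onto the three notions:

```python
# A minimal sketch (not from the article) encoding the two-axis taxonomy:
# authenticity of the information x intent of the agent sharing it.

def classify(information_is_authentic: bool, intent_to_harm: bool) -> str:
    """Map the two characteristics to mis-/dis-/malinformation."""
    if information_is_authentic:
        # Authentic content shared to harm another party.
        return "malinformation" if intent_to_harm else "legitimate information"
    # False content: intent separates deliberate deception from honest error.
    return "disinformation" if intent_to_harm else "misinformation"

# Example: a false story re-shared by someone who believes it is true.
print(classify(information_is_authentic=False, intent_to_harm=False))  # misinformation
```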

In many disinformation campaigns, the three concepts work together to strengthen their impact and dissemination. As we will see in the second part of this article, false information can be re-shared by people who believe it to be true, in which case we speak of misinformation. At the same time, disinformers can publish authentic documents that lend credibility to the published hoaxes, adding the notion of malinformation to their campaign.

Moreover, disinformation is constructed to spread as quickly and widely as possible. According to an MIT study, fake news is 70% more likely to be retweeted on Twitter than authentic information. The same study finds that false information reaches its audience about six times faster than the truth and that, in chains of retweets, it reaches a depth of ten about twenty times faster.
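
As a back-of-the-envelope illustration of why such a per-share advantage matters, the sketch below compounds it over successive retweet generations; the branching factors are invented for the example and are not parameters of the MIT study:

```python
# Illustrative only: how a modest per-share advantage compounds across
# retweet generations. The branching factors are invented for the example
# (an illustrative reading of the 70% figure, not the study's model).

true_branching = 1.0   # assumed average retweets triggered per exposure
fake_branching = 1.7   # 70% more likely to be retweeted, per the study

true_reach = fake_reach = 1.0
for generation in range(10):  # ten retweet "hops"
    true_reach *= true_branching
    fake_reach *= fake_branching

print(f"after 10 hops, fake news reaches ~{fake_reach / true_reach:.0f}x more users")
# after 10 hops, fake news reaches ~202x more users
```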

The Spread of Fake News: Surprising Factors and Cognitive Biases

First of all, Soroush Vosoughi, one of the study's authors, observes that "people react to fake news more with surprise and disgust," whereas real information "elicits responses more generally characterized by sadness, anticipation, and trust." And while the study's authors "cannot say that novelty is the cause of retweets," they consider the hypothesis that the surprise effect of fake news promotes its spread to be very plausible.

In addition to the surprise that fake news tends to generate, it plays on our cognitive biases. The study A Survey of Fake News: Fundamental Theories, Detection Methods, and Opportunities, co-authored by Xinyi Zhou and Reza Zafarani of Syracuse University, points out, "Social and psychological factors play an important role in fake news gaining public trust and further facilitating its spread." According to them, individuals are "irrational and vulnerable" when it comes to separating truth from falsehood or when they are overloaded with misleading information. A 2010 study by Victoria Rubin found that the human ability to detect deception is only slightly better than chance: across one hundred experiments involving 1,000 participants, individual detection rates ranged from 55% to 58%, while average accuracy stood at only 54%.
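
To make "slightly better than chance" tangible, the following sketch tests the cited 54% average accuracy against the 50% coin-flip baseline, assuming 540 correct judgments out of 1,000 (our illustrative reading of the figures, not the study's own analysis):

```python
# Quick check (assumption: 540 correct judgments out of 1,000, i.e. the
# 54% average accuracy cited above) against the 50% coin-flip baseline.
from scipy.stats import binomtest

result = binomtest(k=540, n=1000, p=0.5)
print(f"observed accuracy: {540 / 1000:.0%}")
print(f"p-value vs. chance: {result.pvalue:.4f}")
# Statistically distinguishable from guessing, yet only four points above
# it: humans barely beat a coin flip at spotting deception.
```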

Furthermore, repeated exposure to false information produces a validity effect in the user. Exposed again and again to the same content, both by social network algorithms that favor posts likely to please the user and by publications from his or her social circle, the individual ends up lending credence to this redundant information. According to Kuran and Sunstein, the validity effect stems from the fact that "individuals tend to adopt ideas expressed by others when those ideas gain popularity in their social circles."
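
As a purely illustrative toy model of the validity effect (our construction, not Kuran and Sunstein's), the sketch below lets each repeated exposure close part of the gap between current belief and full credence:

```python
# Toy model (invented for illustration; not Kuran & Sunstein's formalism):
# each repeated exposure nudges perceived validity toward full belief by
# closing a fixed fraction of the remaining gap.

def perceived_validity(prior: float, exposures: int, nudge: float = 0.15) -> float:
    """Return belief after `exposures` repetitions of the same content."""
    belief = prior
    for _ in range(exposures):
        belief += nudge * (1.0 - belief)
    return belief

for n in (0, 1, 3, 10):
    print(f"{n:>2} exposures -> perceived validity {perceived_validity(0.2, n):.2f}")
# 0 -> 0.20, 1 -> 0.32, 3 -> 0.51, 10 -> 0.84: repetition alone inflates credence.
```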

Even when a piece of information is not relayed by his or her close circle, the individual tends to believe fake news if it confirms pre-existing beliefs, opinions, or assumptions. This is the confirmation bias we discussed in a previous article. It is in this sense that Warren Buffett declared that "human beings are the best at interpreting any new information so that their previous conclusions remain unchanged."

Finally, social networks, where disseminating information is easier and cheaper than in the traditional press, as well as more direct and far more viral, offer disinformers a favorable environment in which to wage their information wars, whether political, economic, or social.

"War has lies as its foundation and profit as its spring."

The Art of War, Sun Tzu, chapter 7.

Is disinformation a silent weapon?

Disinformation is a political and economic weapon, used at every level of our societies. We find examples of disinformation on a macroscopic scale, with interstate diplomatic warfare manifesting itself through election interference, as well as on a smaller scale, for example through the manipulation of penny stocks (small-cap, high-volatility stocks traded on over-the-counter markets).

"We have interfered [in the US presidential elections], we are doing it and we will continue to do it. Carefully, precisely, surgically, in a way that is unique to us." These are the words of Yevgeny Prigozhin, founder of the Wagner paramilitary group and a close associate of Vladimir Putin, spoken as the body of evidence pointing to Russian interference in the 2016 U.S. presidential campaign kept growing. If true, this admission demonstrates the power of disinformation to weigh on the course of history. If, on the contrary, the statement is false, it demonstrates just as well how formidable a weapon disinformation is in diplomacy and geopolitics.

The case of Cambridge Analytica illustrates the mechanics of certain disinformation campaigns as well as their political and economic consequences. It all began in 2014, when the social network Facebook established a research partnership with the University of Cambridge. One of the university's researchers, Aleksandr Kogan, founded Global Science Research Ltd, which in turn partnered with Cambridge Analytica to exploit the data of millions of Facebook users.

How? First, the researcher developed a personality questionnaire for which participants were paid 5 dollars by Cambridge Analytica, notably through micro-work sites, and had to link their Facebook account to take part. To be eligible, a user had to have a Facebook account and be an American voter. Moreover, Facebook's privacy policy at the time allowed the collection of participants' data as well as their friends' data. Thus, from an initial panel of 1,000 participants, 160,000 profiles were obtained.
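
A quick back-of-the-envelope computation, using the collection figures from this paragraph and the final figures reported just below, shows how the friend graph, rather than the questionnaire itself, did the heavy lifting:

```python
# Back-of-the-envelope check of the collection figures quoted in the
# article: each respondent's friend list multiplies the harvested profiles.

initial_panel, initial_profiles = 1_000, 160_000
final_panel, final_profiles = 270_000, 87_000_000

print(f"~{initial_profiles / initial_panel:.0f} profiles per early respondent")
print(f"~{final_profiles / final_panel:.0f} profiles per respondent overall")
# ~160 and ~322 profiles per respondent: the friend graph, not the
# questionnaire itself, supplied almost all of the harvested data.
```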

After several months of work, 270,000 Americans had completed Kogan's questionnaire, and 87 million profiles of respondents' friends had been collected. The questionnaire results were correlated with the data from the collected profiles to build psychometric profiles defining many personal characteristics, including political, religious, and sexual orientations, as well as personality traits. This data was then used during Donald Trump's campaign to plan his campaign stops, the themes addressed at his rallies, and the messages seeded in his social network posts. Above all, the profiling carried out by Cambridge Analytica reportedly allowed better targeting of political advertisements on these networks, including false information.
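
The profiling step can be illustrated with a minimal, self-contained sketch: a common technique in the academic literature is to regress psychological traits on page "likes". Everything below is synthetic and illustrative; it is not Cambridge Analytica's model or data:

```python
# Minimal sketch of like-based psychometric profiling on synthetic data.
# This shows the general technique from the literature, not Cambridge
# Analytica's actual model or data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_users, n_pages = 1_000, 50

likes = rng.integers(0, 2, size=(n_users, n_pages))       # user x page "likes"
weights = rng.normal(size=n_pages)                        # hidden trait signal
trait = (likes @ weights + rng.normal(size=n_users)) > 0  # e.g. a political leaning

model = LogisticRegression(max_iter=1000).fit(likes, trait)
print(f"training accuracy: {model.score(likes, trait):.0%}")
# With enough pages and users, likes alone predict the trait well above
# chance -- which is what made the harvested profiles so valuable.
```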

To quantify the volume of false information disseminated on social networks, we can look at the study entitled Influence of fake news in Twitter during the 2016 US presidential election, published in Nature Communications by Alexandre Bovet, a researcher at the Oxford University Mathematical Institute, and Hernán A. Makse, professor at the Levich Institute. Analyzing 171 million tweets published by 11 million accounts during the 2016 US presidential campaign, they establish that the share of "false tweets and extremely biased information is equal to 12%" and that, out of 30.7 million tweets redirecting to a news organization, 25% convey false information. In detail, 10% lead to sites "containing fake news or conspiracy theories" and 15% to sites that disseminate "extremely biased information".
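
Converting the study's percentages into absolute counts makes the scale concrete (our arithmetic on the figures quoted above):

```python
# Converting the study's percentages into absolute tweet counts.
news_tweets = 30_700_000  # tweets redirecting to a news organization

fake_or_conspiracy = 0.10 * news_tweets
extremely_biased = 0.15 * news_tweets

print(f"fake news / conspiracy links: ~{fake_or_conspiracy / 1e6:.1f}M tweets")
print(f"extremely biased links:       ~{extremely_biased / 1e6:.1f}M tweets")
print(f"total:                        ~{(fake_or_conspiracy + extremely_biased) / 1e6:.1f}M (25%)")
```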

Cambridge Analytica's involvement in the manipulation of public opinion and in disinformation was not limited to the US presidential election. Christopher Wylie, the company's former director of research, claims that "without Cambridge Analytica, there would have been no Brexit." Like the English company, AggregateIQ (AIQ) had Robert Mercer as a shareholder and benefited from his data on Facebook users. Still according to Christopher Wylie, AIQ contributed to the "Leave.eu" campaign, collecting nearly a million dollars to target British voters. He adds that "without AggregateIQ, the 'Leave' camp could not have won the referendum, which was decided by less than 2% of the vote." This version should be weighed against the conclusions of the investigation conducted by the Information Commissioner's Office, which found that Cambridge Analytica "did limited work for the Leave.eu campaign" beyond its involvement in analyzing data from members of the UKIP party. Moreover, the exhibits analyzed reveal "a degree of skepticism within the company about the accuracy and reliability of the operations underway."

This episode, which began in 2014, provided proof of disinformation's power to manipulate the democratic life of states and the conduct of elections, in which citizens are supposed to form factual and informed opinions. To serve political interests, among others, those who wage these (dis)information wars do not hesitate to play on public fears, as the Covid-19 pandemic illustrated. Striking every country, the pandemic generated a flood of disinformation: from the origin of the virus to the objectives of anti-Covid policies, no aspect escaped it, and the disputes around health policies have spilled over into other areas.

How to fight against disinformation?

Fighting disinformation is in Buster.Ai's DNA. Its founder, Julien Mardas, considers that "information can be a weapon of mass destruction" and has set himself the objective of ensuring "that it can, on the contrary, be a vector of education and economic development". To achieve this, "information professionals must be equipped with adequate tools to ensure that they limit the spread of false information," according to Julien Mardas.

This vision is embodied in the tool developed by Buster.Ai, which offers SaaS and API solutions to both financial and media players and thereby fights the economic and social damage of disinformation.

How does Buster.Ai prevent the spread of disinformation?

A winner of several international scientific competitions, including the prestigious Fever.Ai, Buster.Ai has developed algorithms based on deep learning that teach the machine to read. The machine is therefore able to read the documents submitted to it, represent them symbolically, and compare them with millions of other documents in our databases, drawn from news agencies, the media, the academic and scientific worlds, and statistical or tabular data, in order to extract the authentic passages that support or refute the initial claim. Buster.Ai is thus able to deliver a verdict faster and more efficiently, in the service of humans.
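
As a generic illustration of the "represent and compare" step, the sketch below ranks candidate evidence passages against a claim using TF-IDF cosine similarity. This shows the general retrieval technique only; Buster.Ai's actual system, as described above, relies on deep learning models, not this toy vectorizer:

```python
# Generic sketch of the "represent and compare" step with TF-IDF cosine
# similarity. Illustrative only; not Buster.Ai's actual pipeline.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

claim = "The unemployment rate fell to 7% in 2021."
evidence = [
    "Official statistics show unemployment fell to 7% in 2021.",
    "The central bank left its main interest rate unchanged.",
    "Unemployment rose sharply in 2021 according to the agency.",
]

vectorizer = TfidfVectorizer().fit([claim] + evidence)
claim_vec = vectorizer.transform([claim])
evidence_vecs = vectorizer.transform(evidence)

# Rank candidate passages by similarity to the claim; a verification
# model would then label each retrieved passage as supporting or refuting.
for score, passage in sorted(
    zip(cosine_similarity(claim_vec, evidence_vecs)[0], evidence), reverse=True
):
    print(f"{score:.2f}  {passage}")
```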

Buster.Ai positions itself as a partner of organizations, protecting them from disinformation. The human being remains at the heart of the ecosystem and remains master of the decision: artificial intelligence does not replace humans, it supports them and multiplies their possibilities. The solutions proposed by Buster.Ai allow analysis and data mining to go further and deeper while offering the capacity to process larger volumes of data. The API connection contributes to this quest for performance, enabling a proactive stance both in the face of the large volumes of information to be processed and in monitoring their distribution.
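
For readers wondering what an API integration of this kind typically looks like, here is a purely hypothetical sketch; the endpoint, payload, and response fields are invented for illustration and do not document Buster.Ai's real API:

```python
# Hypothetical illustration only: the endpoint, payload, and response
# fields below are invented and do NOT document Buster.Ai's real API.
import requests

response = requests.post(
    "https://api.example.com/v1/verify",  # placeholder URL
    json={"claim": "The unemployment rate fell to 7% in 2021."},
    headers={"Authorization": "Bearer <YOUR_API_KEY>"},
    timeout=30,
)
response.raise_for_status()
print(response.json())  # e.g. a verdict plus the supporting/refuting passages
```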

Buster.Ai offers demonstrations of its tool.

Contact us to learn more about our solution, and discuss use cases and the needs of your organization.

Sources of the article:

https://arxiv.org/abs/1812.00315

https://www.researchgate.net/publication/227762315_On_Deception_and_Deception_Detection_Content_Analysis_of_Computer-Mediated_Stated_Beliefs

https://www.researchgate.net/publication/228607880_Availability_Cascades_and_Risk_Regulation

https://www.nature.com/articles/s41467-018-07761-2

https://www.liberation.fr/planete/2018/03/26/christopher-wylie-sans-cambridge-analytica-il-n-y-aurait-pas-eu-de-brexit_1639006/

https://www.lemonde.fr/pixels/article/2020/10/08/l-implication-de-cambridge-analytica-dans-la-campagne-du-brexit-etait-limitee-tout-comme-l-efficacite-de-ses-outils_6055263_4408996.html