Fake news is penetrating ever deeper into the layers of social media, in a way that is as inevitable as it is worrying. A new and pioneering study in the journal Science offers some insight into why this kind of misinformation so often outruns the truth.
The term fake news is not new, curious as that may seem. It dates back to the 19th century, when it was used in English-speaking countries to describe widely circulated rumors. Even before fake news damaged journalism and divided the public, several writers of the period spread false information in the press in order to harm their enemies.
Who would have thought that, much later, this same absurd strategy would take over social media and shape the behavior of so many people?
Who wins in the game between truth and lies?
Focusing specifically on Twitter, the Science study analyzes more than 126,000 news stories tweeted by more than 3 million users over roughly a decade and reveals a grim but unsurprising fact: the truth cannot compete head-to-head with rumors. This “fake news machine” reaches many more people, attracts more attention and penetrates deeper into the network because it spreads faster.
Although we are used to seeing fake news tied to political figures, including as a deliberate if misguided strategy, this way of spreading lies is also tied to ordinary citizens and their behavior. According to Soroush Vosoughi, a data scientist at MIT who has been studying fake news since 2013, rumors do not come only from so-called bots, the automated accounts that spread fake news on social media.
Yet the situation may also reflect human nature at its most basic. It is as if a Pandora's box has been opened: fake news now springs from every kind of source, which inevitably raises many questions.
Fake news has become a social problem, and the alarm has been sounded by anthropologists and scientists more broadly. The words of a group of 16 scholars point to the path we must follow: “We need to redesign our information ecosystem in the 21st century to reduce the spread of fake news and address the underlying pathologies.”
We've identified the problem, now we need to solve it
With this in mind, we took the opportunity to ask: “How do we redesign this so-called news ecosystem and spread the truth?” Although the Science researchers focused initially on Twitter, the same questions apply to Facebook, YouTube and other media: in short, any platform that amplifies information and offers fertile ground for the spread of lies.
Did you know that a false rumor reaches its first 1,500 people roughly six times faster than the truth does? False stories go viral more quickly and feed the sense of immediacy on social media. Sticking with Twitter, users of the platform seem to actively choose to share fake news.
The scientists also accounted for the accounts that originated the rumors (whether or not they had many followers, whether the profiles were verified, and so on), and false stories were still 70% more likely to be retweeted than true ones. And, as mentioned before, bots may well amplify this behavior, but what the data showed is that from 2006 to 2016 the spread started mostly with people sharing with other people.
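To make the “70% more likely” figure concrete, here is a minimal sketch in Python. It is not the study's actual code: the cascade records and field layout are invented for illustration, and the real analysis controlled for account features such as follower count and verification with a statistical model rather than a raw comparison.

```python
# Toy illustration, not the study's actual code: the cascade records and
# field layout below are invented to show how a relative retweet rate
# between false and true stories could be computed.

# Each record: (is_false, follower_count, verified, was_retweeted)
# follower_count and verified are shown only as the kind of covariates
# the real analysis controls for; this toy comparison ignores them.
cascades = [
    (1, 1_200, 0, 1), (1, 300, 0, 1), (1, 50_000, 1, 1), (1, 800, 0, 0),
    (0, 90_000, 1, 1), (0, 2_500, 0, 0), (0, 400, 0, 0), (0, 15_000, 1, 1),
]

def retweet_rate(records, want_false):
    """Share of cascades that were retweeted, split by veracity."""
    subset = [r for r in records if r[0] == want_false]
    return sum(r[3] for r in subset) / len(subset)

false_rate = retweet_rate(cascades, 1)
true_rate = retweet_rate(cascades, 0)

# "70% more likely" in the study is a relative comparison of this kind:
print(f"false: {false_rate:.2f}, true: {true_rate:.2f}, "
      f"relative lift: {(false_rate / true_rate - 1) * 100:.0f}%")
```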
An attack, conspiracy theories and loss of control
The researchers looked at specific rumors that circulated among Americans to better understand what triggers online falsehoods, and found a telling example.
On April 15, 2013, two bombs exploded near the Boston Marathon route, killing three people and injuring others. A shocked and distressed public gave way to conspiracy theories, which began to spread like wildfire on Twitter and other media.
The conspiracy theories peaked on April 19, when the governor of Massachusetts had to address millions of residents, asking them to stay at home and leave the hunt for the terrorist to the police and their investigation.
Unfortunately, the request was not enough: fear, combined with being shut indoors, found a “lifeline” in sharing ideas about what was happening outside, mixing genuine news with claims that were far from realistic.
An algorithm capable of mapping how fake news spreads
A student who had focused his studies specifically on social media joined forces with Vosoughi, prompted by what they had just lived through. Together they developed an algorithm that could pick out the claims most likely to be true, based on three attributes: the characteristics of the author (had the information been fact-checked?), the type of language used, and how the information propagated.
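As a rough, hypothetical sketch of that idea (this is not the MIT team's actual algorithm; the feature names and weights below are invented for illustration), the three kinds of signals could be combined into a single heuristic score like this:

```python
# Minimal sketch, not the actual MIT algorithm: the feature names and
# weights below are illustrative assumptions, not values from the study.
from dataclasses import dataclass

@dataclass
class Rumor:
    author_verified: bool    # author-related signal
    fact_checked: bool       # has a third party checked the claim?
    hedged_language: float   # 0..1, e.g. share of cautious, sourced wording
    cascade_depth: int       # propagation signal: how many retweet "hops"

def truth_score(r: Rumor) -> float:
    """Combine the three kinds of signals into a 0..1 heuristic score;
    higher means the claim looks more likely to be true."""
    score = 0.0
    score += 0.3 if r.author_verified else 0.0
    score += 0.3 if r.fact_checked else 0.0
    score += 0.2 * r.hedged_language
    # Deep, fast cascades were associated with falsehood in the study,
    # so deeper propagation lowers the score in this toy version.
    score += 0.2 * max(0.0, 1.0 - r.cascade_depth / 20)
    return min(score, 1.0)

print(truth_score(Rumor(author_verified=True, fact_checked=False,
                        hedged_language=0.6, cascade_depth=12)))
```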
Using third-party fact-checking sites, they compiled lists of online rumors that had spread over the ten years covered by the study. Cross-referencing all this data with a tool called Gnip, they found that many of the rumors contained fragments of genuine text or real images.
This makes them much easier to believe and quick to share among followers. In addition, fake news feels “fresher”, newer and more urgent than real news. The research team noticed, for example, that fake tweets tended to carry information users had not seen in their feeds in the days before they retweeted them.
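To illustrate what that kind of novelty could mean in practice, here is a toy Python check. This is an assumption for illustration only, not the study's method, which used a more sophisticated measure based on the tweets each user had recently been exposed to; it simply scores how much of a tweet's vocabulary is absent from a user's recent feed.

```python
# Toy novelty check, an assumption for illustration only: score how much of
# a tweet's vocabulary is absent from what the user has recently seen.
def novelty(tweet: str, recent_feed: list[str]) -> float:
    """Fraction of the tweet's words the user has not seen recently (0..1)."""
    seen = {w.lower() for text in recent_feed for w in text.split()}
    words = [w.lower() for w in tweet.split()]
    if not words:
        return 0.0
    return sum(w not in seen for w in words) / len(words)

feed = ["marathon route reopened this morning", "police update on the case"]
print(novelty("secret group behind the attack officials stay silent", feed))
```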
Finally, a factor that has probably crossed everyone's mind when thinking about fake news: it stirs up strong emotions, both in those who believe it and in those who reject it, knowing it to be untrue. According to the study, fake news tweets provoke far more surprise and disgust, and that is what fuels discussion across the network.