Living in a time when all kinds of information are at our disposal does not mean that we are all interested in knowing it, nor that all of this information is based on evidence.
Of course, in the midst of a pandemic we have many examples of this type of misinformation spreading across social networks at practically the speed of light.
There is incorrect or misleading information circulating about the development of the epidemic, as well as about vaccines and effective ways to reduce the risk of Covid-19 infection.
So without a doubt, all possible means to combat this misinformation are useful at this time.
Mathematics Against Misinformation
A group of researchers in mathematics and computer science has developed a statistical model so that a computer can detect false or misleading information in social media posts, in events such as a pandemic or natural disasters.
So they used statistical methods instead of machine learning.
The researchers, led by Professor Zois Boukouvalas of American University, decided against machine learning because, while those algorithms can be useful for this kind of task, their results often come out of a “black box”: the steps the system took to reach its decision are completely unknown.
With statistical methods the process is more transparent, and researchers and programmers can check whether the decisions the system makes resemble those a person would make when judging whether something is misinformation.
As Boukouvalas says: “We want to know what the machine is thinking when it makes decisions, and how and why that agrees with the humans who trained it.”
In this case, that means checking whether the computer, when deciding if a tweet is misinformation, aligns with what people perceive as misinformation.
As Boukouvalas concludes: “We don’t want to ban someone’s social media account because the model makes a biased decision.”
The decision lies with humans
To use a model like the one developed by these researchers, the system must first be “trained” so that it can then make decisions on its own.
The model will only be as good as the data it is fed to “learn” from, in this case to recognize misinformation. Of course, this procedure can introduce biases held by the people who supply the data.
To avoid these biases, the researchers created a series of rules for labeling tweets as misinformation, and even consulted a sociolinguist to prevent the language patterns in which the tweets were written from influencing the decision to label them as misinformation.
Once the developers were clear about these rules and examples, they submitted them to the system so that it could identify misinformation in other posts by itself.
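To make the idea of a “transparent” statistical classifier concrete, here is a minimal sketch of an interpretable text classifier. This is not the researchers’ actual model; the training examples, labels, and word-level scoring below are hypothetical, chosen only to show how a statistical method can expose which words pushed a decision toward “misinformation” — the kind of inspectability a black-box model lacks.

```python
import math
from collections import Counter

# Hypothetical toy data (the researchers' real labeled tweets are not shown
# in the article). Labels: 1 = misinformation, 0 = reliable.
train = [
    ("miracle cure stops covid overnight", 1),
    ("vaccine microchip secret plot", 1),
    ("drinking bleach cures the virus", 1),
    ("clinical trial shows vaccine is effective", 0),
    ("masks reduce transmission says study", 0),
    ("health agency updates vaccination guidance", 0),
]

def train_nb(data, alpha=1.0):
    """Fit per-class word counts with Laplace smoothing (naive Bayes)."""
    counts = {0: Counter(), 1: Counter()}
    docs = {0: 0, 1: 0}
    for text, label in data:
        docs[label] += 1
        counts[label].update(text.split())
    vocab = set(counts[0]) | set(counts[1])
    totals = {c: sum(counts[c].values()) for c in (0, 1)}

    def word_loglik(word, c):
        # Smoothed log-probability of a word given class c.
        return math.log((counts[c][word] + alpha) /
                        (totals[c] + alpha * len(vocab)))
    return docs, word_loglik

def explain(text, docs, word_loglik):
    """Score a text and report each word's contribution: this per-word
    breakdown is what makes the model's decision inspectable."""
    score = math.log(docs[1] / docs[0])  # class prior log-odds
    contribs = []
    for w in text.split():
        delta = word_loglik(w, 1) - word_loglik(w, 0)  # log-odds toward class 1
        contribs.append((w, delta))
        score += delta
    return score, contribs

docs, word_loglik = train_nb(train)
score, contribs = explain("miracle vaccine cure", docs, word_loglik)
print("misinformation" if score > 0 else "reliable")
for w, d in contribs:
    print(f"  {w}: {d:+.2f}")
```

Unlike a deep network, every decision here decomposes into a sum of per-word log-odds, so a reviewer can see exactly why a post was flagged and compare that with human judgment.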
Although the sample was small, the model achieved highly accurate results, using a statistical approach that is more transparent about how the machine reaches its decisions.
However, as Boukouvalas comments, these types of models can be a support for avoiding misinformation, but the main task remains human: “Through our work, we design machine learning-based tools to alert and educate the public to eliminate disinformation, but we strongly believe that humans should play an active role in not spreading disinformation in the first place.”