Mr. Schüz, the fight against false information on the internet is a stubborn one. There have been many attempts in the past to curb the spread of fake news on social networks. What distinguishes yours from the others?
Benjamin Schüz: What is exciting about our approach is that we largely stay out of the content of the posts themselves. We are interested in the sharing and forwarding of information, above all as a social act. We extended the social processes that are so important to how Twitter, Facebook or Instagram function to include targeted feedback on not forwarding: users not only see what friends and acquaintances “like” or share. In our study, they also received feedback on how often content was seen but not shared or liked. As a result, our study participants considered it less acceptable to pass on misleading information.
[Photo: Prof. Dr. Benjamin Schüz, whose research focuses on prevention and health promotion]
What exactly does this function look like in practice?
If you are active on Twitter, for example, you get a notification of how often a post has been “liked”. All we do is turn this principle around: posts carry a note saying, for example, that the message has been viewed 1,000 times but shared only 20 times. For our study, we injected false information into the regular Twitter feeds of our participants and then supplied this additional information. The effect was particularly strong in combination with the warning label that Twitter has been attaching for some time to suspected false information. That label alone is often not enough to prevent further spread. But the combination seems to work well: the user learns both that the message is probably not true and that their personal circle does not endorse it either.
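To make the inversion concrete, here is a minimal sketch in Python. It is not the study's actual code; the function name and the like count are hypothetical. It composes the kind of note described above from a post's raw engagement counts:

```python
# Minimal sketch (hypothetical, not the study's code) of the inverted
# feedback: instead of highlighting likes, the note reports how often
# a post was seen but NOT passed on.

def norm_feedback(views: int, shares: int, likes: int) -> str:
    """Build the descriptive-norm note shown next to a post."""
    if views == 0:
        return "No views recorded yet."
    share_rate = shares / views
    return (
        f"Viewed {views} times, but shared only {shares} times "
        f"and liked {likes} times ({share_rate:.1%} share rate)."
    )

# The example from the interview: viewed 1,000 times, shared 20 times
# (the like count of 50 is invented for illustration).
print(norm_feedback(views=1000, shares=20, likes=50))
# Viewed 1000 times, but shared only 20 times and liked 50 times (2.0% share rate).
```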
So it’s all about social acceptance?
Exactly. It is social information that users receive, along the lines of: in my social environment, you don’t do that. A major problem with the spread of misinformation is that users are left alone with the critical evaluation of sources, which many find difficult. With our study of social processes, we therefore wanted to help reduce the spread of misinformation. It was exciting for us to see that this short piece of social information changes people’s social norms.
The spread of misinformation is closely related to the algorithms of the networks. Users move in their own bubbles, in which their opinions are confirmed. Can’t your function also trigger the opposite effect – that incorrect information is classified as credible because it is liked and shared all the more?
That is an important point. It is similar to the flag Twitter introduced for misinformation about Covid-19: the mechanism has to be curated to some extent, by Twitter itself or by other users. Someone has to classify a message as false beforehand. Simply building our function into the algorithm without that step could well backfire.
Who were the people you tested your function with?
They were all English speakers, because most misinformation is in English. The participants, while not a representative sample, came from a relatively broad range of educational backgrounds and political orientations. In total, we conducted the study with 1,417 people.
Did you find differences between age groups in your study? A study by the New Responsibility Foundation last year showed that older people are often less news-literate than younger people and are therefore more likely to be taken in by false reports.
We found no effect of age group or of media literacy. But political leaning did play a role in how often misinformation was passed on.
In what way?
We surveyed only roughly whether subjects positioned themselves as more liberal or more conservative. Those who classified themselves as more conservative shared more misinformation.
How realistic is it, in your view, that such a mechanism will actually be introduced in social networks such as Twitter, Facebook or Instagram?
Technically, it’s not particularly difficult, as our study shows, especially on Twitter. The limitations lie elsewhere: for example, whether it is in the operators’ interest to discourage a central function such as sharing content. I am not well placed to judge that; it would have to be decided by legislators or by the social networks themselves.
Why did you conduct the study with Twitter? Facebook and Instagram are significantly more popular in Germany.
There was a pragmatic reason: Twitter is easier to integrate into studies. It is an open network that offers an interface for academic projects like ours. Facebook, on the other hand, is a closed network in which content can often only be viewed in closed groups; there are even reports that Facebook wants to shut scientists out. Unfortunately, this different attitude to independent research also means that, relative to user numbers, Twitter is clearly overrepresented in studies. But we are currently working on running the study on Facebook as well.
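For illustration, the academic interface Schüz refers to could be queried with a standard library such as tweepy. The sketch below uses real tweepy calls, but the bearer token and search query are placeholders, and Twitter’s access terms have since changed:

```python
# Sketch of pulling public engagement counts through Twitter's API
# with tweepy. Token and query are placeholders, not from the study.
import tweepy

client = tweepy.Client(bearer_token="YOUR_ACADEMIC_BEARER_TOKEN")

# Fetch recent English-language tweets matching a query, including
# their public engagement metrics.
response = client.search_recent_tweets(
    query="misinformation -is:retweet lang:en",
    tweet_fields=["public_metrics"],
    max_results=10,
)

for tweet in response.data or []:
    metrics = tweet.public_metrics  # retweet, reply, like and quote counts
    print(tweet.id, metrics["retweet_count"], metrics["like_count"])
```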
To what extent are your previous findings also applicable to Facebook or Instagram?
Any network built around this kind of social mechanism, where you get feedback on how often something has been shared or liked, lends itself to our approach.