Abuse in sports online communities. How do we fight it?

Let's take a deep look at the latest studies about sports fans’ toxic behaviour online.

What the reports say

Since 2020, international sports associations, players' unions, and independent analysts have been tracking the growing number of hate attacks on athletes during major competitions. This attention has led to extensive research on abusive behaviour in online sports communities: FIFA, FIFPRO, The Conversation, World Athletics, Australian sporting organisations, eSafety, the NBPA, and others have published their own studies.

These studies cover different sports in different countries. The details vary, but the essence is the same: online abuse is driven by homophobia, racism, sexism, ableism, islamophobia, transphobia, and outright threats, listed here from the most to the least frequent across the data as a whole. In individual leagues, however, different forms of discrimination come to the fore.

“In recent times we have seen female athletes, those from Aboriginal and Torres Strait Islander and diverse cultural and multilingual backgrounds increasingly becoming the targets of unimaginable online abuse, hatred, misogyny and racism.” — eSafety Commissioner Julie Inman Grant, Australia

The EURO 2020 Final, which England lost to Italy, exposed an unacceptable level of racism among European and English fans. The Africa Cup of Nations 2021 Final highlighted fans' homophobia. Misogyny and sexism traditionally accompany women's competitions. The Tokyo Summer Olympic Games, the first to come close to gender balance (49% of participants were women), showed the highest level of sexism: according to the studies, 87% of all abusive online messages were addressed to women.

Is it because of social media?

Toxicity breeds toxicity. Bullying starts with the athletes, then spreads to other fans, coaching staff, and the athletes' relatives and friends. Toxic comments and messages never expire: posted on public Twitter, Instagram, and other social media accounts, they keep collecting impressions and shares.

You will notice abusive comments if you have visited the pages of sports clubs or individual athletes even once, especially during a match or after a loss. The same goes for Twitter threads that go viral and reach the top of the feed.

If you are not a social media user, you may still know from traditional media that athletes and sports organisations have boycotted social media more than once. In April 2021, the Premier League, the English Football League, and the Women's Super League stayed off social media for four days in protest at the platforms' persistent disregard for the abuse problem. Yet only a few months later, after the EURO 2020 Final, the England team faced a wave of abuse that prompted FIFA to take action.

Real steps

Many people think that the reason for abusive behaviour on social media is anonymity, but it is not that simple. Social media is not a space where you can be completely anonymous: after the waves of bullying that followed the EURO 2020 Final, many online abusers were identified and reported to the police.

Gareth Southgate consoles Bukayo Saka, who missed England’s final penalty in the shootout ©AFP

Twitter and Instagram remain the spaces where bullying breeds bullying, even though the identities of the authors of abusive threads are often known or easily revealed. The trigger can be a game score, a player's mistake during the match, or even a photo on an athlete's personal account. When anything can be a reason, there are no reasons at all.

Following its research, FIFA plans to launch a moderation service that searches for toxic posts across social media. According to Threat Matrix, the company behind the analysis, a two-level check will make it possible to handle complex cases, such as that of the Italian player nicknamed Gorilla: the gorilla emoji is acceptable when it refers to his nickname, even though in other contexts the same emoji is a marker of racist abuse.
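To make the idea of a two-level check concrete, here is a minimal sketch of how such a pipeline might be structured: a cheap first pass that flags known abuse markers, followed by a contextual second pass before anything reaches a human moderator. The markers, context rules, and function names are purely illustrative assumptions; this is not Threat Matrix's or FIFA's actual system.

```python
# Illustrative two-level moderation check (hypothetical, not FIFA's or
# Threat Matrix's real implementation).

ABUSE_MARKERS = {"🦍", "🐒"}        # emoji that can signal racist abuse
NICKNAME_CONTEXTS = {"gorilla"}     # contexts where the emoji is a nickname, not abuse


def first_pass_flag(post: str) -> bool:
    """Level 1: cheap automated scan for known abuse markers."""
    return any(marker in post for marker in ABUSE_MARKERS)


def second_pass_review(post: str) -> str:
    """Level 2: contextual check before escalating to a human moderator."""
    lowered = post.lower()
    if any(context in lowered for context in NICKNAME_CONTEXTS):
        return "allow"      # e.g. fans using the player's own nickname
    return "escalate"       # ambiguous or hostile context -> human review


def moderate(post: str) -> str:
    if not first_pass_flag(post):
        return "allow"
    return second_pass_review(post)


if __name__ == "__main__":
    print(moderate("Great goal tonight, Gorilla! 🦍"))  # allow
    print(moderate("Go back home 🦍"))                  # escalate
```

The point of the second level is exactly the nuance described above: the same symbol can be affectionate in one context and abusive in another, which is why purely keyword-based filters fall short.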

Pitfalls to avoid

Of course, this is not the first attempt to fight toxicity in online spaces. In 2021, Xinning Gui, PhD, of Pennsylvania State University, gave a presentation at the Conference on Human Factors in Computing Systems on the tools used to fight abuse in esports, and on how players turn those tools to their own advantage, for example by flagging their in-game opponents rather than actual abusers.

Sifan Hassan of Team Netherlands celebrates after winning the gold medal in the Women’s 10,000m Final on day fifteen of the Tokyo 2020 Olympic Games (Photo by Cameron Spencer / Getty Images via CFP)

The dishonesty of some users is not the only pitfall in fighting online abuse. One tough case was the recent Beijing Olympics, where, alongside hateful comments directed at athletes, outright censorship was applied to Internet users. Many European athletes said they would not speak publicly about any negative aspects of the Games until they had left China. The Chinese system for screening online messages targets posts and comments that express an anti-government point of view; it is not designed to catch abuse.

So how do you find the balance? How do you create a trusted online space where communication is comfortable and abuse is kept out, while freedom of speech is still respected?

***

If you are looking for a way to increase engagement and provide additional content on your website or app, don't forget about the safety of your users. Fill out the form below and we will be happy to help you find the best solution.

Get in Touch

If you want to partner with us or get more details, please schedule a demo meeting by filling in the form. We'll get in touch with you shortly.