
27 March 2024
How AI tools protect social media users and sports fans… if they really do
FIFPRO and Threat Matrix have published their latest report on how they protected female athletes from online abuse during the Women’s World Cup. The report offers many significant conclusions and reveals one big gap.
How sports organisations protect athletes, with or without the social media platforms’ participation
Two years ago, FIFA and FIFPRO presented an AI tool to protect players on social media after the Euro 2020 final between England and Italy, which was followed by a wave of online abuse. The tool works as an integrated service that athletes or clubs can switch on and link to their accounts on the leading social media platforms. Any comment, or any post tagging the account, that can be perceived as abusive or hateful is hidden and reported, and information about the author’s account is gathered to de-anonymise them.
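To make the flow concrete, here is a minimal sketch in Python of the moderation loop described above: scan posts that tag a protected account, hide and report anything a classifier flags, and record the author for follow-up. The keyword list and the print calls are placeholders of our own; the real service relies on trained classifiers and platform APIs that are not public.

```python
from dataclasses import dataclass, field

@dataclass
class Post:
    author: str
    text: str
    tags: list[str] = field(default_factory=list)

# Stand-in for a real abuse classifier; the actual service uses trained ML models.
ABUSIVE_TERMS = {"disgrace", "go home"}

def is_abusive(text: str) -> bool:
    lowered = text.lower()
    return any(term in lowered for term in ABUSIVE_TERMS)

def moderate_mentions(posts: list[Post], protected_account: str) -> list[str]:
    """Hide and report abusive posts tagging the protected account;
    return the authors gathered for de-anonymisation."""
    flagged_authors = []
    for post in posts:
        if protected_account in post.tags and is_abusive(post.text):
            # In a real integration, these would be platform API calls.
            print(f"hiding and reporting post by @{post.author}")
            flagged_authors.append(post.author)
    return flagged_authors

posts = [
    Post("fan1", "great match!", ["@player9"]),
    Post("troll7", "you are a disgrace", ["@player9"]),
]
print(moderate_mentions(posts, "@player9"))  # -> ['troll7']
```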
FIFA has used this tool during major international events. Two years on, it is possible to compare the results and the levels of discrimination at men’s and women’s football tournaments, and to draw broader conclusions about what is happening in sports communities on social media and how the platforms react to reported issues.
According to the latest report on the Women’s World Cup 2023 and its comparison with the World Cup 2022, the level of discrimination, abuse and hate speech is higher at women’s tournaments than at men’s ones.
“0.14% of posts and comments captured for analysis during the FIFA Women’s World Cup 2023 by the monitoring system were confirmed to be abusive (7k out of 5.1m). 0.10% of captured posts and comments during FIFA World Cup Qatar 2022 were confirmed to be abusive (19k out of 20m).”
The types of hate speech have also changed: while the most common abusive comments during the World Cup in Qatar were racist, the dominant types during the women’s tournament were sexist and homophobic. The report even shows separate graphs for different countries, so we can see that haters worldwide find their own reasons to hate.

All of this information is useful and significant, and the system can help athletes and other sports representatives avoid direct hate speech. However, we should remember that there is far more hate on social media beyond official accounts, and the authors of such comments rarely tag the people they address. So, by protecting only athletes’ and teams’ accounts, we leave unprotected the users, fans, families and friends of these athletes, as well as their unofficial accounts.
The latest report also contains a crucial point: X (formerly Twitter) and Instagram could ignore half of the reports until moderators escalated them directly.
“X has by far the highest volume of identified, targeted discriminatory, abusive or threatening content, with initial takedown rates of 39%. Where there was additional escalation to X after failure to act on immediate automated reporting, a takedown rate of over 95% was recorded — highlighting the value of direct platform engagement. The ability to escalate was not made available by all platforms.”
Abusive comments on social media are a pervasive issue across sports organisations. To tackle the problem, the French Tennis Federation (FFT) offered free access to the Bodyguard AI social media moderation software during the Roland-Garros Grand Slam tournament. Similar to the FIFA tool, tournament participants could use it to moderate comments in real time, with each comment analysed in under 200 milliseconds.
What can you do to protect your audience?
While AI-powered moderation tools are impressive, we must acknowledge that they treat only the symptoms of online harassment, not the underlying problem. Organisations have taken steps towards a solution by launching investigations and penalising those responsible for cyberbullying. Still, this approach is impractical for regular users who simply want to follow sports media accounts or engage with other fans.
So, the problems that remain unsolved:
- Protection does not work for any account other than a team’s or player’s official one.
- Regular users are not protected.
- Social media platforms themselves do not take part in the protection process.
Because we provide public community chats, we see how important fast and furious moderation tools are for all participants. So, we are developing tools that catch abusive and hateful comments and messages regardless of who published them or whom they target. We have three ML models, each suited to different cases. These tools recognise various languages and check every message against the dictionary of that particular language, as the sketch below illustrates.
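Here is a toy sketch of that routing step, assuming a message is first passed through language identification and then checked against that language’s dictionary. The word lists and the detector below are illustrative placeholders, not our production models:

```python
# Placeholder per-language word lists; production systems use trained
# classifiers per language, not bare dictionaries.
DICTIONARIES: dict[str, set[str]] = {
    "en": {"trash", "loser"},
    "es": {"basura", "perdedor"},
}

def detect_language(text: str) -> str:
    # Stand-in for a real language-identification model.
    spanish_hits = sum(w in DICTIONARIES["es"] for w in text.lower().split())
    return "es" if spanish_hits else "en"

def is_flagged(text: str) -> bool:
    lang = detect_language(text)
    words = set(text.lower().split())
    return bool(words & DICTIONARIES.get(lang, set()))

print(is_flagged("eres basura"))   # True, matched via the Spanish dictionary
print(is_flagged("what a loser"))  # True, matched via the English dictionary
print(is_flagged("great goal!"))   # False
```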
Watchers AI tools can also be tuned to the specific topic of a chat, because chats about horror movies and sports betting should be moderated differently. Our models look for the best fit for each type of chat. Partners can also set the model’s strictness level, depending on their platform’s policy or the average age of their users.
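One simple way to picture a strictness setting is as a score threshold a message must cross before it is hidden. The sketch below uses hypothetical names and values of our own, not our real configuration:

```python
from dataclasses import dataclass

@dataclass
class ModerationConfig:
    topic: str          # e.g. "sports_betting" or "horror_movies"
    strictness: float   # 0.0 (most lenient) .. 1.0 (most strict)

def should_hide(toxicity_score: float, config: ModerationConfig) -> bool:
    # Higher strictness lowers the toxicity score required to hide a message.
    threshold = 1.0 - config.strictness
    return toxicity_score >= threshold

kids_chat = ModerationConfig(topic="cartoons", strictness=0.9)
betting_chat = ModerationConfig(topic="sports_betting", strictness=0.4)

print(should_hide(0.3, kids_chat))     # True: threshold is 0.1
print(should_hide(0.3, betting_chat))  # False: threshold is 0.6
```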
So, abuse can be tackled in separate community spaces, but what about the giant social media platforms? Will the number of such cases keep growing from tournament to tournament, from day to day? We will see.
***
Connect with us to discuss real-time AI-powered moderation to protect your users and viewers and build a trusted and healthy space for them in live public and private chats.