Misogyny - there’s an algorithm for that
The Twittersphere - between Covid-19 conspiracy theories, the dire state of the economy, and looming elections in the US and here at home, it's gone feral in a whole new way.
But there's one user group that always gets the worst of it when it comes to vile abuse on social media platforms - women.
Microbiologist Dr Siouxsie Wiles, who has served an incredibly valuable public service role with her science communication through the pandemic, described the abuse she has received online in a recent documentary, Siouxsie and the Virus.
She has been belittled, her qualifications questioned, her hair and weight fixated upon. That's just the tip of the iceberg. Some of the trolling and abuse is even more personal, and at times threatening.
What's the answer to toxic social media culture? A team of scientists at Queensland University of Technology think technology could play a bigger role in tackling the firehose of tweets and posts flowing onto the web 24/7.
Filtering the filth
Associate Professor Richi Nayak, Professor Nicolas Suzor and research fellow Dr Md Abul Bashar, have developed an algorithm they claim can filter millions of Twitter messages to detect misogynistic content.
"At the moment, the onus is on the user to report abuse they receive. We hope our machine-learning solution can be adopted by social media platforms to automatically identify and report this content to protect women and other user groups online," says Professor Nayak.
Accurately flagging abusive tweets, particularly those including threats of harm or sexual violence, could reduce harm online. But could it be trusted not to identify and flag legitimate content that happens to mention or discuss the issue of abuse itself?
"The key challenge in misogynistic tweet detection is understanding the context of a tweet. The complex and noisy nature of tweets makes it difficult," adds Nayak.
The research team mined a dataset of one million tweets, then refined these by searching for those containing one of three abusive keywords - whore, slut, and rape. Using a deep learning algorithm, Long Short-Term Memory with Transfer Learning, they started with a base dictionary and built up the system's vocabulary.
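The keyword pre-filtering step described here can be pictured in a few lines of Python. This is an illustrative reconstruction, not the team's actual code; the function name and the punctuation-stripping rule are assumptions:

```python
# Hypothetical sketch of the keyword pre-filter: reduce a large tweet
# dataset to only those tweets containing one of the target keywords.
ABUSIVE_KEYWORDS = {"whore", "slut", "rape"}

def contains_abusive_keyword(tweet: str) -> bool:
    """Return True if any word in the tweet matches a target keyword."""
    return any(word.strip(".,!?") in ABUSIVE_KEYWORDS
               for word in tweet.lower().split())

tweets = [
    "Great talk at the conference today!",
    "Reporting on rape statistics in the new study.",
]
flagged = [t for t in tweets if contains_abusive_keyword(t)]
```

Note that a filter this crude flags the second tweet even though it merely discusses the issue - which is precisely why a context-aware model is needed on top of the keyword pass.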
The team trained the algorithm as it went, the machine "learning and developing its contextual semantic understanding over time," says Nayak.
That human input allowed the algorithm to learn to differentiate between abuse, sarcasm and the friendly use of aggressive terminology.
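One way to picture how a system might start with a base dictionary and build up its vocabulary over time is a simple co-occurrence rule: words that repeatedly appear in already-flagged tweets alongside seed terms get promoted into the vocabulary. This is a heavily simplified illustration, not the team's method (their model learns contextual representations with deep learning rather than explicit word lists), and the `min_count` threshold is an arbitrary assumption:

```python
from collections import Counter

def expand_vocabulary(seed, flagged_tweets, min_count=2):
    """Promote words that co-occur with seed terms at least min_count times."""
    vocab = set(seed)
    co_counts = Counter()
    for tweet in flagged_tweets:
        words = set(tweet.lower().split())
        if words & vocab:                    # tweet mentions a seed term
            co_counts.update(words - vocab)  # tally its other words
    return vocab | {w for w, c in co_counts.items() if c >= min_count}

vocab = expand_vocabulary(
    seed={"slut"},
    flagged_tweets=[
        "you slut go away",
        "go away now",
        "slut go home",
    ],
)
# "go" co-occurs with the seed term twice, so it joins the vocabulary
```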
The nuances of meaning
"Take the phrase 'get back to the kitchen' as an example - devoid of context of structural inequality, a machine's literal interpretation could miss the misogynistic meaning," says Nayak.
"But seen with the understanding of what constitutes abusive or misogynistic language, it can be identified as a misogynistic tweet."
"Or take a tweet like 'STFU BITCH! DON'T YOU DARE INSULT KEEMSTAR OR I'LL KILL YOU'. Distinguishing this, without context, from a misogynistic and abusive threat is incredibly difficult for a machine to do," Nayak adds.
Ultimately, the researchers, who refined those one million tweets down to 5,000 characterised as misogynistic, came up with an algorithm that identifies such content with 75 per cent accuracy.
The team describe the work as "labour intensive", but say the algorithm could be used by the big social media platforms, including Twitter, as a more effective way to stop abusive content from doing harm.
Essentially, women wouldn't even see the bulk of misogynistic and threatening tweets because they would be quickly identified and blocked.
"This modelling could also be expanded upon and used in other contexts in the future, such as identifying racism, homophobia, or abuse toward people with disabilities," says Nayak.
"Our end goal is to take the model to social media platforms and trial it in place. If we can make identifying and removing this content easier, that can help create a safer online space for all users."
If you need help or want to discuss abusive behaviour, contact:
Safe to Talk - 0800 044 334 or text 4334
Te Ohaakii A Hine - 0800 883 300
Rape Prevention Education - 09 360 4001
If it is an emergency, dial 111