This month Jess Phillips MP called for an end to online anonymity, meaning social networks would be required to know the real identities of their users. She did so after receiving more than 600 rape threats online in a single night; the true figure is unknown because she stopped counting.
Online abuse is often aimed at minority groups. Black and Asian women MPs received 35% more abusive tweets than their white colleagues in the run-up to the election. This figure excludes tweets to Diane Abbott, who alone received almost half the abusive tweets sent to female MPs. Amnesty International’s #ToxicTwitter campaign argues that Twitter is silencing women: by failing to respond adequately to reports of violence and abuse, the platform leads women to self-censor what they post, to limit their interactions, or to leave the network altogether.
I feel conflicted about social media. I use it frequently — I have to for work — but I’m never sure what I’m going to see there. Social networks sometimes spread hope, share knowledge or create opportunities. Yet they surface so much hate. So I ask the question: can we make social media a better space to be?
Trolling (posting an abusive or off-topic message in a social network to provoke an emotional reaction) is widespread. For example, the NSPCC found that one in eight young people have been bullied on social media, and one in four say they have experienced something upsetting on a social media website.
There is also the filter bubble effect. This is when users see ever more of the content they agree with, whilst opposing views are filtered out by an algorithm developed to find us more of what we like. At worst there is the Cambridge Analytica (CA) scenario. It is alleged that CA accessed 50 million Facebook profiles without permission to target political adverts. It is also alleged Facebook waited years to shut down their access. There is a resounding need for the tech giants to step up to their responsibilities, but more on that later.
At best social networks take us outside of our bubbles. They connect us to people with lived experience different to our own. In this way social networks broaden our perspective and become a space for change.
The #MeToo movement exposed the extent of sexual assault and harassment, leading to both prosecutions and meaningful change in the corporate world. The #NeverAgain movement, led by Florida teens after the Parkland school shootings, has shifted US perspectives on gun-law reform.
A current example of the positive power of social media is the #JusticeForNoura campaign. Noura, a teenage girl in Sudan, was subjected to a forced marriage and horrific abuse by her husband. When Noura killed her husband in self-defence she received a death sentence. An online petition to save Noura’s life has gathered 1.4 million signatures so far and has mobilised the European Union to support her case. International outcry is the only hope of influencing the Sudanese authorities. You can still sign the petition. Postscript: On 26 June Noura’s death sentence was repealed and she was instead sentenced to five years in prison.
When a newspaper announced last week that it would stop including expert qualifications Dr Fern Riddell tweeted this:
My title is Dr Fern Riddell, not Ms or Miss Riddell. I have it because I am an expert, and my life and career consist of being that expert in as many different ways as possible. I worked hard to earn my authority, and I will not give it up to anyone.
— Dr Fern Riddell (@FernRiddell) June 13, 2018
The tweet evoked negative responses including one that it was ‘immodest’ to use her ‘Dr’ title. Fern responded by creating the #ImmodestWomen hashtag. It has now been tweeted by thousands of users to celebrate learning. It has empowered many women to add ‘Dr’ or ‘Prof’ qualifications to their Twitter profiles.
We’ve seen that social networks often galvanise communities and inspire change. Yet why do people so often revert to conflict on social networks?
Well, researchers at Cornell University have some answers. They found that people can be influenced to troll others under the right circumstances. The first influence is mood. People in a negative mood are more likely to troll, and trolling ebbs and flows with the time of day and day of the week: it is most frequent late at night, least frequent in the morning, and peaks on Monday.
The second factor is context: if a discussion begins with a ‘troll comment’, it is twice as likely to be trolled by other participants later on, compared to a discussion that does not start with one. The more troll comments in a discussion, the more likely that future participants will also act as trolls. The researchers developed a machine-learning model that correctly forecast whether a person was going to troll about 80% of the time.
The researchers concluded that trolling is situational and that such behaviour can spread from person to person. But this also means there is an opportunity to influence the situation; for example, other research found that Twitter bots making negative responses to racist tweets influenced future behaviour in some scenarios.
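The Cornell finding — that trolling is driven by mood and by how many troll comments already sit in a thread — can be illustrated with a toy logistic model. The weights below are purely hypothetical, chosen only to show how the two situational factors compound; they are not the researchers’ actual coefficients.

```python
import math

# Hypothetical weights for illustration only — not the Cornell model's values.
W_MOOD = 1.2     # contribution of the user being in a negative mood
W_CONTEXT = 1.5  # contribution per prior troll comment in the thread (capped)
BIAS = -2.0      # baseline: most comments are not trolling

def troll_probability(negative_mood: bool, prior_troll_comments: int) -> float:
    """Combine the two situational factors in a logistic (sigmoid) model."""
    score = BIAS + W_MOOD * negative_mood + W_CONTEXT * min(prior_troll_comments, 3)
    return 1 / (1 + math.exp(-score))

# A calm user in a clean thread is unlikely to troll...
print(round(troll_probability(False, 0), 2))  # → 0.12
# ...while a bad mood plus a troll-filled thread compounds the risk.
print(round(troll_probability(True, 2), 2))   # → 0.9
```

The point of the sketch is the compounding effect: neither factor alone pushes the probability very high, but together they do — which matches the researchers’ conclusion that trolling is situational rather than a fixed trait.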
So, where next? Social media companies have grown at an explosive rate, and the UK and US governments are now playing catch-up. In the UK this will include a social media code of practice which Facebook, Google and Twitter are expected to sign up to. There is also an acknowledgement that the companies need to act more responsibly:
“But it’s clear now that we didn’t do enough. We didn’t focus enough on preventing abuse… That goes for fake news, foreign interference in elections, hate speech, in addition to developers and data privacy. We didn’t take a broad enough view of what our responsibility is, and that was a huge mistake.” Mark Zuckerberg, Facebook CEO, 2018.
In this context it is possible that the social media companies will be compelled to address online trolling. Their technologists are certainly well equipped to create the solutions.
Instagram is by reputation the social network most focused on wellbeing. It has rolled out filters to automatically delete specific words or emojis and to hide offensive comments, and it enables users to add their own keywords to these filters. Twitter has also raised its game, adding features to reduce abuse: collapsing tweets that might be abusive, and allowing users to mute other accounts or mute tweets containing a specific word. Just this week Twitter acquired Smyte, an anti-abuse tech company with capabilities in reducing cyber-bullying, fake accounts, hate speech and trolling.
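At their core, the keyword filters described above amount to a blocklist check against each incoming comment, merged with any terms the user has added themselves. The function and word list below are an illustrative sketch, not any network’s actual implementation:

```python
import re

# Hypothetical default blocklist; real networks maintain far larger lists
# and let each user supply their own keywords on top.
DEFAULT_BLOCKLIST = {"idiot", "loser"}

def hide_comment(comment: str, user_keywords: set[str] = frozenset()) -> bool:
    """Return True if the comment should be hidden from the user's feed."""
    blocklist = DEFAULT_BLOCKLIST | set(user_keywords)
    # Tokenise into lowercase words so matching ignores case and punctuation.
    words = set(re.findall(r"[a-z']+", comment.lower()))
    return bool(words & blocklist)

print(hide_comment("What a great photo!"))       # → False
print(hide_comment("You're such an IDIOT"))      # → True
print(hide_comment("Nice try, mate", {"mate"}))  # → True (user-added keyword)
```

Real systems go well beyond exact word matching — handling misspellings, emojis and context — but even this simple per-user merge shows why letting users extend the filter list is cheap to build and immediately useful.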
Could it be that we’re approaching a turning point? Are we developing the will, and the technology, that is needed to help people create better, safer, social networks?