Artificial Intelligence

Protecting our social media platforms using AI

The internet is vast and growing by the day. The world has never been more connected. Social media networks and communities form every day, in all corners of the globe, and the freedom to interact with like-minded people has never been greater.

Sounds like a virtual utopia, so what’s not to love?

Of course, great freedom comes with great responsibility, and not everyone plays by the rules. Social media communities can be as toxic as they are welcoming. Young people are subjected to unprecedented levels of cyberbullying and hate speech, which takes the fun out of being online and compromises mental well-being. Parents can no longer guarantee that their children are safe online.

This problem needs to be addressed sooner rather than later… and it turns out that AI can play a big part.

So how can AI help? Machine learning algorithms trained on social media data can scrutinise written posts and flag harmful content or indications of bullying. This relies on a prevalent AI technique known as natural language processing (NLP), which, as the name suggests, is built to handle unstructured free text.

NLP uses simple but effective mathematical principles: it breaks written text down into tokens (words or short phrases) and maps each piece of text to a multi-dimensional vector, a numerical representation built from the words it contains, for example word counts or TF-IDF weights.
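
To make that concrete, here is a minimal sketch of the vectorisation step using scikit-learn's CountVectorizer. The example posts are invented for illustration:

```python
# A minimal sketch of turning raw posts into numerical vectors
# using a bag-of-words model. Requires scikit-learn.
from sklearn.feature_extraction.text import CountVectorizer

# Invented example posts, for illustration only.
posts = [
    "you are a great friend",
    "everyone thinks you are a loser",
    "nobody likes you, loser",
]

vectoriser = CountVectorizer()             # tokenises text and builds a vocabulary
vectors = vectoriser.fit_transform(posts)  # one row per post, one column per word

print(vectoriser.get_feature_names_out())  # the learned vocabulary
print(vectors.toarray())                   # each post as a word-count vector
```

Each row can then be fed to a standard classifier; swapping in TfidfVectorizer for TF-IDF weighting is a common refinement.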

Once social media content has been converted into these mathematical representations, popular machine learning algorithms can be tried and tested on various use cases. Here are some examples of how NLP can help protect our social media platforms:

  • Sentiment analysis to flag bullies: Assigning a numerical weight to the emotions and intent of written text can bring toxicity to the forefront. Analysing profanity and hate speech can identify bullies, especially if they are repeatedly targeting a single person. As it happens, sentiment analysis is easy to implement and mostly relies on a pre-existing lexicon of words with assigned emotive values. For example, the widely used AFINN lexicon assigns the word “hate” a score of -3. A minimal version is sketched after this list.
  • Blocking or redacting sensitive content for users under 18: Facebook, for example, allows users under the age of 18 to register. Unfortunately, this can result in teenagers seeing inappropriate content originating from older users. NLP could filter out or redact unsuitable content before it reaches a young person’s feed (see the redaction sketch after this list).
  • Flagging hate speech: AI can bring profanity, racism, homophobia and many more flavours of hate speech to the attention of social media administrators using simple but very effective NLP techniques. Cross-checking social media posts against a large blocklist of “bad” words can easily pinpoint users posting hate speech, as the second sketch below shows. Chances are this is already being used on Facebook and Twitter to some degree.
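
As a rough illustration of the lexicon-based sentiment scoring described in the first bullet, here is a minimal sketch. The tiny lexicon and threshold are toy values invented for illustration; real systems use full resources such as AFINN or VADER and far more sophisticated models:

```python
# A minimal sketch of lexicon-based sentiment scoring.
# The lexicon and threshold are toy values for illustration only;
# real systems use full lexicons such as AFINN or VADER.
import re

TOY_LEXICON = {
    "hate": -3, "stupid": -2, "loser": -2,
    "love": 3, "great": 3, "friend": 2,
}

def tokenise(post: str) -> list[str]:
    """Lowercase a post and split it into word tokens."""
    return re.findall(r"[a-z']+", post.lower())

def sentiment_score(post: str) -> int:
    """Sum the emotive values of known words in a post."""
    return sum(TOY_LEXICON.get(word, 0) for word in tokenise(post))

def looks_toxic(post: str, threshold: int = -2) -> bool:
    """Flag a post whose total score falls below the threshold."""
    return sentiment_score(post) <= threshold

print(looks_toxic("I hate you, you stupid loser"))  # True
print(looks_toxic("You are a great friend"))        # False
```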
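
The keyword checks in the last two bullets can be sketched in the same spirit. The blocklist here is a stand-in for illustration; real platforms maintain large, curated lists and combine them with trained classifiers:

```python
# A minimal sketch of blocklist-based flagging and redaction.
# BLOCKLIST is a stand-in; real platforms use large curated lists.
import re

BLOCKLIST = {"idiot", "loser"}

def flag_hate_speech(post: str) -> bool:
    """Return True if the post contains any blocklisted word."""
    words = set(re.findall(r"[a-z']+", post.lower()))
    return not words.isdisjoint(BLOCKLIST)

def redact_for_minors(post: str) -> str:
    """Mask blocklisted words before adding a post to an under-18 feed."""
    def mask(match: re.Match) -> str:
        word = match.group(0)
        return "*" * len(word) if word.lower() in BLOCKLIST else word
    return re.sub(r"[A-Za-z']+", mask, post)

print(flag_hate_speech("What an idiot"))             # True
print(redact_for_minors("What an idiot, honestly"))  # What an *****, honestly
```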

Using the techniques above, artificial intelligence can be an effective tool for making social media platforms a safer place for users of all ages.