This article discusses topics of extremism and radicalisation. As a result, text, themes, and content relating to far-right extremism are present in this article. Please continue with care. See the following resources if you witness or experience the effects of radicalisation: Samaritans – Call 116 123 | ACT Early | Prevent advice line 0800 011 3764.  In addition to the above, this research discusses the effects of violent far-right extremism. Such research could equally be applied to violent far-left or violent far-centre extremism, and as such it doesn’t exist to demonstrate that one political ideology should prevail over another. Instead, it serves as an example of why violent extremism, in all of its forms, is important to address across all ideologies.


As part of my PhD research at the University of Bristol, I’ve developed Pinpoint, a machine-learning binary classification framework used to identify violent far-right extremism… Wait, what does that actually mean? Well, when given a string of text (like a Tweet, Facebook post, or Reddit post), Pinpoint can instantly (or at least close to instantly) identify whether that text likely contains violent far-right extremist ideology, rhetoric, or speech.



Now, this may sound awfully Minority-Report-esque, so the first question on your mind may be “who does this actually benefit?”. As well as supporting law enforcement and social network organisations in rapidly identifying violent or violence-provoking speech online, it also protects end users (that’s you and me) from having to read and experience such violent and extremist content.

Much research has already been performed on classifying and identifying Islamic and other extremist ideologies online; however, until recently, little research had been undertaken into far-right extremism. So why is it important to identify and limit far-right extremism? Below is an excerpt from some of my research:

On the 6th of January 2021, a mob of supporters of the 45th president of the United States, Donald Trump, stormed the United States Capitol Building in an attempt to overturn the 2020 presidential election results. Among the participants in this riot were US Republican party officials, political donors, and far-right extremists and militias, including members of the Proud Boys, the Oath Keepers, QAnon, the Groyper Army, and others. The events at this riot led to the deaths of five people and injuries to approximately 140 police officers. This event is just one example of how far-right extremism has gained traction over the last decade and how radicalised online communication can lead to real-world terrorism events.

In 2019, the Global Terrorism Index, published by the Institute for Economics and Peace (IEP), reported a 320% rise in the total number of far-right terrorism incidents in the West (particularly in Western Europe, North America, and Oceania). This threat is not confined to the US; it is also present in other Western nations such as the UK. In 2020, MI5 Director General Ken McCallum stated that while right-wing terrorism was not on the same scale as Islamic terrorism, it was growing, and that roughly 29% of late-stage terrorist attack plots in 2020 had been orchestrated by right-wing extremists.

This shows that investing in new and innovative techniques, as well as building on current methods, is critical in helping to protect the UK and its people against the threats of far-right extremism…

Pinpoint breaks the classification of text (social media posts specifically) down into three main categories:

  • Radical Language – How a post is written, including the actual letters and words used, as well as the frequency of capital letters and violent words
  • Psychological Signals – The tone of the message such as anger, sadness, anxiety, power, risk, reward, and other sentiments
  • Behavioural Features – How a user interacts with others in their post such as mentions and hashtags
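To make these three feature groups concrete, here is a minimal sketch of how the Radical Language and Behavioural Features described above might be computed for a single post. The word list, function names, and thresholds are illustrative stand-ins, not Pinpoint’s actual implementation.

```python
import re

# Placeholder lexicon; Pinpoint's real violent-word list is not shown here.
VIOLENT_WORDS = {"attack", "fight", "destroy"}

def radical_language_features(text: str) -> dict:
    """Surface-level features: capital-letter and violent-word frequency."""
    words = re.findall(r"[A-Za-z']+", text)
    n_words = max(len(words), 1)
    caps = sum(1 for ch in text if ch.isupper())
    violent = sum(1 for w in words if w.lower() in VIOLENT_WORDS)
    return {
        "capital_letter_freq": caps / max(len(text), 1),
        "violent_word_freq": violent / n_words,
    }

def behavioural_features(text: str) -> dict:
    """How the author interacts with others: mentions and hashtags."""
    return {
        "mention_count": len(re.findall(r"@\w+", text)),
        "hashtag_count": len(re.findall(r"#\w+", text)),
    }

post = "FIGHT back NOW! @user #example"
features = {**radical_language_features(post), **behavioural_features(post)}
print(features)
```

The Psychological Signals category (anger, anxiety, and so on) would typically come from a sentiment or affect lexicon rather than simple regular expressions, so it is omitted from this sketch.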

Pinpoint was trained on messages and posts from the Parler social media network (harvested from the site before it originally went offline after the January insurrection in the US). Parler describes itself as a freedom-of-speech social network and is associated with Donald Trump supporters, conservatives, conspiracy theorists, and far-right extremists. In addition, the Stormfront internet forum, a neo-Nazi forum and the internet’s first major racial hate site, was also used to train the classifier. All machine learning models can be measured by their accuracy and other metrics; when using Behavioural Features alone (which the Kaggle worksheet relies on), the model achieves the scores below. In layman’s terms, this boils down to it being good, but not perfect.

  • Accuracy: 0.737
  • Recall: 0.627
  • Precision: 0.785
  • F-Measure: 0.697
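As a sanity check on how these four metrics relate to one another, the snippet below derives them from a confusion matrix. The counts are hypothetical, chosen only so the results roughly match the scores reported above; they are not Pinpoint’s actual test-set counts.

```python
# Hypothetical confusion-matrix counts (not Pinpoint's real evaluation data).
tp, fp, fn, tn = 627, 172, 373, 900

accuracy  = (tp + tn) / (tp + fp + fn + tn)   # fraction classified correctly
recall    = tp / (tp + fn)                    # extremist posts actually caught
precision = tp / (tp + fp)                    # flagged posts that were correct
f_measure = 2 * precision * recall / (precision + recall)  # harmonic mean

print(f"Accuracy: {accuracy:.3f}, Recall: {recall:.3f}, "
      f"Precision: {precision:.3f}, F-Measure: {f_measure:.3f}")
```

The gap between precision and recall is the interesting part: the model is fairly trustworthy when it flags a post, but it misses a sizeable share of genuinely extremist content.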

Pinpoint is completely open source, and I’ve also created a Kaggle (a data science platform) Python workbook that allows anyone with an account to test the trained machine learning model with text of their own. This workbook can be seen below.
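For readers unfamiliar with how such a workbook operates, here is a minimal sketch of training and querying a binary text classifier. It uses scikit-learn on toy data as a stand-in; the feature pipeline, training data, and model are not Pinpoint’s real ones.

```python
# Stand-in for the Kaggle workbook's flow: fit a binary classifier on labelled
# text, then score a new post. Toy data only; not Pinpoint's actual pipeline.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Illustrative labels: 1 = violent extremist rhetoric, 0 = benign.
texts = ["we must destroy them all", "lovely weather today",
         "take up arms and fight", "my cat is adorable"]
labels = [1, 0, 1, 0]

model = make_pipeline(CountVectorizer(), LogisticRegression())
model.fit(texts, labels)

# Classify a new post; predict() returns 0 or 1.
prediction = model.predict(["take up arms and fight"])[0]
print(prediction)
```

In the real workbook, the classifier is pre-trained on the Parler and Stormfront data described above, so you only supply the text to be scored.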

For more information on Pinpoint and other research in the future, sign up to my mailing list.

