On 11 March 2020, the World Health Organization (WHO) declared the newly discovered coronavirus disease (COVID-19) a pandemic. COVID-19 has upended virtually every aspect of life around the world. One of the immediate implications of the pandemic was a significant increase in the amount of time individuals spent online. It will take some time to fully realise the effects of this spike in activity, but it was immediately evident that radicalisation to violent extremism (RVE) could be affected. In the Western world, the online milieu now dominates recruitment efforts, and online platforms have become central to the mobilisation of extremist groups. Engagement in forums affords extremist groups the opportunity to bring adherents, fence-sitters, or the merely curious further into the fold and to deepen radicalisation. The purpose of this study is to present the results of preliminary analyses of how the pandemic influenced posting behaviour across a range of extremist platforms.
Online hate speech, a key element in facilitating radicalisation processes leading to violent extremism, has been considered among the most difficult areas to regulate on social media platforms. Challenges include the absence of a clear legal standard on which companies can base their policies, as well as the importance of context in determining whether a post containing harmful words constitutes hate speech. The lack of clarity surrounding what constitutes hate speech, together with inconsistent enforcement, may enable users to abuse these platforms to advance hateful ideologies. Such loopholes allow extremist users such as white supremacists to use social media as a tool to distribute their message effectively, hoping that when their rhetoric reaches the right user, online messages will turn into real-life attacks against what they deem ‘the enemy’. They do so by normalising their narratives through the ‘echo chambers’ facilitated by these platforms’ algorithms: online spaces customised for the user via recommendations, shared content, and filtration, which remove alternative or challenging worldviews and thus facilitate the radicalisation and engagement processes. Several incidents in recent years have shown that “when online hate goes offline, it can be deadly.” For violent white supremacists like Dylann Roof, Anders Breivik, or Robert Bowers, online hate certainly did not stay online. The question, then, is: how can social media platforms balance free expression and the protection of users’ human rights while preventing online radicalisation leading to violent extremism?