Videogames, gaming-related content, and gaming (-adjacent) platforms are an emerging key area of discussion for those seeking to understand how extremist actors use the digital space to further their aims. Livestreamed attacks, the development of videogames with radical ideological content, the potential to use in-game chats and Discord servers to disseminate propaganda, and the links some right-wing perpetrators had to parts of the gaming community have placed gaming-related content and spaces at the heart of the discourse on contemporary extremism. While much more research is needed to understand the potential link between gaming and extremism, why extremist actors are present in gaming spaces, and how gaming topics are used for propaganda, it is undeniable that extremist content is present on gaming (-adjacent) platforms and is seen by many users in these spaces. It is therefore reasonable to ask whether gaming (-adjacent) platforms, such as Discord, Steam, Twitch, and DLive, could (or should) be used for P/CVE measures as well. The following recommendations are based on a recent RAN paper on this issue and are meant as an invitation to think about the development and implementation of such measures, as there are currently few examples of P/CVE initiatives in these spaces.
The Australian far-right is using a diverse range of online tools to fundraise and solicit donations. In this ecosystem, the messaging app Telegram plays a significant role — providing an entry point into a broader content and financial network, and facilitating international connections.
In a recent report for ASPI, I examined a sample of nine Australian Telegram channels that share right-wing extremist (RWE) content and found they were connected to more than 20 different funding mechanisms, platforms and tools. This preliminary map included live-streaming platforms such as DLive and Entropy, which allow ‘tipping’ and other forms of support. Funding was also solicited through merchandise sales and via subscription platforms such as Patreon and SubscribeStar. Donation requests via Buy Me a Coffee and PayPal, or via various cryptocurrency wallets, were also posted in the channels for purposes ranging from legal fees to general support for content creation.
Last weekend marked two important dates in the history of jihadism: the 20-year anniversary of the 9/11 attacks by al-Qaeda (AQ) and the start of the trial in Paris over the 13 November 2015 attacks carried out by Islamic State (IS). This article aims to take stock of the progression of AQ’s propaganda following the emergence of IS: to what extent do the two groups’ digital strategies embody their ideological differences and reflect their organisational evolution? How do they diverge? And despite the weakening of both groups’ media capabilities, how has AQ’s media output managed not to be overshadowed by IS’s stronger brand and more prolific propaganda machine?
In November 2019, the European Union celebrated a small victory: over 26,000 items of self-proclaimed Islamic State (IS)-supporting content, as well as channels and groups, were referred to nine online service providers, thanks to an action coordinated by Europol’s European Union Internet Referral Unit (IRU). Facebook has every reason to be pleased, too. It claimed a 99.6% proactive rate on terrorist organisations in Q1 2021, meaning less than 1% of this type of violating content is reported by Facebook users. However, all these positive figures may paint a distorted picture of reality: Europol’s scope only concerns official propaganda posted by designated terrorist organisations, so any unofficial radicalised content remains under the radar (Amarasingam, 2020). In the same vein, social media companies’ takedown of IS activity probably leaves aside great volumes of more general violent jihadi content that is not explicitly branded as IS propaganda and, therefore, slips through the net of content moderation much more easily (Conway, 2020). Indeed, in addition to terrorist organisations, there are also private communicators, particularly on social media, who support violent jihad but do not explicitly express their adherence to any extremist organisation. Such actors communicate radicalised content in the midst of regular, non-extremist posts. By sharing content that moves away from IS’s military propaganda, such Global Jihad Movement (GJM) supporters contribute to the fragmentation of violent Salafist propaganda and to a more diffuse online presence. As we will see, myth and eudaimonic content are two key types of posts in this process.
On 24 July 2021 in Australia, against the backdrop of a city and state sinking further into crisis, a rally of thousands took place. The central concept was ‘freedom’. This was a local manifestation of global conspiracist efforts, with a small German group seeding a wave of rallies around the world via social media. In Australia, a growing handful of figureheads has emerged, some with concerning ties to violent extremism and cultic groups like QAnon.
Extremist content is frequently coded, both to escape moderation and to signal to others with similar ideologies. In many cases, the nature of this content is far from explicit, couched in humour or sarcasm that avoids direct violation of platform policy.
These challenges are exacerbated by the increasing proliferation of non-text content formats where existing moderation tactics may not be as effective—including music.
Consider the far-right musical and aesthetic genre Fashwave, a portmanteau of ‘fascism’ and ‘wave’ first widely reported on in 2016. Recent research, conducted in part for GIFCT, found that Fashwave songs and images continue to appear online.
In 2020, the EU Counter-Terrorism Coordinator warned that extremists are increasingly present in digital gaming spaces, and in 2021 the EU Terrorism Situation and Trend Report detailed that right-wing extremist actors in particular use both videogames and related platforms to disseminate their propaganda. While much of the potential nexus between gaming and extremism remains severely under-researched, the following Insight provides a preliminary overview of the gaming (-adjacent) platforms currently used by extremist actors and of both the strategic and organic reasons for their presence there. It is based on a recent RAN publication.
Videogames, gaming-related content, game aesthetics, gaming (-adjacent) platforms, gamification, and their potential link to digital extremist content and digitally-mediated radicalisation processes are an increasingly popular topic in extremism research. Gamification, defined as “the use of game design elements in non-game contexts,” i.e. the transfer of points, badges, leaderboards, rankings, quests, and other game features into spaces not usually considered as games, is one of the gaming-related mechanisms identified as making extremist online content more appealing, entertaining, and ‘fun’. Gamification has also been discussed as a potential avenue to make digital P/CVE content more engaging and to increase the likelihood of such content being seen in a digital environment over-saturated with information and entertainment content. However, both the theoretical and the empirical basis for a theory of the ‘gamification of radicalisation’ is slim and often relies on anecdotal evidence. One avenue to expand the theoretical foundation of this issue is to examine how different user types may react to and be affected by certain game elements. Based on a recent, more detailed discussion of user types in digital gamified radicalisation processes published in Perspectives on Terrorism, this Insight presents five ideal user types and their potential interaction with gamified extremist content.