Japan's newly established digital agency was unveiled on 1 September 2021. Housed in an Akasaka skyscraper and placed under the direct supervision of the prime minister, the agency is unprecedented in both structure and mandate: 120 of its 500 officials were hired from the private sector, and it holds the authority to manage IT system budgets across government ministries.
Online hate speech, a key element in facilitating radicalisation processes that lead to violent extremism, has been considered among the most difficult areas for social media platforms to regulate. Challenges include the absence of a clear legal standard on which companies can base their policies, and the importance of context in determining whether a post containing harmful words constitutes hate speech. This lack of clarity about what constitutes hate speech, together with inconsistent enforcement, can enable users to abuse platforms to advance hateful ideologies. Such loopholes allow extremist users, such as white supremacists, to use social media to distribute their message effectively, hoping that when their rhetoric reaches the right user, online messages will turn into real-life attacks against those they deem ‘the enemy’. They do so by normalising their narratives through the ‘echo chambers’ facilitated by these platforms’ algorithms: online spaces customised for the user via recommendations, shared content, and filtering, which remove alternative or challenging worldviews and thereby facilitate radicalisation and engagement. Several incidents in recent years have shown that “when online hate goes offline, it can be deadly.” For violent white supremacists such as Dylann Roof, Anders Breivik, and Robert Bowers, online hate certainly did not stay online. The question, then, is: how can social media platforms balance free expression and the protection of users’ human rights while preventing online radicalisation leading to violent extremism?