European project under Citizenship, Equality, Rights and Values programme

Together Against Online Hate Speech (TAOH)

Together Against Online Hate Speech (TAOH) tackles the impact of hate speech in online public spaces by empowering civil society organisations, media, law enforcement, public authorities and individuals to address polarisation and create safer, more inclusive digital spaces. The main goal is to use AI-driven data collection and analysis alongside trainings to strengthen the resilience of civil society, media and authorities in reducing hate speech, particularly hate speech targeting ethnic minorities, women, gender-diverse and LGBTQ+ individuals. Targets of hate speech will benefit from more accessible support resources and from offline spaces. Initially focused on the Netherlands, Germany and Spain, TAOH’s methods will be replicable across the EU.

Together Against Online Hate Speech combines proven AI-enabled technological tools with proven human-led dialogue interventions and policy influence to understand and address hateful and discriminatory online speech. Content moderation and reporting to authorities are important for content that is illegal or violates platform policies. However, punishment and content deletion are not enough, because hate speech often remains legal, for example when it relies on coded language or dog whistles. These instances are just as harmful and need to be engaged with. By combining a range of methodologies relevant to individuals, organisations and law enforcement, as well as for policy, the project can address the full spectrum of hate speech.

Project activities

Participatory social media analysis

To understand the frequency and patterns of hate speech across platforms, Together Against Online Hate Speech will rely on participatory social media analysis, guided by Build Up’s framework on Evidence of Divisive Behaviour on Social Media. We will use Phoenix, a specialised social media analysis toolset developed by Build Up and datavaluepeople, to scrape relevant data from social media platforms and to gather and tabulate the content from relevant media and CSO accounts. We will then train and apply a machine-learning classification model to categorise hate speech in a nuanced way across three languages: Dutch, German and Spanish.
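To illustrate what the classification step involves at its simplest, here is a minimal sketch of a multiclass text classifier (a from-scratch naive Bayes with add-one smoothing). The example posts, labels and tokeniser are invented for this sketch; the project's actual model, taxonomy and multilingual training data are developed with partners and are far more nuanced.

```python
import math
from collections import Counter, defaultdict

# Invented training data for illustration: (post text, label).
TRAIN = [
    ("they should all go back home", "xenophobic"),
    ("women do not belong in politics", "misogynist"),
    ("great reporting, thank you", "not_hate"),
    ("nobody wants these people here", "xenophobic"),
    ("she only got the job because she is a woman", "misogynist"),
    ("interesting article on local elections", "not_hate"),
]

def tokenize(text):
    # Naive whitespace tokeniser; a real pipeline would be language-aware.
    return text.lower().split()

# Count class frequencies and per-class word frequencies.
class_counts = Counter(label for _, label in TRAIN)
word_counts = defaultdict(Counter)  # label -> word -> count
vocab = set()
for text, label in TRAIN:
    for tok in tokenize(text):
        word_counts[label][tok] += 1
        vocab.add(tok)

def classify(text):
    """Return the most likely label under a naive Bayes model."""
    scores = {}
    for label in class_counts:
        # Log prior plus log likelihood with add-one smoothing.
        score = math.log(class_counts[label] / len(TRAIN))
        total = sum(word_counts[label].values())
        for tok in tokenize(text):
            score += math.log((word_counts[label][tok] + 1) / (total + len(vocab)))
        scores[label] = score
    return max(scores, key=scores.get)
```

In practice the project's model must also handle coded language, three languages and nuanced categories, which is why a trained machine-learning model is used rather than word counts; the sketch only shows the basic shape of supervised text classification.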

The data is visualised through a dashboard, enabling participatory analysis with CSOs, media outlets, law enforcement and public officials. This user-friendly tool helps non-analysts identify patterns of hate speech over time across various social media platforms. During training sessions, CSOs and media outlets will learn how to refine and apply the dashboard’s insights, while webinars introduce law enforcement and public officials to its findings. This deeper understanding of which social media content is most vulnerable to specific forms of identity-based hate speech allows CSOs and media outlets to allocate their resources more effectively. Law enforcement and public officials, in turn, will know where to focus their efforts in tackling illegal hate speech, while developing policy responses that address even the legal but harmful patterns of discriminatory behaviour online. Monthly data updates, shared with all stakeholders, will help monitor changes over time and assess whether the implemented strategies are effective.
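The monthly roll-up behind such a dashboard can be pictured as a simple aggregation over classifier output. A minimal sketch, assuming hypothetical record fields (`platform`, `posted`, `label`) rather than the actual Phoenix schema:

```python
from collections import Counter
from datetime import date

# Hypothetical classifier output: one record per analysed post.
# Field names and values are assumptions for this sketch.
records = [
    {"platform": "X", "posted": date(2025, 3, 2), "label": "xenophobic"},
    {"platform": "X", "posted": date(2025, 3, 18), "label": "not_hate"},
    {"platform": "Facebook", "posted": date(2025, 3, 21), "label": "misogynist"},
    {"platform": "Facebook", "posted": date(2025, 4, 5), "label": "xenophobic"},
]

def monthly_tally(records):
    """Count posts per (month, platform, label) for trend charts."""
    tally = Counter()
    for r in records:
        month = r["posted"].strftime("%Y-%m")  # e.g. "2025-03"
        tally[(month, r["platform"], r["label"])] += 1
    return tally
```

Tallies like these, refreshed monthly, are what let stakeholders compare patterns across platforms and over time without needing to read raw data.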

The social media analysis is geared towards understanding specific types of identity-based hate speech in detail. The data itself cannot be sex-disaggregated, e.g. to analyse how many people of different genders post about certain issues. This is because the gender or age that account holders indicate on social media cannot be verified, and platforms at times assign gender through ethically questionable inference (for instance, assuming gender from a user’s interests). Since this project’s purpose is to improve equality and inclusivity in online public spaces, monitoring and evaluation methods will include specific indicators to assess the project’s impact on reducing hate speech relating to gender, sexual orientation, ethnicity and religion. Targets and upstanders will have the opportunity to provide continuous feedback that will influence the ongoing implementation of the project.

Empathy-based counterspeech 

We will apply evidence-based counterspeech methodologies to address the nuanced forms of hate speech that cannot be dealt with through reporting or content moderation. The aim is to influence social norms on platforms and re-establish the norm that hate speech is not socially acceptable. Depending on the content of the posts, this will entail engaging with the person who posted harmful content to help them change their behaviour, drawing a line to make clear that certain behaviours are not socially acceptable, and uplifting positive behaviours through likes or posts that provide alternative narratives. The underlying sociological theory is to break the Spiral of Silence, whereby people are less likely to express opinions if they believe they are in the minority. The approaches used in this project are rooted in research showing that empathy-inducing counterspeech is more effective than other forms of counterspeech (Hangartner et al., 2021), as well as in the proven methodologies used by #ichbinhier and The Digital Us.

The empathy-based counterspeech approach will build the basis for the training modules for CSOs and media outlets, ensuring that staff not only acquire counterspeech skills but also gain an understanding of the psychological mechanisms that underpin effective interventions. In addition to fostering empathetic engagement, the training will provide strategies for responding to hate speech in ways that avoid amplifying harmful content, such as careful management of engagement with highly visible or algorithmically boosted posts. This approach mitigates the risk of unintentionally increasing the visibility of problematic content.

Following the training, community managers from CSOs and media outlets will receive mentoring as they apply the strategies developed during the sessions in real-time. This ongoing support ensures that their skills are continuously refined and adapted to emerging challenges. By directly linking the project’s conceptual methodology with the daily operations of CSOs and media outlets, this approach aims to integrate these practices into standard community management strategies, ensuring long-term sustainability.

Ethical and safety considerations are key to our approach. We will provide individual and community care to improve the resilience of both targets of online hate speech and individuals who engage in counterspeech. Our proven method focuses on self-awareness of stress responses and on resourcing for resilience, based on the Strategies for Trauma Awareness & Resilience (STAR) approach.

Arts-based processing of online violence 

Together Against Online Hate Speech uses art as a method to provide offline spaces where people can process how the hate-filled online space affects them. These spaces serve as open and safe environments for people who are negatively affected by hate speech, offering a platform to express and receive support for their experience of harm caused by the toxic online space. Engaging in creative visual arts together with others helps create community and allows targets to feel seen. These real-life spaces are conceived as pop-up spaces in frequented public environments (e.g. markets) in order to reach people where they are in their daily lives. The spaces will complement the online interventions, allowing participants to share experiences and interact with others facing similar challenges. The presence of a trained arts facilitator will ensure a supportive and creative atmosphere, guiding participants where needed and fostering meaningful conversations around the arts table that promote healing and resilience. These spaces are essential to provide tangible, face-to-face support alongside the project’s digital efforts. 

The artistic outputs will also be connected to the project’s research and outreach components. Insights gathered from how participants reflect on hate speech will feed into the project’s exploration of relevant typologies. The outputs, whether visual art or testimonies, will serve as primary data to complement the project’s qualitative analysis, adding a human-centred layer to the desk- and data-driven methodologies. In addition, the visual artworks will be used for the project’s communication and awareness campaign.

Awareness campaign

Reaching a broad general public is critical to the project’s ambition of a whole-of-society effort to address hate speech. To that end, the project will run an awareness campaign, maximising public engagement through targeted, accessible materials such as videos, podcasts and an online art exhibition of the outputs from the pop-up art spaces. Emphasis will be placed on creating practical tools for CSOs, media professionals, activists and policymakers to raise awareness about online harms, share resilience strategies with the general public and provide advice on how to deal with online hate speech. Public events, both online and offline, will foster interactive participation, while tailored content will ensure relevance across diverse audiences. This approach aims to build awareness and encourage action, ensuring long-term project impact.

The awareness campaign reflects the participatory ambition of the project. We will not only distribute materials widely but also include interactive digital formats. This creates a feedback loop in which the public contributes to refining the project strategy over the project period, for example by reacting to the resources shared or commenting on the campaign outputs. This format ensures that the broader public feels ownership over the project’s goals, further amplifying its societal impact.

Interested in getting involved?

Get in touch!

Funded by the European Union. Views and opinions expressed are however those of the author(s) only and do not necessarily reflect those of the European Union. Neither the European Union nor the granting authority can be held responsible for them.