Red flags suggest whether hate speech was created by a human or AI, expert tells MIA

Skopje, 2 March 2024 (MIA) - This year 74 countries, almost half the world, including North Macedonia, are holding elections, a period when the atmosphere among political opponents heats up and hate speech on social media intensifies, along with the level of disinformation.

MIA discusses the topic with Lydia El-Khouri, Project Manager at Textgain, who took part in the international symposium "Media Literacy in the Age of AI: Redefining the Possible," organized within the USAID Media Literacy project "YouThink," implemented by IREX together with the Macedonian Institute for Media (MIM), the Institute of Communication Studies (ICS) and the Youth Educational Forum (YEF).

El-Khouri has worked in the field of free speech for over 20 years, as well as in media programmes dedicated to diversity, inclusion, media literacy and hate speech across the Balkans, Europe, the South Caucasus, the Middle East and North Africa.

She explains the functioning of the European Observatory of Online Hate, part of Textgain's work to detect hate speech on social media. The Observatory operates in about 30 languages and is currently working on a lexicon in Macedonian and Albanian, which could later be used to detect hate speech in North Macedonia.

Below is the full text and video of the interview:

What is Textgain's line of work? You are focused on hunting hate speech on social media through the European Observatory of Online Hate. How does this platform work, which social media platforms do you follow and in which countries?

- Textgain is a spin-off of the University of Antwerp in Belgium, so we are a very varied group of people: computer scientists, data scientists, law enforcement specialists, people who work in civil society, like me, and also sociologists. We are a mixed bunch who want to use artificial intelligence for positive social change. We monitor social media for trends that can be connected either to hate speech or to disinformation. One of our largest projects is the European Observatory of Online Hate, which is basically a research dashboard that people can use to analyze data that is automatically generated for them, to look at different things on social media. It covers almost 30 languages and is EU-wide: the dashboard is in the 24 working languages of the EU, as well as Russian and Arabic, because they are significant languages for Europe, and now of course, with our project, Macedonian and Albanian too. It monitors 12 social media platforms, from mainstream platforms like Facebook, Instagram and Twitter, which has lately moved away from the mainstream, to fringe platforms. We call them fringe because they are closer to the edge and usually unregulated and unmoderated, or less regulated and moderated, so toxic messages or speech are much easier to find there.

Can AI hunt hate speech without any negative effects, given that it works from frequently used words, while a word on its own may not constitute hate speech out of context? How well can AI grasp context without causing harm?

- We have a methodology that we use, and we don't start with tech, we start with human beings. We work with annotators all over Europe, including Albania and Macedonia, and the people we work with look on social media for the words and phrases that are used and rank them. For example, 0 is offensive, not illegal - not hate speech, but possibly used in a context that is hateful - and 4 is violent or threatening language. There might occasionally be instances where messages are flagged up as false positives, but we use a lot of technology to mitigate that, to ensure it happens as little as possible, and to focus on the messages that will help us understand what is happening online and on social media, and do something about it.
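To make that 0-to-4 scale concrete, here is a minimal sketch of how such a human-annotated lexicon might be represented and applied. The severity scale comes from the interview; the data structures, entry names and matching logic are illustrative assumptions, not Textgain's actual system.

```python
# Minimal sketch of a human-annotated hate-speech lexicon. The 0-4
# severity scale follows the interview; entries are hypothetical.
from dataclasses import dataclass

@dataclass
class LexiconEntry:
    term: str       # word or phrase as actually used online, supplied by a human annotator
    severity: int   # 0 = offensive but not illegal ... 4 = violent or threatening
    language: str   # e.g. "mk" (Macedonian) or "sq" (Albanian)

# Placeholder entries; real ones come from native-speaker annotators,
# not from Google Translate or any other automatic tool.
lexicon = [
    LexiconEntry("example_slur", 2, "mk"),
    LexiconEntry("example_threat_phrase", 4, "sq"),
]

def flag_message(text: str, entries: list[LexiconEntry]) -> list[LexiconEntry]:
    """Return the lexicon entries found in a message, for human review."""
    lowered = text.lower()
    return [e for e in entries if e.term in lowered]

hits = flag_message("A post containing example_slur.", lexicon)
print([(h.term, h.severity) for h in hits])  # [('example_slur', 2)]
```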

Is the human factor important in this process?

- The human factor is crucial. It needs to start with human beings creating the bank of words that are actually being used online, not with Google Translate or any other technology. The data then runs through our algorithm, which is transparent, and comes back to human beings, who use that data and information in the best way they possibly can.

You are also developing technologies to hunt disinformation. How successfully can AI identify disinformation without the human factor involved?

- The need for the technology has to come from the humans involved, whether that is civil society, law enforcement or academia. We've developed lots of different tools and techniques to find and detect disinformation. We are part of a Europe-wide network called the European Digital Media Observatory (EDMO), which is set up to identify disinformation. It's such an important part, particularly this year, when 74 countries, almost half the world, are holding elections; the level of disinformation is a great cause for concern and something we are constantly working on. We have to constantly innovate and adapt, because the situation is changing all the time.

You mentioned elections; we are heading for double elections in North Macedonia, a period when the atmosphere among political opponents heats up. Is the European Observatory used in electoral processes?

- We haven't used the EOOH dashboard in the lead-up to elections before; we have other tools and technology that we use. But it can certainly be used. Over the next few months we will be working with Macedonian journalists and academics who will have access to the dashboard and can use it to develop their own research questions. So, they can look at different platforms in Macedonia, see where toxic conversations are taking place and analyze them using our scraping tool.

You are working with local linguists in the Macedonian and Albanian languages. Can you tell us more about these activities?

- So, coming back to the human in the loop that we mentioned at the beginning, it's essential to work with local native speakers of the languages used in Macedonia. We are working with four linguists who have been putting together a lexicon of over six thousand words, ranging from offensive to threatening and highly violent language used online. Their help in building the lexicon means that we can automatically detect and quantify the problems in and around hate speech.
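As a rough illustration of that "detect and quantify" step, the sketch below scans a batch of messages against such a lexicon and aggregates hits by severity. The lexicon contents, function names and aggregation choice are assumptions for illustration, not the project's actual tooling.

```python
# Hypothetical sketch of quantifying hate speech with a finished lexicon:
# scan messages for known terms and aggregate hits by severity level.
from collections import Counter

def quantify(messages: list[str], lexicon: dict[str, int]) -> Counter:
    """Count how many lexicon hits occur at each severity level (0-4)."""
    counts: Counter = Counter()
    for message in messages:
        lowered = message.lower()
        for term, severity in lexicon.items():
            if term in lowered:
                counts[severity] += 1
    return counts

# Placeholder terms stand in for real annotator-supplied entries.
sample_lexicon = {"placeholder_insult": 1, "placeholder_threat": 4}
print(quantify(["A post with a placeholder_insult in it."], sample_lexicon))
# Counter({1: 1})
```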

Is this an ongoing process, or have you collected all the words?

- We have reached six thousand words and we're still collecting; we will wrap up the annotation part at the end of March. We've had a fantastic collaboration with ICS, which developed the project, and with linguists at three universities in Macedonia as well.

Can AI create hate speech?

- Yes, of course it can, and I'm sure it does, in whatever format it comes. That's part of the reason why we do what we do: to try to get to the source of artificially generated or human-generated hate and to do something about it, whether that means informing law enforcement if there are threats of violence, or talking to civil society about how they can do more, because I think civil society in particular has a great role to play in taking the data, analyzing it and seeing what needs to be done about the situation on social media. And we're really delighted that Macedonian civil society is taking this opportunity.

You are a professional, but how can an ordinary citizen recognize hate speech that is created not by humans but by AI?

- It's hard to say universally, but there are red flags. For example, grammatical mistakes in a text suggest that a bot may be behind it, perhaps using translation software just to spread toxic information or to disrupt society or democracy, so that's one red flag to look out for. There are also generic signs: accounts used to stir things up or to disinform often have very generic handles, sometimes followed by a number, or a very fake-looking profile picture. These are all things you should be aware of, and I would really recommend reporting anyone you think is acting like a bot, sowing dissent or causing disruption to the platform; they have a duty to do something about it. It depends on the platform, some do more than others, but just last week the European Union brought in a law called the Digital Services Act that will hold social media platforms to account.
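One of those red flags, a generic handle followed by a number, is simple to check mechanically. The pattern and digit threshold in this sketch are illustrative assumptions, not an actual detection rule used by Textgain or any platform.

```python
# Toy check for one red flag from the interview: a generic handle
# followed by a run of digits. Pattern and threshold are assumed.
import re

GENERIC_HANDLE = re.compile(r"^@?[A-Za-z]+\d{4,}$")  # e.g. @user48291034

def looks_bot_like(handle: str) -> bool:
    """Flag handles matching the generic name-plus-numbers pattern."""
    return bool(GENERIC_HANDLE.match(handle))

print(looks_bot_like("@user48291034"))  # True
print(looks_bot_like("@newsfan"))       # False
```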

Ana Cvetkovska

Photo and video: MIA