A few weeks ago, Twitter admitted that its image-cropping algorithm carried racial and gender bias. To become more inclusive, the social network turned to its users for help. Here is what happened.
In 2018, Twitter introduced a new algorithm that automatically crops images, mainly by reducing their size, to avoid cluttering Twitter threads. Last May, however, under pressure from its users, the social network had to admit that this algorithm had a racial bias favoring white people. According to an internal study, the algorithm preferred white faces (by 4%) over other skin colors, and women (by 8%) over men.
Likewise, text embedded in a photo (as in memes) proved decisive for the algorithm, which favored English over other languages such as Arabic.
Following this media outcry, Twitter decided to involve its community in fixing the flawed algorithm by announcing a competition, the Algorithmic Bias Bounty Challenge. The concept is very simple: help Twitter identify the racial and gender biases in its algorithm, with rewards of up to 3,500 euros for first prize.
Rumman Chowdhury, director of the machine learning ethics, transparency and accountability team behind the competition, explained that the platform cannot overcome these challenges on its own, and that the giants of the web often realize only later that their algorithms can harm one or more communities. Above all, Chowdhury does not hide her ambition:
With this challenge, we aim to set a precedent at Twitter, and in the industry, for the proactive and collective identification of harms caused inadvertently by algorithms.
To take part in the challenge, participants had to build their entry from the cropping model and code that Twitter shared. The results were announced on Sunday, August 8, during the DEF CON AI Village workshop.
The winner of the competition is a Swiss student, Bogdan Kulynych, who used artificial intelligence to generate several realistic variants of the same face, varying attributes such as age, skin tone, gender and weight, thereby identifying which traits the Twitter algorithm preferred.
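Kulynych's approach can be sketched in generic terms: generate controlled variants of one face that differ in a single attribute, score each with the saliency model, and see which variant the model prefers. A minimal illustration, assuming a placeholder `saliency_score` stand-in (the real experiment used Twitter's released saliency model and GAN-generated faces, neither of which is reproduced here):

```python
# Hypothetical sketch of the bias-probing methodology. `saliency_score` is a
# dummy stand-in: it returns the mean pixel intensity of a toy "image",
# whereas the real model returns a saliency map over the image.

def saliency_score(image):
    # Placeholder scorer: average intensity of a flat list of pixel values.
    return sum(image) / len(image)

def preferred_variant(variants):
    """Return the label of the variant the scorer ranks highest."""
    return max(variants, key=lambda label: saliency_score(variants[label]))

# Toy "images": flat lists of pixel intensities, one per face variant.
variants = {
    "variant_a": [220, 210, 200],
    "variant_b": [90, 80, 70],
}
print(preferred_variant(variants))  # prints "variant_a" with this dummy scorer
```

Repeating this comparison over many attribute sweeps (age, skin tone, etc.) is what lets differences in the model's preferences be attributed to specific variables.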
To say the least, Twitter is using a very effective communication technique: by involving its users in debugging its algorithm, the company shields itself from future scandals. Indeed, the dissatisfied user who denounces the algorithm's racial and gender biases becomes an ally of Twitter, which openly strives for an algorithm that is as ethical as possible.