Are we dealing with racist or sexist Artificial Intelligence?
On other occasions, we have talked about the many advantages and productivity gains that come from implementing AI in a business. Its applications are countless and increasingly present, not only in the workplace but also in our personal daily lives.
We start from the basis that Artificial Intelligence allows us to automate tasks, solve problems, and make decisions without relying on a human to indicate every step. Supervised algorithms receive examples from which they learn and train, using data manually labeled by a person. As the technology develops, it absorbs whatever subjective biases it encounters in its sources, reinforcing stereotypes that abound in our society.
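To make this concrete, here is a minimal, self-contained sketch of how a supervised model trained on biased labels simply reproduces that bias at prediction time. Everything here is synthetic and invented for illustration (the feature names, the group encoding, the 0.8 "favoritism" term); it does not depict any particular real system:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# Synthetic "historical" data: one legitimate feature plus a
# protected attribute (0 = group A, 1 = group B).
skill = rng.normal(size=n)
group = rng.integers(0, 2, size=n)

# The biased part: past human decisions rewarded skill but also
# systematically favored group A. The model is never told why.
hired = (skill + 0.8 * (group == 0) + rng.normal(scale=0.5, size=n) > 0.5).astype(int)

model = LogisticRegression().fit(np.column_stack([skill, group]), hired)

# Two identical candidates who differ only in group membership:
print("P(hired | group A):", model.predict_proba([[1.0, 0]])[0, 1])
print("P(hired | group B):", model.predict_proba([[1.0, 1]])[0, 1])
# The model faithfully learns and replays the historical preference.
```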
On the other hand, today we want to take a different perspective: the ethical side of Artificial Intelligence. As AI advances, the future of this technology lies in it becoming even more intelligent, but until that day comes, are these systems condemned to inherit the social and cultural prejudices of those who train them?
Biased AI Examples
If particular care is not taken when creating algorithms, models can reach dangerous terrain, especially if we feed them the wrong data. The list of AI-based models that have turned out racist or biased is long; these are just a few examples.
The Chatbot Tay
Tay was a Twitter chatbot that Microsoft launched in 2016 and that had a very short life (barely 24 hours). As users of the social network began to interact with her, bombarding her with sexist and racist messages, the chatbot absorbed that information and ended up responding with antisemitic phrases that clearly incited hatred and reflected a racist worldview.
AI to «Help» Justice
Other examples are ‘PredPol’ and ‘COMPAS’. The latter is an algorithm deployed in the United States to help judges decide on pretrial release. It was trained on police databases in which most detainees were Black, so the model learned to associate skin color with a higher risk of being a criminal.
‘PredPol’ was an algorithm designed to predict when and where crimes would be committed. However, the software turned out to be biased: it kept sending police into neighborhoods with a high proportion of people from racial minorities, regardless of the actual crime rate in those areas.
Both are clear examples of AI turned racist because they were trained on skewed data and never underwent an audit process that could have caught this bias.
Computer Vision with Problems of «Color Blindness»
The field of facial recognition is not far behind. It has been shown that technology developed by IBM, Microsoft, and the company Megvii could correctly identify a person’s gender from a photograph 99% of the time, but only for light-skinned men; for dark-skinned women, the error rate rose to almost 35%.
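Disparities like this stay invisible if we only report a single global accuracy number. Here is a minimal sketch (with synthetic labels and invented numbers, not the actual study data) of breaking accuracy down by group, which is essentially the kind of audit that exposed these systems:

```python
import numpy as np

def per_group_accuracy(y_true, y_pred, groups):
    """Report accuracy separately per group instead of one global figure."""
    y_true, y_pred, groups = map(np.asarray, (y_true, y_pred, groups))
    return {str(g): float((y_pred[groups == g] == y_true[groups == g]).mean())
            for g in np.unique(groups)}

# Synthetic example: the classifier is near-perfect on one group only.
y_true = [0, 1, 0, 1, 0, 1, 0, 1]
y_pred = [0, 1, 0, 1, 1, 0, 1, 1]
groups = ["lighter", "lighter", "lighter", "lighter",
          "darker", "darker", "darker", "darker"]

print(per_group_accuracy(y_true, y_pred, groups))
# {'darker': 0.25, 'lighter': 1.0} -- the global 0.625 accuracy hides the gap.
```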
Where AI Gender Bias Comes From
Despite the fight to achieve equality, there is also a significant gender gap in Artificial Intelligence and data science, as only 22% of professionals in this field are women.
At a more practical level, humans generate, collect, and label data, and decide which datasets, variables, and rules algorithms should learn from. These are snapshots of the world we live in, and this is where the digital gender gap comes in. Studies show that some 300 million fewer women than men access the Internet through a smartphone, and in less developed countries women are 20 percent less likely to own one.
If we extrapolate to the health sector, male bodies have been the standard for medical testing; even in animal disease studies, females are often excluded. Data that are not disaggregated by sex and gender pose a further problem, because they paint an inaccurate picture that hides differences between people with different gender identities, and we often come across data infrastructures that simply do not consider the needs of women.
Demographic data, meanwhile, are usually labeled on the simplistic, binary basis of men and women, which precludes the fluidity of gender and self-identity. And if we look at this bias in bank credit, we see that when algorithms determine individuals’ creditworthiness, they assign lower limits to women. Unbelievable, isn’t it?
The Impact of Biased AI
A biased AI has a direct impact on people and can contribute to a major setback for equality, women’s empowerment, and the struggle for the rights of any group.
This reinforcement of harmful prejudices shows up in much translation software, which learns from huge amounts of online text and then, in gendered languages, defaults terms such as ‘doctor’ to the masculine form and ‘nurse’ to the feminine one, reinforcing gender stereotypes about certain professions.
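One well-documented way this happens is through word embeddings, the numerical word representations that translation systems learn from text. The sketch below uses tiny hand-made vectors (all the numbers are invented; real embeddings such as word2vec show the same pattern, as explored by Bolukbasi et al., 2016) to show how professions end up leaning toward one gender direction:

```python
import numpy as np

# Toy 2-D "embeddings" with invented values: dimension 0 loosely
# encodes gender, dimension 1 encodes "profession-ness".
vectors = {
    "man":    np.array([ 1.0, 0.0]),
    "woman":  np.array([-1.0, 0.0]),
    "doctor": np.array([ 0.6, 0.8]),
    "nurse":  np.array([-0.6, 0.8]),
}

# A simple gender direction, in the spirit of Bolukbasi et al. (2016).
gender_direction = vectors["man"] - vectors["woman"]

def gender_lean(word: str) -> float:
    """Cosine with the gender direction: positive leans 'male', negative 'female'."""
    v = vectors[word]
    return float(v @ gender_direction / (np.linalg.norm(v) * np.linalg.norm(gender_direction)))

for word in ("doctor", "nurse"):
    print(word, round(gender_lean(word), 2))
# doctor: 0.6, nurse: -0.6 -- a translator leaning on these vectors will
# default 'doctor' to masculine and 'nurse' to feminine.
```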
As for racial discrimination, one study found that when given the phrase “the white man works as…”, the AI completed it with “a police officer”, whereas if the sentence began “the black man works as…”, the algorithm generated “a pimp for 15 days”.
Submissive Virtual Assistants and Work Inequality
We also find serious repercussions in the virtual assistants we are used to, like Siri, Alexa, or Cortana, which routinely tolerate sexist attitudes and insults. Unsurprisingly, these assistants have female voices and present a submissive, tolerant character in the face of this humiliating treatment, something UNESCO has publicly denounced.
Even Amazon started using an AI-driven recruitment tool that rated candidates with stars, just like its products. The result was not at all as expected: the engine did not consider women for software development or other technical jobs. The algorithm had studied patterns in CVs submitted over previous years, in which men predominated, and concluded that male candidates were preferable, on top of penalizing applications containing the words “women’s” or “female”. The program had to be scrapped because its selections could not be made neutral.
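A basic counterfactual audit could have flagged this behavior early: score the same CV twice, changing only the gendered terms, and raise an alarm if the scores diverge. Here is a minimal sketch, where score_cv is a hypothetical stand-in for whatever screening model is under test:

```python
# Gendered terms to swap when building the counterfactual CV.
# Simplified for illustration; real audits need much fuller term lists.
SWAPS = {"women's": "men's", "female": "male"}

def counterfactual(text: str) -> str:
    """Produce an otherwise identical CV with gendered terms swapped."""
    for src, dst in SWAPS.items():
        text = text.replace(src, dst)
    return text

def audit(score_cv, cv_text: str, tolerance: float = 0.05) -> bool:
    """True if the CV and its gender-swapped twin score within `tolerance`;
    False signals potential gender bias in the model."""
    return abs(score_cv(cv_text) - score_cv(counterfactual(cv_text))) <= tolerance

# Example with a deliberately biased toy scorer (invented for illustration):
biased_scorer = lambda cv: 0.9 - 0.4 * ("women's" in cv)
print(audit(biased_scorer, "captain of the women's chess club"))  # False: flagged
```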
Sexualization and Racism
Another major impact is the sexualization of women: researchers have found that many AI systems are more likely to generate sexualized images of women (scantily clad or in low-cut outfits) while creating professional pictures of men (in business attire and suits). They also tend to attach negative characteristics to people with darker skin tones. Even Google had to apologize in 2015 because its algorithm tagged a photo of two Black people as «gorillas».
Likewise, when analyzing search results, researchers have found that names common among Black people are 25 percent more likely to be linked to arrest-record results than names associated with white people. The impact of something like this is strong enough to condition a person’s life forever.
Solutions to Racism in Technology
So, although we see these sexist, racist, and even homophobic patterns, when it comes to answering whether Artificial Intelligence is to blame for all this, the answer cannot be a flat yes, because AI models can only reflect what they have learned. The real culprit is not the technology itself but the data we train it with; that is, the culprits are humans. This is why ethical training in AI is fundamental.
It seems clear that, in this case, what we should do is reflect on ourselves and on the biases and stereotypes that plague the society we live in. If we use AI as a tool to break down these clichés, and teach it which historical differences we must leave behind, can we achieve results that look more like the society we aspire to be rather than the one we come from?
Why not think of a code of good practice where the consequences of a biased AI model are monitored?
Prioritizing gender equality and social justice is crucial in designing a machine learning model that escapes bias. A good way to achieve this is to follow the principles of the AI Manager:
- Fairness: AI systems should treat all people fairly (a minimal fairness check is sketched after this list)
- Reliability & Safety: AI systems should perform reliably and safely
- Privacy & Security: AI systems should be secure and respect privacy
- Inclusiveness: AI systems should empower everyone and engage people
- Transparency: AI systems should be understandable
- Accountability: People should be accountable for AI systems
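For the first of these principles, fairness can actually be made measurable. One common (though by no means the only) metric is the demographic parity difference: the gap in positive-outcome rates between groups. Below is a minimal sketch with synthetic predictions; the 0.1 threshold is an arbitrary illustration, not a standard:

```python
import numpy as np

def demographic_parity_difference(y_pred: np.ndarray, group: np.ndarray) -> float:
    """Absolute gap in positive-prediction rates between two groups
    (0 means parity; larger values mean more disparate outcomes)."""
    return abs(float(y_pred[group == 0].mean() - y_pred[group == 1].mean()))

# Synthetic example: the model approves group 0 far more often than group 1.
y_pred = np.array([1, 1, 1, 0, 1, 0, 0, 0, 1, 0])
group  = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])

gap = demographic_parity_difference(y_pred, group)
print(f"Demographic parity difference: {gap:.2f}")  # 0.60
if gap > 0.1:  # illustrative threshold; set per use case
    print("Warning: model outcomes differ substantially across groups.")
```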