The Choice by ESCP

Just Like Us, Machines have Biases. But This Can Change.

AI code of ethics

Feature photo by Magele/AdobeStock

Artificial intelligence forces humans to face their own cognitive biases. The way machines are conceived and programmed is subjective, shaped by the experiences and social environment of the programmer.

As AI is now ubiquitous in both our personal and professional lives, it can have a major impact on society, especially concerning discrimination. Since human bias is at the root of this discrimination, one way to reduce it may be to educate the engineers who program AI systems.

A Data-based Revolutionary Technology

What is artificial intelligence and how is it biased? Imagine that a programme must determine, from a photo, whether an animal is a dog or a cat. In the past, developers would code “if”, “when” and “else” statements using criteria such as height, fur, and colour to determine the species in the photo. Today, as Sonia Abecassis Le Lan of IBM explains, artificial intelligence uses sets of data provided to the machine to teach it what a cat or a dog should look like. Given thousands of photos of cats and dogs, the machine learns to distinguish the two species by finding similarities among the photos. It then solves the problem statistically, returning the answer with the highest probability of being correct: it could, for example, conclude that a photo has a 95% probability of showing a dog. This is called machine learning, and it is the foundation of modern artificial intelligence.
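The contrast Abecassis Le Lan draws can be sketched in a few lines of code. What follows is a toy illustration, not the IBM system she describes: instead of hand-written if/else rules, a simple nearest-centroid classifier "learns" the cat/dog boundary from labelled examples and returns a probability-like confidence. The two features (ear pointiness, snout length) and all the numbers are hypothetical, chosen purely for illustration.

```python
import math

# Labelled training "photos", each reduced to two made-up numeric
# features: (ear_pointiness, snout_length).
training = {
    "cat": [(0.90, 0.20), (0.80, 0.30), (0.95, 0.25)],
    "dog": [(0.30, 0.80), (0.20, 0.90), (0.35, 0.75)],
}

def centroid(points):
    """Average point of a list of 2-D feature vectors."""
    xs, ys = zip(*points)
    return (sum(xs) / len(xs), sum(ys) / len(ys))

# "Learning" step: summarise each species by the centre of its examples.
centroids = {label: centroid(pts) for label, pts in training.items()}

def classify(features):
    """Return (label, confidence) based on distance to each centroid."""
    dists = {label: math.dist(features, c) for label, c in centroids.items()}
    total = sum(dists.values())
    # Closer centroid -> higher score; with two classes the scores
    # sum to 1, giving a probability-like confidence.
    scores = {label: 1 - d / total for label, d in dists.items()}
    best = max(scores, key=scores.get)
    return best, scores[best]

label, confidence = classify((0.85, 0.30))  # pointy ears, short snout
print(label, round(confidence, 2))  # prints: cat 0.93
```

The point is the inversion of effort: no rule about ears or snouts was ever written; the boundary emerged from the examples, which is exactly why the choice of examples matters so much.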

An Impressive Yet Flawed Technology

Since machine learning algorithms are fed with large amounts of data, their predictions are subject to bias when that data lacks diversity or relevance. Take a facial-recognition algorithm: if its database does not include enough diverse entries, it may not be able to correctly identify different faces. Jeremy Patrick Schneider from IBM Interactive compares AI to a child who only knows what it has been taught: if the child has lived its whole life in a single room, it will only know how to behave in that room. Going outside for the first time, the child will not know how to react to unfamiliar situations, for lack of experience. The scope of the information fed into the machine therefore matters if it is to handle the many situations that might occur. Another problem with artificial intelligence, according to Schneider, is that the technology has spread so quickly that scholars, politicians and others have not had time to test it thoroughly and rigorously, or to implement regulations restricting its biases.
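The diversity problem can be made concrete with a toy sketch (all data here is hypothetical, and this is not how a real face recogniser works): a nearest-neighbour "recogniser" trained almost entirely on one group scores perfectly on that group while failing the under-represented one, simply because it has so little to compare against.

```python
import math

def nearest(train, query):
    """Predict the identity of the closest training example."""
    return min(train, key=lambda ex: math.dist(ex[0], query))[1]

# 2-D stand-ins for face embeddings. Group A clusters near x=0.2,
# group B near x=0.8; labels are made-up identities.
train = [((0.20, 0.1 * i), f"A{i}") for i in range(9)] \
      + [((0.80, 0.5), "B0")]  # a single group-B example

test_a = [((0.21, 0.1 * i), f"A{i}") for i in range(9)]
test_b = [((0.81, 0.5), "B0"),
          ((0.79, 0.9), "B1")]  # B1 never appeared in training

def accuracy(cases):
    return sum(nearest(train, q) == label for q, label in cases) / len(cases)

print(accuracy(test_a), accuracy(test_b))  # prints: 1.0 0.5
```

Overall accuracy across both groups still looks high (10 of 11), which is why such failures are easy to miss unless performance is measured per group.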

From Humans to Machines: How Our Biases Affect AI

The technology itself is not to blame; the humans who created it are. Every human being has unconscious biases developed over a lifetime: culture, education, and experiences combine to create cognitive biases. Machines merely reproduce them. Mickaël Dell’ova at Ubisoft gives the example of a well-intentioned colleague who wanted to make an inclusive video game by adding a lesbian character as the main protagonist of their triple-A grand strategy game. The colleague thought it would be inclusive to give the lesbian character stereotypical short hair and a Perfecto leather jacket. This cartoonish representation, despite the colleague’s best intentions, reveals a clear unconscious bias. Such biases are mostly accidental, but being aware of them can help reduce their frequency.

If machine bias is ignored, products will be biased as well.

The Changes That Can Be Made

The first improvement would be to feed artificial intelligence more diverse data. Irène Balmès underlines that algorithm designers need to carefully select the data given to AI and clean it of bias through multiple tests and checks. More diverse teams would be a real asset in detecting bias, but team diversification is only one solution. Educating teams through training on ethical matters can also help curb biases. As Balmès says, it is unfortunate that there is not more communication between engineering and the social sciences. Technology has long been examined from a philosophical perspective; with artificial intelligence, it is essential to consider the ethical and political dimensions as well. Finally, to build more diverse teams, the recruiting process should be reviewed, from the job description down to the hiring phase.

What To Remember

Machines have biases. They merely reproduce the cognitive biases that algorithm designers have, which are deeply rooted in our society because of systemic discrimination. But this can change with more diversity and inclusion initiatives in the technology industry. We would like to take this as an opportunity to reflect on our own judgments and ways of thinking, and how they can affect others around us.

The above article has been derived from a webinar organised by LGBT Talents, with the following speakers: Sonia Abecassis Le Lan, Irène Balmès, Jeremy Patrick Schneider and Mickaël Dell’ova.
