Have you ever wondered why artificial intelligence-powered digital assistants like Amazon’s Alexa, Apple’s Siri, and Microsoft’s Cortana are presented as women?
Artificial intelligence (AI) and machine learning (ML) have entered almost every aspect of our lives like an ‘invisible hand’, processing massive amounts of data and making key decisions in real time.
For example, algorithms screen job applicants, decide who gets credit, predict crime and criminality, recommend what we listen to and watch, and determine who sees which ad in the digital world…
Today it is very difficult to find a field in which artificial intelligence is not involved. Some researchers suggest that it amounts to a new type of infrastructure: a strong one that is neither physical nor visible, sitting at the center of the decision processes of social relations, organizational practices, and everyday actions…
Now back to the first question! Why are these digital assistants gendered with female names? The answer is simple and dramatic: gender stereotypes that cast women in service and assistant roles, carried over into the digital world!
Artificial intelligence and ‘algorithmic bias’!
Technology makes our lives easier and brings efficiency at many points, and artificial intelligence applications are no exception; they offer numerous benefits.
But there is another side of the issue that needs to be discussed. These technologies grow by feeding on society’s patterns of relationships, value judgments, and habits. In other words, deep-rooted prejudices can gain a solid place within them: discrimination and inequality can become ingrained in these emerging technologies as ‘algorithmic bias’.
As a result, not everyone benefits equally from the advantages technology offers. Algorithms may produce discriminatory results against certain categories of individuals, usually minorities and women, and this algorithmic bias can further fuel existing social inequalities, especially those of race and gender.
There are many examples of this being discussed around the world. Amazon discontinued its AI hiring tool after it was found to be sexist. Similarly, Goldman Sachs was investigated by regulators over the Apple credit card algorithm, which allegedly discriminated against women by giving men higher credit lines.
The issue is critical not only for individuals but also for every company that embeds artificial intelligence into its business model, especially in terms of reputational risk. Companies that entrust their decision processes to artificial intelligence must prepare for risks they have never managed before.
Code of ethics in AI
Let’s continue with new questions. Who decides how this set of values and prejudices is mirrored in AI? How is it controlled? Who is held accountable? Where do we stand on accountability, transparency, and fair auditing?
When people do something wrong, there are consequences before authority and society; at the very least, shame and guilt arise. (Or so we morally expect.) Although we treat justice and equality as concrete, universal concepts, algorithms may not be able to make decisions grounded in them. Or, at the other extreme, algorithms could turn into a far more autocratic compass with a mission of over-control. So on the axis of values, the subject is not so easy!
Artificial intelligence works by ‘learning’ from datasets: algorithms are built to mine data, analyze it, identify patterns, and act on them. Datasets can come from any number of sources: photographs, health records, government data, or social media profiles.
Social prejudices and inequality are often embedded in such data, and artificial intelligence will not uphold social values such as justice unless they are programmed into it directly. So if an AI recruitment system relies on historical hiring data in which very few women were hired, the algorithm will continue that pattern.
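This “bias in, bias out” mechanism can be illustrated with a minimal sketch. The data below is entirely hypothetical, and the “model” is deliberately simplistic (it just estimates hire rates per group from past records), but it shows how a system that learns from a skewed history simply reproduces it:

```python
# Minimal sketch (hypothetical data): a model trained on biased
# historical hiring records reproduces the past pattern.
from collections import defaultdict

# Hypothetical history of (gender, hired) records in which men were
# hired far more often than women.
history = ([("male", 1)] * 80 + [("male", 0)] * 20
           + [("female", 1)] * 10 + [("female", 0)] * 90)

# "Train": estimate the hire rate per group straight from the data.
counts = defaultdict(lambda: [0, 0])  # group -> [hired, total]
for gender, hired in history:
    counts[gender][0] += hired
    counts[gender][1] += 1

def predicted_hire_rate(gender):
    hired, total = counts[gender]
    return hired / total

print(predicted_hire_rate("male"))    # 0.8 -- bias in, bias out
print(predicted_hire_rate("female"))  # 0.1
```

Real recruitment systems are far more complex, but the core problem is the same: the model has no notion of fairness unless fairness is made an explicit part of its design.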
On the other hand, data can also be biased through omission. Datasets can bypass entire populations that have no internet history, social media presence, credit card history, or electronic health records, leading to skewed or biased results.
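Omission bias is easy to demonstrate with toy numbers. In this hypothetical sketch, part of the population is digitally invisible, so any statistic computed from the collected dataset misrepresents the whole:

```python
# Hypothetical sketch: a dataset that omits people without digital
# records produces a skewed estimate of the whole population.
population = (
    [{"has_records": True,  "income": 60}] * 70   # digitally visible
    + [{"has_records": False, "income": 30}] * 30  # invisible to the dataset
)

true_mean = sum(p["income"] for p in population) / len(population)

# Omission in action: only the digitally visible enter the dataset.
observed = [p for p in population if p["has_records"]]
observed_mean = sum(p["income"] for p in observed) / len(observed)

print(true_mean)      # 51.0
print(observed_mean)  # 60.0 -- the missing group never enters the data
```

Any model trained on the observed sample inherits this distortion, even though no one deliberately introduced a prejudice.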
This debate is not new, of course. In 2016, the World Economic Forum listed “humanity” and “equality” among the ethical issues of artificial intelligence, and UNESCO has published a code of ethics for the digital world.
Today, after reviewing all the ethical elements, we can list the general principles that deserve emphasis, especially for artificial intelligence: transparency, justice and fairness, non-maleficence, responsibility, privacy, beneficence, freedom and autonomy, trust, sustainability, dignity, and solidarity.
Artificial intelligence and a sustainable, healthy future!
Technologies such as artificial intelligence will offer solutions at many points, especially on climate and environmental issues. But a sustainable and healthy future demands that humanity’s rooted issues, such as justice, equality, and freedom, be handled with the same care and sensitivity.
As artificial intelligence gradually pervades our lives, we must address these problems. It will be essential to direct artificial intelligence toward solving these fundamental problems rather than fueling them.