Machine learning, as the name suggests, is the process of training machines to respond to certain stimuli by exposing them to various scenarios with certain algorithms and recording their reactions. In simple words, humans once performed experiments on animals such as rats, injecting them with drugs to record their effects; owing to the “legalities”, humans have now shifted the focus of their cruel experiments to training non-living machines to do their bidding.
If humans are training machines, this certainly raises the question of whether the machines will be biased. Today’s racial conflicts stem from the biases embedded in human cultures. If those who train machines are influenced by these cultures, it is safe to assume the same biases will be reproduced in the machines.
In the future, machines are going to hold important positions in society, such as job recruitment algorithms, service robots and emergency-line operators. In these kinds of roles it is crucial for machines to remain neutral and not be influenced by factors such as gender, colour or ethnicity.
Currently, 76% of all computer-science-related job positions are occupied by males and only 24% by females. This means machines are interacting more with males than with females. Beyond gender, human behaviour also varies by ethnicity and race, and these variations are reflected in the decisions made by algorithms and machines.
When machines have to make important decisions, such as who gets a loan or admission into college, instead of filtering candidates by financial standing or academic skill they can end up filtering them by ethnicity or by zip or postal code, which is an example of racial prejudice. Even small decisions, such as whose order gets served first, can be decided by race, since the outcome depends on how the manufacturer trained the robot.
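The zip-code example above can be sketched in code. The following is a minimal, entirely hypothetical simulation (the groups, zip ranges and approval rule are all invented for illustration): because residential segregation makes zip code correlate with group membership, a “group-blind” rule that approves applicants only from certain zip codes still ends up approving one group far more often than the other.

```python
import random

random.seed(0)

def make_applicants(n=10_000):
    """Generate synthetic applicants whose zip code correlates with group.

    Hypothetical setup: group A lives mostly in zips 0-4, group B mostly
    in zips 5-9, mimicking residential segregation.
    """
    applicants = []
    for _ in range(n):
        group = random.choice(["A", "B"])
        if group == "A":
            zip_code = random.randint(0, 4) if random.random() < 0.9 else random.randint(5, 9)
        else:
            zip_code = random.randint(5, 9) if random.random() < 0.9 else random.randint(0, 4)
        applicants.append((group, zip_code))
    return applicants

def approve(zip_code):
    """A 'group-blind' rule learned from biased history: approve zips 0-4 only."""
    return zip_code <= 4

applicants = make_applicants()
rates = {}
for g in ("A", "B"):
    zips = [z for grp, z in applicants if grp == g]
    rates[g] = sum(approve(z) for z in zips) / len(zips)

# The rule never looks at group, yet approval rates diverge sharply
# (roughly 90% for group A versus roughly 10% for group B).
print(rates)
```

Even though `group` is never an input to `approve`, the zip code acts as a proxy for it, which is exactly how a model trained on biased historical data can discriminate while appearing neutral.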
You might be thinking that what I am describing is not possible, but in reality it has already happened. Microsoft developed a Twitter chatbot called “Tay”, which had an active online presence for only a few hours before it was taken down. Tay was designed to talk and behave like an average US teenager, and was trained on sources such as Urban Dictionary so that it could use slang and abbreviations in its tweets. Within minutes of conversing with the online world, Tay became biased and posted several opinionated tweets: one said that feminists should burn in hell, and another supported Hitler’s ideologies. Within hours, Microsoft had to take Tay off the internet.
It is safe to assume that, apart from machine and AI dominance, machine bias is going to be another threat to society as we know it. Simple tasks such as allocating jobs, and even crucial decisions such as who gets emergency services first, will be subject to the discrimination of machines that have taken after human behaviour.
By OMOTEC Student – Sidharth Jain, JNIS IBDP 2022