Marco Eissa

Unpacking unethical algorithms, biases, and more

In an earlier article, we explained why models fail in production. Today, we take a deep dive into what happens when they do, and unpack the human cost of it all – especially in the Middle East.


Reuters/Morteza Nikoubazl, Unsplash/Markus Spiske, Drea Sullivan

This year, an astonishing 96% of leading companies reported that their Big Data and AI efforts were yielding results, or at least had a pilot in production (Big Data and AI Executive Survey, 2021). Among these companies, Financial Services and Healthcare rank in the top three industries adopting AI (AI Adoption in Enterprises, 2021).


Adopting AI means better services and operations, which directly benefits both organizations and their customers, but the cost of something going wrong differs drastically between the two parties. For an organization, the AI optimizes a success metric, which usually pushes decisions towards lower risk and more confident outcomes. While this makes sense from the organization's point of view, a rejected hire or loan can be a life-changing event for an individual.



Having no recourse


In 2016, The Verge investigated an algorithm used in more than half of US states and in many countries in "What Happens When an Algorithm Cuts Your Healthcare". The algorithm, which decides how much healthcare people should get, drastically reduced the care hours of some people without giving any clear reason why. For example, Tammy Dobbs, a woman with cerebral palsy who cannot move without a nurse's assistance, had her weekly aid hours cut to 32 from the 56 recommended by a human assessor's visit. Many others reported needing to be hospitalized because their home care was no longer adequate.


The human factor was removed from the equation entirely: a nurse would simply input the case's data, and the computer would decree how many hours the person would receive. There were no recourse measures to be taken. An attempt to fight the verdict in court ended in disappointment: there was no way to effectively challenge the system, since no one knew how the inputs factored into its final verdict.


© The Verge. Artist: William Joel

Upon looking under the hood, it was found that of the long list of factors the assessor asked about, only 60 factored into the algorithm's verdict. A slight change in score on a few of these items could cut a dozen care hours.



This showed that a system with no published standards and no clear explanation of how it works under the hood gives the aggrieved parties no chance to challenge its verdict.




This problem is aggravated by bureaucracy: much of the government-collected data either had no correction process at all, or one so tedious and slow that an algorithm's decision could take months or years to correct.


Danah Boyd commented: “Bureaucracy has often been used to shift or evade responsibility…. Today’s algorithmic systems are extending bureaucracy.”



Feedback loops


Without proper design, algorithms can create feedback loops, where a prediction made by the algorithm reinforces actions taken in the real world. This amplifies the very features that drove the initial prediction, leading to ever more certain predictions in the same direction, creating a loop that is very hard to break.


Let's take a look at credit scoring, for example. Financial institutions use their customers' spending and payment behavior to assess how financially trustworthy they are. The process has become almost entirely data-driven: your financial data, along with many other factors, is piped into an algorithm that scores your "trustworthiness" to the institution. This trustworthiness score then controls all your future dealings with the institution, such as getting a loan or setting your credit card limit.


A few bad financial decisions that leave someone late on multiple payments, or a case where your financial information is misused by someone supposedly trustworthy, can plummet your credit score. That decision can make recovery much harder, as your credit score may affect your job prospects, your ability to buy a car or property, or your chances of getting a loan for a business. This can lead to a downward spiral in which the algorithm's decision is reinforced again and again, trapping you in a cycle you cannot break. The effect is especially severe for people with low income, for whom absorbing increased costs out of savings is simply not an option.
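The dynamic can be made concrete with a minimal, hypothetical simulation. Every number here (the score range, the missed-payment rule, the penalties) is invented for illustration and does not come from any real scoring model; the point is only the shape of the loop: a lower score makes a missed payment more likely, and each missed payment lowers the score further.

```python
import random

def simulate_score(initial_score, n_months, seed=0):
    """Toy feedback loop: a lower score raises the chance of a missed
    payment, and each missed payment lowers the score further."""
    random.seed(seed)
    score = initial_score
    history = [score]
    for _ in range(n_months):
        # Missed-payment probability grows as the score falls (invented rule).
        p_miss = min(1.0, max(0.0, (700 - score) / 400))
        if random.random() < p_miss:
            score -= 30   # penalty for a missed payment
        else:
            score += 5    # slow recovery for an on-time payment
        score = min(850, max(300, score))  # clamp to the toy score range
        history.append(score)
    return history

strong_start = simulate_score(750, 60)  # never slips: p_miss stays at zero
weak_start = simulate_score(550, 60)    # one miss begets more misses
```

Under these toy rules, someone starting above the threshold climbs steadily, while someone starting below it tends to drift toward the floor: the same mechanism, run from two different starting points, produces diverging fates.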


Usually, institutions don't publicly share how their models and decisions work, or what data is fed to the model. If data beyond financial transaction history is used, the door opens to many biases, where one race or one area of residence receives a lower credit score: the individual's behavior is no longer individual, but stereotyped by the collective behavior of a group. Needless to say, none of this is fair.



Individualism and Aggregation bias


Predictive models assume that people with similar traits will do the same thing, but is this a fair assumption?


While such models can accurately predict aggregate or group behavior, for an individual the equation is different: it depends solely on that person's own history. However, even this can be argued to be invalid; one example I personally love that explains this bias comes from Martin Kleppmann, who says:


“Much data is statistical in nature, which means that even if the probability distribution on the whole is correct, individual cases may well be wrong. For example, if the average life expectancy in your country is 80 years, that doesn’t mean you’re expected to drop dead on your 80th birthday. From the average and the probability distribution, you can’t say much about the age to which one particular person will live. Similarly, the output of a prediction system is probabilistic and may well be wrong in individual cases.”

In short, features that encode group behavior lead to predictions that are unjustifiably generalized and contradict the concept of human individualism.


Systematic Biases


Biases can creep into data in many forms, many of which go unnoticed, and a machine learning model will learn them all the same. One clear source of bias is that people, companies, and societies are themselves biased; no matter the progress achieved in counteracting it, some biases always manage to slip in.


Consider the hiring processes of companies that habitually select candidates from specific areas of residence or specific universities. An algorithm trained on that history to filter potential applicants will be biased, because the entire data history is biased.



CW: Another shocking and infamous example is when Google's image auto-categorization classified a user's photo of his Black friends as "gorillas". Google's fix arguably made things worse: it simply banned "gorillas" from the allowed categories. The exact same bias re-emerged on Facebook a few days ago, when Black people were tagged as "primates".


Many publicly available datasets commonly used to train machine learning models have biases embedded in them, whether as skewed class distributions, hidden correlations, or other forms. For example, most available datasets of human faces are biased toward white people (FairFace). This seemingly innocent distribution imbalance easily leads to gender misclassification for other races: women from Middle Eastern, Black, and Latino communities are more likely to be classified as men. We encountered a similar case while working on one of our products, Azka Vision, where veiled women were misclassified as men, a result of bias embedded in international datasets where women are mostly not veiled.
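One simple way to surface this kind of imbalance is to break a model's error rate down by demographic group instead of reporting a single aggregate number. The sketch below is generic: the record format and group labels are synthetic, not taken from any particular dataset or product.

```python
from collections import defaultdict

def error_rate_by_group(records):
    """Misclassification rate per group, from (group, true, predicted) tuples."""
    totals = defaultdict(int)
    errors = defaultdict(int)
    for group, true_label, predicted in records:
        totals[group] += 1
        if predicted != true_label:
            errors[group] += 1
    return {g: errors[g] / totals[g] for g in totals}

# Tiny synthetic example: the model errs on group B twice as often as on A.
records = [
    ("A", "female", "female"), ("A", "female", "female"),
    ("A", "male", "male"),     ("A", "female", "male"),
    ("B", "female", "male"),   ("B", "female", "male"),
    ("B", "male", "male"),     ("B", "female", "female"),
]
rates = error_rate_by_group(records)  # {"A": 0.25, "B": 0.5}
```

An aggregate accuracy of 62.5% would hide exactly the disparity this per-group breakdown exposes.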

Fig.1: FairFace: Face Attribute Dataset for Balanced Race, Gender, and Age

As a solution, FairFace introduced a simple rebalancing of class representation, which mitigated the issue (Fig. 1). More examples can be observed in machine translation, where translating non-gendered languages into gendered ones reproduces stereotyped gender roles: a doctor becomes "he", while a nurse or a teacher becomes "she". Another appears in salary estimation, where women are predicted to earn less than men for the same role.
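In toy form, the rebalancing idea can be sketched as oversampling under-represented groups until every group matches the largest one. This is a generic illustration of class rebalancing, not FairFace's actual pipeline, and the skewed "dataset" below is invented:

```python
import random
from collections import Counter

def rebalance(samples, group_of, seed=0):
    """Oversample minority groups until all groups are equally represented."""
    random.seed(seed)
    by_group = {}
    for s in samples:
        by_group.setdefault(group_of(s), []).append(s)
    target = max(len(members) for members in by_group.values())
    balanced = []
    for members in by_group.values():
        balanced.extend(members)
        # Draw extra samples (with replacement) to reach the target size.
        balanced.extend(random.choices(members, k=target - len(members)))
    return balanced

data = ["w"] * 80 + ["m"] * 15 + ["b"] * 5   # heavily skewed toy dataset
balanced = rebalance(data, group_of=lambda s: s)
counts = Counter(balanced)                   # every group now has 80 samples
```

Oversampling is the crudest fix; collecting more data for under-represented groups, as FairFace did, is the stronger one.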

These data biases influenced one of the most infamous language models: GPT-3. The model was trained on text from the internet, Wikipedia, and a corpus of books. The GPT-3 publications observed that some religions are associated with negative adjectives; the internet's islamophobia, for example, transferred straight into the model, making "terrorism" one of the words most associated with Islam. In the paper "Persistent Anti-Muslim Bias in Large Language Models," the authors explore how language models simply learned the internet's islamophobia and projected it onto most text generated from prompts containing the word "Muslims".
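A crude way to probe such associations, whether in a training corpus or in a model's generated text, is to count which words co-occur near a keyword. This is only a rough proxy sketched on an invented two-sentence corpus, not the methodology used in the papers above:

```python
import re
from collections import Counter

def top_cooccurring(texts, keyword, window=5, n=3):
    """Return the n words that most often appear within `window` tokens
    of `keyword` across a list of texts."""
    counts = Counter()
    for text in texts:
        tokens = re.findall(r"[a-z']+", text.lower())
        for i, tok in enumerate(tokens):
            if tok == keyword:
                lo, hi = max(0, i - window), i + window + 1
                counts.update(t for j, t in enumerate(tokens[lo:hi], start=lo)
                              if j != i)  # skip the keyword itself
    return counts.most_common(n)

corpus = [
    "the doctor said he would help",
    "the doctor said he was tired",
]
top_words = top_cooccurring(corpus, "doctor", window=3, n=5)
```

Run over a large corpus or a batch of model generations, the same count makes skewed associations, such as the ones documented around "Muslims", visible at a glance.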


Fig.2: T.B Brown, 2020

Fig.3: A.Abid, 2021

These ethical issues are but a few of the many that can arise when putting a machine learning model into production, with consequences ranging from simply shutting the model down all the way to a major lawsuit. Algorithms also make responsibility easy to evade, since there is simply no one to blame.


Hopefully, shedding light on these major ethical concerns can help raise awareness and shift responsibility to the people developing these models, prompting them to actually do the right thing: take all possible precautions, test for edge cases and biases, allow for model interpretability, monitor for instances of possible unfair judgement, and open a channel through which a human can enter the loop and correct course, catching problems and mitigating them before serious damage is done.




References:


Abid, Abubakar & Farooqi, Maheen & Zou, James (2021). "Persistent Anti-Muslim Bias in Large Language Models."


Brown, Tom & Mann, Benjamin & Ryder, Nick & Subbiah, Melanie & Kaplan, Jared & Dhariwal, Prafulla & Neelakantan, Arvind & Shyam, Pranav & Sastry, Girish & Askell, Amanda & Agarwal, Sandhini & Herbert-Voss, Ariel & Krueger, Gretchen & Henighan, Tom & Child, Rewon & Ramesh, Aditya & Ziegler, Daniel & Wu, Jeffrey & Winter, Clemens & Amodei, Dario (2020). "Language Models are Few-Shot Learners."


Karkkainen, K., & Joo, J. (2021). "FairFace: Face Attribute Dataset for Balanced Race, Gender, and Age for Bias Measurement and Mitigation." 2021 IEEE Winter Conference on Applications of Computer Vision (WACV), 2021, pp. 1547-1557, doi:10.1109/WACV48630.2021.00159.


Loukides, Mike (2021). "AI Adoption in the Enterprise 2021", O'Reilly.


NewVantage Partners, 2021, Big Data and AI Executive Survey.


Usmani, Azman, (2021). "Just Three Sectors Took Home 75% Of Venture Capital Funding In 2020", Bloomberg.


Vincent, James, (2018). "Google ‘fixed’ its racist algorithm by removing gorillas from its image-labeling tech", The Verge.







