What Can We Do About Bias in ML/AI Systems?

By Steven Holt on July 7, 2021 (Last Updated on March 21, 2024)


** UPDATED MARCH 2024 **


We’ve been on a big journey together already into the topic of Bias in ML/AI. First, we discussed the intricacies of bias, and the interwoven web that is bias and cognition, both in how we process information ourselves and in how ML/AI Systems process information. Next, we went over many of the ways that different types of bias can arise and grow within ML/AI Systems. Hopefully, by now you’ve also had some time to consider the questions we posed at the end of Part 2. Now, to end our journey together, let us move our focus away from the problems and spend some time discussing some of the possible solutions we can begin working on.


BUILDING BETTER DATA

If we’re being honest with you, this is a lot easier said than done. But focusing our energy on building better training data is likely one of the most practical technical solutions available to us at this time. As we’ve already discussed, there are many ways that the training data we use can impact the outcomes that our ML/AI Systems resolve to, and bias towards.

Too little data leaves the ML/AI crippled right from the get-go. Too much data, and the ML/AI is increasingly likely to find itself locked onto paths that consistently produce results, but results that we may not consider desirable or optimal.

Finding the right balance in both the amount of data and the type of data is going to be an ongoing challenge.

If we’re being honest with ourselves, it is likely an effort we will never truly “complete”. But every step forward along this path will have cascading benefits to countless downstream efforts, across every industry that has already implemented ML/AI Solutions, or will implement them in the future.

The juice is going to be worth the squeeze.

We can also learn from the previous experiences of other academic fields. In the early days of modern biomedical research, it was discovered that a lack of genetic diversity in the animals used as test models allowed numerous, and in some cases disastrous, biases to creep into the growing body of research. Along the same line, it has been argued that lab animals bred in sterile lab conditions are insufficient models for testing the impacts of diseases or medications on the immune system, given that the unnaturally sterile environment in which they exist, from conception to evaluation, is nothing like the reality they attempt to replicate.

From these two examples, we can see that qualities such as Data Homogeneity and Data Sterility may seem excellent to seek in theory, but clearly have significant downsides in practice. Surely there are countless other examples like this that we can learn from.

The quest for better training data is both a monumental challenge and a necessity for advancing ML/AI technology. Achieving the right balance in data quantity and quality remains an ongoing pursuit. Drawing inspiration from diverse fields, such as biomedical research, we learn the importance of genetic diversity and environmental factors in producing accurate, representative models.

We don’t need to reinvent the wheel here; we just need to be intentional and thoughtful in the practices we apply around the development and use of training data.
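To make this a little more concrete, here’s a minimal sketch of what one intentional practice, auditing how well each group is represented in a training set, might look like. The record format, the `region` attribute, and the 10% threshold are all hypothetical stand-ins for whatever your own data and standards call for.

```python
from collections import Counter

def audit_representation(records, group_key, min_share=0.10):
    """Flag groups whose share of the training data falls below a threshold.

    records   -- list of dicts, one per training example (hypothetical format)
    group_key -- name of the attribute to audit, e.g. "region" (assumed)
    min_share -- minimum acceptable fraction of the dataset per group
    """
    counts = Counter(r[group_key] for r in records)
    total = sum(counts.values())
    report = {}
    for group, n in counts.items():
        share = n / total
        report[group] = (share, share >= min_share)
    return report

# Hypothetical usage: three groups, one badly underrepresented.
data = ([{"region": "north"}] * 500
        + [{"region": "south"}] * 450
        + [{"region": "west"}] * 50)
for group, (share, ok) in audit_representation(data, "region").items():
    print(f"{group}: {share:.1%} {'OK' if ok else 'UNDERREPRESENTED'}")
```

The point isn’t the specific threshold; it’s that representation becomes something we measure and act on before training, rather than something we discover after the system is deployed.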

BUILDING BETTER ALGORITHMS

Given the “Black Box” nature of much of the inner workings of current ML/AI Systems, this is an extremely challenging dimension to tackle. A lot of the time, we just don’t really know what is happening. So, what hope do we have of trying to remove bias from something we can’t even begin to explain in any detail? Thankfully, we have a cool new tool available to us when it comes to approaching previously “unsolvable” problems.

The very tools we’ve been scrutinizing this entire time. Machine Learning and Artificial Intelligence may hold the very keys we need to fix the algorithm problems in Machine Learning and Artificial Intelligence.

I know, I know … this seems like a cop-out! “Just toss the newest buzzword at it!” Please hear me out though! I promise I’m going to make sense of how this is possible.

Again, as most good scientists will do, I’m going to lean on the lessons learned from other disciplines. In this instance, from a seemingly unlikely candidate: Business Marketing, or more specifically, Market Research.

As we’ve discussed at length, humans are notoriously biased. We really can’t help it. It’s just part of how we function.

Yet sometimes we want to try to extract unbiased information from people, or at least as close to unbiased as we can possibly get. Cue one of the most well-known evaluations in the field of Market Research … The Blind Taste Test. In blind taste testing, test designers attempt to remove as many as possible of the unnecessary distinguishing traits that might influence an individual’s responses.

All branding and distinguishing features are hidden from the subjects in order to control for bias, sometimes even going so far as to physically blindfold the subjects, literally rendering them as good as blind for the duration of the experiment.

(You gotta do what you gotta do sometimes, you know?)


So how can we replicate this type of “Blind Testing” for ML/AI Systems to try and remove bias? This is where we can discuss how we can use ML/AI to help us improve ML/AI, in terms of controlling for bias. It’s not going to be as simple as “blinding” an ML/AI system to prevent bias, given that the bias is both internal and iterative.

But what we can do is build forms of verification around the evaluation of results by throwing more than one ML/AI System at the same problem. We can’t blind the system to the problem, but we can blind the systems from seeing each other solve the same problem.

Running multiple identical ML/AI Algorithms against slightly different Training Data, as well as running many slightly different ML/AI Algorithms against the same Training Data, will provide us a wide spread of varying resolutions to the same core problem, assuming we do it many hundreds of millions of times. Then, by comparing and contrasting these various solution-sets, using ML/AI Systems, we can begin to identify the types of bias that specific variations tend to result in. It’s an enormous challenge, but one well suited to the strengths of ML/AI. We just need to be wary of the biases that the Bias Checking Systems might themselves carry.
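To illustrate the core mechanic at a toy scale (nowhere near the hundreds of millions of runs described above), here’s a rough sketch: train the same kind of model on slightly different samples of the data, then flag the inputs where the resulting models can’t agree. The toy “threshold model” and the synthetic data are assumptions purely for illustration; any real learner could stand in their place.

```python
import random

def train_threshold_model(data):
    """Toy 'model': learn a decision threshold as the mean of positive examples.
    Stands in for any real learner; the point is the disagreement check."""
    positives = [x for x, label in data if label == 1]
    return sum(positives) / len(positives) if positives else 0.0

def disagreement_report(data, n_models=25, sample_frac=0.8, seed=0):
    """Train n_models on random subsets of the data, then measure how often
    the models disagree on each input. Low agreement flags inputs (or data
    slices) worth auditing for bias or instability."""
    rng = random.Random(seed)
    k = int(len(data) * sample_frac)
    thresholds = [train_threshold_model(rng.sample(data, k))
                  for _ in range(n_models)]
    report = []
    for x, _ in data:
        votes = sum(1 for t in thresholds if x >= t)
        agreement = max(votes, n_models - votes) / n_models
        report.append((x, agreement))
    return report

# Hypothetical data: inputs near the decision boundary should show low agreement.
data = [(x / 10, 1 if x > 50 else 0) for x in range(100)]
for x, agreement in disagreement_report(data):
    if agreement < 0.9:  # models can't agree -> candidate zone to audit
        print(f"input {x:.1f}: only {agreement:.0%} agreement across models")
```

Inputs with low agreement mark the regions where small variations in training data flip the outcome: exactly the places where bias, instability, or underrepresentation deserve a closer look.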

Yeesh. We sure have our work cut out for us on this one.

BIOLOGICAL CIRCUIT BREAKERS

A much more practical option to prevent the possible undesirable outcomes from the biases in ML/AI Systems is to implement “Biological Circuit Breakers” into the system. You know, basically just put some humans in between the decision and the corresponding action that would follow. It’s not the most elegant solution to the problem, but having people evaluate the ML/AI-generated suggestions, prior to any actual actions being taken, is an easy way to protect ourselves from the worst of it.
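In code, a “Biological Circuit Breaker” can be as simple as a gate placed between a suggestion and the action that would follow it. Here’s a minimal sketch; the `Suggestion` structure, the confidence score, and the review queue are hypothetical stand-ins for whatever review process an organization actually runs.

```python
from dataclasses import dataclass

@dataclass
class Suggestion:
    action: str        # what the ML/AI system proposes to do
    confidence: float  # the system's own confidence score (assumed available)

def execute(suggestion):
    print(f"EXECUTED: {suggestion.action}")

def send_to_human_review(suggestion):
    # Stand-in for a real review queue (ticket, dashboard, email, etc.)
    print(f"HELD FOR REVIEW: {suggestion.action} "
          f"(confidence {suggestion.confidence:.0%})")

def circuit_breaker(suggestion, auto_threshold=None):
    """Route a suggestion either to execution or to a human reviewer.

    With auto_threshold=None, *every* suggestion goes to a human -- the
    strictest form of the stop-gap described above. Setting a threshold
    later relaxes the gate for low-risk, high-confidence suggestions.
    """
    if auto_threshold is not None and suggestion.confidence >= auto_threshold:
        execute(suggestion)
    else:
        send_to_human_review(suggestion)

# Hypothetical usage:
circuit_breaker(Suggestion("approve loan application #123", 0.97))        # held
circuit_breaker(Suggestion("approve loan application #123", 0.97), 0.95)  # executed
```

Note the default: with no threshold set, every suggestion goes to a human. Relaxing the gate then becomes a deliberate decision, not an accident.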

If nothing else, it can be the stop-gap solution we can implement now, while we work on the more advanced options discussed previously. This will allow us to keep pushing the boundaries of what we can get from ML/AI, without taking on too much unnecessary risk of over-empowering these same systems. It’s not that we don’t trust them, it’s that we already know better than to put too much trust in a single authority.

Scientists don’t re-perform and validate the experiments of other scientists because they don’t trust them. (Ok, maybe sometimes…) They do so because they all accept that errors and bias are possible and likely.

So, for the betterment of the practice as a whole, it is in everyone’s best interest to perform this type of validation and re-validation, in a best effort to account for those possible errors and biases.

It’s a pragmatic solution to a confounding challenge, and there’s no shame in doing the same in the ML/AI fields. It’s not perfect, but we know it works.


SETTING BETTER EXAMPLES

Perhaps one of the least discussed solutions, but one of the most obvious, is to start setting better examples for the ML/AI systems. As we’ve previously touched on, the majority of the data used to train ML/AI Systems is collected about us and our actions. That puts us in the position of being the role models for our own ML/AI Systems. It’s hard to expect ML/AI to behave better than we do ourselves, given that it’s learning from us, replicating what we do and how we do it, faster and closer each time. The better role models we can be, the better results we can expect.

Perhaps this is the tallest order of them all among the solutions we have proposed today. If you have 10 people in a room, you have 10 distinct moralities in that room. What’s right and wrong can be difficult to agree upon, given the distinct context from which each of us approaches any given scenario. Yet despite this, there ARE things we can roughly consider “Universal Morals”, which the majority of us can agree upon. Surely, with time and effort, we can accomplish something similar with our “smarter” systems of the future.

If you’ve joined us for each segment in our 3-part series exploring ML/AI bias, congratulations! Give yourself a pat on the back for making it all the way to the end of this journey with us! From the introspective to the existential, we’ve been through it all!

By now, you’ll have a much deeper understanding of the subject of Bias in ML/AI Systems. We’ve explored the fundamentals of Bias itself, and the numerous ways in which bias can be introduced into ML/AI Systems, sneaking in via Training Data and Learning Algorithms, or simply being brought along inadvertently by us, the very biased human developers.

We’ve discussed how bias is truly essential to functional and capable ML/AI solutions, and we’ve cast aside some of the myths around so-called “bias-free” solutions of the future.

Finally, to bring our journey to an end, we’ve explored some of the ways we can begin building solutions around the problems that we know bias is causing in our current ML/AI Systems. There are many more problems and solutions that we could explore already, and there will be many more to come as we advance our capacity and capabilities in ML and AI. But with a mindset of continuous incremental progress, I for one am confident that the best is very much yet to come.

If you haven't read the previous two blogs from our ML/AI Bias series, be sure to check them out below! 


Download our insights paper to learn how data bias gets into ML/AI systems and what you can do about it.

https://info.obsglobal.com/ml-data-insights


This blog's author, Steven Holt, is a Senior ETL Developer with our Digital Transformation Practice. 

Please add your inquiry or comments for Steven in the form below and he'll be sure to get back to you!

