This is how AI bias really happens—and why it’s so hard to fix
Bias can creep in at many stages of the deep-learning process, and the standard practices in computer science aren’t designed to detect it.
AI bias is the same as human bias because AI is created by humans. How can we expect AI not to be biased when humans themselves are (often) unaware of their own biases? Garbage in. Garbage out.
#AI #psychology #technology #deeplearning #futurist
Spot on. The issue of AI bias is very similar to bias, prejudice and racism in society. Both are based on inheriting skewed historical opinions and are reinforced by individuals who influence and direct you at the point of learning.
I’ve come to the conclusion that “fixing” AI bias is like “fixing” human bias. It’s less of an outcome than it is a whole system of differences. What works with humans is challenge, exceptions, reflection, openness and transparency. The same will work with AI. And it will never, ever be done.
This. This may be one of the biggest issues we face in the next ten years. Mostly because we are ideologically divided. How do we define “fairness”?
Seriously, people are as attached to their concept of fairness as they are to their party. There’s not a lot of budging. Buckle up; we’re in for a ride.
Today’s must-read ...
The overconfidence of AI developers has resulted in unrealistic expectations and significant collateral damage. My hope is that regulators will require AIs to demonstrate safety, efficacy, and lack of implicit bias prior to use. This is analogous to the requirement for medicine, but should be much less costly to implement.
It’s important to understand what kinds of discrimination you want and don’t want in your model. Because after all, discriminating between categories is often what you’re trying to do with ML, albeit not on certain features.
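This point, that a model is supposed to discriminate between outcome categories but not on certain features, can be sketched with a toy example. All field names and records here are invented; in practice you would do this at the feature-engineering stage of whatever pipeline you use.

```python
# Hypothetical sketch: a classifier's whole job is to "discriminate"
# between outcome categories; the design question is which features it
# may use to do so. Here we drop features flagged as sensitive.
records = [
    {"income": 55000, "zip": "02139", "gender": "f", "repaid": True},
    {"income": 42000, "zip": "02139", "gender": "m", "repaid": False},
]

SENSITIVE = {"gender"}   # features the model must not discriminate on
LABEL = "repaid"         # the category we *do* want to discriminate on

def to_features(record):
    # Keep everything except the label and the sensitive attributes.
    return {k: v for k, v in record.items()
            if k != LABEL and k not in SENSITIVE}

print(to_features(records[0]))  # {'income': 55000, 'zip': '02139'}
```

Note that simply dropping a sensitive column is not sufficient on its own: remaining features (like zip code) can act as proxies for the dropped one, which is part of why the problem is hard.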
As a result of these and other risks, the third-party validation service market for machine learning models is likely to grow.
AI is just another word for statistics, and sometimes it’s easier to discuss these concepts when you swap the words. The most interesting part of the paragraph is “There are two main ways that bias shows up in training data: either the data you collect is unrepresentative of reality, or it reflects existing prejudices.” The latter is a huge problem for AI because machines have no idea whether patterns should be there. If an AI algorithm were to choose CEOs, it would pick 6′3″ white men because statistically that’s the obvious historical pattern.
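The “machines have no idea whether patterns should be there” point can be made concrete with a deliberately simple sketch. The data below is entirely invented, and the “model” is just a majority vote per group, but it shows how a pattern baked into historical records gets reproduced uncritically:

```python
from collections import Counter, defaultdict

# Toy, invented data: (group, promoted) records whose skew encodes past
# prejudice rather than ability.
history = (
    [("group_a", True)] * 90 + [("group_a", False)] * 10 +
    [("group_b", True)] * 20 + [("group_b", False)] * 80
)

# "Training" here is just learning the majority historical outcome per group.
outcomes = defaultdict(Counter)
for group, promoted in history:
    outcomes[group][promoted] += 1

def predict(group):
    # The model cannot tell whether this pattern *should* exist;
    # it simply reproduces the most common historical outcome.
    return outcomes[group].most_common(1)[0][0]

print(predict("group_a"))  # True: historical favoritism is replicated
print(predict("group_b"))  # False: historical disadvantage is replicated
```

A real deep-learning model is vastly more complex, but the failure mode is the same: the statistics of the past become the predictions of the future.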
This is an excellent overview of how bias gets baked into deep learning systems. Quite relevant to today's landscape of personalization, predictive analytics, and content delivery. It will certainly be important to build awareness of, and transparency around, the potential pitfalls as these incredible technologies continue expanding into our lives.
Sound familiar? Education matters here as well.
Wonderful advancements are being made. And we can’t expect AI not to have any bias since we have biases ourselves. In fact, biases and judgements are what comprise our society. We should still strive for a “more perfect union,” and hopefully research to curtail AI bias will prevail. The larger danger from AI is the loss of the personal touch we need. Do we seek to be a society governed by keywords and algorithms, or do we prefer to be unencumbered by definitions? AI may sacrifice effectiveness for efficiency, and that’s a shame.
“‘Fixing’ discrimination in algorithmic systems is not something that can be solved easily,” but at least this article will help you understand it.
Why would we naturally assume that AI wouldn’t have inbuilt bias? It seems entirely normal that a machine capable of teaching itself anything would naturally develop a sense of judgment - which is a good thing. So long as the input stimuli are governed by logic what could possibly be bad about objective judgement? We shouldn’t be using this pejorative description of judgement and bias.
Ouch! I get the point, but this author throws out so many flawed premises as examples that it's hard to take the article seriously without completely ignoring the content and synthesizing legitimate support.
Bottom line, though, true AI will be fine as it continues to digest additional data. By its nature, it continues to shift its analysis with additional raw-data variables. Skin pigments and loan repayments (at whatever amounts or interest rates) are just unjudged numbers, vanilla variables, to computer learning. It's digital; it has no emotions.
Peter...we must do better, period. With humans there is bias, but also other emotions that get mixed in. If we can train AI to ignore specific attributes by incorporating all facets into the algorithm, maybe we can sidestep some of this. The biggest problem is that the programmers are not representative enough to create algorithms without inherent biases.
This is a great primer on how a platform is designed. What’s notable again on the topic of AI/ML is that the pressure to get these platforms to market, the difficulty of eliminating bias, and the fact that we aren’t training developers in ethics make it easy to conclude that there will be a lot more problems before it gets better.
The least biased human mind is the autistic mind; AI engineers should give it a try and let autistic people do the job. It might also increase inclusion and diversity, but hey, it would take quite a lot of outside-the-box initiatives to actually do it.
The issue is a long way from being fixed. Some seem to believe that "transparency" could rectify the problem, but transparency is irrelevant to conversations about deep learning and modern AI. These algorithms are created by other machines, and their thought processes are alien to humans. Transparency only reveals how little we know about the technology we've created. If we want to rectify AI bias, we'll have to search elsewhere.
It is the problem of prejudice itself. You can become more aware of prejudices you have, but can anyone ever identify them all? Let alone erase them from what has become embedded in the culture and writing that AI learns from.
“‘Fixing’ discrimination in algorithmic systems is not something that can be solved easily [...] It’s a process ongoing, just like discrimination in any other aspect of society.”
AI won’t be a technology that we can build and contain at the current stage of technology; however, it will most likely show its face in about 20-50 years of proper development and careful research and consideration. Mind you, once it’s built it will surpass every piece of technology in a matter of weeks.
The key to surpassing the pitfalls of inherent AI bias is finding system programmers who are good enough to train AI into recognizing it's happening. A true underlying "lookahead" that then tricks AI into not falling into those patterns. As pointed out already, since human programming is the root of creating AI, I'm not sure this will ever be possible without handing the keys over to Skynet.
A very thorough article. What I gleaned from it was that the dynamic nature of society makes it nearly impossible for programmers to keep up. On a different note, if one day there is the capability for an AI program to keep up, would it ever be able to truly gain an educated opinion on matters? Should it be allowed to try?
“If the algorithm discovered that giving out subprime loans was an effective way to maximize profit, it would end up engaging in predatory behavior even if that wasn’t the company’s intention.”
Framing the problem and setting constraints has a big part to play in AI bias.
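The subprime-loan quote above is a problem-framing issue, and it can be reduced to a toy sketch. The numbers and loan names below are invented; the point is only that an unconstrained objective picks whatever maximizes it, while an added constraint changes the answer:

```python
# Hypothetical options an automated lending system might score.
loans = [
    {"name": "prime",    "expected_profit": 5,  "predatory": True if False else False},
    {"name": "subprime", "expected_profit": 12, "predatory": True},
]

# Objective as framed: "maximize profit" and nothing else.
best_unconstrained = max(loans, key=lambda l: l["expected_profit"])

# Reframed objective: maximize profit subject to a fairness constraint.
best_constrained = max(
    (l for l in loans if not l["predatory"]),
    key=lambda l: l["expected_profit"],
)

print(best_unconstrained["name"])  # subprime: the predatory option wins
print(best_constrained["name"])    # prime: the constraint rules it out
```

The hard part in practice is that "predatory" isn't a column in the data; deciding what the constraint should be is exactly the framing question the comment raises.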
The ignorance of social context seems to be a new challenge for AI (Artificial Intelligence)
Ethical issues arise when the algorithms of autonomous AI systems are developed with bias.
We have to treat AI just like humans. AI is biased, so are we, human beings.
If you want to build an AI, bias is something that’s going to happen no matter what... I think maybe instead of treating it like an AI, you’re going to have to treat it like a child: you’re going to have to hold its hand for years and years until it understands the difference between biased and unbiased, and why it has those prejudices in the first place (metaphorically speaking, of course).
The article goes on to define how #AI builds bias step-by-step. But not all #bias is easy to identify. Gender and race can come out more clearly than, say, your deeply ingrained moral values or propensity toward a group or sect. AI is built on top of human values. We are more complex than who we believe we are.
There's bias in them there AIs! Late to the game, but after reading this article and getting 2/3 through the comments, I had a catch in my breath and the hair stood up on my arms.
This is something I am interested in confirming
Shaping the future...
This appears to give some credence to the argument that humans do not cause climate change because the computer models are flawed.
Bias and discrimination are not all bad; in some instances they're preferred.