
Yes, artificial intelligence can be racist

By Vox

Alexandria Ocasio-Cortez says AI can be biased. She’s right.

Comments

  • It’s not AI that’s racist. It’s the unconscious bias of the people who train the AI models that creeps into the AI. The data fed to AI can also make it racist, as Amazon learned. If you give AI data about successful executives at the company as a truth set, and all the execs happen to be old white guys, guess what the AI is going to screen for. Let’s not blame AI. We have to be conscious of our own biases and consciously train AI to remove them.
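The Amazon point above can be sketched concretely: train even a trivial model on "who succeeded before" labels, and it reproduces whatever skew those labels carry. A minimal toy sketch in Python; all keywords, counts, and labels below are invented for illustration.

```python
# Toy illustration (all data invented): a model trained on biased
# "past success" labels learns whatever correlates with those labels.
from collections import Counter

# Invented training set: (résumé keyword, hired?). In this fictional
# history, hiring skewed toward one group, and "chess" happens to
# correlate with that group.
train = [
    ("chess", True), ("chess", True), ("chess", True), ("chess", False),
    ("netball", False), ("netball", False), ("netball", False), ("netball", True),
]

# "Model": P(hired | keyword) estimated from the biased history.
counts = Counter((kw, y) for kw, y in train)

def p_hired(keyword):
    hired = counts[(keyword, True)]
    total = hired + counts[(keyword, False)]
    return hired / total

# The model now prefers "chess" résumés -- not because chess predicts
# job performance, but because it mirrors the skew in the labels.
print(p_hired("chess"))    # 0.75
print(p_hired("netball"))  # 0.25
```

The preference the sketch learns comes entirely from the biased labels, which is the commenter's point: the model is faithful to the data, and the data is faithful to the past.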

  • To date, AI has been developed according to the standard model of shipping products as soon as they work and letting users sort out the bugs and design flaws. That was a fine model for iPhone apps, but it is totally inappropriate for large-scale, strategic applications like AI. Such applications should be required to demonstrate both safety and efficacy, on a model similar to pharmaceuticals.

    There have been dangerous products before. The Food & Drug Administration was created in the early 20th century to protect consumers against fraudulent patent medicines. Later the FDA created regulations to restrict the development of dangerous viruses and to prevent human cloning. In the middle of the 20th century, Congress passed laws to regulate the manufacture, distribution, and disposal of chemicals, again to protect the public interest.

    We can already see the potential for abuse in AI, whether in mortgage origination, facial recognition, or social media. How much damage will we tolerate before we regulate?

  • When I read this I thought “does AOC know about everything?” And then I remembered she is good with controversy. I’m on the same side of this issue.

    AI/ML is everywhere, and it will be bigger faster than all the hype—but garbage in is garbage out, there should be no question there is bias.

    Say it another way—we’re not able to prove that there isn’t bias, and we can’t understand the consequences, because we can’t even understand how AI is making the decisions.

    Two things need attention now: 1) We need strong bias training for the builders and data scientists; right now colleges teach the tech, but not the ethics. And 2) we need to build systems that let us deconstruct/interpret/comprehend what the machines are doing.

    I know there are plenty of people making inroads on both issues, and work is being done in the EU on GDPR-related policies to require transparency, but it can’t come soon enough.

  • AI learns from data from the world of humans, reflecting any bias in the data. Bias in the data can come from multiple sources. It’s not just racism but sexism and many other biases. Some are things we wouldn’t even consider to be harmful. For example, an AI will search a corpus of writing about the natural world and conclude that flowers are more pleasant than bees. This may be true to humans but isn’t a slam dunk for the rest of the natural world.

    A couple of years ago, I Googled “ceo image” and found that the first image of a female in the results was CEO Barbie... a plastic doll who’s never actually been a CEO. A year ago, the same search returned Ginni Rometty as the first woman CEO image, but well down the list at 8th. In November I performed the same experiment and a woman first appeared at image number 5. I was feeling better! Today she is at number 28. Is this biased, or representative given how many CEOs are female? This is hardly scientific, but it highlights that the real issue is “why?” There is no way to get an explanation from Google, which means there is no true accountability for dealing with bias.

    That’s the real issue for any company using AI for anything.
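The “flowers are more pleasant than bees” effect mentioned above is typically measured by comparing cosine similarities between word vectors, the idea behind association tests on embeddings. A minimal sketch with made-up 3-dimensional vectors (real embeddings are learned from a corpus and have hundreds of dimensions; these numbers are invented):

```python
import math

# Invented toy word vectors standing in for learned embeddings.
vec = {
    "flower":   [0.9, 0.1, 0.2],
    "bee":      [0.1, 0.9, 0.3],
    "pleasant": [0.8, 0.2, 0.1],
}

def cosine(a, b):
    """Cosine similarity: 1.0 means same direction, 0.0 unrelated."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# In this toy geometry "flower" sits closer to "pleasant" than "bee"
# does -- an association inherited from text, not a fact about nature.
print(cosine(vec["flower"], vec["pleasant"]) > cosine(vec["bee"], vec["pleasant"]))  # True
```

The same comparison, run on real embeddings with word pairs like names and occupations, is how researchers quantify the human-like biases a corpus bakes into a model.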

  • Just ask yourself what kind of artificial intelligence David Duke would design if he had the controls. Now think of the many layers of racism not nearly that blatant or direct. Imagine the kind of unconscious bias a home seller or job recruiter might introduce into a decision like that. Now imagine that bias trapped in a system people believe is free from bias. AI has the potential to power the next generation of discrimination in ways that are nearly impossible to diagnose - or to convince people to confront or change. AOC is three steps ahead of her peers on this one. As she recommends, we can try to get ahead of the curve and use all that processing power for good...however a programmer might define that.

  • Quartz.... This is the 2nd strike. You really shouldn't run this race-baiting garbage from the far-left wing of the Democrat party. This is not only stupid, it shows the depths of your willingness to promote the socialist agenda. #Strike2. You are wanting to be successful in this crowded market? You are killing your chances.

  • Employers are searching for opportunities to maximize productivity and minimize costs. When goals place business needs before people’s needs, algorithm implementation can have undesired consequences.

    Representative Ocasio-Cortez speaks from experience in the education and hospitality industries about how our jobs are changing. The scary truth is that automation is expected to hire faster and better than the people who make those decisions today. When chatbots start teaching lessons at school, and touch screens recommend orders at restaurants, where is the middle-class worker left?

    Data scientists must consider where to draw the line: when does the algorithm disregard humanity altogether, violating statutes on disabilities, veteran status, and sexual orientation? How can an AI in a courtroom distinguish genuine emotional gestures from ones induced by anti-depressant medication?

    We must be humane in our decisions to employ automation, robots, and AI across industries. If we don’t, we are at risk of losing our humanity.

  • Perhaps the AI is revealing data and statistics and making recommendations based on trends. Is data racist? Is it racist if data reveals a positive or negative conclusion related to race on a particular subject?

  • First, we should educate ourselves so that we can question results from AI/ML before starting to blame these technologies.

  • This is like saying kids might be able to find porn with Google, or that Russians could use Facebook to sway our opinions. In technology, you get out what you put in; maybe our politicians should stop trying to fault innovation for humanity’s flaws... We might actually improve society if they worked on addressing the root causes of issues rather than the tools that enable us.

  • AOC is saying nothing but the obvious, and we have known that forever. What’s the news, people?

  • Innovation is here to help us achieve and conclude things faster and more efficiently. AI is simply analyzing data available out in the world. And who is fit to judge whether data is racist/biased, if not by the facts of what happened in the past and the information available to be viewed? People are always trying to find fault in innovation when in fact it’s simply humanity’s flaws.

  • This article expresses the exact same concerns raised about computer models used to predict climate change. Scientists who claim those models are flawed due to "automated assumptions" (AOC's words) are labeled deniers, non-believers, and worse. Miss TippyTops is praised for her "wisdom" because she helps the racist agenda.

    I would bet money AOC can't define the term "algorithm" and explain its uses.

  • It’s slightly tricky to incorporate an element of randomness into AI to avoid bias; you certainly run the risk of things going horribly wrong. If, however, the learning is adaptive to the user, the I in AI may just be spot on.

  • Certainly it’s correct to say that AI can be designed to be biased, just as it’s correct to say that AI can be designed to have no bias. What AOC and Vox seem to miss is that it’s possible to create AI with absolutely no bias, but not possible to do that with a human. Neither AOC nor any Vox reporter can be made bias-free. But neither of them is screaming that they are what needs more regulation.

  • This is exactly why we shouldn’t have Google or Facebook at the forefront of AI research. This research shouldn’t come only from California but rather from all over the world.

  • AOC would pull out the race card if she encountered a group of white kids building a snowman! Race is the only card Democrats hold.

  • Scientific studies, properly conducted, are based on data. On that basis science has made progress, and we all live longer and better. Bad studies may be biased and lethal. AI has made a huge impact on our lives: computers are used daily by billions and play an important role in activities from education to star exploration. It replaces low-skill workers with better and faster machines, but creates more high-skill positions. An unmanned plane replaces one pilot with thirty highly skilled operators.

  • I would hope that as AI evolves that it tracks the evolution of its human masters. Personally I don’t want or expect AI to be better than human, instead it should enhance the lives of humans.

  • In America, everything is racist...it seems to me...

  • We also need datasets from more than the G7 countries training each ML system.

  • Today one could call a white rat racist!

  • I'm starting to get the impression that A.O.C. - is a Democrat A.I. program..

  • If you missed it, AI is all about the data that goes in, and there WILL be some bad stuff coming our way.

    Check out what they did at MIT

    http://norman-ai.mit.edu/

  • of course

  • And what's equally interesting is that people like Ocasio-Cortez (who like to point out bias in people and AI) are just as biased. We are all living in "glass houses" and "throwing bricks" at each other. It's about time we acknowledged that everyone discriminates and exhibits bias. It does not necessarily have to be a negative behavior; rather, it is one that helps us navigate our lives.

  • Trying to find perfection in the latest batch of A.I. is a little narrow-minded. Maybe she is just data-mining topics to get media attention. Yeah, A.I. can be racist, as the data comes from the messy history of civilization. A.I. can still evolve forward to a better flavor of consciousness. I will still love whatever comes to be. There are also lessons and deep insights we can learn from it.

  • "The computers learn from their creators — us."

    This is a fact: A.I. will evolve according to human behavior. Perhaps to fix this, we should make an effort to be better people; then maybe A.I. wouldn't seem scary to a lot of people...

  • Actually, she is incorrect, at least as we understand the term. To be “racist” typically implies intent and an underlying emotional bias toward an individual or group, based solely on immutable physical characteristics. AI is not racist; rather, it uses data to make judgments that may reflect an existing racist legacy, and therefore perpetuates that legacy through the criteria it uses to make decisions. AI is flawed, and has a long way to go for sure, but her comments don’t really help.

  • Now we have one more thing that gets labeled racist...!

    AI studies data from the human world, and so it is going to reflect the same biases in its output. There are multiple sources that lead to such outcomes.

  • This is obvious and not interesting. What would be interesting is how to solve the problem.

  • EVERYTHING can be biased. But no worries about AI since the world is ending in 12 years from climate change!!! LOL!!!

  • Duh. Anything made by man will be whatever man is.

  • Oh! Please elucidate!

  • Well, since we are constantly redefining definitions of words like ‘racism’ to fit a narrative, I’m sure this article is dead on.

  • Very interesting!
