
Facebook’s algorithm makes some ads discriminatory—all on its own

Read more on Quartz

Contributions

  • These days I tune out when I hear that "AI does this"; the results are usually predictable and reflect what we already know about how humans interact with machines. But not this one. This is novel and important research that demonstrates how difficult the problem is to tackle from the outside, without access to the underlying data structures and algorithm.

    “The researchers carefully constructed their study to try to discover whether Facebook’s algorithmic decisions were the cause of the gender-biased audiences for ads. To test this, they ran ads with images of either stereotypically male or female things, but with the images made almost entirely transparent; they’d appear all white to humans, but computers could still “see” the underlying image. Facebook still steered the ads with these modified images to specific audiences: ones containing, for instance, football went to men, and makeup supplies to women. That effect could not have occurred based on human reactions, since the photos appeared the same to humans.”
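
    For intuition, here is a minimal sketch of how such an almost-transparent image could be constructed. It assumes, as the study implies, that the classifier reads the raw RGB pixel data rather than the composited (alpha-blended) rendering; the PIL-based approach and file names are illustrative, not the researchers' actual code:

    ```python
    # Minimal sketch (hypothetical): keep the RGB content intact but drop the
    # alpha channel to near zero, so the image renders as blank to a human on
    # a white page while the pixel data a vision model reads is unchanged.
    from PIL import Image

    def make_near_transparent(path_in: str, path_out: str, alpha: int = 2) -> None:
        img = Image.open(path_in).convert("RGBA")
        r, g, b, _ = img.split()
        near_zero = Image.new("L", img.size, alpha)  # ~1% opacity everywhere
        Image.merge("RGBA", (r, g, b, near_zero)).save(path_out, "PNG")

    # Hypothetical file names, for illustration only:
    make_near_transparent("football.png", "football_invisible.png")
    ```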

  • AI is only as good as the data it is trained on and the ethics of the data scientists who create the algorithm. We have built racial and gender bias into our jobs and products, and decades of negatively reinforcing advertising have made this worse. If this is the dataset FB's AI is using to inform optimal ad placement, then this is hardly surprising! What is sadly less surprising is that FB has not built in the checks and guardrails to protect against historic bias.
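
    To make this point concrete, a toy sketch (the data is fabricated purely for illustration): a classifier fitted to historically skewed delivery decisions reproduces the skew, even though no one wrote an explicitly gendered rule.

    ```python
    # Toy sketch with fabricated data: a model trained on historically biased
    # ad-delivery decisions learns the bias on its own.
    from sklearn.linear_model import LogisticRegression

    # Each row: [interest_in_football, gender] (gender: 0 = man, 1 = woman).
    # Labels: whether the ad was delivered in the (already skewed) history.
    X = [[0.9, 0], [0.8, 0], [0.7, 0], [0.3, 1], [0.2, 1], [0.1, 1]]
    y = [1, 1, 1, 0, 0, 0]

    model = LogisticRegression().fit(X, y)

    # Two users with identical interest, differing only in gender:
    print(model.predict([[0.5, 0], [0.5, 1]]))  # typically [1 0]: bias learned
    ```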

  • Food for thought from Roger McNamee: "One of the problems of the fully automated systems at FB is that they will meet demand without consideration for legal and societal issues. There are no circuit breakers, guard rails or safety nets. People suffer real consequences because the people who run the platform optimize for growth at all costs."

  • This is an example of how machine learning can be biased. It is not the algorithm's fault; rather, certain weights or sorting choices have biased it, which comes down to the coders bringing their own biases to how things should be categorized.

    There needs to be a clear, data-driven answer for categorizing data in certain ways, or else we will run into this problem again.

  • Looks like FB has more problems than "just" worrying about spotting fake news...

    Are we spotting an early problem with the use of AI here: potential human biases that impact AI learning in a fundamental way?

  • AI learns what you teach it. Garbage in, garbage out. Never "on its own"; I don't think we are there yet. 🧐

  • This has been a major issue in the ad tech market for more than a couple of years. A lot of this is because there is no proper normalization and content enrichment, especially since the taxonomies that now drive the advertising technology industry are closed off by the majors (a sketch of the kind of normalization I mean follows below). More recently, this was brought to the media's attention by Ben Carson through an actual lawsuit.

    https://share.qz.com/news/2357624
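
    What "normalization against a shared taxonomy" might look like in practice, hypothetically (the category names are illustrative, not any industry body's actual taxonomy):

    ```python
    # Hypothetical sketch of taxonomy normalization: mapping publishers'
    # ad-hoc content labels onto one shared, open taxonomy so that targeting
    # decisions become auditable rather than opaque.
    CANONICAL = {
        "football": "Sports/American Football",
        "nfl": "Sports/American Football",
        "makeup": "Style & Fashion/Beauty",
        "cosmetics": "Style & Fashion/Beauty",
    }

    def normalize(raw_labels):
        """Map raw labels to canonical categories; collect anything unmapped
        so it can be reviewed instead of silently feeding the delivery model."""
        mapped, unmapped = [], []
        for label in raw_labels:
            key = label.strip().lower()
            if key in CANONICAL:
                mapped.append(CANONICAL[key])
            else:
                unmapped.append(label)
        return mapped, unmapped

    print(normalize(["NFL", "Cosmetics", "crypto"]))
    # (['Sports/American Football', 'Style & Fashion/Beauty'], ['crypto'])
    ```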

  • All on its own.

  • The issue becomes whether employers and landlords could end up with complaints against them because their ads unintentionally had a disparate impact on some protected class or other. It’s a lot easier and cheaper to go after the little guy than a multibillion-dollar company.