
Why Tech Platforms Don’t Treat All Terrorism the Same

By WIRED

Critics say Facebook, YouTube, and Twitter are quicker to block content from ISIS than from white nationalists.

Comments

  • Google and FB are in the business of selling behavioral predictions, the quality of which is proportional to the specificity of signals from each user. Civility is a mask that obscures users’ underlying values and emotions, so the platforms do whatever is necessary to get past it, stimulating lizard-brain emotions (e.g., outrage, fear). Ugly, emotional content is good for business, so the platforms’ algorithms are structured to promote it, which is why the platforms insist on user policing of content. When that becomes politically untenable, they fall back on moderation, which doesn’t work but leaves the business model untouched.

    The divisive, destructive aspects of internet platforms are overwhelming the good. If we want to fix that, we must force fundamental changes in the business model. I recommend starting two initiatives:

    - Making third-party commerce in private data illegal. This means credit card transactions, location, health and wellness data, browsing history, and anything related to minors.

    - Forcing changes in algorithms to end the promotion of socially inappropriate content. Enforce this with transparency and daily audits.

    These are big changes, but necessary. The internet platforms have brought this on themselves.

  • Having worked extensively in digital communities with zero tolerance for white supremacy, I know firsthand how hard, yet important, intentional moderation can be.

    At Quartz, as we create the structures around our community, we work hard to balance “freedom of speech” with a space where all people are respected.

  • Roger McNamee nails it! Read through to his recommendations on how to address it:

    "Google and FB are in the business of selling behavioral predictions, the quality of which is proportional to the specificity of signals from each user.

    Civility is a mask that obscures users’ underlying values and emotions, so the platforms do whatever is necessary to get past it, stimulating lizard-brain emotions (e.g., outrage, fear). Ugly, emotional content is good for business, so the platforms’ algorithms are structured to promote it, which is why the platforms insist on user policing of content. When that becomes politically untenable, they fall back on moderation, which doesn’t work but leaves the business model untouched.

    The divisive, destructive aspects of internet platforms are overwhelming the good. If we want to fix that, we must force fundamental changes in the business model. I recommend starting two initiatives:

    - Making third-party commerce in private data illegal. This means credit card transactions, location, health and wellness data, browsing history, and anything related to minors.

    - Forcing changes in algorithms to end the promotion of socially inappropriate content. Enforce this with transparency and daily audits.

    These are big changes, but necessary. The internet platforms have brought this on themselves."

  • The challenge for the social media giants is that so much of their technology is focused on meta tags, URLs, and keywords using natural language processing. This is especially true for video content on YouTube or Facebook: if the video is described in vectors for display, it must be rasterized into pixels in real time, image by image across the video, in order to ascertain what the content is.

    While this is not a trivial task, the technology is available and is supported by an array of other technologies: machine learning, artificial intelligence, and neural networks. The challenge is that even with this technology, human intervention is still required, and the technology consumes massive resources: processors, cycles, energy. Ultimately, all of this costs a lot of money and takes away from the bottom line. If the media companies are willing to spend the money, it can be done. It is a choice.
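
    As a rough illustration (not any platform’s actual pipeline), a frame-by-frame scan might look something like the sketch below, assuming OpenCV for decoding and a placeholder classifier standing in for whatever model a platform would actually deploy:

        # Minimal sketch: sample frames from a video and run each through an
        # image classifier, flagging the video for human review if any frame
        # scores high. classify_frame() is a placeholder, not a real model.
        import cv2  # OpenCV, used here only to decode video into pixel frames

        def classify_frame(frame) -> float:
            """Placeholder: probability that this frame contains flagged imagery."""
            raise NotImplementedError  # swap in a real CNN, perceptual hash, etc.

        def scan_video(path: str, sample_every_n: int = 30, threshold: float = 0.9) -> bool:
            cap = cv2.VideoCapture(path)
            idx, flagged = 0, False
            while True:
                ok, frame = cap.read()             # rasterize the next frame into pixels
                if not ok:
                    break
                if idx % sample_every_n == 0:      # sampling keeps the compute bill down
                    if classify_frame(frame) >= threshold:
                        flagged = True             # route to human review, not auto-delete
                        break
                idx += 1
            cap.release()
            return flagged

    Even this sketch makes the cost point concrete: every decision requires decoding and scoring pixels, which is exactly the compute bill the platforms are weighing against the bottom line.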

  • Roger McNamee, you are one brilliant man. Impressive thoughts in your comments on this.

  • The problem stems from the fact that these tech platforms rely heavily on users to report unsavory content, which is a reactive approach versus a proactive one. On the one hand, you can’t blame them, because there is just too much content to scour. On the other hand, there is definitely content that shouldn’t be there. What these tech platforms need to do is flip the algorithms to proactively search for such content and snuff it out. The lines blur when you incorporate freedom of speech, but at the very least the illegal stuff should be banned without question. This, though, requires a philosophical shift. If you read any of Dorsey’s latest commentary on harassment, it still seems the strategy is to let users report it. That’s not enough.
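
    As a rough sketch of what that flip from reactive to proactive might look like (every name below is a hypothetical placeholder, not Twitter’s or anyone else’s actual API), a proactive loop scores every new post instead of waiting for a report:

        # Sketch of a proactive moderation loop: score every new post with a
        # classifier instead of waiting for user reports. Post and
        # score_toxicity() are hypothetical placeholders.
        from dataclasses import dataclass
        from typing import Callable, Iterable, List

        @dataclass
        class Post:
            post_id: str
            text: str

        def triage(posts: Iterable[Post],
                   score_toxicity: Callable[[str], float],
                   threshold: float = 0.95) -> List[str]:
            """Return the IDs of posts that should be held for human review."""
            flagged = []
            for post in posts:                               # proactive: every post gets checked
                if score_toxicity(post.text) >= threshold:   # reactive systems only see reported posts
                    flagged.append(post.post_id)             # hold for review, not an automatic ban
            return flagged

    The hard part, of course, is not the loop but deciding what score_toxicity should measure once freedom of speech enters the picture.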

  • I agree with everything Roger McNamee said, until I got to "socially inappropriate content" and then the hair on the back of my neck stood up. While I would support eliminating illegal content, such as child pornography, it concerns me when well-meaning people try to codify or legislate morality. The amount of trash on the internet really disgusts me. I think it’s dragging our collective consciousness into the proverbial swamp. But I really don’t trust people who want to be the ones to clean it up.
