
IBM apologizes for using insensitive ethnic labels on job application

By TheHill

IBM apologized on Tuesday for asking job applicants to identify their ethnicity from a list of racially insensitive categories that included "yellow," "mulatto" and "coloured."

Comments

  • Let me guess, someone accidentally rolled back interview software code from 1956. I’m the type of person who would have thrown some stuff back at them. My application would have said at the top “curvaceous c-line island woman”. 🤦🏻‍♀️


  • “IBM has since transitioned to ethnic categories, such as "Asian" and "Native American" that are standard in the U.S., Barbini told the Post. The site will also allow for an "unknown" or "not indicated" option.”

    Remind me again why we still need gender or ethnicity drop-downs in applications?

    If it’s not being used as an evaluation criterion, and only for company statistics, surely you can answer those questions later on as part of employee onboarding.


  • “The company responded to [the applicant] on Twitter last week saying that the categories were the result of a translation error.”

    I take it it was a computer error? I think that’s important to clarify, but it still may not bode well for the company: the public would either be upset that a computer was able to publish such language on an application, or scrutinize IBM for hiring someone incapable of recognizing that the translations were problematic.

  • This auto-translation failure shows why human oversight is required. AI is not ready to operate without human supervision.


  • Major failures with AI are being revealed weekly, and soon may be daily. The ones we hear about are systemic bias failures. You can be sure that AI is producing far more failures at the individual level.

    Done right, AI should be the digital penicillin of the 21st century. Done right, there would be effective testing for safety, efficacy, and bias before products go live, and there would be systems that enable auditing and verification of decision processes (a minimal sketch of such a bias check follows these comments).

  • Okay, Lost in Translation, or altered?

  • How did this make it through the system?
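
To make the “testing for bias before products go live” idea above concrete, here is a minimal sketch of a pre-launch demographic-parity screen in Python. The audit log, group labels, helper names, and threshold are all hypothetical illustrations for this discussion, not anything IBM actually runs.

```python
# A minimal sketch of the kind of pre-launch bias check discussed above.
# The data, group labels, and threshold are hypothetical; real audits use
# richer metrics (equalized odds, calibration) and real outcome data.
from collections import defaultdict

def selection_rates(decisions):
    """Compute the positive-decision rate per demographic group."""
    totals, positives = defaultdict(int), defaultdict(int)
    for group, approved in decisions:
        totals[group] += 1
        positives[group] += int(approved)
    return {g: positives[g] / totals[g] for g in totals}

def passes_four_fifths_rule(decisions, threshold=0.8):
    """Demographic-parity screen: every group's selection rate must be
    at least `threshold` times the highest group's rate."""
    rates = selection_rates(decisions)
    best = max(rates.values())
    ok = all(rate >= threshold * best for rate in rates.values())
    return ok, rates

# Hypothetical audit log of (group, model_approved) pairs.
log = [("A", True), ("A", True), ("A", False),
       ("B", True), ("B", False), ("B", False)]
ok, rates = passes_four_fifths_rule(log)
print(rates, "-> ship" if ok else "-> block release and investigate")
```

The four-fifths threshold is borrowed from a common U.S. employment-selection heuristic for flagging adverse impact; a production audit would add further metrics and run before every release, not just once.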
