
Been Kim Is Building a Translator for Artificial Intelligence

By Quanta Magazine

Neural networks are famously incomprehensible, so Been Kim is developing a “translator for humans.”

Comments

  • Unfortunately we’re already seeing the consequences of AI tools being deployed before the people overseeing them fully understand what they’ve built – like AI recruiting tools that favor men for engineering jobs, or algorithms used in the criminal justice system that inflate the risk of recidivism for black defendants.

    The key is understanding the underlying data and applying a human value judgment to correct for bias. This task is hard enough for simple pattern-matching algorithms, let alone neural networks. At Microsoft we have a set of principles that guide how we develop AI, among them ensuring transparency and accountability and protecting against bias. It’s encouraging to see Google tackling these challenges as well. It’s a worthy effort.

  • This seems like an incredibly important initiative ... and a big step toward making AI auditable. We are rapidly turning over major decision-making processes to AI without first verifying that the AI is making decisions honestly, fairly, and legally. There have been some terrible failures, as seen when AI-based mortgage approval systems discriminated against people of color.

    Technology is NOT value-neutral. It incorporates the biases of its creators unless they take great care. There have been no tools to verify the decision-making of AI, so developers have generally followed the traditional industry model of shipping products and waiting for user feedback. Given how strategic tech products are these days, that “ship and pray” philosophy is obsolete.

  • I remember explaining AI and neural networks to family and friends over meals and holidays. Without a doubt, explaining the fact that sometimes one just doesn't know how an outcome is derived is the most confusing for the lot. What a great opportunity here.

  • So many gems in here, including insights into the AI work we really need: human-comprehensible transparency.

    “If we don’t solve this problem of interpretability, I don’t think we’re going to move forward with this technology. We might just drop it.”

  • Absolutely, the AI should explain how it comes to its conclusions so that we can check what factors it's giving weight to and supervise ethical concerns. Any individual or entity in any human relationship or organization pretty much has to summarize their thinking, gain agreement, and get go-aheads.

  • I think when it comes to thinking creatively, it’s beneficial to have AI not explain the reasoning and let humans “guess” where the reasoning came from.

  • A.I. does not exist. Yet. It’s only machine learning at this point, and although A.I. is the goal, machine learning is where the industry is. While I applaud the reasoning for an A.I. translator for the future, it would be better to invest in training software engineers to better understand how to program with less bias.

  • Fascinating stuff. This kind of research (basically, making an AI's "reasoning" understandable to a layperson) is crucial for both the present and future of AI, in any field. Even the relatively simple TCAV system she describes is a spectacular breakthrough, especially for the end user of an AI or an AI-assisted system.
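    For the curious, the gist of TCAV can be sketched in a few lines. The snippet below is a hypothetical illustration, not Kim's actual implementation (the real code is Google's open-source tcav project); it assumes you already have a layer's activations for concept examples and random examples, plus per-input gradients of a class logit with respect to that same layer.

      import numpy as np
      from sklearn.linear_model import LogisticRegression

      def learn_cav(concept_acts, random_acts):
          # Separate activations of concept examples (e.g. "striped" images)
          # from random examples with a simple linear classifier; its weight
          # vector is the concept activation vector (CAV).
          X = np.vstack([concept_acts, random_acts])
          y = np.array([1] * len(concept_acts) + [0] * len(random_acts))
          clf = LogisticRegression(max_iter=1000).fit(X, y)
          cav = clf.coef_[0]
          return cav / np.linalg.norm(cav)  # unit direction "toward" the concept

      def tcav_score(class_grads, cav):
          # class_grads: per-input gradients of a class logit with respect to
          # the same layer's activations. The score is the fraction of inputs
          # whose class score rises when activations move in the concept
          # direction (positive directional derivative).
          return float(np.mean(class_grads @ cav > 0))

    A score near 1 means the concept consistently pushes the prediction up (say, "stripes" for "zebra"); comparing against CAVs trained on random noise indicates whether that sensitivity is meaningful.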

  • Interpretability is an underlying issue that AI scientists deliberately overlooked in the past.

  • It's been predicted that up to 40% of the workforce will be replaced by AI within 15 years.

  • A promising development to follow and see where it leads...

  • Linguistic capability enables humans to convey imagination and stories effectively to others. Miscommunication therefore happens a lot when two parties aren't on the same page.

    Language interpretation is a theme not only for scientists but also for other occupations.

  • It will help us work more efficiently, so humans can spend time on other things.
