
Tech firms are rushing to hire AI ethicists. They’re asking them to think about everything from bias and fairness to the circumstances under which it is acceptable to use autonomous weapons. It’s a welcome recognition of AI’s potential harm, but are ethicists making any difference?

Read more on Quartz

Featured contributions

  • The last thing we need right now is for ethics to become corporate jargon; today, every person and their dog is trying to embed AI in their business.

    There are two important things to get right if business is serious about this: first, make sure your AI ethics lead or ethics board has the teeth to act. Do they have the power to demand a change of course, and will that be acted on? Second, make sure that ethics learning is part of every person’s role, and make sure the teams building AI are themselves diverse. Sadly, most AI is still built by white men in hoodies, with a huge risk of unintended bias built in. Beyond actions that business can take, regulation is needed to help support mass adoption and diligent application of ethics in building AI – something Brad Smith from Microsoft in particular has called for, most recently at Davos earlier this year.

  • The best way to mitigate bias in AI is to make sure we have diverse teams building the AI itself. If the workers building our tech aren't diverse, they can't see the biases that are built into the AI they've created. AI will be more ethical when we have more ethical practices for determining who helps shape our tech, and shape our world.

  • I couldn't agree more with Reshma. It's challenging to get technical people to allow others into the room. By that I also mean that technical companies need to give space to others, invite them in, and use design tools that level the playing field and open up the ability to converse. In all my conversations for this series, people want this but need more frameworks. So I am more motivated than ever to carry on with my work of translating machine speak into human speak through our design system and educational material.

More contributions

  • It’s also a question of the data used to train the AI. There is implicit bias in any data set, which by definition is a subset of the entire universe of possibilities. Feed the AI data in which cars always turn right, and it will replicate that behavior. So we need balanced, wide-ranging data sets for training.

  • Tech firms need professional ethicists (whatever that vocational moniker means!) to implement ethical priorities mandated by government regulation based on a legitimate democratic political process, full stop. Anything short of this is based on arbitrary personal morality. While thinly veiled with civic compunction, such rationales are usually drummed up to advance the cause of profit rather than people. If this were just any other product, such motivations would be no problem because their outcomes would fall squarely in the realm of private commercial activity. Unfortunately, AI and its effects insidiously bleed into other spheres of human conduct, including subconscious ones.