Microsoft warned investors that biased or flawed AI could hurt the company’s image
Image: AP Photo/Elaine Thompson

Each year, US public companies file a 10-K to the Securities and Exchange Commission, a form meant to inform investors of the financial state of the company over the previous year, as well as relay new business risks and practices.

It’s ordinary for companies to add new risks as their businesses evolve, but in its filing published Aug. 3, 2018, Microsoft added a new section specifically warning that its focus on artificial intelligence could cause harm to the company.

Here’s a selection from the filing:

AI algorithms may be flawed. Datasets may be insufficient or contain biased information. Inappropriate or controversial data practices by Microsoft or others could impair the acceptance of AI solutions. These deficiencies could undermine the decisions, predictions, or analysis AI applications produce, subjecting us to competitive harm, legal liability, and brand or reputational harm. Some AI scenarios present ethical issues. If we enable or offer AI solutions that are controversial because of their impact on human rights, privacy, employment, or other social issues, we may experience brand or reputational harm.

Notably, this addition comes after a research paper by MIT Media Lab graduate researcher Joy Buolamwini showed in February 2018 that Microsoft’s facial recognition algorithm was less accurate for women and people of color. In response, Microsoft updated its facial recognition models and wrote a blog post about how it was addressing bias in its software.

Microsoft also formed a group in March called AI and Ethics in Engineering and Research, or AETHER, headed by company executives to address AI issues inside the company.

“If we are training machine learning systems to mimic decisions made in a biased society, using data generated by that society, then those systems will necessarily reproduce its biases,” Hanna Wallach, a senior researcher at Microsoft, said in the company’s blog post.