Alphabet’s artificial intelligence lab, DeepMind, has launched a new research group to consider the “real-world impacts” of replicating human intelligence.
DeepMind Ethics & Society (DMES), consisting of six independent research fellows, eight full-time researchers, and nine partnerships with other research institutions, will explore topics such as algorithmic bias, accountability, and autonomous killing machines, according to Wired UK. DeepMind plans to grow the group to 25 full-time employees in the next year, and will openly publish all research.
For all the thorny questions the group wants to answer, there’s no indication that it will take an introspective look at DeepMind’s own work. DMES stands separate from the organization’s existing ethics group, which has been criticized for its secrecy as DeepMind has weathered ethical inquiries over its use of data.
In 2016, a New Scientist investigation revealed DeepMind had access to “legally inappropriate” patient data from England’s National Health Service. DeepMind co-founder Mustafa Suleyman told Wired UK today that the internal ethics board is focused on ethical questions associated with artificial general intelligence, a long-term, unsolved problem with a 10-to-30-year outlook.
Large technology companies have spent the last year making a big show of their commitment to ethics and societal impact, while producing little of value. The Partnership on AI, a task force of industry titans Google, Facebook, Apple, Amazon, IBM, and Microsoft that promised proactive work and industry standards on the implementation of artificial intelligence, has done little besides retweet its board members since May 2017.
Update: A sentence has been removed which claimed there was uncertainty about how DeepMind’s NHS partnerships are ethically reviewed. DeepMind set up an independent panel of nine experts in February 2016 to review its NHS work.