As San Francisco-based AI development firm OpenAI’s dream of achieving artificial general intelligence (AGI) picks up speed, reports of how it used underpaid Kenyan workers to remove racism, sexism, and violence from ChatGPT’s language model are appalling.
OpenAI didn’t learn from Meta, which was sued in Nairobi last May by South African whistleblower Daniel Motaung for union busting and for contracting Sama, a company that subjected its content moderators to inhumane working conditions as they removed harmful content from Facebook.
Now that OpenAI’s project to refine the results GPT-3 generates wrapped up last November, Sama has retrenched 200 employees at its Nairobi office.
In a blog post on Jan. 10, Sama said that it understood “that this is a difficult day, [but] we believe it is the right long-term decision for our business.” It asked sacked workers to apply for “other open positions in the Sama east African offices.”
Sama’s content moderators earned as little as $1.32 per hour
This was after the company, which is also based in San Francisco, paid the workers between $1.32 and $2 per hour from November 2021 to February 2022, a small fraction of San Francisco’s minimum wage of $16.99 per hour and less than a third of the US federal minimum wage of $7.25 per hour.
On Jan. 20, Sama wrote to Quartz, saying it pays almost double what other content moderation firms in East Africa pay and offers “a full benefits and pension package”, which it claims “is uncommon.”
“Sama pays between Sh26,600 and Sh40,000 ($210 to $323) per month, which is more than double the minimum wage in Kenya and also well above the living wage. A comparative US wage would be between $30 and $45 per hour.” It also said it offers personal welfare services, with rooms specially designed for counseling, meditation, prayer, nursing, and gaming, decorated with local artwork, “and full meal services that are curated to support physical and mental wellbeing.”
According to a Time investigation, to build an ethical query-answering system on top of GPT-3’s 175 billion parameters, OpenAI followed the example of Meta, which had shown it was possible to build AI that could filter out toxic language and biased information.
But the work traumatized employees, who had to spend several hours every day watching videos of harmful content and reading textual descriptions of hate speech, sexual abuse, bestiality, and violence. In February 2022, that led Sama to cancel all three contracts it had signed with OpenAI.
Now, as OpenAI projects $1 billion in revenue by 2024, prepares a share sale that would value it at $29 billion, attracts a $10 billion investment from Microsoft, and readies GPT-4 for launch, it will have to reconsider how it outsources content moderation services.
This story has been updated to include Sama’s comments.