After Google’s generative AI tool Gemini stirred controversy with historically inaccurate images like racially diverse Nazis, Google CEO Sundar Pichai addressed the issue in a memo to Google staff, calling the app’s responses “unacceptable.”
“I know that some of its responses have offended our users and shown bias — to be clear, that’s completely unacceptable and we got it wrong,” Pichai said late Tuesday in the memo reported by Semafor.
Pichai said the company has “been working around the clock” to address issues with Gemini, and is “already seeing a substantial improvement on a wide range of prompts.”
“No AI is perfect, especially at this emerging stage of the industry’s development, but we know the bar is high for us and we will keep at it for however long it takes,” Pichai said, adding that Google will “review what happened and make sure we fix it at scale.”
Google said last week that it would pause the AI model’s ability to generate images of people, after users began pointing out historically inaccurate image generations of people — including racially diverse Nazi-era German soldiers — as well as Gemini seemingly avoiding requests for images of white people.
“Gemini’s AI image generation does generate a wide range of people,” Google said in a statement before pausing the app. “And that’s generally a good thing because people around the world use it. But it’s missing the mark here.”
Jack Krawczyk, the product lead for Gemini, also addressed the app’s issues last week, noting wider issues of AI bias that already exist around generating images of people of color.
“As part of our AI principles, we design our image generation capabilities to reflect our global user base, and we take representation and bias seriously,” Krawczyk wrote. “Historical contexts have more nuance to them and we will further tune to accommodate that.”
Image generation abilities were added to Google’s Bard chatbot — the company’s response to OpenAI’s ChatGPT Plus — at the beginning of February, shortly before Bard was rebranded as Gemini.