ChatGPT isn’t always right. In fact, it’s often very wrong, offering faulty biographical information about a person or whiffing on answers to simple questions. But instead of admitting it doesn’t know, ChatGPT often makes things up. Chatbots can’t actually lie, but researchers sometimes call these untruthful outputs “hallucinations”: not quite lies, but visions of something that isn’t there. So what’s really happening here, and what does it tell us about the way AI systems err? Read the full transcript here. (Presented by Deloitte)
Scott Nover is a tech reporter at Quartz and the host of season 5 of the Quartz Obsession podcast. He is obsessed with TikTok, fantasy football, and The Real Housewives of Salt Lake City.
Michelle Cheng is a reporter covering AI at Quartz. She is currently obsessed with bunnies, mushrooms, and pilates.
How designers are using ChatGPT and DALL-E to fast-track projects, by Michelle Cheng for Quartz (includes an interview with Pau Garcia)
A Conversation with Bing’s Chatbot Left Me Deeply Unsettled, by Kevin Roose for the New York Times
OpenAI’s DALL-E 2
The metaverse will mostly be for work, by Scott Nover for Quartz
The Quartz Obsession is produced by Rachel Ward, with additional support from executive editor Susan Howson and platform strategist Shivank Taksali. Our theme music is by Taka Yasuzawa and Alex Suguira. This episode was recorded by Eric Wojahn at Solid Sound in Ann Arbor, Michigan, and at our studio in New York City.