Hello, fellow humans! Hope you’re enjoying the new, limited Saturday edition of the Daily Brief, which is focused on AI but, we promise, written by actual people.
Got some questions about AI you’d like answered? Or just some AI hallucinations you’d like to share? Email us anytime. Enjoy!
Here’s what you need to know
Amazon said large language models are now powering Alexa. The voice assistant’s new abilities are meant to make it less like an expensive kitchen timer and more like a chatty friend who can tell you stories on demand, but is this something people want?
Jensen Huang says India is the next big AI market. The Nvidia CEO just took a whirlwind tour of the country, which offers a potentially lucrative consumer base that could reduce his company’s dependence on China.
George RR Martin and John Grisham joined the authors suing OpenAI for copyright infringement. So did Jodi Picoult and Jonathan Franzen, among other big names in books who allege that AI tools are being trained on their work without their permission.
DALL-E 3 is coming soon. Get your weird art prompts ready. (We’re thinking “snail with a shell made of email newsletters, in the style of Gustav Klimt.”)
AI assistant status update: Don’t start skipping meetings just yet
Microsoft revealed in March that it was bringing an AI assistant to its Office apps like Word and PowerPoint. So far, about 600 companies have tested Microsoft 365 Copilot. What are the most popular use cases for the software right now? According to the company, they are:
- Summarizing meetings
- Highlighting important emails from Outlook
- Summarizing emails
In other words, the AI assistant is taking on some of the work we dread doing ourselves. We don’t yet know how much more productive these tools will make us, though. “We’re just setting all that up now in terms of the infrastructure of how we will measure,” Colette Stallbaumer, general manager of the future of work at Microsoft, told Quartz. “It’s really early days.”
Quotable
“We want them to take responsibility for not allowing things to be created that are more dangerous than what already occurs in nature.” —Tom Inglesby, director of the Johns Hopkins Center for Health Security, speaking at the Clinton Global Initiative this week on what his team wants from regulators when it comes to AI
AI-run asset managers have an old problem to fix
Humans are struggling to compile green, clean investment portfolios that actually take sustainability metrics into account. Giving AI the job doesn’t make that any easier… at least not yet.
In the four years since it launched, the asset management tool Aether has learned a lot. But it’s only as good as its data, which, when it comes to ESG metrics, isn’t great, especially when companies calculate carbon emissions in their own distinct ways. That forces the tool to redo calculations to standardize the data, and that’s an invitation for problems. In short: users have to be careful.
Quartz’s Michelle Cheng writes about how this same old song and dance (poor data) results in the same old problems (bad results) and necessitates, well, a human.
Other great AI reads from this week
🔭 Luddites saw the problem of AI coming from two centuries away
🥼 An AI built with Amazon software is monitoring cancer in Nigeria and Kenya
😬 Project Gutenberg has implemented one of the worst AI fears of striking actors
🖼️ Tech companies try to take AI image generators mainstream with better protections against misuse
Chatbots may change the white-collar CV, but maybe that’s not all bad
Salesforce customers, who use the company’s software to organize their business data, will soon be interacting a lot more with an AI assistant called Einstein Copilot (a missed opportunity to use AI-bert Einstein). During a news conference about the assistant’s launch, Salesforce executives repeatedly stressed that they believe generative AI will usher in major changes to white-collar jobs, but not take them away.
For example, Einstein will be able to:
🤖 Initiate a return on an item for a customer
🤖 Summarize chat support responses
🤖 Write sales and marketing language for emails
But it will still need plenty of white-collar workers around to engineer prompts and reduce hallucinations.
Ask an AI
This week, Google’s Bard AI chatbot launched a fact-checking feature. Naturally, we wanted to try it.
Inspired by our recent Weekly Obsession on organ donation (you can sign up for our Obsession email here), we asked Bard to make a list of the 10 most in-demand organs, arranged from highest to lowest in demand. Then, we clicked on the double-check feature. The result:
The tool, which basically runs a Google search to verify its own answers, flagged kidneys as correct and “intestines” as something it couldn’t corroborate with relevant content, prompting us to “consider researching further to assess the statement.” We did, and intestines don’t appear to be sixth on the very list Bard cites. In fact, it’s not clear what is sixth on that list, but Bard confidently presents the information anyway, as AI tends to do.
Long story short: We still need to use our brains to fact-check the fact-checking tool. Learn more about the nature of AI’s mistakes in our Quartz Obsession podcast episode on AI hallucinations.
Side note: We also asked Bard to write a one-paragraph story about a baboon giving a busboy a heart. The fact-checking feature told us everything was fine, even though the story Bard generated was four paragraphs long and suggested the baboon (Bobo by name) not only lived after its heroic heart donation, but talked.
Our best wishes for a very human day. Send any news, comments, Bobo quotes, and DALL-E 3 prompts, preferably involving Bobo, to talk@qz.com. Reader support makes Quartz available to all—become a member. Today’s AI Daily Brief was brought to you by Michelle Cheng, Morgan Haefner, Susan Howson, and Heather Landy.