AI in Focus: No such thing as a free AI lunch

Well, there shouldn’t be, anyway.

Photo: Hulton Archive / Stringer (Getty Images)

Hello, fellow humans! Hope you’re enjoying the new, limited Saturday edition of the Daily Brief, which is focused on AI, though, we promise, written by actual people.

Got some questions about AI you’d like answered? Or just some AI hallucinations you’d like to share? Email us anytime. Enjoy!


Here’s what you need to know

There’s another Senate AI meeting on the books. It’ll take place Oct. 24, and invitees include venture capital firms Andreessen Horowitz and Kleiner Perkins, chipmaker Micron, and fintech firm Stripe.


The Amazon humanoids are here. One of the robots is called Digit, and it can pick things up and put them down, all while continuously doing a little stomp stomp stomp with its feet. 


Language barriers won’t stop robo-New York City mayor Eric Adams from calling you. Public service announcements are being delivered to residents in Mandarin, Spanish, and Yiddish via voice cloning—even though the mayor doesn’t speak any of them.


AI in the hands of companies makes people very, very nervous. When it comes to making responsible decisions with the technology, 70% of Americans have little to no trust in corporate overlords.


You can’t feed the AI for free

How do you get people who create for a living to hand their work over to AI training models? Sure, you could try to convince them that they’ll one day be able to use AI to enhance their craft. But artists, designers, musicians, and authors weren’t born yesterday. They know that the companies building these generative AI models stand to make enormous profits—a windfall the creatives won’t see themselves.


If those companies want creative minds to give them the high-quality content they need to feed their training models, they’re going to have to cough up some cash—and a few have already started. Here’s what three companies are doing to incentivize creators:

💸 Adobe: A bonus for high-quality content that’s licensed by other users, paid out yearly

💸 Canva: A payment based on several factors, including the art’s complexity and how often it’s used, from a special $200 million fund set aside for the purpose


💸 Stability AI: An opt-in revenue-sharing model, in partnership with stock audio company Audiosparx

But one company in particular—and it’s a big one—is doing nothing at all. Michelle Cheng has more about why companies want (and don’t want) to compensate creators for training their machines.


“Ah yes, my assistant, ChatGPT, will get you sorted.”

Are you ready for what global audit firm PricewaterhouseCoopers (PwC) believes is the future of corporate advice? If you’re into AI-generated consulting, then you totally are: PwC this week partnered with OpenAI to make ChatGPT a consultant.


Erika McKeever, PwC UK’s manager of public affairs, told Quartz that the company’s staff will review all AI-generated consulting advice before it’s submitted to clients. But growing distrust of AI among people and companies could make PwC’s clients reluctant to pay for it. A recent study (pdf) by KPMG and the University of Queensland found that nearly two-thirds of people are skeptical of AI systems and applications. And a September survey by CNBC zeroes in on consulting in particular—50% of advisers said they couldn’t let AI advise them.

The numbers might be even worse when it comes to corporations deciding whether to pay millions of dollars for non-human consultation, especially in sensitive realms like tax and legal compliance advice. Faustine Ngila takes a look.


Quotable: AI as the answer to… just about everything

“We believe Artificial Intelligence is our alchemy, our Philosopher’s Stone – we are literally making sand think.”


—Venture capitalist Marc Andreessen, who this week dropped a sprawling 5,000-word “techno-optimist manifesto” in a blog post. In short, the essay lambasts efforts to regulate AI and the societal concerns the technology raises. And it names a number of ideas as the enemy: “sustainability,” “social responsibility,” and “tech ethics” being some choice ones. (The whole “sand” thing, it should be added, refers to sand as a primary raw material for the silicon behind most modern electronics, including the chips that power AI.)

The Internet had a field day with the essay, not surprisingly. “Typical boneheaded strawman setup that everyone but the fine minds of tech elite hate the future,” wrote veteran tech journalist Kara Swisher on X. “Most of us are both excited and wary—given the cost so far—about tech and this is how adults behave.” Meanwhile, the post received support from tech execs like Shopify CEO Tobi Lutke; Coinbase CEO Brian Armstrong called it a “breath of fresh air.”


Meanwhile, Andreessen Horowitz is the second-biggest investor in AI startups, behind Y Combinator. If there’s anyone who needs to believe in the success of AI startups, it’s the person investing in them.


Other great AI reads

🎤 Universal Music Group is suing Amazon-backed AI startup Anthropic

😬 Philippine military ordered to stop using artificial intelligence apps due to security risks


🏥 Health providers say AI chatbots could improve care. But research says some are perpetuating racism

🇨🇳 Can New York’s mayor speak Mandarin? No, but with AI he’s making robocalls in different languages


🎹 Live From the Uncanny Valley: How AI Tools Are Turning Words Into Music


Ask an AI

This week, Chinese tech company Baidu unveiled its Ernie 4.0 chatbot, which it said in no uncertain terms is just as good as OpenAI’s latest iteration of ChatGPT. That’s a big deal, given that China’s tech firms have been racing to produce generative AI on the level of GPT-4.


Bloomberg’s Zheping Huang put Ernie, as well as some other Chinese chatbots, to a different kind of test, asking them questions considered controversial in China—say, about Taiwan or Tiananmen Square. But Chinese apps, famously, must “adhere to the core values of socialism.” So when presented with problematic prompts, Ernie asked to talk about something else, Meituan’s Zhipu either didn’t answer or deleted its answer after giving it, and Tencent’s Minimax wouldn’t even allow the question to be entered.


Did you know we have two premium weekend emails too? One gives you analysis on the week’s news, and one provides the best reads from Quartz and elsewhere to get your week started right. You can get those by becoming a member—and take 20% off!


Our best wishes for a very human day. Send any news, comments, creator pay structures, and great firewalls to talk@qz.com. Today’s AI in Focus Daily Brief was brought to you by Michelle Cheng, Morgan Haefner, Gabriela Riccardi, and Susan Howson.