Nearly 80% of 1,600 Quartz readers surveyed recently said they don’t trust Facebook with their personal data, a sobering finding as Facebook comes under increasing scrutiny for its handling of data privacy, ad targeting, and propaganda.
When asked “Which companies do you trust with your personal data and information?” and shown a list of 25 top brands, respondents selected Facebook least often of the five biggest US consumer tech companies; just 21% chose it. Facebook didn’t respond to a request for comment on the findings, which were roughly consistent with earlier consumer surveys indicating a lack of trust in the social networking company.
(Respondents could make unlimited selections from the list. The survey was conducted among Quartz readers in early July and represents a sample of the Quartz readership, with one third identifying as executives in the technology, finance, and marketing sectors and the balance spread evenly across all major industries.)
Respondents were 58% US-based, with the remaining 42% from around the world. Looking at just US-based respondents, the trust gap was even more pronounced.
To be sure, consumers still entrust Facebook with information about themselves every day. The number of Facebook monthly users—2.01 billion as of June—has surpassed the number of followers of Islam, and is closing in on that of the world’s most numerous religion, Christianity.
In the digital world, data—including data about users’ activities—is a key asset. Because machines learn from data, advances in artificial intelligence have increased the value of data assets, creating big incentives to capture and process ever-increasing amounts of data. And because the algorithms operated by companies such as Facebook and Google can learn relationships in real time, across billions of data points, those companies have an incomparable advantage in their pace of progress.
Users of Google and Facebook know they get the service for “free” in exchange for their personal data. But as concerns about the unchecked power of artificial intelligence grow, trust becomes a valuable asset. So the lack of trust in Facebook could prove a competitive problem for it over time. It could also embolden regulators and lawmakers to attack Facebook.
The European Union has been far more aggressive than the US in regulating the big tech companies, especially around data protection. New EU regulations taking effect next year introduce the idea of “purpose limitation” in how companies like Facebook can use people’s data. Describing the purpose of data collection in vague, unspecific language—such as “improving user experience,” “marketing,” or “research”—will be illegal. While there are no immediate signs of similar regulations being adopted in the US, regulators could take a harder line in enforcing existing laws, and politicians could propose new ways to force transparency and accountability on Facebook.
Public support for regulation of AI, along with relatively low trust in Facebook compared with its peers, could bolster regulators’ confidence in attacking Facebook politically and legally. We also surveyed Quartz readers on their views of artificial intelligence—a staggering 84% thought AI should be regulated, while only 3% opposed the idea.
For investors and for management, legal and regulatory constraints are a beast. Once a company crosses over from being a good guy to being a problem, it’s hard to get back behind the line. The lack of trust in Facebook and support for regulation among Quartz readers suggest Facebook’s challenges are mounting.