Today (Jan 25), the Wall Street Journal published an op-ed by Mark Zuckerberg. It’s called “The Facts About Facebook” (paywall).
Like any op-ed written by a corporate executive, it has its own version of “the facts,” and presents the best-case scenario for Facebook. I have rewritten it to present the worst-case scenario. You decide which is closer to the truth.
The (Other) Facts About Facebook
Facebook turns 15 next month. When I started Facebook, I wasn’t trying to build a ~~global company~~ hot-or-not app. I realized you could find almost anything on the internet—music, books, information—except the thing that matters most: ~~people~~ photos of hot chicks at your university. So I built a service people could use to ~~connect and learn about~~ spy on as well as envy each other. Over the years, billions have found this ~~useful~~ addictive, and we’ve built more services that people around the world love and use every day.
Recently I’ve heard many questions about our business model, so I want to explain the principles of how we operate.
I believe everyone should have a voice and ~~be able to~~ have no choice but to connect on Facebook. If we’re committed to ~~serving everyone~~ world domination, then we need a service that is affordable to everyone. The ~~best~~ only way to do that is to offer services for free, which ads enable us to do.
People consistently tell us that ~~if they’re going~~ since they have no choice but to see ads, they want them to ~~be relevant~~ not suck. That means we need to understand their interests. So based on what pages people like, what they click on, and other signals, we create categories—for example, people who ~~like pages about gardening and live in Spain~~ think whites are superior to other races—and then charge advertisers to show ads to that category. Although advertising to specific groups existed well before the internet, online advertising allows much ~~more precise~~ creepier targeting and therefore ~~more-relevant~~ more specific ads.
The internet also allows far greater transparency and control over what ads you see than TV, radio or print, though since those are not nearly as targeted, this is a moot point. On Facebook, you have some control over what information we use to show you ads, and if you can find the setting, you can block any advertiser from reaching you. You ~~can~~ may be able to find out why you’re seeing an ad and change your preferences to get ads you’re ~~interested in~~ not horrified by. And if you can figure them out, you can use our transparency tools to see every different ad an advertiser is showing to anyone else.
Still, some are concerned about the ~~complexity~~ opacity of this model. In an ordinary transaction, you ~~pay a company for a product or service they provide~~ aren’t the product. Here you get our services for free—and we work ~~separately~~ behind closed algorithmic doors with advertisers to show you relevant ads. This model ~~can feel~~ is opaque, and we’re all distrustful of systems we don’t understand, so don’t even try to understand it.
Sometimes this means even smart people who have researched the issue thoroughly assume we do things that we don’t say we do. For example, we don’t literally sell people’s data, even though it’s often reported that we do; we just sell access to it and insights based on it. In fact, selling people’s information to advertisers would ~~be counter to our business interests~~ threaten our monopoly, because it would reduce the unique value of our service to advertisers. We have a strong incentive to protect people’s information from being accessed by anyone else we haven’t made a data-sharing deal with.
~~Some worry~~ Economics 101 tells us that ads create a misalignment of interests between us and people who use our services. ~~I’m often asked if~~ We have an incentive to increase engagement on Facebook because that creates more advertising real estate, even if it’s not in people’s best interests.
We’re very focused on helping people share and connect more, because the purpose of our service is to ~~help people stay in touch with family, friends and communities~~ collect data that we can monetize. But from a business perspective, it’s important that ~~their time is well spent~~ they don’t hate themselves while using it, or they won’t use our services as much over the long term. Clickbait and other junk may drive engagement in the near term, but it would be foolish for us to show this intentionally, ~~because it’s not what people want~~ so we just say it’s unintentional.
Another question is whether we leave harmful or divisive content up because it drives engagement. We don’t, thanks to our definitions of “harmful” and “divisive,” which are probably different from yours. People consistently tell us ~~they don’t want to see this content~~ this content makes them depressed. Most advertisers don’t want their brands anywhere near it. The only reason bad content remains is because the people and artificial-intelligence systems we use to review it are not perfect—~~not because~~ per the Econ 101 principles stated above, we have an incentive to ignore it. Our systems are still evolving and improving.
Finally, there’s the ~~important question of whether~~ logical conclusion that the advertising model encourages companies like ours to use and store more information than we otherwise would.
There’s no question that we collect some information for ads—but a tiny amount of that information ~~is generally~~ could theoretically be important for security and operating our services as well. For example, companies often put code in their apps and websites so when a person checks out an item, they later send a reminder to complete the purchase. But this type of signal can also be important for detecting fraud or fake accounts, problems we have not been able to solve despite collecting way more data than this.
We give people ~~complete~~ some control over ~~whether~~ how we use this information for ads, but we don’t let them control how we use it for security or operating our services. And when we asked people for permission to use this information to improve their ads as part of our compliance with the European Union’s General Data Protection Regulation, the vast majority agreed because ~~they prefer more relevant ads~~ we made it difficult for them not to.
Ultimately, I believe the most ~~important~~ PR-friendly principles around data are transparency, choice and control. We need to be clear about the ways we’re ~~using~~ monetizing information, and people need to have ~~clear~~ much clearer choices about how their information is used and monetized. We believe regulation that codifies these principles across the internet ~~would~~ could be good for ~~everyone~~ us if we spend enough money on lobbying.
It’s important to get this right, because ~~there are clear benefits to~~ this business model has made us a lot of cash. Billions of people get a free service to ~~stay connected to those they care about~~ stalk their exes and to ~~express themselves~~ incite violence. And small businesses—which create most of the jobs and economic growth around the world—get access to tools ~~that help them thrive~~ they must use or face extinction. There are more than 90 million small businesses on Facebook, and they make ~~up a large part of our business~~ a lot of money for us. Most couldn’t afford to buy TV ads or billboards, but now they have access to tools that only big companies could use before. In a global survey, half the businesses on Facebook say they’ve hired more people since they joined, which is what you would expect, given that most of the other half have probably folded. ~~They’re~~ This particular subset of our survey is using our services to create millions of jobs, though the other subset may be losing jobs.
For us, technology has always been about ~~putting power in the hands of as many people as possible~~ getting really rich. If you believe in a world where everyone gets an opportunity to ~~use their voice and an equal chance to be heard~~ provide us data that we can monetize, where anyone can start a business from scratch, then it’s important to build technology that ~~serves everyone~~ subsumes every living person. That’s the world we’re building for every day, and our business model makes it possible.