In 1949, my father—a twenty-three-year-old just starting a new job as a teacher in East Germany—was charged with being an American spy, presumably because he spoke English. He was thrown into solitary confinement in a Soviet prison where he languished for six years. None of his family or friends knew where he was. He simply disappeared.
After the collapse of the Berlin Wall, I requested to see what information the East German Ministry for State Security, better known as the Stasi, had collected about my dad. The Stasi were surveillance experts and I had no doubt that they had built up a thick file on him. When the letter came from the commission in charge of the Stasi’s files, I learned that everything that had been collected about my father had been destroyed. But tucked into the envelope was a photocopy of another Stasi file: my own. I was amazed. I was just a kid, studying physics. Yet, the security agents had started gathering information about me as early as 1979, when I was a teenager, and had last updated the file in 1987, the year after I had moved to the United States.
There are real, life-threatening risks to sharing personal information. Indeed, contemplating these risks is quite sobering and scary to me, because I know data were collected and used against my father—data that I cannot see, or judge for myself. So you might imagine that the added shock of discovering my Stasi file converted me into a zealot for privacy. Far from it. In fact, the Stasi’s files are nothing compared to what I voluntarily share about myself each and every day.
Since 2006, I have published on my website every lecture and speech I will give and every flight I will take, down to my specific seat assignment. I do this because I believe the real, tangible value we get from sharing data about ourselves outweighs the risks. Life is just more convenient when we share information about ourselves with data companies that “refine” our data—combining it with data from many others, analyzing the aggregate, and then generating recommendations and predictions. What matters is that we find ways to ensure that the interests of those who use our data are aligned with our own.
When it comes to borrowed money, privacy has never been the rule. In the past, the loan officer in a small town knew everyone’s business, George Bailey–style. Today, you’re likely to be asking a too-big-to-fail multinational to judge your creditworthiness, which may seem more anonymous and private. Even so, when you apply for a loan, you might not want the loan officer to check your detailed transaction history or your Facebook Timeline to find out what you’ve been up to.
Yet sharing many kinds of personal data with financial institutions may be beneficial, particularly for people who have very little credit history, for instance, when they are just out of school. Max Levchin co-founded PayPal and was its chief technology officer, and with his new company Affirm, he aspires to reinvent consumer credit much like PayPal reinvented online payments. He believes that social data can give more people access to credit. To do a better job of estimating credit risk for people with scant financial history, his company draws on far more than the five categories of information used to come up with a FICO score. Among the data analyzed are web browsing behavior, activity on Facebook and Twitter, frequency of mobile phone calls and text messages, and even the operating system of the applicant’s mobile phone. Affirm also looks to see if the applicant has been active in an online community such as GitHub, where software developers share their code and collaborate with others. Contributors to the site generally have an authenticated identity and reputation feedback on their work.
This is not the first use of social data by financial institutions. Several years ago, Allstate, which sells insurance to 10% of American households, hypothesized that its customers were more likely to file a false claim if people in their social network had previously filed false claims themselves. People with similar values—in this case, a proclivity toward filing false claims—are more likely to be friends than not. Allstate receives millions of claims each year, and not every claim can be investigated deeply. In the past, the company had to rely on crude proxies, such as whether the physical neighborhood in which a person lived had a high percentage of false claims. If Allstate could instead access data about a customer’s network of friends, these data would help staff flag new claims that needed extra scrutiny for possible fraud. Enter Facebook. (After the use of the social graph data became the subject of a withering profile in the Wall Street Journal, Facebook banned the company from scraping its site.)
The German start-up Friendsurance offers “peer-to-peer insurance.” To set up a Friendsurance plan, two (or more) people indicate they will contribute a specified amount—say, thirty euros—if the other reports that an insured piece of property has been lost or stolen. The customer pays lower premiums while keeping the same coverage, because each friend’s commitment helps cover the higher deductible required for the lower-premium policy. Asking friends to pay on a claim also reduces the number of filed claims. People are less likely to submit a fraudulent claim “against” a group of friends versus a distant corporate HQ, either because they don’t want their friends to know (say, if they’ve been reckless) or because they don’t want their friends to be saddled with the bill. In a way, customers vouch for the honesty of their claims to their friends—and their friends vouch with their wallets for the veracity of the claim when the insurer has to pay the amount due above and beyond the Friendsurance deductible. In essence, Friendsurance hands over some of the work of assessing a customer’s risk profile to his peers.
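To make the arithmetic concrete, here is a minimal sketch of how such a claim might be split. The function name and all amounts are illustrative assumptions, not Friendsurance’s actual terms: friends’ pledges cover the deductible, and the insurer pays whatever exceeds it.

```python
# Illustrative sketch of a peer-to-peer insurance settlement.
# All names and amounts are made-up assumptions, not Friendsurance's terms.

def settle_claim(claim, deductible, pledges):
    """Split a claim between friend pledges (covering the deductible)
    and the insurer (covering everything above the deductible)."""
    insurer_pays = max(0, claim - deductible)
    # Friends cover the deductible portion, up to the total they pledged.
    friends_pay = min(claim, deductible, sum(pledges))
    # Any remainder falls back on the customer.
    customer_pays = claim - insurer_pays - friends_pay
    return friends_pay, insurer_pays, customer_pays

# A 200-euro claim against a 90-euro deductible, three friends
# pledging 30 euros each: friends cover 90, the insurer covers 110.
print(settle_claim(200, 90, [30, 30, 30]))  # (90, 110, 0)
```

The customer thus keeps full coverage at a lower premium, because the friends’ pledges absorb the higher deductible.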
Of course, Facebook itself is exploring ways to monetize its social graph data. In 2010, the company acquired a patent from Friendster that suggests how social graph data can be used to filter content about other individuals.
In the patent’s words:
When an individual applies for a loan, the lender examines the credit ratings of members of the individual’s social network who are connected to the individual. If the average credit rating of these members is at least a minimum credit score, the lender continues to process the loan application. Otherwise, the loan application is rejected.
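The screening step the patent describes can be sketched in a few lines. This is a hypothetical reading, not code from the patent; the function name and the minimum-score threshold are my own illustrative choices.

```python
# Hypothetical sketch of the patent's screening step: average the
# credit ratings of the applicant's social network and compare the
# result against a lender-chosen minimum. Names and the threshold
# are illustrative assumptions, not taken from the patent.

def passes_social_screen(friend_scores, minimum_average=600):
    """Return True if the friends' average credit rating meets the
    lender's minimum, False otherwise (the application is rejected)."""
    if not friend_scores:
        return False  # no network data: reject, under one possible policy
    average = sum(friend_scores) / len(friend_scores)
    return average >= minimum_average

print(passes_social_screen([720, 680, 650]))  # strong network -> True
print(passes_social_screen([500, 480, 430]))  # weak network -> False
```

Under this logic, the lender continues processing the application only when the screen passes; otherwise it is rejected outright, regardless of the applicant’s own record.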
If your only real-world overlap with a Facebook friend is that you once worked for the same company, occasionally played basketball together in a pick-up league, or knew from your mother that he was a third cousin twice removed, should you be considered risky business because he appears on your friend list and is a sketchy character?
Until quite recently, most Chinese citizens had to turn to friends and family when they needed to raise extra cash. In 2015, e-commerce giant Alibaba’s affiliate Ant Financial rolled out a pilot service in China, called Sesame Credit, which analyzes individual transactions to summarize a person’s creditworthiness—somewhat like having your Amazon purchase history reviewed to determine whether you qualify for credit.
More than 650 million people use Alibaba’s e-commerce site each year, which gives Sesame Credit access to a wealth of transaction and communication data. For example, its payment system, Alipay, reportedly handled $14 billion in gross sales on November 11, 2015—11/11, or Singles Day, a “festival of shopping” for single adults popularized by Alibaba itself. Nearly 70% of those sales were conducted via smartphone, which means the geolocation data recorded by the Alipay mobile app can be used to identify where shoppers spent the day. When people shared a meal, the Alipay app offered a payment option to “go Dutch” on the bill, giving Alibaba real-world information not only about where people are eating but also with whom—information it can factor into Sesame Credit scores. The score was quickly adopted in other areas, including as an optional but popular profile field on a dating site in China.
Data from the sensors on our phones could be put to other novel uses. For instance, at least one bank has considered offering a customer service, code-named “no regrets,” that would analyze an individual’s past transactions and current context, as provided by a smartphone. You walk up to an ATM in Las Vegas at 4 a.m. and ask to withdraw a thousand bucks. Instead of spitting out the bills, the machine prompts: “Are you really sure you want to withdraw that much cash right now? People like you who confirmed ‘yes’ in a similar situation tended to regret it later.”
For example, you should be able to see an analysis of the data sources and how they affect your credit score, much like FICO tells you what percentage of its score is based on whether you pay your bills on time. How much weight is put on a semantic analysis of your tweets, which could reveal that you are worried about losing your job? Are your location data considered, to see where you spend your time—noting with approval that you clock long hours at the office, or with concern that you spend too many nights at your local bar? Is the refinery analyzing your social graph for indications that some of your friends are a credit risk, akin to how Allstate flagged claims for investigation in cases where some friends may have filed false claims themselves? If connections with certain people are making lenders turn down your loan application, you should have a right to know which ones they are.
Once you’ve reviewed the data, you can decide whether to change your behavior or change your data, by amending, blurring, and experimenting with them. You could amend data with additional context, as you might in order to explain a bad grade on your college transcript, or, in past decades, to explain a missed payment in an interview with the loan officer at your local Building & Loan. After seeing the effect of each person in your social graph on your application’s chances, you might decide to unfriend a person who is dragging down your social data credit score, much like a person might sever ties with a disreputable character in town.
This can get complicated if a data refinery considers the reputation of friends of friends of friends, the use case mentioned in Facebook’s patent. You might agree to let a financial institution review your Facebook social graph for your loan application, but only if you have the right to blur the resolution of your friends, allowing only direct contacts to be considered in the analysis. In some circumstances, your social network might make it easier to get a loan, because your friends are responsible debt payers or because, à la Friendsurance, they are willing to effectively “co-sign” a certain amount of your loan. Experimenting with the make-up and resolution of your social graph is the only way you’ll be able to get a good sense of how a potential lender may use your data.
Some data refineries will be more focused on developing products and services for individuals sharing their data; others, on developing products and services for companies and organizations that want to identify and understand those individuals and their purchasing propensities. Your power lies not in asking for more privacy but in choosing to use the data refineries that offer tools to increase transparency and agency for users—including tools that allow you to evaluate how much benefit you get in exchange for the data you share.
Excerpted from Data for the People: How to Make Our Post-Privacy Economy Work for You by Andreas Weigend. Copyright 2017. Available from Basic Books, an imprint of Perseus Books, a division of PBG Publishing, LLC, a subsidiary of Hachette Book Group, Inc.