Stop asking whether Piketty was right or wrong; economists will never all agree anyway

There’s big data in there.
Image: AP Photo/Mark Humphrey

You can’t blame the world for wondering if economists know what they’re doing. Last week came allegations that Thomas Piketty made an Excel error and performed questionable transformations on his data. This comes a year after the Reinhart/Rogoff controversy, where the duo also made an Excel error and used an unorthodox estimation technique. But once you understand economic data and the research process, the fact that these problems regularly crop up is neither surprising nor worrying.

Piketty was not the first to study wealth inequality, but he was the first to find a significant increase in it. That’s why his findings are so interesting and subject to so much criticism. I don’t have anything new to add on the particulars of Piketty’s data choices. But I do think it’s important to understand what other economists wrote about wealth inequality before Piketty’s book, a literature that has sadly been misrepresented.

So here’s how the economic sausage gets made—and how mistakes can make for a few poor links along the way.

1. Bugs happen

Computer bugs and Excel mishaps are inevitable because code is written by humans, and even brilliant economists aren’t perfect. But careful researchers catch important mistakes; they debug their work until the remaining bugs don’t change the result much when fixed. In both the Piketty and Reinhart/Rogoff cases, it seems the bugs that were exposed didn’t change the original results. That’s why, while most people assume a bug is a sign of unforgivable sloppiness, economists shrug, point out that the results didn’t change much, and consider that an adequate defense. The economists are right: the existence of a bug isn’t necessarily a big deal.

2. Data is messy and that makes it hard to answer even the most straightforward questions

It is not actually clear what has happened to wealth in America, Europe, or the UK. Income inequality has increased, but wealth is a different matter. Most studies haven’t found evidence of worsening wealth inequality. We don’t know for certain what happened, though, because there is no single good source of wealth data. People don’t declare their net worth on their tax returns. Researchers rely on surveys, which haven’t shown a big increase in the concentration of wealth among the 1% over time.

Surveys are helpful, but they aren’t a reliable way to measure the 1% because people (especially the super rich) often understate their wealth and the samples are small. Wealth is also reported when people die and their estates are taxed. That measure doesn’t show a change in inequality either, but the rich distort their estates to avoid taxes, while the poor and middle class have few large assets, if any, to leave behind.

3. Messy data forces economists to make some strong and questionable assumptions

Economists must make assumptions about the appropriate way to estimate an empirical relationship and how to construct their data. For example, a new study by Emmanuel Saez and Gabriel Zucman, like Piketty, found an increase in the concentration of wealth among the 1%. The paper is going to be very important, but it’s not the last word on wealth inequality, because it too relies on some controversial assumptions.

Zucman and Saez use tax data on capital income, the flow of income that wealth generates, not wealth itself. But you can derive the stock of wealth from its flow of income if you know the rate of return. Zucman and Saez assume everyone invested in financial markets earns the same rate of return. This strikes me as strange, because critics of wealth inequality often say markets are rigged in favor of the super wealthy. Can we assume they really earn only the normal market return? The efficient-markets side of me is inclined to agree, but there are institutional forces that might mean the rich do better (by law, only rich people can invest directly in hedge funds, and the rich may have access to information and assets the rest of us don’t through their jobs). My ambivalence about Zucman and Saez’s return assumption makes me hesitant to dismiss all previous studies (even if they had their issues too). It’s hard to judge the least dirty shirt on the floor.
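The capitalization logic above is simple to sketch: wealth equals capital income divided by the assumed rate of return. Here is a minimal illustration with made-up numbers (not Saez and Zucman’s actual data or rates) showing how sensitive the implied wealth stock is to the return assumption:

```python
def capitalize(capital_income: float, rate_of_return: float) -> float:
    """Implied wealth stock from an income flow: wealth = income / return."""
    return capital_income / rate_of_return

# A hypothetical household reporting $50,000 of annual capital income:
income = 50_000.0

# Under a uniform 5% return assumption, the implied wealth is $1,000,000:
wealth_uniform = capitalize(income, 0.05)

# But if the truly rich actually earn, say, 8% (hedge-fund access,
# better information), the same income flow implies only $625,000:
wealth_higher_return = capitalize(income, 0.08)

print(wealth_uniform, wealth_higher_return)
```

The point of the sketch: if the rich systematically earn higher returns than the uniform rate assumed, capitalizing their income at the average rate overstates their wealth, which is exactly why the return assumption is contested.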

Peer review may be flawed, but fellow researchers understand the pitfalls of working with difficult data. That’s why, when a new paper upends an existing literature, it is met with both excitement and appropriate skepticism. What follows is a spirited debate about the merits of the assumptions driving the different results. The debate goes on for years as new studies appear, using data that is just as flawed and relying on their own controversial assumptions. In the end, one of three things happens: the field reaches a consensus that the new research is right and the original was wrong; scholars decide the original research was once right but that things have changed; or, most likely, there’s no agreement, and competing schools of thought form around personal judgments about whose assumptions are worse.

It’s not pretty, but data is imperfect and subject to interpretation, and the economy constantly evolves. That’s the best researchers in any field can do. Look hard enough at any study and you can always find something to disagree with, and assumptions that didn’t prove correct, even in the hard sciences. For good or bad, Piketty wrote the right book at the right time, which meant both undue praise and unfair criticism.

But this is where the interaction with the world outside academia gets messy. People respond to quick and clear answers: they want to know, yes or no, are we in a new gilded age? Few are interested in an esoteric debate about whether rich people regularly beat the market. It’s seductive to give simple answers, because nothing is more annoying than an academic who doesn’t take a firm view and loads every statement with caveats. Having a strong voice with easy answers influences policy and how people view economics, but it also exposes public scholars to public flogging when the common hazards of empirical work are exposed.