Incentives in economics are wrong, but how?
We aren’t starving for quantity, we’re drowning in garbage
Two weeks ago I had an exchange with Kevin Munger on economic imperialism. He’s now written some more on economics, and there’ve simultaneously been some other interesting contributions. He didn’t really speak to my arguments, but he did call me “incurious and self-satisfied”, and like all good insults, this encouraged me to think more so as to find a good retort.
So let me practice what I preach, act like an imperialist, and think about metascience from an economic point of view.
The research industry
Research is bought by the community and supplied by the social science industry. This recent debate, along with some others, is about how to organize and regulate this industry. Two sets of interests are at play here: the interests of the broader community in high quality social science; and the interests of the research industry itself. How should these be balanced? Which should dominate? And which will?
The first question has a simple answer. Trade-offs between the quality of research, and the welfare of social scientists, should always be resolved in favour of research quality.
This is because, compared to the world which benefits from our research, researchers are 1/N and N is large. We are a tiny group. The potential benefits of good social science include things like preventing malaria, or helping countries get rich. They matter vastly more than anything that happens to us. We are also a privileged group. A menial, struggling post-doc is a member of an elite segment of (typically) an extremely rich society. (You don’t feel that way? Of course you don’t!) Top professors, sure, are more privileged still.
OK, but what will happen? Should we expect the research industry to organize itself for the greatest benefit to humanity?
For an answer we can turn to George Stigler. Regulation, Stigler said, is purchased by the industry for its own benefit. So, when you hear economists talking amongst themselves about how to organize economics, think of Adam Smith’s famous quote: “People of the same trade seldom meet together, even for merriment and diversion, but the conversation ends in a conspiracy against the public….”
And this story checks out. On Twitter, you’ll find plenty of social scientists complaining about their brutally long hours. Funny: no one ever comes out with “Actually, I work from home in my pyjamas, teach one class a week, and take long summer holidays.” Even Kevin Munger, who is explicitly — and honestly — thinking about how to produce better research, sometimes falls into the trap of focusing on “fair division” among social scientists, as I pointed out.
In short, if there are “reforms” which benefit researchers but make research worse, we should expect them to be popular. And when social scientists complain about how tough it is in their discipline, we should start off skeptical.
But we can’t stop there. There might be reforms that benefit researchers and make research better. Are there?
Where we’re at
Today, there are hundreds of journals in economics, but the Top Five matter much more than the others. And they matter a lot: a Top Five might get you tenure, or a job offer. Incentives are large. Relatedly, standards are high. Journal articles have to pass multiple reviewers, and a lot of work goes into robustness checks and appendices. Also, because of the back-and-forth with reviewers and revisions, publication is slow. Last, the discipline is imperialist: if they like, researchers can write about crime, the family, religion or politics, rather than markets and firms.
So. Are standards too high? Are incentives too large? Is publication too slow? Are economists too imperialist?
Are standards too high?
In the abstract this might be a difficult question to answer, but today, it’s really not. Guys, a third of social science papers have zero citations. Let’s not kid ourselves: most of this work might as well never have been written; it affects nothing; nobody even cares enough to disagree. Meanwhile, two thirds of social psychology experiments fail to replicate. Does economics do better? Well, sure, yes, some experimental economists are quietly smug because 60% of their results replicated. Whoopee! Only 40% of what we produce is misleading, i.e. literally harmful!
We aren’t starving for quantity: we’re drowning in garbage. If we have higher standards than other social science disciplines, good, they should be higher still.
Are economists too imperialist?
Another easy one. This is a classic piece of industry self-interest. Of course we’d all like to have a local monopoly of our little specialization. Of course that would be mutually cosy. And it already happens far too much. There are areas where contradictory literatures proceed in parallel, never challenging each other, to an almost absurd degree. Take race and ethnicity: has any one of the myriad theories on this sprawling topic — across cognitive psychology, social psychology, sociology, economics, anthropology, politics, history and biology — ever been wiped out by any other?
Economists should be praised for their turf invasions, and just as in any market, complaints from the local competitors are signs they’re doing it right.
Are rewards too big?
This is harder.
First, economic theory tells us what to expect if we offer big incentives on measurable outputs. People work hard: good; but they transfer effort away from what’s unmeasurable: bad. Behavioural economics adds that material incentives can crowd out other, gentler motivations, like curiosity and honesty.
There’s a plausible case that this is happening. I’ve heard of European universities where academics get four-figure bonuses for a Top Five publication. Intuitively, that smells bad. And one sees a lot of… loserish behaviour. People writing to ask to be nominated for prizes. Respected elder statesmen boasting about their citation count, like a sales guy with the monthly figures.1 What Paul Romer calls “the norms of politics”. It may sound prissy, but thinking and acting like this corrupts a noble endeavour.
From this perspective, it is probably a step backward that submitted articles no longer get anonymized in many journals. The logic behind this was that reviewers would know, or look up, the author anyway. In effect this was giving in to corruption. There are few legitimate reasons to look up the author of an article you’re reviewing. And as Maciej Kotowski points out, big names inevitably impress the reviewer.
Second, big incentives mean inequality, and as Thomas Piketty would tell us, inequality raises issues of political power.
Right now, a few old men at the top run the journals that matter. The 1%, if you like. They share some of the bad features of the 1%, including some not very nice histories of sexual behaviour. And of course, they may steer the discipline too conservatively: the old “one funeral at a time” story.
The old men are now being fought. The risk is that if they lose, the winners will not be the academic Left Behind, but the Complacent Class. The top 20%. The academics who organize boycotts on Twitter and try to shout down speakers they don’t like. A plague on both their houses, but if forced to pick, I’d rather the old elite than these new guys.
I think there is a reasonable case that academia has focused too little on selection and too much on incentives. We should stop trying to pay for good work, and instead, try to hire people who like doing good work and damn the pay. This is in line with the traditional folk wisdom of Cambridge: “hire good people and let them get on with it”.
Are we too slow?
Hmm, again maybe. It’s certainly a big shock when you publish in a mainstream science journal and reviews come back within a week.2 And there’s a strong case that much social science is really journalism, masquerading as science by hiding behind overbroad generalizations. On the other hand, maybe the demand for time-sensitive research can be filled by actual data journalism.
There’s also room for the opposite point of view. The pressure to publish fast might mean we are missing the big slow passion projects that can hit gold. The kind of work Cavalli-Sforza did on geography and human genes, say. There’s certainly some evidence that the quality of work only becomes visible over time, with innovative papers slowly getting more citations (I lost the link). Maybe social science needs to be both faster and slower.
(Bonus question: should we be on Twitter more? Bonus answer: God no 🤮. If Twitter is the answer, rethink your question. Substack is the way!)
Hither, economics
The overall thesis here is that we need reform, but to make us less like other social sciences, not more. Or, other social sciences should be more like us. (If this sounds arrogant, consider that my BA was in soft social sciences, Foucault and all. I’m a refugee from those approaches!) Also, we should mistrust what the modal social scientist is calling for. It’s no more likely to produce good science than any other conspiracy of tradesmen.
To sum up, there should be fewer social scientists, producing less. There should maybe also be fewer economists, but it cuts both ways, because economists should make up a higher proportion of the total of social scientists. Lastly, to make the economists less greedy, they should be paid less. (What, you thought we should weaken incentives by levelling up? No, remember: we’re privileged already!) I eagerly await my invitation to give a keynote at the American Social Sciences Association on these points.
If you liked this piece, then I would love you to do three things:
Subscribe to this newsletter. It’s free and spam-free.
Share Wyclif’s Dust on social media, or forward it via email. By telling your friends and/or followers, you’ll be doing me a huge favour.
Read about the book I’m writing. It’s called Wyclif’s Dust, too. You can download a sample chapter.
But earlier didn’t I say…? Yeah, yeah. Look, a non-zero citation count is a necessary condition for good work, but not a sufficient one.
The quality of those reviews is another matter. I’ve had reviews from top science journals which were lazier and dumber than anything I’ve seen in economics. It varies.
And I would come to hear that keynote!