Monday, December 05, 2005

Extraordinary claims require extraordinary scrutiny

Economist's Focus summarizes this recent paper (PDF) by two Federal Reserve economists refuting the controversial Freakonomics thesis: that legalized abortion in the US in the 1970s caused a dramatic fall in the crime rate two decades later.

"Extraordinary claims require extraordinary evidence" is a phrase popularized by Carl Sagan - in turn derived from Hume's examination of miracle claims. Now the original abortion-crime hypothesis is far from alleging a miracle. It is, however, extraordinary in that it implies that causal mechanisms of crime originate in circumstances prevailing at the time of birth. Moreover, it claims that the elimination of live births is skewed against this causal mechanism (that is, abortion does not neutrally eliminate future crooks and law-abiders in a 50:50 ratio).

Now how do you actually produce extraordinary evidence? Only by extraordinary scrutiny. And that is not the responsibility of the initial proponents alone. This highlights the social nature of science - even of a soft science like economics. Peer review is essential, and it doesn't end when a paper is accepted for publication. Publication means the paper's evidentiary claims are in the public domain. Hence the data can be re-examined by others, the method reapplied to another data set to check for robustness, and so on. Of course, such painstaking scrutiny consumes plenty of resources and cannot be applied to all research. By a very economics-sounding principle, it will be applied only to the most controversial claims.

Incredibly, Donohue and Levitt seem to have foundered on a couple of elementary gaffes, one of which is a computer coding error. Wow. It's damn easy to go wrong with code, especially if you're working alone - and the reason you're working alone is that other programmers find it hard to follow what you're doing, and therefore to spot your mistakes. But others will spend that time if - that's right - you end up with a controversial result.

Lesson for the publication-challenged (me): keep it conventional and complicated, but not more complicated than you have to. That way you don't provoke extraordinary scrutiny, you pose enough obstacles against spotting obvious mistakes, but you don't look too opaque to your reviewer.

Oh yeah: it also helps to get it right.

-------------------------------------------------------------------
UPDATE and ERRATUM:

The Freakonomics blog replies to the Foote and Goetz paper here. Steve Sailer has an interesting post, plus links, on this issue here.

When I stated a "50:50" ratio in my original post above, I was thinking of even odds of a person turning out crooked or law-abiding. If the odds are not even, then for abortion to be selective, the share of future crooks among total abortions (adjusted for normal cohort mortality up to the age of "crimehood") would have to exceed the proportion of actual crooks in the population.

(Sheesh, that was a mouthful, 50:50 was such a better soundbite.)
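That selectivity condition is simple enough to put in code. Here is a minimal sketch using entirely made-up numbers - none of these figures come from the actual abortion-crime literature:

```python
# Hypothetical illustration of the selectivity condition described above.
# All numbers are invented for illustration only.

def is_selective(offender_share_aborted, offender_share_population):
    """Abortion is 'selective' against future crooks only if the share of
    would-be crooks among aborted births exceeds the share of crooks in
    the population at large."""
    return offender_share_aborted > offender_share_population

# Suppose 5% of the population turns out crooked, but 8% of the
# (hypothetical) aborted cohort would have:
print(is_selective(0.08, 0.05))  # True: selective against future crooks
print(is_selective(0.05, 0.05))  # False: neutral elimination, no selection effect
```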

There, admitting to a mistake isn't so hard. I've got nothin' to lose on this blog!

4 comments:

Anonymous said...

With more and more journals posting the original datasets used for published articles, the cost of replication is also going down.

F

Roehlano said...

I have read hundreds of journal articles. I confess - not once have I bothered to look at the original data set, where available.

There are only two kinds of researchers who will examine those data sets:

a. those who will use the data/method for their own research

b. those who aim to publish a comment or note criticizing or confirming the research (in the latter case, by replicating the result elsewhere; this overlaps with a).

Steve Sailer said...

"Extraordinary claims require extraordinary scrutiny" -- excellent quote.

Another lesson is to test your complicated econometric analysis against simple reality checks. That's what I did back in 1999 when I debated Levitt on his abortion-cut-crime claim in Slate.com:

http://www.slate.com/id/33569/entry/33571/

I found that reality checks suggested that Levitt's claim was unlikely to be right at any significant scale. Levitt responded that his black-box analysis proved he was right. For six years, the conventional wisdom held that Levitt was right because, well, Occam's Butterknife says the guy with the most convoluted, hard-to-check evidence must be right.

But, now we know better.

You can read more about this Freakonomics Fiasco at

http://isteve.com/Freakonomics_Fiasco.htm

Roehlano said...

Thanks Steve for the links. Fascinating exchange. Have quite a lot to catch up on.