
Birnbaum: On academic rigor in sabermetrics

From SABR member Phil Birnbaum at Sabermetric Research on April 10, 2012:

At the SABR Analytics Conference last month, a group of academics, led by Patrick Kilgo and Hillary Superak, presented some comments on the differences between academic sabermetric studies and "amateur" studies. The abstract and audio of their presentation are here (scroll down to "Friday"). Also, they have kindly allowed me to post their slides, which are in .pdf format here.

I'm not going to comment on the presentation much right now ... I'm just going to go off on one of the differences they spoke about, from page 11 of their slides:

-- Classical sabermetrics often uses all of the data -- a census.

-- [Academic sabermetrics] is built for drawing inferences on populations, based on the assumption of a random sample.

That difference hadn't occurred to me before. But, yeah, they're right. You don't often see an academic paper that doesn't include some kind of formal statistical test.
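To make the slide's distinction concrete, here is a minimal sketch in Python of the two mindsets, using made-up numbers (every name and value below is hypothetical): the classical approach treats the data as the entire population and simply computes the quantity of interest, while the academic approach treats the same data as a random sample and attaches a formal inference to the estimate.

```python
import random
import statistics

# Hypothetical data: runs scored in every game of a season (a census).
all_games = [random.gauss(4.5, 3.0) for _ in range(2430)]

# "Classical" sabermetric approach: the data IS the population,
# so the average is simply computed -- no inference needed.
true_mean = statistics.mean(all_games)
print(f"League average runs per game (census): {true_mean:.2f}")

# "Academic" approach: treat the data as a random sample and attach
# a formal confidence interval to the estimate.
sample = random.sample(all_games, 200)
sample_mean = statistics.mean(sample)
sample_se = statistics.stdev(sample) / (len(sample) ** 0.5)
print(f"Sample estimate: {sample_mean:.2f} +/- {1.96 * sample_se:.2f} (95% CI)")
```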

That's true even when better methods are available. I've written about this before, about how academics like to derive linear weights by regression, when, as it turns out, you can get much more accurate results from a method that uses only logic and simple arithmetic.
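Birnbaum doesn't restate the method here, but the standard "logic and simple arithmetic" alternative to regression in sabermetrics (and presumably what the linked posts describe) is the run-expectancy approach: an event's linear weight is the average change in run expectancy between the base-out state before and after it, plus any runs that score on the play. A minimal sketch, using a hypothetical run-expectancy table and made-up play-by-play events:

```python
from collections import defaultdict

# Hypothetical run-expectancy values for a few base-out states,
# keyed by (runners, outs); a real table covers all 24 states.
RUN_EXP = {
    ("empty", 0): 0.48, ("1st", 0): 0.85, ("2nd", 0): 1.10,
    ("empty", 1): 0.26, ("1st", 1): 0.51, ("2nd", 1): 0.66,
}

# Made-up play-by-play events:
# (event type, state before, state after, runs scored on the play).
events = [
    ("single", ("empty", 0), ("1st", 0), 0),
    ("single", ("2nd", 1), ("1st", 1), 1),
    ("out",    ("1st", 0), ("1st", 1), 0),
]

# An event's linear weight is its average change in run expectancy,
# plus any runs scored on the play -- arithmetic, not regression.
totals, counts = defaultdict(float), defaultdict(int)
for event, before, after, runs in events:
    totals[event] += RUN_EXP[after] - RUN_EXP[before] + runs
    counts[event] += 1

for event, total in totals.items():
    print(f"{event}: {total / counts[event]:+.2f} runs")
```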

So, why do they do this? The reason, I think, is that academics are operating under the wrong incentives.

Read the full article here: http://sabermetricresearch.blogspot.com/2012/04/academic-rigor.html

Related link: Listen to Hillary Superak's SABR Analytics Conference presentation here

For more stories and clips from the 2012 SABR Analytics Conference, visit SABR.org/analytics.

