Home Run Derby Curse: Fact or Fiction?
This article was written by Marcus Jaiclin and Joseph McCollum
This article was published in the Fall 2010 Baseball Research Journal
A variety of sources have indicated the existence of a Home Run Derby curse. For example, Alex Rodriguez has been quoted as saying about the Derby, “I try to stay away from that” and “My responsibility is to the New York Yankees. I need my swing to be at its best.”1 The implication is that participation in the Derby would leave his swing somewhere other than at its best. In the Wall Street Journal, we read that “for each of the past four years, one player who has hit at least 10 home runs in the Derby has seen his power disappear once play resumed for the second half of the season.”2 We also see on mlb.com that the curse is real “at least since 1999” and that “43 out of 74 players saw a decrease in their production after the Derby.”3
The Hardball Times came out with the opposite perspective: “No matter how long a hitter lasts or how many home runs he hits, we still don’t see any signs of a second-half decline.”4 However, their list of caveats was nearly as long as the analysis itself, which might invite some skepticism.
We hope to put this subject to rest with some clear assumptions and a careful consideration from multiple perspectives.
WHAT IS A HOME RUN DERBY CURSE?
The goal of this analysis is to determine if the claim of a Home Run Derby curse is borne out by the performance of the players who participated. In order to do so, we need to determine how the idea of a Home Run Derby Curse should be interpreted statistically.
Interpretation 1
A player who participates in the Home Run Derby experiences a decrease in offensive and power hitting statistics in the second half, as compared to the first half of that season.
Interpretation 2
A player who participates in the Home Run Derby experiences a decrease in offensive and power hitting statistics in the second half, as compared to his usual production.
Behind the first interpretation is the assumption that a player would have continued to perform at the same level for the rest of the season had he not participated in the Derby. There is certainly some reason to believe this: the player is clearly capable of performing at this level, having done so for half of a season. However, if this level is substantially above his typical level, it may be unreasonable to expect it to continue for the full 162 games. So this interpretation tends to predict a second-half level of performance higher than one should reasonably expect.
Behind the second interpretation is the assumption that a player’s career statistics are more indicative of his likely performance than is his first half of the season. Players selected to participate in the Derby are those leading the league in power hitting in the first half of the season and so are, for all except the best of players, performing at a level above their average statistics. Similarly, participants in the Derby are often having an excellent season at the peak of their careers, so a somewhat higher level of performance is to be expected again in the second half of the season. So this interpretation tends to predict a second-half level of performance lower than one should reasonably expect.
Neither interpretation is perfect, so we will consider both comparisons, looking at the difference between the post-Derby statistics and pre-Derby statistics, and then at the post-Derby statistics and the players’ usual statistics. In addition, we will consider a third comparison, where we will build a comparable dataset to estimate the expected regression to the mean and to see if a similar effect can be found in a dataset where the actual participation in the Derby is not present as a variable.
ANALYSIS 1: ON ALL PARTICIPANTS IN THE HOME RUN DERBY
In our first pass through the data, for each player who ever participated in the Home Run Derby, we collected his statistics for all seasons, separating the seasons in which he participated in the Home Run Derby from those in which he did not. We restricted the data to seasons in which the player reached 502 plate appearances, in order to exclude the unusual averages that come with partial seasons. The total number of player-seasons in the data is 1,111 (up to and including the 2009 season), 192 of which were seasons where a player participated in the Derby. There were six player-seasons where a player participated in the Derby but did not reach 502 plate appearances, and those were excluded. That this number is so small gives some indication that participation in the Derby may not be linked to second-half injury, though there are too few such cases to support any formal statistical inference.
In order to simplify the analysis, we have focused on two offensive statistics: OPS and the percentage of plate appearances that were home runs. We found similar results with other statistics, so the analysis does not depend substantially on this choice. Note that these players averaged 10 to 15 fewer games played after the All-Star Break than before (due to its timing in the schedule more than their own playing time), so only statistics that are computed per at-bat, per plate appearance, or per game would make sense in these comparisons.
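For readers who want to reproduce these rate statistics, the sketch below shows one way OPS and HR percentage could be computed from half-season counting stats. It is a minimal illustration only: the column names (AB, H, 2B, 3B, HR, BB, HBP, SF, PA) are assumptions about how such a dataset might be laid out, not a description of the data we actually used.

```python
import pandas as pd

def add_rate_stats(half: pd.DataFrame) -> pd.DataFrame:
    """Add OPS and HR% columns to a table of half-season counting stats.

    Assumed columns (illustrative only): AB, H, 2B, 3B, HR, BB, HBP, SF, PA.
    """
    singles = half["H"] - half["2B"] - half["3B"] - half["HR"]
    total_bases = singles + 2 * half["2B"] + 3 * half["3B"] + 4 * half["HR"]
    obp = (half["H"] + half["BB"] + half["HBP"]) / (
        half["AB"] + half["BB"] + half["HBP"] + half["SF"]
    )
    slg = total_bases / half["AB"]
    out = half.copy()
    out["OPS"] = obp + slg                          # on-base plus slugging
    out["HR_pct"] = 100 * half["HR"] / half["PA"]   # home runs per plate appearance, in percent
    return out
```

Because the two halves differ in length, only rate statistics such as these are compared, never raw totals.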
Comparison 1.1: Pre-Derby to Post-Derby, for Seasons Where Player Did Participate
| Pre-Derby OPS | Pre-Derby HR % | Post-Derby OPS | Post-Derby HR % |
|---|---|---|---|
| .969 | 6.061 | .926 | 5.346 |
Clearly, there is a significant overall drop-off in production for those players who participated in the Derby; both decreases are statistically significant (OPS: P < 10⁻⁵; HR %: P < 10⁻⁶). This difference, essentially, is the origin of the idea of a Home Run Derby curse; however, some decrease should be expected, as noted above.
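The article reports only P-values, not the specific test; one natural choice for a pre/post comparison on the same player-seasons is a paired t-test on the per-season rates. The sketch below is a hedged illustration of that approach using SciPy; the arrays are placeholders for pre- and post-Derby values aligned by player-season, not our actual data.

```python
import numpy as np
from scipy import stats

def paired_drop_test(pre: np.ndarray, post: np.ndarray) -> tuple[float, float]:
    """Mean change and two-sided P-value for a paired pre/post comparison.

    `pre` and `post` must be aligned so that entry i of each array is the
    same player-season (e.g., pre-Derby and post-Derby OPS).
    """
    _, p_value = stats.ttest_rel(post, pre)  # paired t-test on the differences
    return float(np.mean(post - pre)), float(p_value)

# Example with made-up numbers, not the article's data:
# change, p = paired_drop_test(np.array([0.95, 1.01, 0.88]), np.array([0.90, 0.97, 0.91]))
```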
Comparison 1.2: First Half to Second Half, for Seasons Where Player Did Not Participate
| First-Half OPS | First-Half HR % | Second-Half OPS | Second-Half HR % |
|---|---|---|---|
| .851 | 4.201 | .858 | 4.288 |
Neither of these differences is statistically significant, so, in a typical season, these players perform at about the same level after the All-Star Break as they do before. However, these levels are substantially lower than the level they perform at in seasons where they are selected to the Derby, which is also to be expected, since they were not among the league leaders in power hitting when the Derby participants were invited.
Comparison 1.3: First Half to Second Half, All Seasons
| First-Half OPS | First-Half HR % | Second-Half OPS | Second-Half HR % |
|---|---|---|---|
| .871 | 4.522 | .870 | 4.471 |
Again, we see essentially no change; the overall levels are slightly higher since they include more of these players’ best seasons. However, the important comparison to make here is between the performance after the Home Run Derby and these two half-seasons: post-Derby performance is closer to pre-Derby performance than it is to these same players’ average performance:
Comparison 1.4: Post-Derby Participation versus Pre-Derby and versus Typical Season
|  | vs. Pre-Derby | vs. Avg. First Half | vs. Avg. Second Half |
|---|---|---|---|
| Change in OPS | -.043 | +.055 | +.056 |
| Change in HR % | -0.716 | +0.824 | +0.875 |
Note: Differences may not add up exactly because of rounding.
The differences in the second and third columns are clearly statistically significant (versus first-half OPS: P < 10⁻⁸ and HR %: P < 10⁻¹⁰; versus second-half OPS: P < 10⁻⁵ and HR %: P < 10⁻⁷; the first column is from comparison 1.1), so the players’ performance has declined somewhat from pre-Derby performance but is still superior to their own average performance. This leads us to believe that the “second-half slump” could simply be due to regression to their mean.
In order to test this, it makes sense to restrict the dataset to players whose performance in seasons where they participate in the Derby is closer to their average performance—that is, to restrict the dataset to the top players. If there is less of a drop-off among the top players, then regression to the mean is a likely explanation. So we considered the same comparisons with players who participated in the Derby more than once; players selected more than once have higher average statistics, and so we would expect a smaller drop-off if regression to the mean is the best explanation.
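As a sketch of how this restriction might be coded, the snippet below keeps only player-seasons belonging to players with more than one Derby appearance and recomputes the average pre/post change. The column names (player_id, derby_participant, pre_OPS, post_OPS) are illustrative assumptions, not the schema of our dataset.

```python
import pandas as pd

def repeat_participant_drop(seasons: pd.DataFrame) -> float:
    """Mean post-minus-pre OPS change, restricted to repeat Derby participants.

    Assumed columns (illustrative): player_id, derby_participant (bool),
    pre_OPS, post_OPS -- one row per qualifying player-season.
    """
    derby = seasons[seasons["derby_participant"]]
    # Count Derby seasons per player and keep players with more than one.
    n_derby = derby.groupby("player_id")["player_id"].transform("size")
    repeaters = derby[n_derby > 1]
    return float((repeaters["post_OPS"] - repeaters["pre_OPS"]).mean())
```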
ANALYSIS 2: ON REPEAT PARTICIPANTS IN THE HOME RUN DERBY
Here, we make the same comparisons using the same statistics on the smaller data set of players who participated in the Home Run Derby more than once.
Comparison 2.1: Pre-Derby to Post-Derby, for Seasons Where Player Did Participate
| Pre-Derby OPS | Pre-Derby HR % | Post-Derby OPS | Post-Derby HR % |
|---|---|---|---|
| .993 | 6.176 | .961 | 5.787 |
Again, these differences are statistically significant (OPS: P < .01; HR %: P < .03), but they are much smaller than in the full dataset. The post-Derby values are statistically significantly better than the values for all participants, and the pre-Derby OPS is nearly so (OPS, pre: .05 < P < .06; HR %, pre: .25 < P < .30; OPS, post: P < .001; HR %, post: P < .04), so the assumption that repeat participation picks out the better players is generally supported by these values.
Comparison 2.2: First Half to Second Half, for Seasons Where Player Did Not Participate
| First-Half OPS | First-Half HR % | Second-Half OPS | Second-Half HR % |
|---|---|---|---|
| .869 | 4.424 | .880 | 4.584 |
As in the first section, these differences are not statistically significant, so their performance is similar from first half to second half in seasons where they did not participate.
Comparison 2.3: First Half to Second Half, All Seasons
| First-Half OPS | First-Half HR % | Second-Half OPS | Second-Half HR % |
|---|---|---|---|
| .901 | 4.874 | .901 | 4.893 |
Again, these differences are not statistically significant; in fact, they are almost indistinguishable.
Comparison 2.4: Post-Derby Participation versus Pre-Derby and versus Typical Season
|  | vs. Pre-Derby | vs. Avg. First Half | vs. Avg. Second Half |
|---|---|---|---|
| Change in OPS | -.032 | +.060 | +.060 |
| Change in HR % | -0.389 | +0.914 | +0.894 |
Note: Differences may not add up exactly because of rounding.
Here we see, as anticipated, a smaller difference between the pre-Derby and post-Derby statistics than we saw in the full dataset, and a difference from these players’ typical-season statistics that is very similar to what we saw in the full dataset. All of these differences are statistically significant (versus first-half OPS: P < .001 and HR %: P < .0001; versus second-half OPS: P < .001 and HR %: P < .0001; the first column is from comparison 2.1).
In this dataset, however, one other statistical measure provided an additional insight: BABIP (batting average on balls in play). Most hitters (and most pitchers, measured against) have a long-term batting average of about .300 on balls put in play (that is, excluding strikeouts and home runs), and any significant deviation from this figure is most likely an indication of good or bad luck. In comparison 2.1, there was a notable decrease in BABIP (from .308 to .301) in this dataset, so almost half of the drop-off in OPS from the first half to the second half can be attributed to a decrease in luck at the plate (.014 of the .032; the .007 BABIP difference is doubled because a ball-in-play hit contributes to both OBP and SLG). If we remove the entire .014 from the OPS, the difference in OPS in comparison 2.1 is no longer statistically significant. The difference in HR percentage remains, however, since home runs do not contribute in any way to BABIP. None of the other datasets has a BABIP difference bigger than .003.
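The arithmetic behind this BABIP adjustment is compact enough to show directly. The sketch below uses the standard BABIP formula and the .308/.301 figures quoted above; the doubling of the BABIP change is the rough adjustment described in the text, since a ball-in-play hit raises both OBP and SLG.

```python
def babip(h: int, hr: int, ab: int, k: int, sf: int) -> float:
    """Batting average on balls in play: (H - HR) / (AB - K - HR + SF)."""
    return (h - hr) / (ab - k - hr + sf)

# Worked arithmetic from the paragraph above (comparison 2.1 values):
babip_drop = 0.308 - 0.301        # 0.007 decrease in BABIP for repeat participants
ops_from_luck = 2 * babip_drop    # counted twice, once in OBP and once in SLG: ~0.014
remaining_ops_drop = 0.032 - ops_from_luck  # ~0.018 of the OPS drop left to explain
```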
This second analysis provides some indication that the differences we saw in the first analysis are a simple regression to a mean: In players whose mean is closer to the league lead, we have noticeably less decline from the first half to the second half in years when they participated in the Home Run Derby.
ANALYSIS 3: ON COMPARABLE SECOND-HALF STATISTICS
A third approach would be to try to build a dataset that removes the variable of actually participating in the Home Run Derby. Participation in the Home Run Derby is based on first-half statistics, so we decided to start with seasons with comparable second-half statistics and then compare these to the first half of the same season. If a drop-off occurs from the second half of these seasons to the first half, it cannot be due to any kind of curse, since the effect comes before the supposed cause.
To build this dataset, we looked at each player who participated in a Derby. We replaced the seasons that the player participated in the Derby with comparable seasons as follows:
If a player was selected to participate in the Derby in a season which had his best first-half OPS and HR-percentage statistics, then we replaced that season with the season where he had his best second-half OPS and HR-percentage statistics. For example, Danny Tartabull was selected to the Derby in 1991, in a season where his OPS was 14.0 percent above the average of his OPS values in the first half of his nine full seasons in professional baseball through 1990, and his HR percentage was 50.2 percent above his average, which we added to get a total “first-half score” of 64.2 for this season. This was the best first half score of his career. However, his second-half statistics in 1991 were not as good: He was 11.5 percent above his second-half average in OPS but 11.4 percent below average in HR percentage for a second-half score of 0.1. In 1987, he had a second half that was 11.8 percent above average in OPS and 29.2 percent above in HR percentage, giving a second-half score of 41.0, which was the best second-half score of his career, so we replaced his 1991 season with his 1987 season in this new dataset.
Similarly, Joe Carter participated in 1991, 1992 and 1996, which had his best, fifth-best, and second-best first-half scores respectively, and so we replaced these with 1989, 1994, and 1986, which were his best, second-best, and fifth-best second-half scores. In this process, we did not consider any seasons before 1985, since that was the first season the Derby took place, and again we only considered seasons where the player had a minimum of 450 plate appearances. In some cases, the season selected was the same as the one where the player participated in the Derby, if the rankings were the same.
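One way to code the matching procedure just described is sketched below: score each half of each eligible season by how far the player sits above his own average in OPS and in HR percentage, rank his Derby seasons by first-half score among all his eligible seasons, and replace each with the season holding the same rank by second-half score. The column names and the handling of ties are assumptions for illustration; this is not our actual implementation.

```python
import pandas as pd

def half_score(df: pd.DataFrame, ops_col: str, hr_col: str) -> pd.Series:
    """Percent above the player's own average OPS plus percent above his
    average HR%, for one half-season (an illustrative reconstruction)."""
    def pct_above(col: str) -> pd.Series:
        player_avg = df.groupby("player_id")[col].transform("mean")
        return 100 * (df[col] / player_avg - 1)
    return pct_above(ops_col) + pct_above(hr_col)

def match_comparable_seasons(seasons: pd.DataFrame) -> dict:
    """Map each (player, Derby year) to a comparable year chosen by second-half score.

    Assumed columns (illustrative): player_id, year, derby (bool),
    OPS_1st, HRpct_1st, OPS_2nd, HRpct_2nd; rows are limited upstream to
    eligible seasons (1985 onward, plate-appearance minimum).
    """
    df = seasons.copy()
    df["score_1st"] = half_score(df, "OPS_1st", "HRpct_1st")
    df["score_2nd"] = half_score(df, "OPS_2nd", "HRpct_2nd")
    mapping = {}
    for pid, grp in df.groupby("player_id"):
        by_first = grp.sort_values("score_1st", ascending=False)["year"].tolist()
        by_second = grp.sort_values("score_2nd", ascending=False)["year"].tolist()
        for year in grp.loc[grp["derby"], "year"]:
            rank = by_first.index(year)             # e.g., Tartabull 1991 ranks first
            mapping[(pid, year)] = by_second[rank]  # replaced by the same-rank season, e.g., 1987
    return mapping
```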
This dataset should give us an effective estimate of the amount of regression to the mean we should expect after the Home Run Derby.
Comparison 3.1: Second Half to First Half, for Seasons Comparable to Those Where Player Did Participate, Using All Derby Participants
| Second-Half OPS | Second-Half HR % | First-Half OPS | First-Half HR % |
|---|---|---|---|
| .960 | 6.032 | .916 | 5.351 |
Comparison 3.2: Home Run Derby Curse versus Estimated Regression to the Mean
|  | Derby Curse | Estimated Regression to the Mean |
|---|---|---|
| Change in OPS | -.043 | -.043 |
| Change in HR % | -0.716 | -0.680 |
Note: Differences may not add up exactly because of rounding.
The differences here are essentially identical to those we saw in comparison 1.1; in fact, the averages in comparison 3.1 are almost identical to those in comparison 1.1. In other words, the drop-off attributed to the Home Run Derby curse is essentially the same as the drop-off we see when no Home Run Derby was played. This provides very strong evidence that the Home Run Derby curse is simply an expected statistical variation.
The comparisons to their typical season will also be essentially identical, since the values in comparison 3.1 are so close to those in comparison 1.1.
CONCLUSION
Home Run Derby curse, fact or fiction? We have no choice but to conclude that it’s fiction. If we consider all the ways that the statistics should behave if there is no curse, we find that they consistently match that model. Certainly, some players will have a decline in power-hitting statistics from the first half of the season to the second after participating in the Derby, but it is clear from the analysis that this would have occurred for those players regardless of whether they chose to participate or not.
JOSEPH McCOLLUM is assistant professor of quantitative business analysis in the School of Business at Siena College, where he teaches math primarily to future teachers. He has published in pure mathematics and random-walk theory.
MARCUS JAICLIN, assistant professor of mathematics at Westfield State University, is coauthoring a book on the application of statistics to measure performance in sports and wellness.
Notes
1 Mark Feinsand, “A-Rod to Skip HR Derby, Claims It Tampers with Swing,” New York Daily News, 30 June 2008.
2 Dave Cameron, “The Mysterious Curse of the Home Run Derby,” Wall Street Journal, 13 July 2009.
3 “The HR Derby Curse Is Real!” mlb.com, https://fantasy411.mlblogs.com/archives/2008/07/the_hr_derby_curse_is_real.html
4 Derek Carty, “Do Hitters Decline After the Home Run Derby?” Hardball Times, 13 July 2009.