SABR and Baseball Info Solutions are pleased to announce the research presentations for the sixth annual SABR Analytics Conference, which was held March 9-11, 2017, at the Hyatt Regency Phoenix in Phoenix, Arizona. See below for full abstracts and presenter bios. For more information, visit SABR.org/analytics.
3:15-4:15 p.m., Thursday, March 9
RP1 and RP2 will take place back-to-back in a single session.
RP1: Minor League Defensive Evaluation
Joe Rosales and Scott Spratt
- Audio: Click here to listen to Joe and Scott’s presentation (MP3; 28:30)
- Slides: Click here to view slides from Joe and Scott’s presentation (.pptx)
An abundance of data is available to analyze the defensive abilities of major league players, including hit locations, trajectories, hang times, pop times, and, more recently, starting positioning, reaction times, and routes covered. Analysts have utilized this data to develop defensive metrics, including Defensive Runs Saved and Ultimate Zone Rating, to enhance our understanding and appreciation of defensive performance. Fielding Bible Award voters have factored in defensive metrics since 2006, and even the Rawlings Gold Glove Awards explicitly include the SABR Defensive Index, which combines five individual defensive metrics.
However, similar relevant and accurate data has been scarce below the major league level. Minor league defensive analysis, such as Sean Smith’s Total Zone, has been largely limited to play-by-play logs and rudimentary hit locations.
In 2013, Baseball Info Solutions expanded its detailed data collection operation to the minor leagues, largely centered around defensive performance. The company began collecting detailed hit locations, batted ball times, descriptive information, and much more, allowing for the calculation of minor league Defensive Runs Saved numbers at the higher levels of the minors. Additionally, BIS collected Good Fielding Plays and Defensive Misplays, the Bill James-inspired system that tracks more than 80 categories of plays on a comprehensive scale, such as first baseman scoops and catcher blocks of pitches in the dirt.
For the first time, results from this research will be shared publicly, including the correlation between minor league numbers and major league numbers and developmental trends across various positions.
Joe Rosales is a Research Analyst for Baseball Info Solutions. He is a New England native and found his way to BIS after internships in baseball operations with the Boston Red Sox, Pittsburgh Pirates, and New York Mets. He is also a winner of the MIT Sloan Sports Analytics Conference Research Competition for the development of BIS’s Strike Zone Runs Saved pitch framing methodology.
Scott Spratt is a Research Analyst for Baseball Info Solutions. He writes for ESPN Insider and FanGraphs and co-hosts the Off the Charts Football Podcast with Aaron Schatz. He is a Sloan Sports Conference Research Paper Competition and FSWA award winner.
RP2: Individual Pitch Performance: Transitioning from Prospect Grades to StatCast: Scouting and Analytics Merge to Predict Pitcher Value
- Audio: Click here to listen to Jeff’s presentation (MP3; 23:32)
- Slides: Click here to view slides from Jeff’s presentation (.pptx)
Historically, scouts have graded pitchers on the 20-80 scale for their individual pitches, as well as their overall command and control. While some teams tweak the process, grading is generally uniform across the majors. The question is how to turn these individual grades into a structured system to predict major league performance.
To that end, I created pERA — or Pitch ERA — which assigns each pitch both an ERA and a 20-80 grade. The grade is currently based on the pitch's batted-ball profile and its ability to generate swings and misses. In addition to individual pitch grades, the pitcher's overall control is evaluated on the same scale. The pERA and control values are combined to give the pitcher an overall ERA. Given the framework's relative simplicity, additional changes can be implemented as needed.
With this framework in place, the actual and predicted grades can be compared to see how well each pitcher was evaluated. Additionally, people can get an idea of how adding or dropping a pitch will affect a pitcher's overall performance. Finally, PITCHf/x and TrackMan data can be utilized to determine which pitches have performed better historically. The information can then be fed back to scouts in hopes of improving future evaluations.
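To make the aggregation step concrete, here is a minimal sketch of how per-pitch ERAs and a control grade might be rolled up into an overall figure. The usage-weighted average, the control adjustment, and all numbers below are illustrative assumptions, not Zimmerman's published pERA method:

```python
# Hypothetical sketch of a pERA-style aggregation: each pitch carries an
# ERA estimate, and the overall figure is a usage-weighted average
# adjusted by a 20-80 control grade. The weighting and the size of the
# control adjustment are assumptions made for illustration only.

def overall_pera(pitches, control_grade, runs_per_control_point=0.02):
    """pitches: list of (pitch_era, usage_fraction); control on the 20-80 scale."""
    total_usage = sum(usage for _, usage in pitches)
    weighted_era = sum(era * usage for era, usage in pitches) / total_usage
    # Treat 50 as average control; better control shaves runs off the ERA.
    control_adjustment = (control_grade - 50) * runs_per_control_point
    return weighted_era - control_adjustment

# A pitcher with a plus fastball, average slider, and fringy changeup,
# with slightly above-average (55) control:
arsenal = [(3.20, 0.60), (4.00, 0.25), (4.80, 0.15)]
print(round(overall_pera(arsenal, control_grade=55), 2))
```

Dropping the changeup from `arsenal` and re-running the function gives a quick read on the "adding or dropping a pitch" question the abstract raises.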
Jeff Zimmerman currently writes for several sources including FanGraphs, The Hardball Times, Rotowire, Baseball America, and Baseball HQ. Twice he has been a nominee for the SABR Analytics Conference Research Award in Contemporary Baseball Analysis and he won it with Bill Petti in 2013. Additionally, his MASH series won the 2014 Award for Best Ongoing Fantasy Baseball Series. This past year, he won Tout Wars’s inaugural head-to-head league.
11:15 a.m.-12:15 p.m., Friday, March 10
RP3 and RP4 will take place back-to-back in a single session.
RP3: Effective Velocity: Understanding the Quality of Batted Ball Contact
- Audio: Click here to listen to Perry’s presentation (MP3; 34:46)
- Slides: Click here to view slides from Perry’s presentation (.pptx)
In this research presentation we will discuss the concept of Effective Velocity (EV), which was pioneered by Perry Husband, a former Division II College World Series MVP for the 1984 national championship Cal State Northridge team. Through the study of pitching sequences, the biomechanics of swings, the reactionary abilities of hitters, and the way the human mind reacts to visual stimuli, he was able to craft an integrated strategy that explains the quality of batted ball contact. By blending various scientific principles, he will discuss the foundations of his concept, including the ways in which pitch location and velocity interact, as well as the art of deception through EV tunneling. Hitters and pitchers who have employed Effective Velocity principles have enjoyed success, yet the concept still meets with resistance from hitting and pitching traditionalists.
Perry Husband was drafted by the Minnesota Twins and played in their minor league system after leading the Cal State Northridge Matadors to the 1984 Division II national championship. He went on to establish himself as a respected hitting coach at all levels, including the major-league level. Perry worked closely with Carlos Pena, the 2009 AL co-home run champion, in Pena's first year as a client. He has also written extensively on his patented concept, and his work is showcased on hittingisaguess.com.
RP4: How Hard are College Pitchers Worked?
- Audio: Click here to listen to Gerald’s presentation (MP3; 22:25)
- Slides: Click here to view slides from Gerald’s presentation (.pptx)
By reputation, college pitchers are worked heavily. Reports of pitchers zooming past 100 pitches and making short-rest starts are plentiful in the spring. While these stories hint at overuse, comprehensive examination is needed to pin down the extent of the issue. With the college game not being a traditional focus of public sabermetric research, this study is the first to take a deep analytical dive into whether NCAA pitchers are overused.
I examine college pitchers’ workloads through five different lenses: pitches per start, days of rest before and after starts, adherence to Pitch Smart guidelines, balance of usage across entire pitching staffs, and incidents of Tommy John surgery. From every perspective, it becomes clear that reckless pitcher management is a running theme in NCAA baseball—and it gets worse in tournament play.
Gerald Schifman is the lead researcher at Crain’s New York Business and a writer at The Hardball Times and FanGraphs. He previously worked in the New York Mets’ baseball operations department and in Major League Baseball’s publishing department.
3:45-4:45 p.m., Friday, March 10
RP5 and RP6 will take place back-to-back in a single session.
RP5: Statistics vs. Data Science in Baseball
- Audio: Click here to listen to Ben’s presentation (MP3; 34:35)
Statistics, with its origins in census demographics, has progressed as a formal field of study for over three centuries. The field has historically emphasized modeling relationships between variables based on limited samples of data, such as polling data and clinical trials, and using the resulting models to predict future outcomes. Without an abundance of data, much statistical work has been theoretical in nature, such as the development of a model that requires certain assumptions to be made.
More recently, the big data revolution has incited massive changes in the way that information can be collected, stored, manipulated, analyzed, modeled, visualized, and communicated. In fact, the terminology itself is rapidly changing, including the introduction of the term “data science” in the last decade or two. Spawned in part from the field of computer science, data scientists have very similar objectives to classical statisticians but approach the challenges from a very different perspective. Armed with massive databases and more powerful technology including new programming languages, computer processors, and machine learning algorithms, data scientists are able to tackle these challenges and often get superior results, with or without a solid statistical basis.
The baseball industry, like many others, faces the same crossroads. Baseball analysts are being challenged to evolve their approaches and codebases to adapt to a world of launch angles, spin rates, and hang times that demands faster consolidation of information for quicker, more accurate decisions ranging from draft picks to trades and free agents.
The discussion will cover the relationship between statistics and data science, with specific examples and illustrations from the world of baseball.
Ben Jedlovec is the President of Baseball Info Solutions, the leading baseball data and analytics company. With BIS CEO & Owner John Dewan, he co-authored The Fielding Bible—Volume III in Spring 2012 and Volume IV in March 2015.
RP6: The Intrinsic Value of a Pitch
- Audio: Click here to listen to Glenn’s presentation (MP3; 30:00)
- Slides: Click here to view slides from Glenn’s presentation (.pptx)
Traditional statistics that are used to measure and predict the performance of pitchers are affected by a number of confounding variables such as the defense, the ballpark, the umpire, and the catcher. Approaches for reducing the effects of these variables include selectively removing plate appearances, which restricts the scope of a statistic, or using inexact adjustments, which distort a statistic's computed value. The deployment of sensors that characterize the trajectory of pitches and batted balls in three dimensions provides the opportunity to assign an intrinsic value to a pitch that depends on its physical properties and not on its observed outcome. We exploit this opportunity by utilizing a Bayesian framework to learn a mapping from five-dimensional velocity, movement, and location vectors to pitch intrinsic values. A kernel method generates nonparametric estimates for the component probability density functions in Bayes' theorem, while cross-validation enables the model to adapt to the size and structure of the data. The wOBA cube representation is employed by the model to derive intrinsic quality-of-contact values for batted balls that are invariant to the defense, ballpark, and atmospheric conditions. The approach models the dependence of pitch intrinsic values on count and batter/pitcher handedness and provides a framework to study the impact of pitch distribution and sequencing. The methodology does not suffer the loss of information that is inherent in schemes that rely on pitch classification, and it is sufficiently general to support the use of additional variables such as spin rate.
We show that pitch intrinsic values have a significantly higher internal consistency than standard linear weights pitch values which enables more accurate predictive models. We develop a method that combines intrinsic values at the individual pitch level into a statistic that measures pitcher performance over a period of time. We use this statistic to show that pitchers who outperform their intrinsic values during a season tend to perform worse the following year. Since intrinsic values are based on physical measurements, the new statistics can be used to predict how a pitcher’s collection of pitches will translate from other levels (amateur, minors, foreign leagues) to the MLB environment.
By directly relating the physical properties of a pitch to expected performance, this approach also promises to improve our understanding of how pitcher skill varies with age.
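The core Bayes-plus-kernel idea in the abstract can be sketched in miniature. This toy version uses a one-dimensional "velocity" feature and fabricated data, where the actual model uses five-dimensional velocity/movement/location vectors, cross-validated bandwidths, and wOBA-valued outcomes; everything below is an illustrative assumption:

```python
import math

# Sketch of the method's shape: estimate p(features | outcome) for each
# outcome class with a Gaussian kernel density, then invert with Bayes'
# theorem to get p(outcome | features). The 1-D feature, the fixed
# bandwidth, and the toy samples are assumptions for illustration.

def kde(x, samples, bandwidth=1.0):
    """Nonparametric Gaussian kernel density estimate at x."""
    n = len(samples)
    return sum(
        math.exp(-0.5 * ((x - s) / bandwidth) ** 2) for s in samples
    ) / (n * bandwidth * math.sqrt(2 * math.pi))

def posterior(x, class_samples):
    """p(outcome | x): class-frequency priors times KDE likelihoods, normalized."""
    total = sum(len(s) for s in class_samples.values())
    joint = {
        outcome: (len(s) / total) * kde(x, s)
        for outcome, s in class_samples.items()
    }
    norm = sum(joint.values())
    return {outcome: p / norm for outcome, p in joint.items()}

# Fabricated sample in which whiffs cluster at higher velocities:
data = {"whiff": [94.0, 95.5, 96.2, 97.0], "contact": [89.0, 90.5, 91.0, 92.5]}
probs = posterior(95.0, data)
```

With outcome classes valued in runs (e.g., via the wOBA cube), the posterior-weighted average of outcome values would play the role of the pitch's intrinsic value.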
Glenn Healey is a professor of electrical engineering and computer science at the University of California, Irvine where he is director of the computer vision laboratory. He received the B.S.E. in Computer Engineering from the University of Michigan and the M.S. in computer science, the M.S. in mathematics, and the Ph.D. in computer science from Stanford University. Dr. Healey’s professional life is dedicated to combining physics, statistical signal processing, and machine learning methods for the development of algorithms that extract information from large sets of data.
9:45-10:45 a.m., Saturday, March 11
RP7 and RP8 will take place back-to-back in a single session.
RP7: Hitter Performance vs. Different Quality Levels of Pitching
- Audio: Click here to listen to Vince’s presentation (MP3; 32:56)
- Slides: Click here to view slides from Vince’s presentation (.pptx)
One thing we know about postseason baseball is that the quality of pitching is much better than what we see over the course of the regular season. In an effort to assess which hitters might be expected to perform better in the postseason, this research project delves into how specific hitters perform against different quality levels of pitching. It starts by quantifying the quality difference of postseason vs. regular season pitching. We then create a process to stratify pitchers by their regular season performance level and measure how individual hitters perform against each stratum of pitchers. Finally, we create hypotheses as to why certain hitters tend to perform disproportionately well against top pitching, while others feast on the bottom of the starting rotation and middle relievers.
Vince Gennaro is the President of SABR's Board of Directors and author of Diamond Dollars: The Economics of Winning in Baseball. He is a consultant to MLB teams, appears regularly on MLB Network, and directs Columbia University's sports management graduate program. He also hosts the syndicated MLB Network Radio program, Behind the Numbers: Baseball SABR Style, on SiriusXM on Sunday nights. He is also the architect of the Diamond Dollars Case Competition series, which brings together students and MLB team and league executives and serves as a unique learning experience, as well as a networking opportunity for aspiring sports executives.
RP8: Has the Modern Bullpen Killed Late-Inning Comebacks?
- Audio: Click here to listen to Rob’s presentation (MP3; 22:38)
- Slides: Click here to view slides from Rob’s presentation (.pptx)
The modern bullpen is generally traced back to 1988, when Oakland manager Tony La Russa guided the A’s from a .500 season to a 104-58 finish and the American League championship with a major contribution from Dennis Eckersley, who led the league with 45 saves, 21 of which lasted exactly one inning. Since then, one-inning saves have become the norm (30% of saves in 1988, 84% in 2016), and ninth-inning specialists have led to eighth-inning specialists and seventh-inning specialists.
The proliferation of relievers whose role is to throw no more than an inning (an average of 16.5 pitches in 2016) at a time, coupled with thinner benches affording fewer platoon advantages for hitters, has created a squeeze on late-inning offense. The spread between starter and reliever ERA, which was 0.27 in 1988 (3.81 for starters, 3.54 for relievers), widened to 0.41 in 2016 (4.34 for starters, 3.93 for relievers).
These factors have made scoring in the late innings of games more challenging, imperiling late-inning comebacks. From 2004 to 2015, the percentage of teams coming back from a deficit after six innings fell from 15.1% to 11.8%. The percentage coming back after seven innings declined from 9.9% to 7.4%, and those coming back after eight from 4.7% to 3.5%. All three figures from 2015 are postwar lows, even though, as Russell Carleton explained at Baseball Prospectus, a decline in scoring has created more potential (i.e., leads of three or fewer runs heading into the eighth and ninth innings) comeback opportunities.
Late-inning comebacks, however, rebounded in 2016, and the current levels, while lower than in 2004, are in line with long-term averages for comebacks after six, seven, and eight innings. Still, supporting the thesis that the modern bullpen has changed late-inning baseball, comebacks were largely uncorrelated with the run environment before 1988 but strongly correlated with it in the years since.
This implies that the adoption of the modern bullpen has enabled all teams, not just those with the very best relievers, to hold leads effectively in late innings. The difference in late-inning winning percentages for teams with the best bullpens and those with the worst has remained fairly constant in the postwar era, suggesting that any competitive advantage from superior bullpen management has been rapidly adopted by other teams, quickly negating the advantage.
Rob Mains is an author at Baseball Prospectus, where his “Flu-Like Symptoms” column runs twice weekly. He is also a contributor to Banished to the Pen. He is a SABR member, a Retrosheet volunteer, and a former Wall Street equities analyst.
2:45-3:45 p.m., Saturday, March 11
RP9 and RP10 will take place back-to-back in a single session.
RP9: Comparing Pythagorean Expected Wins and Alternative Models from Contest Theory: Estimation and Extension to the Prediction of Player On-Base Likelihood
- Audio: Click here to listen to Shane’s presentation (MP3; 24:10)
- Slides: Click here to view slides from Shane’s presentation (PDF)
The Pythagorean Expected Wins Model was developed by Bill James (1980) to estimate a baseball team's expected wins (as distinct from the team's actual wins) over the course of a season. As such, the model can be used to assess how lucky or unfortunate a team was over the course of a season (actual wins – expected wins). From a managerial perspective, such information is valuable in that it is important to understand how reproducible a given result may be in the next time period. In contest-theoretic (game-theoretic) parlance, James' original model represents a (restricted) Tullock contest success function (CSF). We transform, estimate, and compare James' original model and two alternative models from contest theory — the serial and difference-form CSFs — using MLB team win data (2003-2015). We conclude that alternative contest functional forms also have merit in wins estimation but that an optimized version of James' original model performs very well. We also utilize the Tullock CSF to estimate on-base probability for specific plate appearances. A high degree of variation in on-base success is explained within the framework of a (non-linear) Tullock CSF model, which outperforms a naive, linear model that simply averages batter success against same-handed pitchers and pitcher success against same-handed batters. This adapted model of on-base probability can be used to further inform pitcher substitutions and, independently, to inform batter substitutions.
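The connection the abstract draws can be written down directly: James' Pythagorean expectation is a Tullock contest success function p = x^r / (x^r + y^r) with x = runs scored, y = runs allowed, and the exponent restricted to r = 2. A minimal sketch (the example team's run totals are invented, and the optimized exponent is left as a free parameter rather than any figure from this presentation):

```python
# Bill James's Pythagorean expectation as a restricted Tullock contest
# success function: win% = RS^r / (RS^r + RA^r), with James's original
# model fixing r = 2. Freeing r is what "optimizing" the model means here.

def tullock_win_pct(runs_scored, runs_allowed, exponent=2.0):
    rs = runs_scored ** exponent
    ra = runs_allowed ** exponent
    return rs / (rs + ra)

# A hypothetical team scoring 800 and allowing 700 runs over 162 games:
expected_wins = 162 * tullock_win_pct(800, 700)
actual_wins = 95
luck = actual_wins - expected_wins  # positive: team outperformed its runs
```

The same functional form, with batter and pitcher on-base inputs in place of run totals, is what the abstract applies to individual plate appearances.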
Shane Sanders is an Associate Professor of Sport Analytics at Syracuse University. He has published academic articles in Journal of Sports Economics, Public Choice, European Journal of Political Economy, Southern Economic Journal, and other leading venues. He has also published work in the Wages of Wins Journal, The Hardball Times, Nylon Calculus, and at FoxSports.com. His work (joint with Dr. Yang-Ming Chang) on MLB revenue sharing was cited in an amicus curiae brief to the United States Supreme Court.
His co-author, Yang-Ming Chang, is a Professor of Economics at Kansas State University. Chang conducts research in applied microeconomics and sports economics. His work has been in leading journals of economics, such as the Journal of Political Economy, Economic Inquiry, Canadian Journal of Economics, and Journal of Sports Economics. His work has contributed to our understanding of MLB revenue sharing, point-shaving in NCAA basketball, and wins estimation in MLB.
RP10: Streaks and Extremes: Using Simulations to Find Patterns in Daily Performance
- Audio: Click here to listen to Matt’s presentation (MP3; 34:06)
- Slides: Click here to view slides from Matt’s presentation (PDF)
Most quantitative analyses of player performance focus on cumulative statistics, primarily in the context of partial or whole seasons and careers. Although new technologies and data are allowing us to better evaluate individual performances on a smaller scale, daily performance is still most commonly left to anecdotal analysis. Due to the necessarily small sample size of daily performance, this minimal statistical analysis is understandable. However, we can glean valuable insight on daily performance by viewing the collection of game-level data points as a distribution rather than a mean or sum.
This study utilizes a baseball play-by-play simulator to produce distributions of daily statistics for projected player performance, taking into account contextual factors such as opposing pitcher and bullpen, weather, umpire, and park. These distributions are then compared to actual results over the length of the 2016 season. Using this data, we can make simple observations about overall accuracy and the amount of variance explained by the simulation model. However, the more interesting results come from examining the shape of projected and actual distributions, particularly their tails, or the upper extremes of performance. Because the simulator is based on the assumption that player talent level is, for the most part, static or slow to change, the extremes of result distributions may lend insight to the existence of the “hot hand” effect. If hot and cold streaks exist, we would likely see differences between the shapes of result and projection distributions, as “streaky” players would have more performances in the tails rather than in the inner quartiles.
This presentation will explore the results of this distribution analysis, as well as briefly explore other uses of this research, including the nature of batter-pitcher matchups, player covariance, and other game-level factors that are lost in larger-sample analyses.
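The static-talent baseline the abstract describes can be illustrated with a simple Monte Carlo sketch. The talent level, at-bat count, and 3+ hit threshold below are assumptions for illustration, not parameters of the SaberSim model, which also conditions on opposing pitcher, park, and other context:

```python
import random

# Illustrative sketch of the tail-comparison idea: simulate daily hit
# totals for a hitter whose talent is fixed (no streakiness), then
# measure how much of the distribution falls in the upper tail. The
# fixed .300 talent, 4 at-bats per game, and threshold are assumptions.

random.seed(7)  # reproducible toy run

def simulate_daily_hits(true_avg, at_bats=4, games=10000):
    """Daily hit totals under a static (non-streaky) talent level."""
    return [
        sum(1 for _ in range(at_bats) if random.random() < true_avg)
        for _ in range(games)
    ]

def tail_mass(daily_hits, threshold):
    """Fraction of games at or above the threshold (e.g., 3+ hit games)."""
    return sum(1 for h in daily_hits if h >= threshold) / len(daily_hits)

sim = simulate_daily_hits(0.300)
baseline = tail_mass(sim, 3)  # static-talent rate of 3+ hit games
```

If hot and cold streaks were real, a player's observed rate of 3+ hit games would sit above this static-talent baseline, which is exactly the projected-vs-actual tail comparison the study performs.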
Matt Hunter is a Software Engineer living in Minneapolis, Minnesota. He is the founder and owner of SaberSim LLC, a sports projections and analytics company that utilizes simulators for all four major sports to create projections and tools for daily fantasy sports and betting analysis. He previously wrote for Beyond the Box Score, The Hardball Times, and FanGraphs.
For more information or to register for the 2017 SABR Analytics Conference, visit SABR.org/analytics.
Originally published: January 25, 2017. Last Updated: January 25, 2017.