For fans of good teams, like the current Atlanta Braves, the offseason is most often an exercise in roster construction considerations. Since no actual baseball can be played, the action takes place off the field, as each potential contender attempts to best arm itself for the coming campaign. That’s certainly been the case for the Braves, who made a flurry of moves early, and appear to be in a protracted negotiation with Josh Donaldson to fill a vacancy at the hot corner (at the time this paragraph was written, anyway).
Yet, I find myself preoccupied with different concerns: we saw what 2015-2019 baseball looked like relative to the prior two decades or so, and even what 2019 baseball looked like relative to 2018, and it all just leads me to ponder, “What’s next?” I remain unconvinced that the juiced ball was a deliberate ploy by the powers-that-(ML)be, but this post isn’t about that. Rather, it’s about what we, as baseball fans, want from our game, and where it’s going relative to those desires. Of course, there’s no actual collective baseball fandom. Different people have different preferences, and some of those preferences are going to be completely at odds with one another. Someone who loves pitching duels is not going to enjoy a run environment where games commonly end up as 8-6 contests. As such, the question posed by this post’s title is a personal one, and there’s no right or wrong answer.
The phrase “run environment” can simply mean “how many runs score in a game of major league baseball,” but how those runs score (or don’t) matters too. To that end, while I use “run environment” as a shorthand descriptor, what I’m really talking about are the overall contours of a baseball game. In talking with a variety of folks about what governs their enjoyment of one “type” (or “run environment”) of baseball over another, I came up with this curated list:
- Runs per game — this is fairly obvious.
- Strikeout rate — some people like strikeouts as the ultimate pitching outcome; others dislike them in copious quantities because they stifle defensive action.
- Home run rate — this is also fairly obvious, and probably wouldn’t be a huge consideration if not for the changes to the baseball occurring over the last half-decade. Without this, though, I doubt anyone thinks all that hard about run environment preferences.
- Rate of non-three true outcomes results — in other words, the rate of plate appearances that end with a ball in play. Similar to the above, but some people actually want to see “action” on the field rather than being focused on reducing homers, strikeouts, or walks in particular.
- Innings pitched per start — to many of baseball’s detractors, including what appears to be MLB itself, relievers and their usage are a blight on the game, and the halcyon days of beefy dudes going many, many innings was preferable.
- Fielding percentage — oh yes, the dreaded fielding percentage. Really, this is mostly a proxy measure. The problem with Total Zone and any other historical defensive measure is that it’s benchmarked to league average, and as such isn’t going to vary era-by-era, since the figures are adjusted to be average for that era or year. And while “defensive efficiency” (i.e., 1.000 minus league BABIP) might be a better descriptor, what it excludes is “people being doofuses on defense.” I personally never considered (and haven’t really ever thought about) fielding percentage as a way of expressing a preference for baseball of a given sort, but others I queried definitely indicated that bumbling defenders take away from their enjoyment of the game, so here we are.
- Slugging percentage — somewhat, but not entirely, redundant with homer rate. Tied into the run environment for sure, but it tells you something somewhat different. Note that the presence of slugging percentage here, but not OBP, wOBA, or similar, suggests that no one particularly likes walks.
- Stolen bases per game — self-explanatory, and used to capture an aspect of gameplay that none of the other components above mention.
- Pitches per plate appearance — perhaps the truest “pace of play” factor, and one that you almost never hear anyone talk about.
Do these nine factors, in some combination, cover your preferences for the contours of a baseball game? I have no idea. I don’t think they really even cover mine. But they were the ones that generally came up in discussions.
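For concreteness, every one of these factors reduces to simple arithmetic over a season’s counting stats. Here’s a minimal Python sketch of how each might be computed; the season totals below are made-up placeholders (not real league data), and the field names are my own:

```python
# Compute the nine "run environment" factors from season-level counting
# stats. All totals below are illustrative placeholders, NOT real MLB data.
season = {
    "games": 2430,        # league-wide games (each involves two teams)
    "pa": 186000, "ab": 166000, "runs": 23000,
    "hr": 6700, "so": 42800, "bb": 15900, "sb": 2300,
    "singles": 26000, "doubles": 8500, "triples": 800,
    "putouts": 130000, "assists": 50000, "errors": 2900,
    "starter_ip": 26000, "starts": 4860, "pitches": 730000,
}

def run_environment_factors(s):
    """Return the nine factors discussed above, as a dict of rates."""
    team_games = 2 * s["games"]                     # per-team game count
    tto = s["hr"] + s["so"] + s["bb"]               # three true outcomes
    total_bases = (s["singles"] + 2 * s["doubles"]
                   + 3 * s["triples"] + 4 * s["hr"])
    chances = s["putouts"] + s["assists"] + s["errors"]
    return {
        "runs_per_game": s["runs"] / team_games,    # per team, per game
        "so_rate": s["so"] / s["pa"],
        "hr_rate": s["hr"] / s["pa"],
        "non_tto_rate": 1 - tto / s["pa"],          # roughly, balls in play
        "ip_per_start": s["starter_ip"] / s["starts"],
        "fielding_pct": (chances - s["errors"]) / chances,
        "slg": total_bases / s["ab"],
        "sb_per_game": s["sb"] / team_games,
        "pitches_per_pa": s["pitches"] / s["pa"],
    }
```

(The non-TTO rate here ignores HBP, sacrifices, and the like for simplicity; a stricter ball-in-play definition would subtract those too.)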
Current MLB baseball, in the end, is a funny thing. As some are keen to remind me every so often, it is in fact not played on a spreadsheet, but with actual bats, balls, human sinews and tendons, and so on and so forth. Tweaking it, then, should be considerably harder than punching in a few keystrokes, the way one would patch a videogame. And yet, not only has MLB baseball changed far more dramatically than just some patches over the years, but MLB appears to be trying to alter it further still, shaping it more towards some kind of ideal that I’m not sure I agree with and definitely don’t understand. Look no further than the three-batter rule for relievers, the futzing with Injury List timers, and all sorts of other things that may or may not be in the pipeline. The ambiguous source of the juiced baseballs aside, it’s clear that MLB is trying to do something to the run environment, and therefore, that they’re not totally satisfied with the way things are right now. The question, which, again, is a personal one, is whether they’re even moving in the right direction.
Back to the whole “baseball’s dramatic changes” thing, though. Below are a series of charts, tracing each of the nine items identified above from 1911 through the present (except for pitches per PA, for which data is only available from 1988 onward). The vertical chart axes have been designed to tread a middle ground between “meaningless distinctions causing wild fluctuations” and “opaque scales such that you can’t see any differences.”
When we look just at runs per game, there isn’t much of an overall trend. Yes, runs per game have been higher in the second half of the decade than in the first half, but 2019’s 4.8 runs per game (per team) matched the 2007 figure, and was lower than the 2006 figure.
On the flip side, the strikeout rate just keeps climbing, and doing so at an accelerating rate. 23 percent of all plate appearances in 2019 ended in a strikeout; four years prior, it was 20.4 percent. It previously took seven years for the strikeout rate to rise 2.6 percentage points; before that, it took 16 years. The hump in the 1960s pertains to the expanded strike zone introduced in 1963, culminating in the Year of the Pitcher in 1968 and the resulting lowering of the pitching mound (and a reversion to the pre-1963 strike zone). In more modern history, the strikeout rate hasn’t fallen below its mid-1960s levels since 1994.
This is a very dramatic chart. While home run rates don’t appear to have increased inexorably across baseball’s history, the very recent past kind of makes it look that way. The percentage of plate appearances ending in a homer went from 2.3 percent in 2014 to 3.6 percent in 2019. Subtract that same 1.3-percentage-point jump from the 2014 value instead, and you’d have to go all the way back to 1943 to find a homer rate that low. (One curiosity is 1987, the year of the “Rabbit Ball,” relative to the low homer rates both before and after.)
Also dramatic, and more monotonic in its decline than, say, homer rates. The pace has accelerated here too: the last five years have seen a decline in ball-in-play rates comparable to the overall decline experienced over the prior 22 years. It’s interesting to me that the decline slowed after the late-1960s rules changes, and balls in play perked back up, only to start declining again around 15 years later anyway.
I don’t really want to repeat myself, so you get the idea. Look at that sharp diagonal downward over the last few years. You can almost trace a gentle declining line between 1911 and 2009 or so; there’s a last gasp of stability in the early part of the 2010s (2010 itself also comes up under the Wikipedia entry for “Year of the Pitcher,” along with 1968); and then boom, the harsh plummeting.
With errors being so subjective, I’m not sure whether this tells the story I’d like it to, one where teams eventually get wiser and wiser about playing non-doofus defenders, or an alternative one where errors just become less and less en vogue as scoring decisions. Maybe a mix of both. If we set aside changes in scoring, though, it looks like defense has gotten “cleaner” over time. (There are some interesting nexuses to explore here between reductions in balls in play versus increased defensive aptitude, for sure.)
This one more or less looks like the runs per game chart. They’re kind of redundant, by which I mean, very redundant. No, really.
Given this, I suppose the real “issue” for anyone who doesn’t like the 2019 style of baseball is not the literal run environment but the homer frequency. There’s nothing that weird about overall slugging or overall run scoring in recent years relative to past years.
Raise your hand if you knew that stolen bases were even less popular for four decades of baseball’s history than they are now. I certainly didn’t. For anyone who wants to see more 1980s-style small ball, let me tell you: it was that era of baseball that was weird, not today’s.
If MLB really, really, really wants to speed up games... maybe try doing something about this? It may not seem like a lot, but this increase alone from 1988 through 2019 is enough to add about five minutes of nine-inning game time to a perfect game on both sides. Then again, I don’t think MLB really, really, really wants to speed up games; I think it’s just an easy thing to say to placate antsy question-posers. (Because if they did, there’d at least be a pitch clock already, instead of the most recent such proposal being postponed to at least 2022.)
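The arithmetic behind that five-minute figure is simple enough to sketch. The size of the pitches-per-PA increase and the time per pitch below are round-number assumptions on my part, not figures drawn from actual pitch data:

```python
# Back-of-the-envelope: extra game time from more pitches per plate
# appearance. Both inputs are ASSUMED round numbers, not measured values.
extra_pitches_per_pa = 0.35   # assumed rise in pitches/PA, 1988 to 2019
seconds_per_pitch = 16        # assumed average time consumed per pitch

min_pa = 27 * 2               # a perfect game on both sides: 54 batters
extra_seconds = min_pa * extra_pitches_per_pa * seconds_per_pitch
print(f"{extra_seconds / 60:.1f} extra minutes")  # on the order of 5 minutes
```

Any real game has more than 54 plate appearances, so the actual added time would be larger still.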
Some of the dramatic changes shown above probably provide some context for why MLB seems fairly keen on trying to “patch” baseball, and why certain old-timer observers seem particularly dyspeptic, griping and grouching about today’s game. (Hi, Untitled Goose Gossage Humans.)
In addition to pondering what 2020 and beyond hold in store for us with respect to run environment, homer rate, and all the other stuff, I also think about anchor bias a lot. For my money, I don’t think compound interest is the most powerful force in the universe (it also appears that Albert Einstein didn’t say it was, so that’s a particularly unfun misattribution, since now I don’t know to whom to attribute that sentiment). No, I’ll go with anchor bias for that particular category. If you don’t know what I’m talking about, in short, it’s this: the first “thing” encountered establishes a baseline against which everything else in the same framework gets compared. In neuroeconomics, this reveals itself in a series of cognitive biases, like, oh hell, let’s just go to Wikipedia:
> In one of their first studies, participants were asked to compute, within 5 seconds, the product of the numbers one through eight, either as 1x2x3x4x5x6x7x8 or reversed as 8x7x6x5x4x3x2x1. Because participants did not have enough time to calculate the full answer, they had to make an estimate after their first few multiplications. When these first multiplications gave a small answer – because the sequence started with small numbers – the median estimate was 512; when the sequence started with the larger numbers, the median estimate was 2,250. (The correct answer is 40,320.) In another study by Tversky and Kahneman, participants observed a roulette wheel that was predetermined to stop on either 10 or 65. Participants were then asked to guess the percentage of the United Nations that were African nations. Participants whose wheel stopped on 10 guessed lower values (25% on average) than participants whose wheel stopped at 65 (45% on average). The pattern has held in other experiments for a wide variety of different subjects of estimation.
In areas of subjective preference, like what a baseball game should play like, on the other hand, I think of anchor bias as more of a “I liked the first thing I saw and not the other ones.” Part of the reason I think about anchor bias a lot is because I “suffer” from it (though if you ask me, it’s not “suffering” so much as I’m right, damnit, and some stuff is worse than other stuff). Whereas most of you, born and/or raised in the United States, are most familiar with the Wizard of Oz as written by L. Frank Baum and adapted in subsequent movies, stage plays, etc. etc., I was raised on the “adaptation” (read: kinda like copyright infringement fanfiction, from the 1930s) of Baum’s work by a Russian dude with the fit-for-a-Bond-villain name of Alexander Volkov, who created his own spinoff universe. I bring this up only to say that American (that is to say, original flavor, Word of God canon, actual) Wizard of Oz is weird as hell, and I can’t encounter it without getting an icky off-brand feeling akin to when you see knockoff merchandise where the bull on the Chicago Bulls logo is blue or something. (Or, you know, for a less-obscure example, the Game of Thrones TV show is/was/was always garbage, because I’m anchor biased to the books. Not sorry.)
I don’t know if the Goose Gossages and Commentator John Smoltzes of the world are “suffering” from this same phenomenon, or whether they’re just really cranky. But it’s at least worth considering. Below are two big tables. The first just has z-scores of each of these nine metrics over time. (A z-score describes how many standard deviations from the mean a given data point sits; for example, the 2019 strikeout-rate z-score of 2.5 means that the strikeout rate in 2019 was 2.5 standard deviations above the “average” strikeout rate across baseball history.) The second attempts to do something much simpler: attach a “foreign” score to each year relative to baseball in 2019, based on a comparison of each metric in a given year to 2019. My hypothesis, which I am in no way going to rigorously test as it’s meant half in jest, is that the year in which you first started following baseball seriously enough to notice relative rates of things like homers and strikeouts is your anchor year, and your satisfaction (or lack thereof) with the aesthetics of a 2019 baseball game depends on how similar those aesthetics are to the way they were in your anchor year. In short, the higher the score, the more “foreign” the aesthetics.
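Mechanically, the two tables could be built something like the sketch below. The sample series is a placeholder (one metric, a handful of years, values invented for illustration), and the “foreign” score construction here — average absolute z-distance from 2019 — is one plausible reading of the approach, not necessarily the exact method behind the tables:

```python
import statistics

# Toy history: one metric (strikeout rate) by year. Values are illustrative
# placeholders, not the real historical series.
so_rate = {1968: 0.159, 1988: 0.147, 2010: 0.186, 2019: 0.230}

def z_scores(series):
    """Standard deviations above/below the all-years mean, per year."""
    mean = statistics.mean(series.values())
    sd = statistics.stdev(series.values())
    return {yr: (v - mean) / sd for yr, v in series.items()}

def foreign_score(metrics_by_name, ref_year=2019):
    """Average absolute z-distance from the reference year, per year —
    one plausible construction of the post's 'foreign' score."""
    zs = {name: z_scores(series) for name, series in metrics_by_name.items()}
    years = next(iter(metrics_by_name.values())).keys()
    return {
        yr: statistics.mean(abs(zs[m][yr] - zs[m][ref_year]) for m in zs)
        for yr in years
    }

scores = foreign_score({"so_rate": so_rate})
# 2019 scores 0.0 against itself; lower-strikeout years score higher.
```

With all nine metrics supplied, each year gets a single number summarizing how alien 2019 baseball would look to a fan anchored to that year.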
As for myself, I think I’m probably anchor biased to 2010. While I started watching baseball in 2001, I probably didn’t pay too much attention to everything that really mattered within the game until about a decade later. The 2010 run environment, which was really suppressed (though not as much as, say, 2014), feels less “icky” to me than one where homers are easy to come by. But that’s just the anchor bias talking. What about you?
In the end, regardless of what happens with the ball, the rules, or any other considerations that impact the run environment, the charge for teams is still the same — build a good roster, as good of one as you can. The changes throughout baseball history have been dramatic, but they haven’t really altered the simple calculus that the team with the most production tends to win more often. While there may be benefits at the fringes for teams in targeting players that thrive in certain run environments and avoiding the ones that will wilt accordingly, I’m not actually sure anyone truly knows what the run environment will be in 2020 or beyond, or how the exact components of team strategy and tactics and the equipment itself will combine to create all of the parameters that inform these types of decisions. Which is why, as interesting as it is, this really might be an aesthetic concern first and foremost. So, I pose this question to everyone, including to MLB: what’s your favorite run environment, and why?