
A Look Back at the 2015 Player Projections for the Atlanta Braves


Were the 2015 player projections for the Atlanta Braves wildly off, or right on the money?


About a year ago, I thought it would be interesting to aggregate the available Steamer and ZiPS projections for every 2015 Braves player. At the same time, I went through and marked down how I thought each player would perform in the coming year, something I lovingly titled IWAG (Ivan's Wild-Ass Guess). You can find those results here. Unfortunately, that effort did not cover the entire roster: the Braves made a number of midseason acquisitions and call-ups, and on top of that, a bunch of guys who weren't even considered likely roster candidates in mid-March got some choice playing time, including Atlanta's acquisitions in the Craig Kimbrel/Melvin Upton trade.

To that end, the table below captures every player whose projections (and my own guesses) I cobbled together in the previous article, provided that player actually played as an Atlanta Brave in 2015. It does not include Matt Wisler, Juan Uribe, and most members of the star-crossed, oft-hammered bullpen, but most of the team's innings and PAs are captured.

The first three columns consist of projected and actual WAR values, pro-rated to a full-season basis (600 plate appearances for hitters, 200 innings pitched for starters, and 65 innings pitched for relievers). Note that the distinction between starters and relievers is somewhat artificial: we're comparing actual and projected values on the same playing-time basis, so it doesn't matter whether we extend that to 200 innings or 65, so long as we're consistent. The tacky color coding is as follows: dark green marks cases where the projection was within half a win (0.5 WAR) of a player's actual pro-rated production, light green marks cases where the projection was within a single win (1 WAR), and red shading marks whiffs where the projection was two or more wins off the mark. For relievers, I cut these thresholds in half, as relievers generally accumulate much less WAR than other players, and my pro-rated basis was less than a third of the playing time used to pro-rate starter performance.
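For concreteness, here's a minimal Python sketch of that pro-rating and shading logic; the function names are mine, but the playing-time bases and thresholds come straight from the description above:

```python
# A sketch of the pro-rating and color-coding described above.
# Function names are my own; bases and thresholds are from the text.

def prorate_war(war, playing_time, role):
    """Scale WAR to a full-season basis: 600 PA for hitters,
    200 IP for starters, 65 IP for relievers."""
    basis = {"hitter": 600, "starter": 200, "reliever": 65}[role]
    return war * basis / playing_time

def color_code(projected, actual, role):
    """Shade a projection by its absolute miss, halving the
    thresholds for relievers as noted above."""
    gap = abs(projected - actual)
    scale = 0.5 if role == "reliever" else 1.0
    if gap <= 0.5 * scale:
        return "dark green"   # within half a win
    if gap <= 1.0 * scale:
        return "light green"  # within a win
    if gap >= 2.0 * scale:
        return "red"          # two or more wins off
    return "unshaded"         # misses between 1 and 2 wins aren't shaded

# Example: 2.1 WAR in 450 PA pro-rates to 2.8 WAR/600.
print(round(prorate_war(2.1, 450, "hitter"), 1))
```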

The results shown above may not be surprising. Projections tended to be better for older, more established, and well... predictable players. For younger players or those with less playing time (and thus prone to small sample size variation driving significant gaps in full-season pro-rated performance), the projections tended to be off.

Of the players above, the projections pegged Freddie Freeman, Andrelton Simmons, Kelly Johnson, Nick Markakis, and Alex Wood very well; Jim Johnson, Luis Avilan, Julio Teheran, and Jace Peterson also played in line with their projections. Of those, only Jace Peterson was a guy with essentially no major league experience; the rest could be thought of as known quantities more so than wild cards. (Though, of course, relievers have crazy performance variation all the time, and Julio Teheran's projections were unnerving before his 2015 season vindicated them.)

The projections tended to whiff on players with little experience, little playing time, or both. Phil Gosselin somehow managed to post a 4.6 fWAR/600, and while that's probably one of the most ridiculous things you'll read today, it does mean the projections were wrong. If Phil Gosselin can actually sustain being a 4+ win player over a full 600 PAs, then we've made a terrible mistake. Cody Martin and Eric Young Jr. were handily overestimated by the projections: Young likely because they didn't figure he'd be run out and overmatched in center, and Martin because his minor league stats didn't presage the ineffectiveness and yo-yoing he endured at the big league level. Lastly, the projection systems were very down on Arodys Vizcaino, but he emerged from the ashes of a PED suspension to provide superb performance in an otherwise largely abominable relief corps.

In playing around with the numbers above, I ran quick analyses on the following, which may be of interest:

  • In terms of average distance from actual pro-rated performance, Steamer and ZiPS were pretty much identical for this group of players. This may be a little surprising given that Steamer tends to have the odd projection here and there (at least that's how it seems to me; see, for example, its virulent hatred of Todd Cunningham), but its pessimism about Chris Johnson and optimism about Jim Johnson were rewarded even when not mirrored by ZiPS or myself. ZiPS seemed to be more middle-of-the-road and non-aggressive, and in the end, its mean "distance" from actual performance was pretty much the same as Steamer's (a sketch of this computation follows the list). My own IWAG guesses were just a little worse (about a tenth of a win worse, on average, so not very far off), but that shouldn't be surprising given that IWAG is a very simplistic version of how Steamer, ZiPS, and other projection systems go about their business.
  • For anyone curious, the average "distance" between projected and actual pro-rated performance was about 1.6 WAR (Steamer, ZiPS) or 1.7 WAR (IWAG). While this may seem like a lot (and it really is), keep in mind that it takes into account pro-rated-to-full-season values for bench players and pitchers getting their cups of coffee (Foltynewicz, etc.). When only the players who turned out to be actual starters/full-timers are included, the gap shrinks to 0.9-1 WAR, driven largely by things like everyone's pre-season pessimism about AJ Pierzynski (who found the fountain of youth) and Shelby Miller (who found the gland of effectiveness).
  • I did a bunch of uninteresting math to see if weighting the results by actual playing time changed anything, but Steamer and ZiPS were again neck-and-neck with an average "distance" of 1.18 wins; I was a bit behind at 1.22 wins.
  • Three projections were "perfect" on this pro-rated basis: ZiPS for Nick Markakis and Mike Foltynewicz, and IWAG for Luis Avilan. Both ZiPS and Steamer also came within 0.1 wins of Alex Wood's pro-rated performance.
  • The worst projection was Steamer's for Cody Martin, who was easily the most whiffed-on player of those listed here. Cody Martin had an absolutely dreadful 2015; here's hoping it gets better for him. Eric Young Jr. was another big whiff: ZiPS didn't have projections for him, so it may have saved itself a downturn (perhaps, had ZiPS whiffed on EYJr. as badly as Steamer and IWAG did, Steamer would have come out a bit ahead). I also personally whiffed terribly on Eury Perez (his defense was supposed to be good!) and Jonny Gomes (his defense wasn't supposed to be that bad!). But, I do feel somewhat vindicated in that while Steamer and ZiPS projected Phil Gosselin to be a -0.4-win and 0.4-win/600 player, respectively, I thought he'd be closer to a 2-win/600 player, and I was way closer than they were. (I was also less badly off on the catching tandem, figuring Pierzynski would be below-average but passable while Bethancourt would be quite bad in aggregate.)
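For the curious, the "distance" figures in the bullets above boil down to a mean of absolute errors, optionally weighted by actual playing time. Here's a minimal sketch with made-up numbers, since the underlying table isn't reproduced here:

```python
# A sketch of the "distance" measures discussed above: the unweighted
# mean absolute gap, and the same gap weighted by actual playing time.
# All numbers are invented for illustration.

def mean_abs_distance(projected, actual):
    gaps = [abs(p - a) for p, a in zip(projected, actual)]
    return sum(gaps) / len(gaps)

def weighted_abs_distance(projected, actual, playing_time):
    total = sum(playing_time)
    return sum(abs(p - a) * t
               for p, a, t in zip(projected, actual, playing_time)) / total

proj = [2.0, 0.4, 3.5]   # hypothetical projected WAR/600
act  = [2.8, 4.6, 3.4]   # hypothetical actual WAR/600
pa   = [500, 350, 600]   # hypothetical actual plate appearances

print(round(mean_abs_distance(proj, act), 2))          # 1.7 unweighted
print(round(weighted_abs_distance(proj, act, pa), 2))  # 1.33 weighted
```

Weighting by playing time dampens the influence of small-sample players, which is consistent with the weighted figures in the bullet above coming out lower than the unweighted ones.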
In all, these results are consistent with the results found here and here, which are much more salient to actually evaluating projection systems than my little self-indulgent exercise above. It continues to surprise me that Steamer edged out other projection systems in 2015, but them's the breaks, at least for me.

If you're looking for takeaways, I think a useful one could be this: when you look at projections for the 2016 Braves (or any team, really), it probably helps to think of there being giant, blaring 1-win error bars in either direction, especially when you're talking about pro-rated WAR for non-starters. On the one hand, those error bars may give you comfort or agita about individual players; after all, there's a substantial difference between the value and roster usefulness of a 1-win player versus a 2-win player. But on the other hand, it does create a nice way to bound one's expectations: you know that not everyone's going to outplay their projection by a win, and it's even less likely that everyone will outplay their projection by more than that.
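If it helps to make that rule of thumb concrete, it amounts to something like the toy snippet below; the player names and projections are invented:

```python
# A toy illustration of reading projections with +/- 1-win error bars
# rather than as point estimates. Names and numbers are made up.
projections = {"Player A": 2.0, "Player B": 0.5, "Player C": 3.5}
for player, war in projections.items():
    print(f"{player}: projected {war:.1f} WAR/600, "
          f"plausible range {war - 1.0:.1f} to {war + 1.0:.1f}")
```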

Coda: the discussion above treated "distance" in absolute-value terms. When the absolute value is dropped, ZiPS projections tended to overestimate pro-rated performance by 0.3 wins, while Steamer was at 0.1. In other words, for both ZiPS and Steamer, overly optimistic projections tended to be cancelled out by overly pessimistic ones, which goes back to the idea that for every player the projections whiff on in one direction, they'll whiff on another in the opposite direction; that's what makes them so effective in aggregate, if not on an individual level. IWAG was too optimistic last season, with a non-absolute-value distance of 0.7 wins; I'll look into revising that for 2016.
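As a footnote, here's the signed version of the same computation, reusing the made-up numbers from the earlier sketch; dropping the absolute value is what lets optimistic and pessimistic misses cancel:

```python
# A sketch of the signed (non-absolute) "distance": positive means the
# projections ran optimistic on average, negative means pessimistic.
# Numbers are made up, as in the earlier sketch.

def mean_signed_distance(projected, actual):
    return sum(p - a for p, a in zip(projected, actual)) / len(projected)

proj = [2.0, 0.4, 3.5]
act  = [2.8, 4.6, 3.4]
print(round(mean_signed_distance(proj, act), 2))  # -1.63: these toy projections ran pessimistic
```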