I’ve been watching baseball forever, but there’s still a ton I don’t know about the game. One of those things, until today, was just how complicated the history of the save rule is. I started watching in the ’80s, which means the save as we know it has always been there for me. But I had no idea, for instance, that when I saw my first game, our version of the rule was only about 10 years old (it was adopted in 1975). I also had no clue that talk of the concept has been traced all the way back to 1907, or that it appeared in Ty Cobb’s 1915 memoir. That’s astounding to me, because although I know relief pitching was sort of a thing back then, it was also still the era when guys pretty much threw until their arms fell off and many relievers were just starters on an off day. Also interesting: there were teams hiring stats guys as early as the 1940s. Considering how some around the game still look at stats guys today, I’m sure that went over well and without much in the way of grumbling.
Enter baseball’s first full-time statistician, Allan Roth. Brooklyn Dodgers president Branch Rickey, sensing an opportunity to gain an edge with a dedicated numbers man in the front office, had hired him in the ’40s. In 1951, Roth set about tracking the team’s relievers, and he came up with the first formal definition of the save: any non-winning relief pitcher who finished a winning game would be credited with one, no matter how large his lead.
The system was imperfect—had a reliever “saved” anything if he entered with a double-digit lead?—but the basic concept began to spread to other teams, to reporters, and to pitchers themselves. From the beginning, the metric was linked to a reliever’s earning potential. “Saves are my bread and butter,” Cubs reliever Don Elston told The Sporting News in 1959. “What else can a relief pitcher talk about when he sits down to discuss salary with the front office?”
Roth began to share his definition with the media in the late ’50s, and before long, the save made its first major evolution. In 1960, the stat had a new formula, a new architect, and a new principle to prove.
Jerome Holtzman, a Cubs beat writer for the Chicago Sun-Times, had spent the 1959 season watching Elston and teammate Bill Henry, and he suspected they were among the best relievers in baseball. A different pitcher, however, was getting the attention: Pirates reliever Elroy Face, who had gone 18–1 and been rewarded with a seventh-place finish in NL MVP voting. There was just one problem, Holtzman figured: Face hadn’t actually been that good.
“Everybody thought he was great,” Holtzman, who died in 2008, told Sports Illustrated in 1992. “But when a relief pitcher gets a win, that’s not good, unless he came into a tie game. Face would come into the eighth inning and give up the tying run. Then Pittsburgh would come back to win in the ninth.” (In five of his wins, Face entered with a lead and left without one.)
So Holtzman set out to create his own definition for the save, with criteria much stingier than Roth’s. In order to be eligible, a reliever had to face the potential tying or winning run, or come into the final inning and pitch a perfect frame with a two-run lead. If neither of those situations applied, there was no save opportunity.
From there, things became a total mess. Some liked Roth’s definition, others favored sportswriter Jerome Holtzman’s, and plenty of others came up with systems of their own. It wasn’t until 1969 that baseball officially recognized the save as a stat, and even then it went through a bunch of changes before we ended up with the version we have now. And I don’t want to alarm anyone, but even today people still can’t always agree on whether it’s a worthwhile thing to track. Yes. Bickering. In baseball. I know.
How Major League Baseball Adopted the Save—and Changed the Game Forever