
2023 Detroit Tigers Regular Season Discussion Thread


oblong

Recommended Posts

46 minutes ago, gehringer_2 said:

this goes back to the discussion the other day. Are streaky totals actually worth less than consistent ones? On one hand, it's absolutely true that you can only win each game once. If you have streaky players, you will have situations where they combine for excess runs you end up wasting in routs. The counterpoint is that you have 9 guys hitting, so one player's heat will normally be at least somewhat balanced by another going cold. Even there you will suffer some cost in terms of batting order optimization - though probably not much. And of course you can end up with the 2022 Tigers where everyone runs cold at once.......😭

I don't think so. WAR already takes into account that most runs will either be in excess of what's needed to win this game today, or else will be wasted because his team already loses this game today with or without him. That's partly why the rule of thumb evolved into: for every ten runs a player produces above replacement, that translates to one win generated for the team all by himself.
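For anyone who wants the arithmetic, here's a minimal sketch of that rule of thumb; the ~10 runs-per-win divisor is the conventional approximation and varies a bit with the run environment:

```python
# Sketch of the runs-to-wins rule of thumb referenced above.
# The ~10 runs-per-win divisor is the conventional approximation;
# the exact value drifts with the league's run environment.
RUNS_PER_WIN = 10.0

def war_from_runs(runs_above_replacement: float) -> float:
    """Convert a player's runs above replacement into wins (WAR)."""
    return runs_above_replacement / RUNS_PER_WIN

print(war_from_runs(35.0))  # a 35-run player is worth ~3.5 wins
```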


1 hour ago, mtutiger said:

In terms of speculating, I speculate that Scott Harris made up his mind on Candy sometime before the non-tender deadline last offseason, didn't see Candy as a future piece on this ballclub, pulled the trigger on non-tendering him and probably isn't losing sleep over what Candy is or isn't doing with the Nationals.

Bingo


I think consistency versus streakiness could go either way. It all depends on timing. If his streaks happen when his team needs them (when his teammates aren't streaking enough!), then the streaky hitter could be more valuable in a given year. If not, then he could be less valuable in a given year. In doing projections, you don't know what the timing will be, so WAR is your best guess. Do we even have a measure for streakiness? I did some work on that a really long time ago in another life and then passed it off to Bill Petti, who did some more with it, but I don't think it's tracked anywhere. It seems to be mostly anecdotal.
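For what it's worth, one plausible way to put a number on it (a sketch only, not Petti's actual method) is the standard deviation of a rolling multi-game average: a hitter whose production clusters into hot and cold runs scores higher than one who spreads the same production evenly. The game logs below are made up.

```python
import statistics

def rolling_volatility(game_values, window=10):
    """Std dev of rolling-window averages of a per-game stat (e.g., wOBA)."""
    rolling = [
        sum(game_values[i:i + window]) / window
        for i in range(len(game_values) - window + 1)
    ]
    return statistics.pstdev(rolling)

# Two hypothetical hitters with the same .330 season average:
steady  = [0.330] * 60
streaky = [0.480] * 30 + [0.180] * 30  # a hot half and a cold half

print(rolling_volatility(steady))   # ~0.000
print(rolling_volatility(streaky))  # much larger
```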


25 minutes ago, Tiger337 said:

I think consistency versus streakiness could go either way. It all depends on timing. If his streaks happen when his team needs them (when his teammates aren't streaking enough!), then the streaky hitter could be more valuable in a given year. If not, then he could be less valuable in a given year. In doing projections, you don't know what the timing will be, so WAR is your best guess. Do we even have a measure for streakiness? I did some work on that a really long time ago in another life and then passed it off to Bill Petti, who did some more with it, but I don't think it's tracked anywhere. It seems to be mostly anecdotal.

Yes, this is what I always thought, and why I originally thought notoriously streaky hitters like Upton and Javy could've been more valuable for the Tigers in terms of W/L than their WAR indicated.

I theorized that maybe during their hot stretches they could help carry the offense, and during their cold stretches we would have other guys step up to offset.

Unfortunately, their cold stretches were worse than I expected, and they coincided with other guys sucking as well.


1 hour ago, Tiger337 said:

Or maybe he would have made a similar move with another team for players who were equally promising.  

Sure.

And I probably framed up my discussion around Maton a little too much and ignored Vierling and Sands.  They had skills that were sought after as well.


1 hour ago, Tiger337 said:

I think consistency versus streakiness could go either way. It all depends on timing. If his streaks happen when his team needs them (when his teammates aren't streaking enough!), then the streaky hitter could be more valuable in a given year. If not, then he could be less valuable in a given year. In doing projections, you don't know what the timing will be, so WAR is your best guess. Do we even have a measure for streakiness? I did some work on that a really long time ago in another life and then passed it off to Bill Petti, who did some more with it, but I don't think it's tracked anywhere. It seems to be mostly anecdotal.

I think if you look at it in terms of analysis of variance, you can at least show that in the limiting case, streakiness is bad, as follows: if every team scored exactly its average number of runs per game (variance → 0), every team would defeat every team with a lower average runs scored and lose every game to every team with a higher average. The team with the highest runs scored average would win every game, subject only to integer round-off error because the averages would not necessarily differ by a full run. Introduce increasing variance into the scoring, and the winningest teams can only add losses. At the other limit (variance → ∞), if every team has a huge variance in its runs scored, all team records will converge toward 50% no matter what the difference in the overall scoring averages, because every win and loss will end up being the result of random noise larger than those averages, and thus will distribute toward uniformity. It would seem the real world has to exist between those two limits. (The fact that the exponent in the Pythagorean win prediction formula is about 1.83 rather than infinity captures all of this.)
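Here's a quick Monte Carlo sketch of those two limits (normal run distributions for simplicity, which real scoring isn't), plus the Pythagorean formula with the 1.83 exponent:

```python
import random

def win_pct(mean_a, mean_b, sigma, games=100_000, seed=1):
    """Fraction of games the higher-scoring team wins when both teams'
    per-game scoring is normal with the same spread sigma."""
    rng = random.Random(seed)
    wins = sum(
        rng.gauss(mean_a, sigma) > rng.gauss(mean_b, sigma)
        for _ in range(games)
    )
    return wins / games

for sigma in (0.01, 1, 3, 10, 50):
    print(sigma, win_pct(5.0, 4.5, sigma))
# sigma -> 0 gives ~1.000 (the better team wins every game);
# large sigma drifts toward 0.500 (noise swamps the averages).

def pythagorean(rs, ra, exponent=1.83):
    """Pythagorean expectation; the finite exponent encodes real variance."""
    return rs**exponent / (rs**exponent + ra**exponent)

print(pythagorean(5.0, 4.5))  # ~0.55
```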

Now while I take the argument above as pretty reasonable, the practical question is how much of the net variance in team scoring is temporal player performance variance vs. the game's natural randomizing inputs (non-uniform balls, bats, weather, fields, umps, etc.). In baseball we know those effects are big, bigger than in most sports, given the generally narrower distribution between winning and losing records in baseball compared to, say, basketball or football. And as noted, player streakiness is also averaged out within the team because there are at least 9 players contributing to offense each game. So while I think the directionality of the logical argument is clear, it's not at all clear how significant the effect is or how to tease it out with any reliability.

 


this is just off the cuff, and maybe the analogy doesn't work, but if you have pretty much any statistic (or financial asset, as I'm thinking about), you can analyze its performance by standard deviation. Somebody who is streaky will have their overall average, but their periodic deviations are going to be greater than somebody who is not quite as streaky. In general, something -- a statistic or an asset, for example -- with greater standard deviation is not as valuable as the same something with a lower standard deviation. So in that sense, too, streakiness is bad.

PS: I think that Gehringer_2 maybe said this, too. Or, this_2
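To make the analogy concrete, here's a toy sketch (all numbers made up): two series with the same average, scored with a simple mean-over-volatility ratio, the same idea as a Sharpe ratio with a zero baseline:

```python
import statistics

def mean_over_volatility(series):
    """Higher is better: same average output with less deviation."""
    return statistics.mean(series) / statistics.pstdev(series)

consistent = [4.8, 5.1, 5.0, 4.9, 5.2, 5.0]  # steady per-period output
streaky    = [8.0, 2.0, 7.5, 2.5, 8.5, 1.5]  # same ~5.0 average

print(statistics.mean(consistent), statistics.mean(streaky))  # 5.0, 5.0
print(mean_over_volatility(consistent))  # ~38.7
print(mean_over_volatility(streaky))     # ~1.65
```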


1 hour ago, theroundsquare said:

this is just off the cuff, and maybe the analogy doesn't work, but if you have pretty much any statistic (or financial asset, as I'm thinking about), you can analyze its performance by standard deviation. Somebody who is streaky will have their overall average, but their periodic deviations are going to be greater than somebody who is not quite as streaky. In general, something -- a statistic or an asset, for example -- with greater standard deviation is not as valuable as the same something with a lower standard deviation. So in that sense, too, streakiness is bad.

PS: I think that Gehringer_2 maybe said this, too. Or, this_2

But stock funds, which are streakier than bond funds, have historically done better than bond funds long term.   


21 minutes ago, Tiger337 said:

But stock funds, which are streakier than bond funds, have historically done better than bond funds long term.   

The underlying trend for stocks has a steeper slope than for bonds. A long time ago, when I knew even less about this than I do now, a Fidelity guy in a class explained that businesses borrow money to make more money. The money they borrow is at bond rates; if their return on equity (basically reflected in stock prices) isn't better than interest rates, they are out of business. So on a very global view, stocks have to do better than bonds long term, because businesses are always making investments with ROIs higher than interest rates. That overall result is separate from variance/volatility effects.

But of course, individually, some companies make investments that don't pan out; they go bust and don't pay back their loans. So the higher return comes with more risk/volatility for any individual stock.
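Toy numbers for that borrowing argument (purely hypothetical): the standard leverage identity says the return to shareholders is the asset return plus the levered spread over the borrowing rate, so the spread has to stay positive for the firm to survive:

```python
def equity_return(return_on_assets, bond_rate, debt_to_equity):
    """ROE = ROA + (ROA - borrowing rate) * leverage."""
    return return_on_assets + (return_on_assets - bond_rate) * debt_to_equity

print(equity_return(0.08, 0.05, 1.0))  # 0.11 -> equity beats the bond rate
print(equity_return(0.04, 0.05, 1.0))  # 0.03 -> below the bond rate; unsustainable
```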
