2024 US Presidential Election (Taylor's Version)

Nate is emotionally invested in proving he was right about Shapiro being the best VP pick; he wants nothing more than for Kamala to lose PA so he can say “told you so.”

2 Likes


does this Grimmer guy not realize that while you can only bet on a given election once, you can bet on lots of elections over a period of time?

does he think it’s impossible to figure out if someone is a sports betting sharp?

No, he doesn’t have the same issue with sports betting. His point is sample size.

1 Like

https://x.com/electionsjoe/status/1831360832376336447?s=46&t=XGja5BtSraUljl_WWUrIUg

4 Likes

He mentions that because a lot of the races are correlated, it becomes difficult to suss out how accurate the models are. Plus, everyone’s model involves some mix of fundamentals plus polls, and what goes into the fundamentals is always changing, so there’s no stable baseline against which to compare accuracy. A statistician did write a response to him, and they’re pretty much in agreement: Silver’s models are better than pundits’ feels, but how one model compares to another is going to be difficult to tell in the short term.
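To see why the correlation matters so much: if every state’s result moves with a shared national swing, 50 state calls behave more like a handful of independent observations than like 50. A minimal simulation sketch (all numbers here are invented purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
n_sims, n_states = 10_000, 50

# Hypothetical model: each state's Dem margin = a common national swing
# plus a smaller state-specific wobble. The shared swing is what makes
# state outcomes rise and fall together.
national_swing = rng.normal(0.0, 0.03, size=(n_sims, 1))      # sd ~3 points
state_noise = rng.normal(0.0, 0.01, size=(n_sims, n_states))  # sd ~1 point
margins = national_swing + state_noise

wins = margins > 0
# Correlation between any two states' win/loss outcomes:
print(np.corrcoef(wins[:, 0], wins[:, 1])[0, 1])  # ~0.7, far from 0
# So the effective number of independent results per election is much
# closer to a handful than to 50, and accuracy comparisons accumulate
# very slowly.
```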

I disagree with Grimmer et al. that we can’t distinguish probabilistic election forecasts from coin flips. Election forecasts, at the state and national level, are much better than coin flips, as long as you include non-close elections such as lots of states nowadays and most national elections before 2000. If all future elections are as close in the electoral college as 2016 and 2020, then, sure, the national forecasts aren’t much better than coin flips, but then their conclusion leans very heavily on that condition. In talking about evaluation of forecasting accuracy, I’m not offering a specific alternative here; my main point is that the evaluation should use the vote margin, not just win/loss. When comparing to coin flipping, Grimmer et al. only look at predicting the winner of the national election, but when comparing forecasts, they also look at electoral vote totals.

I agree with Grimmer et al. that it is essentially impossible from forecasting accuracy alone to choose between reasonable probabilistic forecasts (such as those from the Economist and Fivethirtyeight in 2020 and 2024, or from prediction markets, or from fundamentals-based models in the Rosenstone/Hibbs/Campbell/etc. tradition). N is just too small; also, the models themselves, along with the underlying conditions, change from election to election, so it’s not even like there are stable methods to make such a comparison.

Doing better than coin flipping is not hard. Once you get to a serious forecast using national and state-level information and appropriate levels of uncertainty, there are lots of ways to go. There are reasons to choose one forecast over another based on your take on the election, but you’re not gonna be able to empirically rate them based on forecast accuracy, a point that Grimmer et al. make clearly in their Table 2.
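To make the margin-versus-winner scoring point concrete, here is a minimal sketch (the two forecasts and the result are made up for illustration; this is not taken from any of the models discussed):

```python
# Two hypothetical forecasts of the Dem two-party margin in one state,
# scored against a hypothetical result of +2.0 points.
result_margin = 2.0
forecast_a = {"p_dem_win": 0.70, "margin": 2.5}
forecast_b = {"p_dem_win": 0.65, "margin": 6.0}

dem_won = result_margin > 0

for name, f in [("A", forecast_a), ("B", forecast_b)]:
    brier = (f["p_dem_win"] - dem_won) ** 2        # win/loss scoring
    margin_err = abs(f["margin"] - result_margin)  # margin scoring
    print(name, round(brier, 3), margin_err)
# Both forecasts "called" the winner, so win/loss scoring barely
# separates them (about 0.09 vs 0.12), while the margin error
# (0.5 vs 4.0 points) shows A was much closer to what happened.
```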

Grimmer et al. conclude:

We think that political science forecasts are interesting and useful. We agree that the relatively persistent relationship between those models and vote share does teach us something about politics. In fact, when one of us (Justin) teaches introduction to political science, his first lecture focuses on these fundamentals-only forecasts. We also agree it can be useful to average polls to avoid the even worse tendency to focus on one or two outlier polls and overinterpret random variation as systematic changes.

It is a leap to go from the usefulness of these models for academic work or poll averaging to justifying the probabilities that come from these models. If we can never evaluate the output of the models, then there is really no way to know if these probabilities correspond to any sort of empirical reality. And what’s worse, there is no way to know that the fluctuations in probability in these models are any more “real” than the kind of random musing from pundits on television.

OK, I basically agree (even if I think “there is really no way to know if these probabilities correspond to any sort of empirical reality” is a slight overstatement).

Grimmer et al. are making a fair point. My continuation of their point is to say that this sort of poll averaging is gonna be done, one way or another, so it makes sense to me that news organizations will try to do it well. Which in turn should allow the pundits on television to be more reasonable. I vividly recall 1988, when Dukakis was ahead in the polls but a political scientist told me that Bush was favored because of the state of the economy (I don’t recall hearing the term “fundamentals” before our 1993 paper came out). The pundits can do better now, but conditions have changed, and national elections are much closer.

All this discussion is minor compared to horrors such as election denial (Grimmer wrote a paper about that too), and I’ll again say that the total resources spent on probabilistic forecasting are low.

One thing I think we can all agree on is that there are better uses of resources than endless swing-state and national horserace polls, and that there are better things for political observers to focus on than election forecasts. Ideally, probabilistic forecasts should help for both these things, first by making it clear how tiny the marginal benefit is from each new poll, and second by providing wide enough uncertainties that people can recognize that the election is up in the air and it’s time to talk about what the candidates might do if they win. Unfortunately, poll averaging does not seem to have reduced the attention being paid to polls, and indeed the existence of competing forecasts just adds drama to the situation. Which perhaps I’m contributing to, even while writing a post saying that there are too many polls and that poll aggregation isn’t all that.
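On the “tiny marginal benefit of each new poll” point: under the simplest possible averaging model, with independent polls of equal size (which real polls are not, since their errors are correlated), the uncertainty of the average shrinks like 1/sqrt(n), so late polls barely move it. A toy illustration:

```python
import math

# Hypothetical: each poll samples ~800 voters in a 50/50 race, so a
# single poll's standard error on the vote share is about
# sqrt(0.5 * 0.5 / 800) ~ 1.8 percentage points.
se_one_poll = math.sqrt(0.25 / 800) * 100  # in percentage points

for n in [1, 5, 10, 20, 21]:
    print(n, round(se_one_poll / math.sqrt(n), 2))
# 1 -> 1.77, 5 -> 0.79, 10 -> 0.56, 20 -> 0.40, 21 -> 0.39:
# the 21st poll buys ~0.01 points of precision, and correlated polling
# errors (ignored in this toy model) make the real gain even smaller.
```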

Let me give the last word to Sean Westwood (the third author of the above-discussed paper), who writes:

Americans are confused by polls and even more confused by forecasts. A significant point in our work is that without an objective assessment of performance, it is unclear how Americans should evaluate these forecasts. Is being “right” in a previous election a sufficient reason to trust a forecaster or model? I do not believe this can be the standard. Lichtman claims past accuracy across many elections, and people evaluated FiveThirtyEight in 2016 with deference because of their performance in 2008 and 2012. While there is value in past accuracy, there is no empirical reason to assume it is a reliable indicator of overall quality in future cycles. We might think it is, but at best this is a subjective assessment.

Agreed.

https://statmodeling.stat.columbia.edu/2024/08/30/why-are-we-making-probabilistic-election-forecasts-and-why-dont-we-put-much-total-effort-into-them/

it’s the same thing.

not really gonna read all that but the idea that there aren’t enough elections to really know anything is insane. There are 435 House races every two years! It’s not hard to check how good the models are. If they say something is 30%, it should happen ~30% of the time, and we can … check the results.

e.g.:
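A binned calibration check along those lines, as a minimal Python sketch (the race-level forecast data here is simulated purely to illustrate the mechanics, not real race data):

```python
import numpy as np

# Hypothetical inputs: one forecast probability and one 0/1 outcome per
# House race. Outcomes are drawn from the forecast probabilities, i.e.
# we pretend the forecast is perfectly calibrated.
rng = np.random.default_rng(1)
p_forecast = rng.uniform(0.05, 0.95, size=435)
outcomes = rng.random(435) < p_forecast

# Binned calibration check: in races given ~30%, did the event
# actually happen ~30% of the time?
bins = np.linspace(0, 1, 11)
for lo, hi in zip(bins[:-1], bins[1:]):
    mask = (p_forecast >= lo) & (p_forecast < hi)
    if mask.sum():
        print(f"{lo:.1f}-{hi:.1f}: forecast {p_forecast[mask].mean():.2f}, "
              f"observed {outcomes[mask].mean():.2f}, n={mask.sum()}")
```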

1 Like

The article is about presidential elections.

“Are these calculated probabilities any good? Right now, we simply don’t know. In a new paper I’ve co-authored with the University of Pennsylvania’s Dean Knox and Dartmouth College’s Sean Westwood, we show that even under assumptions very favorable to forecasters, we wouldn’t know the answer for decades, centuries, or maybe even millennia.

To see why, consider one way to evaluate the forecasts: calibration. A forecast is considered calibrated if the estimated probability of an event happening corresponds to how often the event actually happens. So, if a model predicts Harris has a 59 percent chance of winning, then a calibrated model would expect her (or another candidate) to win 59 out of 100 presidential elections.

In our paper, we show that even under best-case scenarios, determining whether one forecast is better calibrated than another can take 28 to 2,588 years. Focusing on accuracy — whether the candidate the model predicted to win actually wins — doesn’t lower the needed time either. Even focusing on state-level results doesn’t help much, because the results are highly correlated. Again, under best-case settings, determining whether one model is better than another at the state level can take at least 56 years — and in some cases would take more than 4,000 years’ worth of elections.”
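The “decades to millennia” numbers have the right flavor even in a back-of-the-envelope version. Distinguishing a forecaster whose favorites really do win 59% of the time from plain coin flipping is a standard sample-size calculation; a rough sketch (a textbook simplification, not the calculation in the Grimmer/Knox/Westwood paper):

```python
import math

# One-sample proportion test: H0 p = 0.50 vs H1 p = 0.59,
# one-sided alpha = 0.05, power = 0.80.
p0, p1 = 0.50, 0.59
z_alpha, z_beta = 1.645, 0.84
n = ((z_alpha * math.sqrt(p0 * (1 - p0))
      + z_beta * math.sqrt(p1 * (1 - p1))) / (p1 - p0)) ** 2
print(math.ceil(n))      # ~189 presidential elections needed
print(math.ceil(n) * 4)  # ~756 years at one election every 4 years
```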

https://x.com/NateSilver538/status/1831405596828221601?t=8SnFI-7NlXZ75HHoJjTp_Q&s=19

I am still missing the connection here: how does picking Shapiro improve enthusiasm? The only thing I can think of is that picking Shapiro hopefully bumps up Pennsylvania’s numbers, the increased numbers make it look more like Harris will win, and that increases Dem confidence? Is that the causal chain?

Because it would have improved Nate’s enthusiasm. End of story.

13 Likes

I think the argument is: it looks like PA is going to be a must-win state, Shapiro would have improved the odds of winning the state (and the overall race), ergo not picking Shapiro looks bad.

I agree that Nate doesn’t really make a link between picking Shapiro and greater enthusiasm… If pushed, he’d probably even agree that Walz was better for short-term momentum/vibes, but that Shapiro would be better for long-term electoral outcomes once the initial enthusiasm settles down.

(But really, as Trolly said, I think Nate just really wanted Shapiro to be the pick and is marking his territory so he can play the “I told you so” game if Harris loses)

3 Likes

Hells yeah:

https://x.com/CraigDMauger/status/1831273265265795359

10 Likes

Would NC be in play right now if she had chosen Shapiro? I did hear somewhere that if they lose PA, they need to win 2 swing states. So yeah, PA is important with its 19 electoral votes. I think Walz is a big reason she was able to close the gap in other swing states, so it’s worth the small risk of performing slightly worse in PA with Shapiro imo
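The electoral math behind that checks out, roughly. Using the 2024 electoral vote allocations for the seven usual swing states, and treating the commonly cited ~226 safe-Dem base (including NE-2) as an assumption, a quick enumeration:

```python
from itertools import combinations

base = 226  # assumed safe-Dem EVs incl. NE-2 (commonly cited figure)
swing = {"PA": 19, "MI": 15, "WI": 10, "GA": 16, "NC": 16, "AZ": 11, "NV": 6}

print(base + swing["PA"] + swing["MI"] + swing["WI"])  # 270: blue wall alone

# If PA is lost but MI and WI hold, which Sun Belt pairs reach 270?
need = 270 - (base + swing["MI"] + swing["WI"])  # 19 EVs still needed
sun_belt = {k: v for k, v in swing.items() if k in ("GA", "NC", "AZ", "NV")}
for pair in combinations(sun_belt, 2):
    total = sum(sun_belt[s] for s in pair)
    print(pair, total, "enough" if total >= need else "falls short")
# Every pair works except AZ + NV (17 < 19), which is roughly the
# "lose PA, need two more swing states" claim above.
```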

1 Like

My feelings about all of the alternative scenarios are :woman_shrugging: My personal take is that Walz helps more nationally than Shapiro (and that most of the Shapiro gains in PA would have been offset by losses in Michigan)… That being said, the polling is close and confounded by a lot of factors, and most voters pick based on the top of the ticket, so I’m not even totally sure about that conclusion.

And I’m super skeptical of anyone who would try to actually quantify Walz vs Shapiro on a state by state basis with anything other than massive error bars. That’s one of the reasons why I think Nate’s take is so influenced by his personal priors. I don’t think enough data exists to make a strong claim about Shapiro vs Walz, so I think he’s relying a lot on his own convictions and feels.

1 Like

VP pick literally doesn’t matter. At all.

1 Like

Not good vibes in here these days :harold:

Still feel pretty confident Kamala is going to win

3 Likes