Nate is no longer relevant imo
The quote aims to further underscore Nate’s irrelevancy.
Whatever small respect I had for Nate Silver evaporated when he started working for Polymarket, a prediction market owned in part by Peter Thiel. Simply put, I don’t approve of Mr Silver’s bedfellows.
Perhaps I am overly cynical, but I wouldn’t put it past Nate Silver to have placed a major bet on the presidential election, and to use his own widely-distributed predictions to influence the odds.
He would risk incarceration.
So, if he is an addicted gambler that would create all the more rush.
He let his success go to his head, thinking that because he was good at one kind of election analysis, he would be good at the other kind too.
The unique thing he contributed that did not exist yet was a data-based system. The model itself is ultimately subjective (there's no inherent law about how that data should be weighed), but the data the model relied on was objective: polling numbers, economic data, previous election history in a region. He was offering something different from what we usually got from people like Wasserman, Sabato, or Walter, who primarily work from what they hear behind the scenes and their own mental assessment of how everything ties together, without using a specific model or algorithm.
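As a minimal sketch of what such a data-based system looks like (this is not Silver's actual model; the inputs and weights here are invented for illustration):

```python
# A toy data-based forecast: objective inputs, subjective weights.
# All numbers and weights are hypothetical, not anyone's real model.

def forecast_margin(poll_margins, econ_index, past_margin,
                    w_polls=0.6, w_econ=0.15, w_history=0.25):
    """Predict a candidate's margin (in points) from objective data.

    poll_margins: recent poll margins (candidate minus opponent)
    econ_index:   an economic indicator scaled to margin points
    past_margin:  the region's margin in the previous election
    The weights are the subjective part: nothing in the data dictates them.
    """
    poll_avg = sum(poll_margins) / len(poll_margins)
    return w_polls * poll_avg + w_econ * econ_index + w_history * past_margin

# Same data in, same forecast out -- unlike a pundit's gut read.
print(forecast_margin([1.0, 2.5, -0.5], econ_index=1.2, past_margin=3.0))  # about 1.53
```

The inputs are all objective and checkable; the weights are where the subjectivity lives, which is exactly the split described above.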
When Silver decided to offer his "gut feel" assessments as a part of what he offered, he steadily shifted into being just another pundit. We have no shortage of pundits.
There is value in data/model-based assessments and also value in personal-evaluation-based assessments. That doesn't mean someone providing one of those is capable of providing the other with any real value.
That's really well stated. At first, he was a breath of fresh air; he became stale when he stopped trying to be so statistics-based. However, his project of applying baseball-style sabermetrics to polling was also greatly damaged by the fact that polls are not hard numbers like bases on balls, or the more sophisticated stats Silver created based on defensive range, fielding percentage, ballpark-adjusted on-base plus slugging, and so forth. Ultimately, no matter what corrective measures you apply to polls, they are still educated guesses, even when they are completely honest, and there is a constant risk of garbage in, garbage out.
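The garbage-in, garbage-out risk can be made concrete with a quick simulation. Assuming a hypothetical shared (systematic) polling error of a few points, which averaging more polls cannot remove, a seemingly comfortable polled lead is shakier than it looks:

```python
import random

# Illustrative sketch: polls are estimates with error, not hard counts.
# If every poll shares a systematic bias (e.g. a skewed sample), averaging
# more polls doesn't fix it. All numbers here are made up for illustration.

def simulate_win_prob(polled_lead, systematic_sd, noise_sd,
                      trials=100_000, seed=1):
    rng = random.Random(seed)
    wins = 0
    for _ in range(trials):
        bias = rng.gauss(0, systematic_sd)   # shared polling error: the "garbage in"
        outcome = polled_lead + bias + rng.gauss(0, noise_sd)
        wins += outcome > 0
    return wins / trials

# A 3-point polled lead with a plausible 3-point systematic error is
# nowhere near a sure thing:
print(simulate_win_prob(3.0, systematic_sd=3.0, noise_sd=1.0))
```

With those assumed error sizes the "leader" still loses roughly one time in six, which is the sense in which corrected polls remain educated guesses.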
As you note, garbage in, garbage out put a shelf life on his approach anyway, but he shortened that shelf life unnecessarily by abandoning what he was good at.
Although, polling isn't strictly dead in the water. I think the 2022 polls were fairly accurate, weren't they? We could have had a nice model for that year if it had been a proper focus.
I'm generally pretty skeptical of the value of data-based election models to begin with, particularly the way they're used in 99.9% of instances. What's true of elections, and I think of sports as well, is that this data is very good as an explanatory tool after the event; I'm pretty skeptical of its value as a predictive tool before the event.
It's worth noting that Silver's first big success was in creating a predictive tool (PECOTA), which was pretty good, though of course such tools can only be so good, as there's just too much relevant information that simply can't be input into an algorithmic model.
I think he got pundit brain because he knows the model can only do so much. On top of that, I think the model is just inputting garbage now (because polls are garbage, just universally).
Believe me, I'm aware of Silver's background. But the reason I've been skeptical of him pretty much since day one is that I think if you asked baseball scouts, managers, and GMs to predict a player's performance, you'd get just as accurate information.
I get where you're coming from, but in reality we're all relying on data for all of our predictions. What we lack is a thorough and consistent process for applying that data.
Having a model is a way to formalize that data process and make it consistent. We will never see a perfectly predictive model, and that's OK. We shouldn't expect one. Just like we shouldn't expect it from more traditional pundits.
I think we all know that tossup/lean/likely/safe all have degrees within them. Sabato has WI, MI, AZ, NV, PA, GA, and NC all as tossup states for the presidential election. That doesn't truly mean each of them is exactly as likely to go to Harris, but rather that each sits somewhere on a spectrum of maybe 45-55 to 55-45. Similar idea for lean and likely. There's little practical difference between "Lean D" and "70.3% chance of a D win" in that sense. They're both the result of models, one informal and one formal. The exactness of the latter prediction comes from its being a formal, mathematically based model, not from its having anything approaching that degree of confidence.
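The idea that rating categories are just coarse probability bands can be sketched concretely. The band boundaries below are hypothetical, chosen only to illustrate the correspondence:

```python
# Hypothetical mapping from win probability to rating categories, to show
# that "Lean D" and "70.3% chance of a D win" differ mainly in precision.
# These boundaries are invented, not any forecaster's published scale.

RATING_BANDS = {
    "Tossup":   (0.45, 0.55),
    "Lean D":   (0.55, 0.75),
    "Likely D": (0.75, 0.90),
    "Safe D":   (0.90, 1.00),
}

def rating_for(p):
    """Coarsen a formal model's win probability into an informal rating."""
    if p < 0.45:
        return "R-rated"  # the mirror-image bands for the other side
    for rating, (lo, hi) in RATING_BANDS.items():
        if lo <= p < hi:
            return rating
    return "Safe D"  # p == 1.0

print(rating_for(0.703))  # prints: Lean D
```

Both representations come out of a model; the formal one just reports the number it computed instead of rounding it into a label.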
So long as we take the limitations into account, I rather like formal data models. If a prediction changes, the reason will be known and obvious. If you feed the exact same data from two different elections into a model, it will give the same prediction. There's no fretting about emotions, secret sources, or personal bias. There's a place for these models, so long as they can source good data.
I've heard this before; I just haven't seen any reason to believe that these models are any more accurate than simply asking the people who would know, like Sabato, to put percentages on a candidate's likelihood of winning.