Skeptics Guide to the Universe Forums
General Discussions => Skepticism / Science Talk => Topic started by: superdave on April 04, 2019, 08:09:56 PM

they did an analysis of how well their predictions have gone over the years and they are pretty damn good.
https://projects.fivethirtyeight.com/checkingourwork/
of course naysayers will point out that they got Trump wrong, at which point they say they actually predicted he had about a 30 percent chance to win, which isn't shabby at all, etc. but yeah, their numbers check out.

That seems like a pretty lame criticism: "These guys are super accurate."
"Oh yeah? Then why did they get the thing wrong that everybody got wrong?"

the detractors don't grasp that 70% does not mean 100%. Which is why the followup to the article in the OP was titled "When We Say 70 Percent, It Really Means 70 Percent"
Also, most other groups were giving numbers much higher than 70%. FiveThirtyEight was by far the most accurate prediction. Silver even wrote an article along the lines of: if Trump wins, it will be because he does X, Y and Z, and that's how it went down.

the detractors don't grasp that 70% does not mean 100%. Which is why the followup to the article in the OP was titled "When We Say 70 Percent, It Really Means 70 Percent"
Also, most other groups were giving numbers much higher than 70%. FiveThirtyEight was by far the most accurate prediction. Silver even wrote an article along the lines of: if Trump wins, it will be because he does X, Y and Z, and that's how it went down.
A good hitter in baseball is "out" 70% of the time, and if you were playing a lottery where you had a 30% chance of winning, you'd have to think hard about not entering.
Bizarre that people treated the 70% as a sure thing, and also bizarre that they take no note that the number dropped pretty significantly after Comey's letter to Congress.

I think they were within their margin of error too.

Pollsters: Donald Trump has only 30% chance of winning ...
....
Pollsters: We were right his chances were 30% and he won

the whole point of their article was a meta analysis. Things they predicted to happen 30% of the time really did happen 30% of the time.
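For anyone curious what that kind of calibration check looks like in practice, here's a minimal Python sketch (my own toy version, not FiveThirtyEight's actual code): bucket the forecasts by predicted probability and compare each bucket's stated probability to how often those events actually happened.

```python
# Toy calibration check (an illustrative sketch, not FiveThirtyEight's code):
# group forecasts by predicted probability, rounded to the nearest 5%,
# and compare each group's observed frequency to its stated probability.
from collections import defaultdict

def calibration_table(forecasts, bucket_pct=5):
    """forecasts: list of (predicted_probability, happened) pairs.
    Returns {bucket_percent: (observed_frequency, count)}."""
    buckets = defaultdict(list)
    for prob, happened in forecasts:
        bucket = round(prob * 100 / bucket_pct) * bucket_pct
        buckets[bucket].append(happened)
    return {b: (sum(hits) / len(hits), len(hits))
            for b, hits in sorted(buckets.items())}

# For a well-calibrated forecaster, the 30 bucket's observed frequency
# should hover near 0.30 once it contains enough forecasts.
```

That "observed frequency matches stated probability" comparison is exactly what their report is checking, bucket by bucket.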

the whole point of their article was a meta analysis. Things they predicted to happen 30% of the time really did happen 30% of the time.
More like 35%. Some of their biggest errors were in the 25% and 35% predictions.
How Good Are FiveThirtyEight Forecasts? | FiveThirtyEight (https://projects.fivethirtyeight.com/checkingourwork/presidentialelections/)
On Aug. 14, 2016, we gave Donald Trump a 6 percent chance of winning Michigan. He won.
If Hillary had won Michigan the Electoral College would have been a lot closer (but still not enough)

the whole point of their article was a meta analysis. Things they predicted to happen 30% of the time really did happen 30% of the time.
More like 35%. Some of their biggest errors were in the 25% and 35% predictions.
What does that mean? How are you defining an "error."

35% of the time it works all the time.

Did we move into a Pratchettverse where the one in a million chance happens nine times out of ten?

the whole point of their article was a meta analysis. Things they predicted to happen 30% of the time really did happen 30% of the time.
More like 35%. Some of their biggest errors were in the 25% and 35% predictions.
What does that mean? How are you defining an "error."
Look at the forecast chart on the page I linked to. Hover over the 25% and the 35% points. It indicates how far off they were from a prediction that something would happen 25% of the time and 35% of the time.

the whole point of their article was a meta analysis. Things they predicted to happen 30% of the time really did happen 30% of the time.
More like 35%. Some of their biggest errors were in the 25% and 35% predictions.
What does that mean? How are you defining an "error."
Look at the forecast chart on the page I linked to. Hover over the 25% and the 35% points. It indicates how far off they were from a prediction that something would happen 25% of the time and 35% of the time.
I'm still not sure what you're getting at. The chart shows that in this category—presidential elections—events that were given probabilities of 25% and 35% occurred with somewhat smaller frequency: 13% and 18%, respectively. Overall, the chart suggests that for presidential elections the model is somewhat too timid, that is, forecasts tend to be too close to 50%. So predicted probabilities in the range of 25% to 35%, like Trump winning in 2016 (28%), tended to be overstated; unlikely events, like Trump being elected, should have been given even smaller probabilities. Is that what you're saying?

I believe he is saying that he has a deeply ingrained ignorance of stats that he mistakes for the knowledge base of a PhD and a lifetime of experience.

I'm still not sure what you're getting at. The chart shows that in this category—presidential elections—events that were given probabilities of 25% and 35% occurred with somewhat smaller frequency: 13% and 18%, respectively. Overall, the chart suggests that for presidential elections the model is somewhat too timid, that is, forecasts tend to be too close to 50%. So predicted probabilities in the range of 25% to 35%, like Trump winning in 2016 (28%), tended to be overstated; unlikely events, like Trump being elected, should have been given even smaller probabilities. Is that what you're saying?
Not exactly. I'm saying that their forecasts that had a 25% and a 35% probability came in at significantly lower percentages.

I'm still not sure what you're getting at. The chart shows that in this category—presidential elections—events that were given probabilities of 25% and 35% occurred with somewhat smaller frequency: 13% and 18%, respectively. Overall, the chart suggests that for presidential elections the model is somewhat too timid, that is, forecasts tend to be too close to 50%. So predicted probabilities in the range of 25% to 35%, like Trump winning in 2016 (28%), tended to be overstated; unlikely events, like Trump being elected, should have been given even smaller probabilities. Is that what you're saying?
Not exactly. I'm saying that their forecasts that had a 25% and a 35% probability came in at significantly lower percentages.
That is what I am saying. So events, like Trump winning in 2016, which the 538 model assigned probabilities of around 25%, should have been assigned lower probabilities, in the neighborhood of 13%. The data in the chart suggest that for presidential elections, the 538 model is too timid, in general assigning probabilities closer to 50% than are warranted. The data suggest, perhaps ironically, that 538 gave Trump too high a chance of winning.

I'm still not sure what you're getting at. The chart shows that in this category—presidential elections—events that were given probabilities of 25% and 35% occurred with somewhat smaller frequency: 13% and 18%, respectively. Overall, the chart suggests that for presidential elections the model is somewhat too timid, that is, forecasts tend to be too close to 50%. So predicted probabilities in the range of 25% to 35%, like Trump winning in 2016 (28%), tended to be overstated; unlikely events, like Trump being elected, should have been given even smaller probabilities. Is that what you're saying?
Not exactly. I'm saying that their forecasts that had a 25% and a 35% probability came in at significantly lower percentages.
That is what I am saying. So events, like Trump winning in 2016, which the 538 model assigned probabilities of around 25%, should have been assigned lower probabilities, in the neighborhood of 13%. The data in the chart suggest that for presidential elections, the 538 model is too timid, in general assigning probabilities closer to 50% than are warranted. The data suggest, perhaps ironically, that 538 gave Trump too high a chance of winning.
I don't think it's valid to apply that generalization to a specific case. All of the things to which they assigned a 25% probability of happening happened 13% of the time.
But that doesn't imply that that specific example should have been given a lower probability. It implies that in general those predictions were too high.

I'm still not sure what you're getting at. The chart shows that in this category—presidential elections—events that were given probabilities of 25% and 35% occurred with somewhat smaller frequency: 13% and 18%, respectively. Overall, the chart suggests that for presidential elections the model is somewhat too timid, that is, forecasts tend to be too close to 50%. So predicted probabilities in the range of 25% to 35%, like Trump winning in 2016 (28%), tended to be overstated; unlikely events, like Trump being elected, should have been given even smaller probabilities. Is that what you're saying?
Not exactly. I'm saying that their forecasts that had a 25% and a 35% probability came in at significantly lower percentages.
That is what I am saying. So events, like Trump winning in 2016, which the 538 model assigned probabilities of around 25%, should have been assigned lower probabilities, in the neighborhood of 13%. The data in the chart suggest that for presidential elections, the 538 model is too timid, in general assigning probabilities closer to 50% than are warranted. The data suggest, perhaps ironically, that 538 gave Trump too high a chance of winning.
I don't think it's valid to apply that generalization to a specific case. All of the things to which they assigned a 25% probability of happening happened 13% of the time.
But that doesn't imply that that specific example should have been given a lower probability. It implies that in general those predictions were too high.
Say you roll a die. What probability would you give it, before the roll, that it would come up "six"? If it then does come up "six," was the probability you gave it wrong?
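To make the die-roll point concrete, here's a quick Python simulation (just an illustration): a 1/6 forecast isn't judged by any single outcome, but by the frequency over many rolls.

```python
# Quick simulation of the die-roll point: a 1/6 forecast for "six" is not
# falsified by any single roll coming up six; it's judged by how often
# sixes appear over many rolls.
import random

random.seed(42)  # fixed seed so the example is repeatable
rolls = [random.randint(1, 6) for _ in range(60_000)]
observed = sum(r == 6 for r in rolls) / len(rolls)
# `observed` lands close to 1/6 ≈ 0.167, even though thousands of
# individual rolls did come up six.
```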

This is more like rolling 1,000 dice, each with 20 sides, where the dice are loaded, and you're trying to predict the probability of each number turning up based on what you think you know about the loading and how you think the loading affects the probability. And the loading might change with each roll. And based on the accuracy of your overall predictions you're making a claim about the probability of a single roll, where the loading was unique.

This is more like rolling 1,000 dice, each with 20 sides, where the dice are loaded, and you're trying to predict the probability of each number turning up based on what you think you know about the loading and how you think the loading affects the probability. And the loading might change with each roll. And based on the accuracy of your overall predictions you're making a claim about the probability of a single roll, where the loading was unique.
That is incoherent.

This is more like rolling 1,000 dice, each with 20 sides, where the dice are loaded, and you're trying to predict the probability of each number turning up based on what you think you know about the loading and how you think the loading affects the probability. And the loading might change with each roll. And based on the accuracy of your overall predictions you're making a claim about the probability of a single roll, where the loading was unique.
That is incoherent.
So is comparing a totally random occurrence (tossing a die) with an anything-but-random outcome (voting results).

This is more like rolling 1,000 dice, each with 20 sides, where the dice are loaded, and you're trying to predict the probability of each number turning up based on what you think you know about the loading and how you think the loading affects the probability. And the loading might change with each roll. And based on the accuracy of your overall predictions you're making a claim about the probability of a single roll, where the loading was unique.
That is incoherent.
So is comparing a totally random occurrence (tossing a die) with an anything-but-random outcome (voting results).
A die roll is no more fundamentally random than an election result — hell, a die roll is completely determined by Newtonian physics. Can we say that about election results?
The random events in the 538 forecast are the polling results. Each poll takes a random sample of voters. Polling results are statistical estimates. Their results depend on the random sample of respondents they happen to select. In addition, polls make systematic errors, such as incorrectly estimating the probability that a "likely voter" will actually vote. 538 converts uncertainty in polling results into probabilistic forecasts of elections results. If there were no uncertainty in the polls, 538's predictions would always be 100% or 0% for a particular candidate. But uncertainty in the polls makes this impossible.
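As a rough illustration of that last point, here's a toy Monte Carlo sketch in Python (emphatically not FiveThirtyEight's actual model; `win_probability` and its parameters are made up for the example): draw the "true" margin from a normal distribution centered on the polling average and count how often the candidate ends up ahead.

```python
# Toy Monte Carlo sketch (not FiveThirtyEight's actual model): convert
# polling uncertainty into a win probability by drawing the "true" margin
# from a normal distribution centered on the polling average.
import random

def win_probability(poll_margin, poll_error_sd, n_sims=100_000, seed=0):
    """Fraction of simulated elections in which the candidate's margin
    ends up positive, given a polled margin and its standard error."""
    rng = random.Random(seed)
    wins = sum(rng.gauss(poll_margin, poll_error_sd) > 0
               for _ in range(n_sims))
    return wins / n_sims
```

A 3-point lead with a 4-point standard error comes out somewhere around 77%, far from a sure thing; shrink the polling error toward zero and the probability collapses to 0 or 1, exactly as described above.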

It will be interesting to see if they can continue such accuracy in what seems to be quite a different political landscape from previous elections.
The likelihood (and hope) is that Trump and the effect of social media are an anomaly, but if not, then accounting for such variables may be difficult.
I would still put a fair amount of stock in their forecasts, though, given that we have little else to go on.

I think part of the reason Trump was given such low odds of winning the election was that a lot of people were kind of embarrassed about saying they would vote for him. I don't think that will be the case in the next election. Also, the way they poll the public has changed. Caller ID means people do not answer the phone unless they know the number, so polling companies will need to change the way they get their data.

they did an analysis of how well their predictions have gone over the years and they are pretty damn good.
I know this is true because they told me it is true

they did an analysis of how well their predictions have gone over the years and they are pretty damn good.
I know this is true because they told me it is true
Dave, unlike you, actually looked at the analysis.

they did an analysis of how well their predictions have gone over the years and they are pretty damn good.
I know this is true because they told me it is true
Dave, unlike you, actually looked at the analysis.
That must be true, because you said it is.

they did an analysis of how well their predictions have gone over the years and they are pretty damn good.
I know this is true because they told me it is true
Dave, unlike you, actually looked at the analysis.
That must be true, because you said it is.
do not confuse skepticism with cynicism.

I looked at their report, but I am not able to grok most of it. They do seem to have the most reliable political forecasts. (I do get the difference in how they use 'forecast' and 'predict'.)

This is more like rolling 1,000 dice, each with 20 sides, where the dice are loaded, and you're trying to predict the probability of each number turning up based on what you think you know about the loading and how you think the loading affects the probability. And the loading might change with each roll. And based on the accuracy of your overall predictions you're making a claim about the probability of a single roll, where the loading was unique.
I bought twenty 20-sided dice at Dragon*Con last year so that whenever I get a couple of low rolls in a row when playing D&D, I can throw the offending dice in the trash as a warning to the other dice to roll higher. It hasn't been working, I assume, because dice don't mind being in the trash. I am now looking for a low-cost way to shatter uncooperative dice in front of the others with a hydraulic press or something. That should do the trick.

This is more like rolling 1,000 dice, each with 20 sides, where the dice are loaded, and you're trying to predict the probability of each number turning up based on what you think you know about the loading and how you think the loading affects the probability. And the loading might change with each roll. And based on the accuracy of your overall predictions you're making a claim about the probability of a single roll, where the loading was unique.
I bought twenty 20-sided dice at Dragon*Con last year so that whenever I get a couple of low rolls in a row when playing D&D, I can throw the offending dice in the trash as a warning to the other dice to roll higher. It hasn't been working, I assume, because dice don't mind being in the trash. I am now looking for a low-cost way to shatter uncooperative dice in front of the others with a hydraulic press or something. That should do the trick.
You need precision dice: http://www.gamescience.com/Gamescience®Inc_bymfg_101.html
Gamescience: they are the original and still the best. You may still need a crayon.

This is more like rolling 1,000 dice, each with 20 sides, where the dice are loaded, and you're trying to predict the probability of each number turning up based on what you think you know about the loading and how you think the loading affects the probability. And the loading might change with each roll. And based on the accuracy of your overall predictions you're making a claim about the probability of a single roll, where the loading was unique.
I bought twenty 20-sided dice at Dragon*Con last year so that whenever I get a couple of low rolls in a row when playing D&D, I can throw the offending dice in the trash as a warning to the other dice to roll higher. It hasn't been working, I assume, because dice don't mind being in the trash. I am now looking for a low-cost way to shatter uncooperative dice in front of the others with a hydraulic press or something. That should do the trick.
You need precision dice: http://www.gamescience.com/Gamescience®Inc_bymfg_101.html
Gamescience: they are the original and still the best. You may still need a crayon.
I have a big bag of dice I need to test in brine. Some were just awful. Others look OK, but could have voids.