A new set of Times/Siena polls, including one with The Philadelphia Inquirer, reveal an erosion of support for the president among young and nonwhite voters upset about the economy and Gaza.
I dislike doubting polls, but there’s just some odd stuff in here.
10% go for RFK Jr, and it’s equal siphoning from both parties? 10%?!
20% more people blame Biden for Roe being overturned than Trump?
They’re TIED with Gen Z voters? TIED?!
After the absolute thrashing that Republicans have received on abortion, only like 50% of women would break for Biden?
This is a poll of just the 5 key states, but this part of their methodology gives me significant pause as well: "To further ensure that the results reflect the entire voting population, not just those willing to take a poll, we give more weight to respondents from demographic groups underrepresented among survey respondents, like people without a college degree."
Emphasis mine. There could be a huge skew. And these results don’t make sense. The other NYT poll from several months ago was also incredibly unusual and had very weird findings – to the point that the Guardian wrote that something was very fucky with the results.
This isn’t to say this can’t be what’s going on, but we need corroboration from other polling groups. And it isn’t summer yet, which makes polls rather inaccurate too.
TLDR: Something’s fucky, we need more information and to monitor this.
EDIT: I just want to use my bully pulpit here to say that my criticisms by no means disprove the poll results. There’s oddities, but that doesn’t make the results an impossibility. Don’t only give credence to criticism of polls. If someone has reasons they believe the poll is accurate, you should give equal attention to it. At the end of the day, we don’t know what the actual truth is, and we won’t until the election is over. Just remember that we don’t want to just win, we want to dominate. We want massive margins. And that means we need to see wins even in less than accurate polls.
we give more weight to respondents from demographic groups underrepresented among survey respondents, like people without a college degree
Oooooohhh
All of a sudden it makes sense
Here’s their methodology page, which, in addition to that fuckin’ fascinating tidbit you quoted, has some other things of note:
The New York Times/Siena College Poll is conducted by phone using live interviewers at call centers based in Florida, New York, South Carolina, Texas and Virginia. Respondents are randomly selected from a national list of registered voters, and we call voters both on landlines and cellphones.
In the end, fewer than 2 percent of the people our callers try to reach will respond. We try to keep our calls short — less than 15 minutes — because the longer the interview, the fewer people stay on the phone.
We call more people who seem unlikely to respond, like those who don’t vote in every election.
But the truth is that there’s no way to be absolutely sure that the people who respond to surveys are like demographically similar voters who don’t respond. It’s always possible that there’s some hidden variable, some extra dimension of nonresponse that we haven’t considered.
It is, indeed, always possible.
To be clear, polling theory is totally valid and an established science within statistics.
But the challenge is always with methodology, because you can never get a perfect simple random sample. And the methodology here certainly seems terrible.
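To make that weighting concern concrete, here’s a minimal sketch of how this kind of demographic weighting works in general (not the Times/Siena pipeline, and all the numbers are invented) and why upweighting a scarce group shrinks the effective sample:

```python
# Hypothetical illustration of post-stratification-style weighting, not the
# Times/Siena procedure. Respondents from under-represented groups get
# weights > 1 so the weighted sample matches a known population share, but
# large weights inflate the variance of whatever you estimate.

population_share = {"no_degree": 0.60, "degree": 0.40}  # assumed population targets
sample_counts = {"no_degree": 150, "degree": 450}       # who actually answered (made up)
n = sum(sample_counts.values())

# weight = population share / sample share
weights = {g: population_share[g] / (sample_counts[g] / n) for g in sample_counts}
print(weights)  # no_degree respondents count 2.4x, degree respondents about 0.53x

# Kish design effect: weighted estimates behave like a smaller simple random sample.
w = [weights[g] for g, c in sample_counts.items() for _ in range(c)]
design_effect = n * sum(x * x for x in w) / sum(w) ** 2
effective_n = n / design_effect
print(round(design_effect, 2), round(effective_n))  # about 1.65, and roughly 363 of 600
```

The bigger the gap between who answers the phone and who you’re weighting toward, the bigger those weights get, and the more a handful of respondents can swing a subgroup’s topline.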
Something fucky is going on. From the page:
If the 2024 presidential election were held today, who would you vote for if the candidates were
Then it lists the usual suspects including third parties. The only age group for that question voting for Biden is 65 and older. Maybe so, but that doesn’t seem right.
I suspect that out of the 2% of people who answered the phone (and the smaller percentage that stayed on for the whole poll), there were some number of young people whose parents answered the phone and then answered all the poll questions for them, or something weird like that.
Maybe not. But in general, the whole methodology starts to look like a big pile of garbage the closer you look at it. It’s not surprising for some answers to come out of it that are very obviously wrong.
If you’ve been following the polling there is nothing different or unique about this one. It’s consistent with pretty much all polling over the past 400 days. Biden is losing. Polling is definitely still broken, but it’s consistent. There is no fuckery.
Biden needs to be up by 4-12 points in those states if he wants to win.
See my posts in !data_vizualisations@lemmy.world. I make a map of the offset in polling Biden needs to win a given state, based on the fact that polls consistently overestimate how well Biden will do and underestimate how well Trump will do.
When you see these poll numbers, you should subtract 4 for Biden and add 8 for Trump. Those were the offsets we observed in the 2020 election.
So keeping in mind the data you already have about Trump, Biden, polling and its departure from real election results, it’s not even a question. Mortgage your house and put all your money on Trump to win. You have a differential polling error of 12 points in a Biden-Trump head-to-head. Biden needs to be in the mid to high fifties across the board to have a chance.
He’s in the low forties.
If you don’t end up clicking the link:
Relative polling error for Biden V Trump, 2020.
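To spell the arithmetic out (with invented topline numbers, not the actual results from these polls), this is all the adjustment amounts to:

```python
# Apply the "-4 Biden / +8 Trump" 2020-style offsets to a head-to-head topline.
# The raw numbers below are placeholders purely to show what the offsets do.

def adjust(biden_pct: float, trump_pct: float,
           biden_offset: float = -4.0, trump_offset: float = 8.0):
    """Shift a raw poll topline by the assumed historical polling error."""
    return biden_pct + biden_offset, trump_pct + trump_offset

raw = (45.0, 47.0)        # hypothetical poll showing Trump +2
adjusted = adjust(*raw)   # (41.0, 55.0): Trump +14 after the shift
print(f"raw margin: {raw[1] - raw[0]:+.0f}, adjusted margin: {adjusted[1] - adjusted[0]:+.0f}")
```

A 2-point Trump lead in the raw numbers becomes a 14-point gap once you assume the 2020 error repeats, which is the whole argument in a nutshell.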
This is quite interesting, thanks for sharing!
My only critique is that I don’t think the 2020 skew is valid anymore. After Dobbs, the landscape seems to have significantly changed. 2022 was predicted to favor Republicans by a strong margin, but it ended up being pretty much a tie. And a lot of special elections have had surprising results too.
My personal opinion is that polling methodology may have overcorrected for 2020, and we’re getting a picture now that’s skewed right, versus left from beforehand.
It’s really hard to say though. There weren’t a lot of great polls to start with in 2022, and special elections don’t have significant polling either. It’s a weird position where the only good data set we have is from 2020, but there have been so many changes in the national environment that we have reason to doubt the skews from 2020 are still valid. But at the same time, what else do we have? Vibes and feelings and anecdotes. And the engineer in me dislikes dismissing data in favor of vibes. I still think it’s important to consider, because none of this is infallible. But I honestly couldn’t tell you what the “right” outlook to have is. Maybe I’m onto something, but maybe I’m just letting optimism bleed into my better judgement.
All I know is that I don’t know.
My personal opinion is that polling methodology may have overcorrected for 2020, and we’re getting a picture now that’s skewed right, versus left from beforehand.
I won’t say that you’re wrong about what the pollsters are doing – but this strikes me as very obviously the wrong way to do it.
If you find out your polls were wrong, and then instead of digging into exactly what went wrong and fixing the methodology going forward (using non-phone polls, doing a more careful job of weighting the people who are actually going to vote rather than the people who aren’t, things like that), you just make up a fudge factor for how wrong the polls were last time and assume that adding it in means you don’t have to fix anything at a more fundamental level, that seems guaranteed to keep being wrong for as long as you’re doing it.
Again I won’t say you’re wrong about how they’re going about it. (And, I’m not saying it’s necessarily easy to do or anything.) But I think you’ve accurately captured the flaw in just adding a fudge factor and then assuming you’ll be able to learn anything from the now-corrected-for-sure-until-next-time-when-we-add-in-how-wrong-we-were-this-time answers.
That’s the thing: we don’t know how they’re correcting for it, or whether it’s just a fudge factor. The issue is that there are more confounding factors than anyone could list which could be the culprit here.
A fudge factor is easy, but it’s the wrong solution here. And the right solution is incredibly complex and difficult to even identify. In my field we can sometimes get away with using a timer instead of a precise calculation. That really isn’t an option for polls. I don’t envy the people trying to fix the models.
I mean, if we’re stepping off the data into editorializing: Trump outperformed all other Republicans in 2020, like he also did in 2016. Meanwhile, Trump-endorsed candidates struggled in 2018, in 2022, and in special elections. My read of this evidence, and I’ve seen it suggested elsewhere, is that whatever property causes Trump to consistently overperform isn’t transitive. So evaluating how well Trump will perform based on how well Republicans in general are performing is misguided. You should evaluate candidates individually, and that squares with how they’ve actually performed.
Also, this is one poll, and the aggregate of polling agrees with it. The methodological changes pollsters make from year to year are in fact extremely minor, and they are doing the appropriate statistical accounting afaict. There is nothing weird or wonky about these polls: Biden is just performing very, very poorly. I’ve been saying this for months to an onslaught of downvotes from people who simply don’t want to believe it.
Finally, I’ll argue that the ‘right’ outlook is always the one that aligns most closely with the data. We should believe the stories we tell about data less than the data itself. There is nothing to suggest that this election will really be that different from the previous three, and in terms of the landscape of contested states, the best proxy appears to be 2016. You should believe the data that is telling you that Joe Biden is losing this election. Biden has been setting up to lose the upper Midwest since December. These are the same states Hillary lost.
maybe I’m just letting optimism bleed into my better judgement
I agree. It’s also what the political pundit class did when they completely whiffed on 2016, and it’s what they’re doing right now. 90% of Lemmy also agrees with your sentiment, and in both Lemmy’s and the punditry’s refusal to be critical of Biden, to drag him towards more popular policies, they’re setting Trump up for victory.
I don’t really disagree with anything you’ve said, it’s a very valid take – and you’re spot on about underestimating Trump but overestimating Republican cohorts in polls. My only qualifier there is that we don’t know if 2022 models were overtuned for only Republicans, or also Trump support.
I don’t know if we can take 2016 as representative of our current dynamic. I think it’s certainly more representative than 2020, but shifting populations and world/domestic events have had massive impacts.
In short? I don’t know which outlook is more accurate. What I can reasonably assert though is that the reality will be somewhere between the less optimistic and the more optimistic outlooks. Taking these poll results at face value is probably the better strategic option anyway to create pressure to go vote and campaign.
I agree though, we shouldn’t be totally dismissive of these polls. It’s fine to scrutinize and question them like I’ve said, but it shouldn’t take away from the very real possibility that these are correct. Oddities don’t create impossibilities.
If you’ve been following the polling there is nothing different or unique about this one.
They posted their methodology and to me, as an unqualified lay person, it’s clearly shit, and there’s no reason to think it’ll yield anything even resembling an accurate picture of how people are going to vote in the election. It’s not surprising to me that recent polls in general tend to be as inaccurate as you’re saying they are.
I would be interested to go back and look at some of the polling that led up to recent special elections where Democrats won, and see how the poll results compared with the election results – if you follow polling in detail (which again, I don’t), do you happen to know where I could look to find that?
They posted their methodology and to me, as an unqualified lay person (…)
So like, if you know the above statement to be true, that’s exactly where you should stop in your reasoning. This is something I find Americans to be guilty of constantly: having the humility to understand that they shouldn’t have an opinion, and then proceeding to arrogantly have the opinion they just acknowledged they shouldn’t have. I think it’s a deeply human thing; we evolved to deal with missing information, so our brain fills in gaps and gives us convincing narratives. However, you have to resist that tendency when you know you really don’t know, and even more so when your beliefs go against what the data says.
If you can find me some sources of data on special elections, I’ll happily analyze it for you. I think it would be interesting if nothing else to see the offset. I’m not on my desktop machine, but I’ll give you some sources for data since you asked.
Surely, as a qualified non-lay person, you’ll be able to do a detailed takedown of all the criticism I arrived at for the poll’s methodology from like 2 minutes of looking, instead of just making a broad assertion that if the polling was wrong by a certain amount in a previous year, we should add that amount to this year’s polling to arrive at reality, and that’s all that’s needed, and then this year’s corrected poll will always be accurate.
Because to me, that sounds initially plausible, but then when you look at it for a little bit longer you say: oh wait, hang on, if that was all that was needed, the professional pollsters could just do that, and their answers would always be right. And you wouldn’t need to look closely at the methodology at all, just trust that “it’s a poll” means it’s automatically equal to every other poll (once you apply the magic correction factor).
To me that sounds, on close scientific examination, like a bunch of crap once you think about it for a little bit. But what do I know. I’m unqualified. I’ll wait for you to educate me.
I think the right answer is to do what you described, in the aggregate. Don’t do it on a pollster-to-pollster basis; do it at the state level, across all polls. You don’t do this as a pollster because that isn’t really what you are trying to model with a poll, and polls being wrong or uncertain is just a part of the game.
So it’s important to not conflate polling with the meta-analysis of polling.
I’m not so much interested in polls or polling as in being able to use them as a source of data to model outcomes that individually they may not be able to predict. Ultimately a poll needs to be based on the data it samples from to be valid. If there is something fundamentally flawed in the assumptions that form the basis of this, there isn’t that much you can do to fix it with updates to methods.
The -4, +8 spread is the prior I’m walking into this election year with. That, in spite of the pollsters’ best efforts to come up with an unbiased sample, they can’t predict the election outcome is fine. We can deal with that in the aggregate. This is very similar to Nate Silver’s approach.
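As a toy version of what I mean by doing it in the aggregate (the poll numbers below are invented, purely to show the mechanics): average a state’s polls first, then apply the 2020-derived offsets as the prior.

```python
# Toy state-level aggregation with the -4 / +8 prior applied afterwards.
# The polls below are made-up numbers for illustration, not real surveys.

from statistics import mean, pstdev

state_polls = [  # (biden_pct, trump_pct) from several hypothetical surveys
    (44, 46),
    (46, 45),
    (43, 48),
]

biden_avg = mean(p[0] for p in state_polls)
trump_avg = mean(p[1] for p in state_polls)
noise = pstdev(p[1] - p[0] for p in state_polls)  # poll-to-poll spread in the margin

# Apply the prior: subtract 4 from Biden, add 8 to Trump.
biden_adj, trump_adj = biden_avg - 4, trump_avg + 8

print(f"poll average: Biden {biden_avg:.1f} / Trump {trump_avg:.1f} (margin noise ~{noise:.1f} pts)")
print(f"with prior:   Biden {biden_adj:.1f} / Trump {trump_adj:.1f}")
```

The point of the exercise is that the poll-to-poll noise is a couple of points, while the systematic shift you’re assuming is twelve, so the prior dominates everything.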
If there is something fundamentally flawed in the assumptions that form the basis of this, there isn’t that much you can do to fix it with updates to methods.
On this, we 100% agree.
A lot of polls have been putting RFK Jr at 10+ percent. There are a ton of low-information voters who see the name and not much else
Not only putting RFK at 10%, but in states where he isn’t even on the ballot yet and likely will never be.
He’s on the ballot in WI and NV, which is enough that he has the potential to serve as a spoiler
I mean that’s pretty dismissive.
The largest cohort of people in this country want neither Biden nor Trump. Some of that cohort are willing to step out on a limb and support a third party.
It’s really hard to support somebody like Kennedy when you know his actual positions, and that they’re driven by the fact that a worm ate part of his brain
I’d vote for the worm before I voted for Kennedy, but whomever you are trying to convince isn’t listening. If you want to convince people of something, you need to understand them and why they do what they do.
Clearly at least 10% of voters see having a complete brain as less of a deal breaker than being either Biden or Trump. We should be curious as to why that is.
I’m listening — and when you talk to people, a lot of them don’t know.
Huh. Well I’m willing to believe that’s reality then.
Yeah I don’t see 10% of people voting for a guy who had a fucking worm somehow get into his brain, chew through part of his frontal lobe, and then die, and is still in there.
they are voting for his name. not him
Your issue is you fail to understand how stupid and shortsighted and uninformed the average person is.
I live in one of the most educated states in America, and plenty of people here are complete idiots when it comes to politics. Pretty much everyone under 30 is anti-Biden, anti-Trump, and pro-being a whiny removed about life who refuses to take any accountability but delights in trash-talking how everything sucks and the world will end. Along with some grandstanding about how ‘alt’ they are for being anarchist or communist or whatever. But if you ask them specific policy questions they lose their shit, because they don’t know and don’t care. Politics is all about image/branding for them, so they can feel smug and superior to ‘boring old status quo people’ like myself who will vote for Biden.