Researchers want the public to test themselves: https://yourmist.streamlit.app/. Selecting true or false against 20 headlines gives the user a set of scores and a “resilience” ranking that compares them to the wider U.S. population. It takes less than two minutes to complete.

The paper

Edit: the article might be misrepresenting the study and its findings, so it’s worth checking the paper itself. (See @realChem's comment in the thread.)

  • realChem@beehaw.org · 1 year ago

    Hey all, thanks for reporting this to bring some extra attention to it. I’m going to leave this article up, as it is not exactly misinformation or anything otherwise antithetical to being shared on this community, but I do want to note that there are four different sources here:

    • There’s the original study which designed the misinformation susceptibility test; the ArXiv link was already provided, but in case anyone would like a look the study was indeed peer reviewed and published (as open access) in the journal Behavior Research Methods. As with all science, when reading the paper it’s important to recognize exactly what it is the authors were even trying to do, taking into account that they’re likely using field-specific jargon. I’m not a researcher in the social sciences so I’m unqualified to have too strong an opinion, but from what I can tell they did achieve what they were trying to with this study. There are likely valid critiques to be made here, but as has already been pointed out in our comments many aspects of this test were thought out and deliberately chosen, e.g. the choice to use only headlines in the test (as opposed to, e.g., headlines along with sources or pictures). One important thing to note about this study is that it is currently only validated in the US. The researchers themselves have made it clear in the paper that results based on the current set of questions likely cannot be compared between countries.

    • There’s the survey hosted on streamlit. This is being run by several authors on the original paper, but it is unclear exactly what they’re going to do with the data. The survey makes reference to the published paper so the data from this survey doesn’t seem like it was used in constructing the original paper (and indeed the original paper discusses several different versions of the test as well as a longitudinal study of participants). Again, taken for what it is I think it’s fine. In fact I think that the fact that this survey has been made available is why this has generated so much discussion and (warranted) skepticism. Being able to test yourself on a typical survey gives a feel for what is and isn’t actually being measured. I consider this a pretty good piece of science communication / outreach, if nothing else.

    • There is the poll by YouGov. This is separate from the original study. The researchers seem to be aware of it, but as far as I can tell weren’t directly involved in running the poll, analyzing the data, or writing the article about it. This is not inherently a bad poll, but I do think it’s worth noting that it is not a peer-reviewed study. We have little visibility into how they conducted their data analysis here, for one thing. From what I can tell, without knowing how they actually did their analysis, the data here looks fine, but (this not being a scientific paper) some of the text surrounding the data is a bit misleading. EDIT: Actually it looks like they’ve shared their full dataset, including how they broke categories down for analysis; it’s available here. Seeing this doesn’t much change my overall impression of the survey, other than to agree with Panteleimon that the demographic representation here is not very well balanced, especially once you start trying to take the intersections of multiple categories. Doing that, some of their data points are going to have much lower statistical significance than others. My main concern is that some of the text surrounding the data is kinda misleading. For example, in one spot they write, “Older adults perform better than younger adults when it comes to the Misinformation Susceptibility Test,” which (if their data and analysis can be believed) is true. However, nearby they write, “Younger Americans are less skilled than older adults at identifying real from fake news,” which is a different claim and as far as I can tell isn’t well supported by their data. To see the difference, note that when identifying real vs fake news a reader has more to go on than just a headline. The MIST doesn’t test the ability to incorporate all of that context; that’s just not what it was designed to do.

    • Finally, there’s the linked phys.org article. This is the part that seems most objectionable to me. The headline is misleading in the same way I just discussed, and the text of the article does a bad job of making it clear that the YouGov poll is different from the original study. The distinction is mentioned in one paragraph, but the rest of the article blends quotes from the researchers with YouGov polling results, strongly implying that the YouGov poll was run by these researchers (again, it wasn’t). It’s a bit unfortunate that this is what was linked here, since I think it’s the least useful of these four sources, but it’s also not surprising since this kind of pop-sci reporting will always be much more visible than the research it’s based on. (And to be clear, I feel I could have easily linked this article myself, I probably wouldn’t have even noticed the conflation of different sources if this hadn’t generated so many comments and even a report; just a good reminder to keep our skeptic hats on when we’re dealing with secondary sources.)

    Finally, I’d just like to say I’m pretty impressed by the level of skepticism, critical thinking, and analysis you all have already done in the comments. I think this indicates a pretty healthy relationship to science communication. (If anything, folks are maybe erring a bit on the side of too skeptical, but I blame the phys.org article for that, since it mixed all the sources together.)

    • somefool@beehaw.orgOP · 1 year ago

      Throwing phys.org into my “not necessarily reliable sources” list. Sorry about this, I’ll be more careful in the future.

      I added “Misleading” to the title.

  • Lvxferre · 1 year ago

    May I be honest? The study is awful. It has two big methodological flaws that completely taint its outcome.

    The first one is the absence of either an “I don’t know” answer or a sliding scale for how sure you are of your answer. In large part, misinformation is a result of a lack of scepticism - that is, failure at saying “I don’t know this”. And odds are that you’re more likely to spread claims you’re sure about, whether they’re misinformation or actual information.

    The second flaw is over-reliance on geographically relevant information. Compare for example the following three questions:

    1. Morocco’s King Appoints Committee Chief to Fight Poverty and Inequality
    2. International Relations Experts and US Public Agree: America Is Less Respected Globally
    3. Attitudes Toward EU Are Largely Positive, Both Within Europe and Outside It

    The likelihood of someone living in Morocco, the USA, or the EU being misinformed about #1, #2, or #3 respectively is far lower than for someone living elsewhere. And more than that: due to the first methodological flaw, the study isn’t handling the difference between “misinformed” (someone who gets it wrong) and “uninformed” (someone who doesn’t believe anything in this regard).

    (As someone who doesn’t live in any of those three: the questions regarding the EU are a bit easier to know about, but the other two? Might as well toss a coin.)

    • Square Singer@feddit.de · 1 year ago

      You are totally right. It mostly tests whether you are up to date on the current news stories in the “correct” part of the world.

      What makes this worse is that “Government Officials Have Manipulated Stock Prices to Hide Scandals” is classified as a fake news headline. That might be true in the US, but here exactly this happened - or at least they tried and failed. Someone working for a big state pension fund was gambling with the fund’s money, and when she lost a lot of it, she tried manipulating prices to win it back, which failed.

      The right way to discern fake news from real news (apart from maybe really obvious examples) is to read the article, check the sources and compare with other sources.

      In 2013 a headline like “Putin about to start a decade-long war in Europe that will cause a world-wide financial crisis” would have been a ridiculous clickbait fake news headline.

      Same with “The whole continent is not allowed to leave their homes for months due to Chinese virus” in 2019.

      Or “CIA is spying on all internet users” in 2008.

      And yet these things happened.

      Because what makes fake news is not whether it is outlandish that something like that could happen, but instead it’s fake news because it hasn’t happened.

    • Silviecat44@vlemmy.net · 1 year ago

      Having done it, I also think this survey is awful. You put most of my thoughts about it into words.

  • koreth@lemm.ee · 1 year ago

    Got 20/20, was rewarded with a message, “You’re more resilient to misinformation than 100% of the US population!” and looked for the Fake button because as a member of the US population, that is a mathematical impossibility.

      • XTL@sopuli.xyz · 1 year ago

        100% has a margin of error in the millions, or tens of millions, depending on interpretation.

        • Lvxferre · 1 year ago

          For most purposes, a 5% margin of error is considered acceptable. A quick web search puts the US population at about 336M, so even if up to ~17M people outscore you, the claim of beating 100% still falls within that margin.

          (But then, if you actually know how many people are informed, your acceptable margin of error falls down considerably.)
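To make the back-of-the-envelope figure above concrete, here is a minimal sketch of the arithmetic (the 336M population figure is the commenter's rough web-search estimate, not an official count):

```python
# Margin-of-error arithmetic behind the "~17M" figure above.
# Assumption: the commenter's rough estimate of 336M US residents.
us_population = 336_000_000
margin = 0.05  # the conventional 5% margin of error

# Maximum number of people who could outscore you while a claim of
# "better than 100%" still falls within the margin of error.
allowed_exceptions = int(us_population * margin)
print(allowed_exceptions)  # 16800000, i.e. roughly the ~17M cited
```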

  • landwomble@kbin.social · 1 year ago

    I’m not sure this is a good study. I mean, I scored 85%, so woohoo, but you only get headlines to go on. The art of noticing disinformation lies in reading articles and making inferences from them. Questions like “vaccines contain harmful chemicals” are obvious red flags, but some are reasonable-sounding headlines where I’d imagine the article itself would fall apart on a first reading. I know half the problem is that people don’t read articles, but this is a very simplistic survey.

    • somefool@beehaw.orgOP · 1 year ago

      It is, and I feel the questions are quite obvious.

      That being said… I’m related to conspiracy theorists. I got a first-row seat to their dumbassery on Facebook before I deleted my account. And a significant issue was paywalled articles with clickbait titles, during Covid especially. The title would be a doubt-inducing question, such as “Do vaccines make you magnetic?”, and the reasoning disproving it was locked behind the paywall. My relatives used those as confirmation that their views were true, because the headlines introduced doubt and the content wasn’t readable. That and satire articles.

    • sab@kbin.social · 1 year ago

      Not only is it not good, I’d dare to say it’s awful. Never mind that the headlines themselves are terribly crafted: the entire point is that one has to be critical of sources, and not take everything at face value just because it sounds somewhat convincing. It’s not about blatantly discrediting things at face value because they don’t fit what you believed to be true.

      By the standards of this test, headlines such as “The CIA Subjected African-Americans to LSD for 77 Consecutive Days in Experiment” would clearly belong in the fake news category. And if it’s supposed to test whether the (presumably American) respondent has decent insight into the realities of contemporary politics, why in the world would it include something as obscure as “Morocco’s King Appoints Committee Chief to Fight Poverty and Inequality”? There’s literally no way of knowing without context whether the associated article would be propaganda or just an obscure piece of foreign correspondence. Many of the “true” headlines are still things one shouldn’t take for granted without checking sources, and many of the “fake” ones are cartoonish.

      It’s just bad research.

      Edit: Rather than bad research, it seems it might be badly misrepresented. The paper itself appears completely different from what is reported in the linked article. I’m still, however, not entirely convinced by the approach of using AI-generated headlines.

    • vaguerant@kbin.social · 1 year ago

      Just took a look here, and yeah. One of the headlines they ask you to rate is “Hyatt Will Remove Small Bottles from Hotel Bathrooms”. It’s the kind of thing that’s basically a coin flip. Without having any context into the story, I have no opinion on whether it’s fake or not. I don’t think guessing incorrectly on this one would indicate somebody is any more or less susceptible to miscategorizing stories as real/fake.

      • sab@kbin.social · 1 year ago

        I assume the idea is to include some pointless headlines (such as this one) to provide some sort of baseline. The researchers probably extract several dimensions from the variables, and I assume this headline would feed into a “general scepticism” variable that measures the likelihood that a respondent will lean towards things being fake rather than real.

        Still, I’m not at all convinced about this research design.

    • tal@kbin.social · 1 year ago

      A common tactic I’ve seen in news headlines is referencing substances that can harm a human without indicating that, in the quantities present, they are not a concern. Given that, I’m not sure what the right answer to the vaccines question would be; if that’s the case here, it may be true but misleading.

    • mrbubblesort@kbin.social · 1 year ago

      Somehow I got 100%, but it was mainly luck. I really have no clue what % support marijuana is in the US, how young Americans feel about global warming, or how globally respected they feel. I’m not from there, so I don’t follow it at all. I think it would’ve been better if they had an “I don’t know / Irrelevant to me” option.

    • DessertStorms@kbin.social · 1 year ago

      Questions like “vaccines contain harmful chemicals” are obvious red flags but there are some that are a reasonable-sounding headline

      It’s exactly those “reasonable”-sounding headlines (and in some cases the ideas and opinions that back them up in the body of the article - though the body has to be provided for that to be relevant, which, as you point out, it isn’t, and that’s a big problem) that serve as misinformation and/or dog whistles. So “vaccines contain harmful chemicals” could be aimed at antivaxxers (and those susceptible to being pushed there), but it’s also technically correct: apples and bananas contain “harmful chemicals” too.
      The article could be either fear mongering and disinformation (false), or science-based and educational (true), but we can’t know which just from the headline.

      A headline like “small group of people control the global media and economy” could be a dog whistle for antisemitism (false), or an observation of life on earth right now (true).

      My point is there are headlines that would seem like conspiracy theory to some but irrefutable fact to others - and probably the reverse for each respective group - and without more than a headline (and often even with one, of course), it’s entirely down to the reader’s existing opinions and biases.

      Not a great way to test this.

  • somefool@beehaw.orgOP · 1 year ago

    As a terminally online millennial, I was scared for a second, but I did okay on the test. Then again, I’m 40 and barely even qualify as ‘millennial’, and not at all as ‘young’.

    I found the language of the questions was glaringly obvious. What do you think?

    • Lvxferre · 1 year ago

      I found the language of the questions was glaringly obvious. What do you think?

      It’s potentially on purpose, to exploit the fact that fake news often has a certain “discursive pattern”.

  • ach@feddit.de · 1 year ago

    That is such a lazy study it’s pitiful, and it in no way tests your ability to discern the veracity of news - so even the full marks I got are useless.

    First of all, if you generate fake headlines, either test someone’s general knowledge or their critical thinking; don’t conflate the two. Secondly, it’s the latter that actually matters most, so if you build your knowledge based on headlines, you’re already close to the fake news group.

  • aes@beehaw.org · 1 year ago

    I feel like a lot of people are missing the point when it comes to the MIST. I just very briefly skimmed the paper.

    Misinformation susceptibility is being vulnerable to information that is incorrect

    • @ach@feddit.de @GataZapata@kbin.social It seems that the authors are looking to create a standardised measure of “misinformation susceptibility” that other researchers can employ in their studies so that these studies can be comparable, (the authors say that ad-hoc measures employed by other studies are not comparable).
    • @lvxferre@lemmy.ml the reason a binary scale was chosen over a likert-type scale was because
      1. It’s less ambiguous to participants
      2. It’s easier for researchers to implement in their studies
      3. The results produced are of a similar ‘quality’ to the likert scale version
    • If the test doesn’t include pictures, a source name, and a lede sentence and produces similar results to a test which does, then the simpler test is superior (think about the participants here). The MIST shows high concurrent validity with existing measures and states a high level of predictive validity (although I’d have to read deeper to talk about the specifics)

    It’s funny how the post about a misinformation test was riddled with misinformation because no one bothered to read the paper before letting their mouth run. Now, I don’t doubt that your brilliant minds can overrule a measure produced with years of research and hundreds of participants off the top of your heads, but even if what I’ve said may be contradicted by a deeper analysis of the paper, shouldn’t it be the baseline?

    • thatgal · 1 year ago

      Not saying you’re wrong at all, but I just did the test and it’s kinda funny that the title of this article would certainly have been one of the “fake news” examples.

      Obviously the study shows that the test is useful (as you pointed out quite well!), but it’s ironic that the type of “bait” that they want people to recognize as fake news was used as the title of the article for the paper.

      (Also, not saying the authors knew about or approved the article title or anything)

    • somefool@beehaw.orgOP · 1 year ago

      Thanks for this. I’ll freely admit I’m an idiot and didn’t feel smart enough to understand the paper (see username). Clarification is much welcome.

      I added the link to the paper to the body of the post.

    • Lvxferre · 1 year ago

      It’s funny how the post about a misinformation test was riddled with misinformation because no one bothered to read the paper before letting their mouth run.

      What I find funny is how your very comment exemplifies what’s potentially a major source of misinformation: you’re making a categorical statement about something that you do not know, and cannot reliably know - whether other people read the paper or not (ipsis digitis, “no one bothered to read the paper”). You’re voicing certainty over an assumption.

      Also note that the secondary source (the article being directly linked by the OP) is contextually more important in this discussion than the primary source (the paper).

      (I actually read the paper, even if not linked by the article being shared. And my points still stand.)

      It’s less ambiguous to participants

      Simpler? Yes. Less ambiguous? I don’t think so - the absence of an “I don’t know” answer might potentially make it even more ambiguous, as participants are expected to make shit up to get over the test.

      It’s easier for researchers to implement in their studies

      A coin toss would be even simpler to implement, but one can guess the validity of its outcome.

      In other words: “easier to implement” is not automatically “better”.

      The results produced are of a similar ‘quality’ to the likert scale version

      The absence of a likert scale is not what I’m talking about. I’m talking about the unidimensionality of the test, when there are clearly two partially dependent variables related to misinfo: agreement with the statement and sureness.

      For a simple analogy: the current answers are black and white. I’m asking where to put red in that. A likert scale would add shades of grey.

      Note that the paper does acknowledge this second dimension: it’s the D/N in the “VeRiFication DoNe” model. It’s potentially one of the reasons why not even the authors of the article propose the MIST to be used on its own.


      A rather interesting tidbit of the paper:

      we demonstrate how the MIST—in conjunction with Verification done—can provide novel insights on existing psychological interventions, thereby advancing theory development.

      Emphasis mine. What’s the meaning of this utterance? Special focus on the words “novel insights” and “theory development”, that sound a lot like filler. (Ctrl+F the article for context.)

      It’s also interesting to take into account the current replication crisis - that affects psychology the most - when reading this article and the related paper.

    • Panteleimon@beehaw.org · 1 year ago

      Thank you for this!

      I have to say though, it’s really interesting to see the reactions here, given the paper’s findings. Because in the study, while people got better at spotting fake news after the game/test, they got worse at identifying real news, and overall more distrustful of news in general. I feel like that’s on display here - with people (somewhat correctly) mistrusting the misleading article, but also (somewhat incorrectly) mistrusting the research behind it.

  • Dufurson@kbin.social · 1 year ago

    That test is BS. The first time it said I might be gullible, and the second round I got 17/20. The study is the fake news.

  • Kwakigra@beehaw.org · 1 year ago

    I would cheat on this test because I cheat in real life. I’ve been humbled enough times not to put total faith in my initial impression and would rather have more evidence than whatever I happen to be aware of at the moment to determine whether a claim is true.

    • androogee (they/she)@midwest.social · 1 year ago

      Absolutely. The problem isn’t that some people can psychically know whether a headline is true and some can’t.

      The problem is deciding that you know without checking. Which is exactly what this test expects you to do.

      I mean what does “real” even mean in this context? Just that it’s a published headline or that it’s actually been fact checked?

  • AlteredStateBlob@kbin.social · 1 year ago

    Weird. The only people I know that continually and aggressively bring up very obvious misinformation are the 50+ people in my life.

    • somefool@beehaw.orgOP · 1 year ago

      I think the young feel immune - they feel socially progressive news can’t be lies because “that’s not what our side does, we have ethics.”

      It’s not true in practice, though. Fake news is used to sow division, and making people angry on both sides is part of it. The far-right, boomer-targeted fake news is more obvious because it’s outlandish, but there’s more than that out there.

    • sab@kbin.social · 1 year ago

      Ironically the study ignores the arguably most important part of facing fake news: being critical of sources. And as a reportedly “vulnerable” millennial myself, I have to say I’m critical of this one.

    • Ulu-Mulu-no-die@kbin.social · 1 year ago

      That’s anecdotal experience. I’m 50+ and I got 19/20: I identified all the fakes and marked one of the real headlines as fake, so I’m on the skeptical side of things.

  • Silviecat44@vlemmy.net · 1 year ago

    I got 18/20, but also THEY DON’T HAVE AN OPTION FOR AUSTRALIA! What kind of survey has Austria and Azerbaijan but not Australia? Seriously. And you Americans love a binary scale of political preference. At least it wasn’t a required question.

      • vaguerant@kbin.social · 1 year ago

        After you finish the survey, it asks your age, gender, country and political position (from a drop-down that goes from “Extremely Liberal” to “Extremely Conservative”).

        • LollerCorleone@kbin.social · 1 year ago

          Oh, I thought they would ask for it before the survey begins and tweak the questions accordingly. It isn’t really fair to ask someone from the other side of the world questions based solely on US politics.

  • Panteleimon@beehaw.org · 1 year ago

    Hooo boy. This article is wildly misrepresenting both the study and its findings.

    1. The study did not set out to test ability to judge real/fake news across demographic differences. The study itself was primarily looking to determine the validity of their test.
    2. Because of this, their validation sample is wildly different from the sample observed in the online “game” version. As in, the original sample vetted participants, and also removed any who failed an “attention check”, neither of which were present in the second test.
    3. Demographics on the portion actually looking at age differences are… let’s say biased. There are far more young participants, with only ~10% over 50. The vast majority (almost 90%!) were college educated. And the sample trended liberal to a significant degree.
    4. All the above suggests that the demographic most typically considered “bad” at spotting fake news (conservative boomers who didn’t go to college) was massively underrepresented in the study. Which makes sense, given that this portion relied on largely unvetted volunteers signing up to test their ability to spot fake news.

    Most critically, the study itself does not claim that differences between these demographics are representative. That portion is looking at differences in the sample pool before/after the test, to examine its potential for “training” people to spot fake news (this had mixed results, which they acknowledge). This article, ironically, is spreading misinformation about the study itself, and doing the researchers and its readers a great disservice.

  • niktemadur@kbin.social · 1 year ago

    I imagine a main goal is to create a sensation of being overwhelmed, which in turn can easily make one apathetic, cynical.