Researchers want the public to test themselves: https://yourmist.streamlit.app/. Marking each of 20 headlines as true or false gives the user a set of scores and a “resilience” ranking that compares them to the wider U.S. population. It takes less than two minutes to complete.

The paper

Edit: the article might be misrepresenting the study and its findings, so it’s worth checking the paper itself. (See @realChem’s comment in the thread.)

  • aes @beehaw.org · 1 year ago

    I feel like a lot of people are missing the point when it comes to the MIST. I just very briefly skimmed the paper.

    Misinformation susceptibility is being vulnerable to information that is incorrect

    • @ach@feddit.de @GataZapata@kbin.social It seems the authors are looking to create a standardised measure of “misinformation susceptibility” that other researchers can employ in their studies, so that those studies become comparable (the authors say the ad-hoc measures used by other studies are not).
    • @lvxferre@lemmy.ml a binary scale was chosen over a Likert-type scale because
      1. It’s less ambiguous to participants
      2. It’s easier for researchers to implement in their studies
      3. The results produced are of a similar ‘quality’ to the Likert-scale version
    • If a test that doesn’t include pictures, a source name, or a lede sentence produces results similar to one that does, then the simpler test is superior (think about the participants here). The MIST shows high concurrent validity with existing measures, and the authors report a high level of predictive validity (although I’d have to read deeper to speak to the specifics).

    It’s funny how the post about a misinformation test was riddled with misinformation because no one bothered to read the paper before letting their mouth run. Now, I don’t doubt that your brilliant minds can overrule, off the tops of your heads, a measure produced by years of research and hundreds of participants, but even if a deeper analysis of the paper contradicts what I’ve said, shouldn’t the paper be the baseline?

    • thatgal · 1 year ago

      Not saying you’re wrong at all, but I just did the test and it’s kinda funny that the title of this article would certainly have been one of the “fake news” examples.

      Obviously the study shows that the test is useful (as you pointed out quite well!), but it’s ironic that the type of “bait” that they want people to recognize as fake news was used as the title of the article for the paper.

      (Also, not saying the authors knew about or approved the article title or anything)

    • somefool@beehaw.orgOP · 1 year ago

      Thanks for this. I’ll freely admit I’m an idiot and didn’t feel smart enough to understand the paper (see username). The clarification is much appreciated.

      I added the link to the paper to the body of the post.

    • Lvxferre · 1 year ago

      It’s funny how the post about a misinformation test was riddled with misinformation because no one bothered to read the paper before letting their mouth run.

      What I find funny is how your very comment exemplifies a potentially major source of misinformation: you’re making a categorical statement about something that you do not know and cannot reliably know - whether other people read the paper or not. (Ipsis digitis: “no one bothered to read the paper”.) You’re voicing certainty over an assumption.

      Also note that the secondary source (the article being directly linked by the OP) is contextually more important in this discussion than the primary source (the paper).

      (I actually read the paper, even if not linked by the article being shared. And my points still stand.)

      It’s less ambiguous to participants

      Simpler? Yes. Less ambiguous? I don’t think so - the absence of an “I don’t know” option might make it even more ambiguous, as participants are expected to make shit up to get through the test.

      It’s easier for researchers to implement in their studies

      A coin toss would be even simpler to implement, but one can guess the validity of its outcome.

      In other words: “easier to implement” is not automatically “better”.

      The results produced are of a similar ‘quality’ to the Likert-scale version

      The absence of a Likert scale is not what I’m talking about. I’m talking about the unidimensionality of the test, when there are clearly two partially dependent variables related to misinfo: agreement with the statement, and sureness.

      For a simple analogy: the current answers are black and white. I’m asking where to put red in that. A Likert scale would only add shades of grey.

      Note that the paper does acknowledge this second dimension: it’s the D/N in the “VeRiFication DoNe” model. It’s potentially one of the reasons why not even the authors propose that the MIST be used on its own.
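      To make the two-dimensions point concrete, here is a minimal Python sketch. This is purely illustrative and is not the MIST’s actual scoring procedure: the function names, the confidence weighting, and the sample data are all invented for the example.

```python
# Illustrative only - NOT the MIST's real scoring. Contrasts a purely
# binary real/fake judgment with a two-dimensional response that also
# records how sure the participant is.

def score_binary(answers, truth):
    """One dimension: 1 point per correct real/fake call."""
    return sum(a == t for a, t in zip(answers, truth))

def score_two_dim(answers, confidences, truth):
    """Two partially dependent dimensions: weight each call by the
    participant's stated sureness (0.0 = pure guess, 1.0 = certain),
    so a confident error costs more than an admitted guess."""
    return sum(conf if a == t else -conf
               for a, conf, t in zip(answers, confidences, truth))

truth = [True, False, True]        # which headlines are real (made up)
answers = [True, False, False]     # participant's real/fake calls
confidences = [1.0, 0.5, 0.1]      # participant's sureness per call

print(score_binary(answers, truth))                 # 2
print(score_two_dim(answers, confidences, truth))   # 1.4
```

      A binary test cannot distinguish the confident error on the third headline from an admitted guess, which is exactly the “where to put red” complaint above.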


      A rather interesting tidbit of the paper:

      we demonstrate how the MIST—in conjunction with Verification done—can provide novel insights on existing psychological interventions, thereby advancing theory development.

      Emphasis mine. What is the meaning of this utterance? Pay special attention to the words “novel insights” and “theory development”, which sound a lot like filler. (Ctrl+F the article for context.)

      It’s also worth keeping in mind the ongoing replication crisis - which affects psychology the most - when reading this article and the related paper.

    • Panteleimon@beehaw.org · 1 year ago

      Thank you for this!

      I have to say though, it’s really interesting to see the reactions here, given the paper’s findings. In the study, while people got better at spotting fake news after the game/test, they got worse at identifying real news, and became more distrustful of news overall. I feel like that’s on display here - with people (somewhat correctly) mistrusting the misleading article, but also (somewhat incorrectly) mistrusting the research behind it.