Researchers want the public to test themselves: https://yourmist.streamlit.app/. Marking each of 20 headlines as true or false gives the user a set of scores and a “resilience” ranking that compares them to the wider U.S. population. It takes less than two minutes to complete.
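For the curious, the mechanics are easy to sketch: count the correct true/false calls out of 20, then rank that total against a population sample. A minimal Python illustration - the scoring convention and the norm data below are my assumptions, not the app’s actual internals:

```python
from bisect import bisect_left

# Hypothetical norm sample: MIST-20 totals from a population survey.
# The real app ranks against actual U.S. survey data; these are made up.
population_scores = sorted([8, 11, 12, 13, 13, 14, 15, 15, 16, 18])

def score(answers: list[bool], is_real: list[bool]) -> int:
    """Number of correctly classified headlines (0-20)."""
    return sum(a == t for a, t in zip(answers, is_real))

def resilience_percentile(total: int) -> float:
    """Share of the norm sample scoring strictly below `total`."""
    return bisect_left(population_scores, total) / len(population_scores)

# A respondent who got 15 of 20 headlines right:
print(f"{resilience_percentile(15):.0%} of the sample scored lower")  # 60%
```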
The paper
Edit: the article might be misrepresenting the study and its findings, so it’s worth checking the paper itself. (See @realChem’s comment in the thread.)
I feel like a lot of people are missing the point when it comes to the MIST. I just very briefly skimmed the paper.
Misinformation susceptibility means being vulnerable to incorrect information.
It’s funny how the post about a misinformation test ended up riddled with misinformation because no one bothered to read the paper before letting their mouth run. Now, I don’t doubt that your brilliant minds can overrule, off the top of your heads, a measure produced by years of research with hundreds of participants; but even if a deeper analysis of the paper contradicts some of what I’ve said, shouldn’t reading it be the baseline?
Not saying you’re wrong at all, but I just did the test and it’s kinda funny that the title of this article would certainly have been one of the “fake news” examples.
Obviously the study shows that the test is useful (as you pointed out quite well!), but it’s ironic that exactly the kind of “bait” they want people to recognize as fake news was used as the title of the article about the paper.
(Also, not saying the authors knew about or approved the article title or anything)
Thanks for this. I’ll freely admit I’m an idiot and didn’t feel smart enough to understand the paper (see username). Clarification is much welcome.
I added the link to the paper to the body of the post.
What I find funny is how your very comment exemplifies what’s potentially a major source of misinformation: you’re making a categorical statement about something that you do not know, and cannot reliably know - whether other people read the paper or not. (In your exact words: “no one bothered to read the paper”.) You’re voicing certainty over an assumption.
Also note that the secondary source (the article being directly linked by the OP) is contextually more important in this discussion than the primary source (the paper).
(I actually read the paper, even if not linked by the article being shared. And my points still stand.)
Simpler? Yes. Less ambiguous? I don’t think so - the absence of an “I don’t know” option might make it even more ambiguous, since participants are expected to make shit up just to get through the test.
A coin toss would be even simpler to implement, but you can guess how valid its outcome would be.
In other words: “easier to implement” is not automatically “better”.
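To make the coin-toss point concrete, here’s a quick simulation of respondents who guess on every item - the noise floor a forced true/false format can’t separate from an honest “I don’t know” (numbers are illustrative):

```python
import random

# Each simulated respondent flips a fair coin on all 20 items.
random.seed(0)
trials = 100_000
scores = [sum(random.random() < 0.5 for _ in range(20)) for _ in range(trials)]

print(f"mean score: {sum(scores) / trials:.2f} / 20")                     # ~10
print(f"share scoring 14+: {sum(s >= 14 for s in scores) / trials:.1%}")  # ~5.8%
```

Pure guessing averages 10/20, and roughly one in seventeen guessers lands at 14 or above - a score the binary format will happily report as meaningful.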
The absence of a Likert scale is not what I’m talking about. I’m talking about the unidimensionality of the test, when there are clearly two partially dependent variables related to misinformation susceptibility: agreement with the statement, and how sure the respondent is about that agreement.
For a simple analogy: the current answers are black and white, and I’m asking where red fits in. A Likert scale would only add shades of grey.
Note that the paper does acknowledge this second dimension: it’s the D and N in the “VeRiFication DoNe” model. It’s likely one of the reasons why not even the authors propose that the MIST be used on its own.
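For reference, here’s how I read the sub-scores behind that model - this is my paraphrase of the scoring described in the paper, not the authors’ code, so verify against the original:

```python
def verification_done(answers: list[bool], is_real: list[bool]) -> dict:
    """Sketch of the five "VeRiFication DoNe" sub-scores for 10 real + 10 fake items.

    answers[i] is True if the respondent called item i real;
    is_real[i] is the ground truth.
    """
    r = sum(a and t for a, t in zip(answers, is_real))          # real news detection (0-10)
    f = sum(not a and not t for a, t in zip(answers, is_real))  # fake news detection (0-10)
    real_calls = sum(answers)
    fake_calls = len(answers) - real_calls
    return {
        "V": r + f,                    # veracity discernment (0-20)
        "r": r,
        "f": f,
        "d": max(0, fake_calls - 10),  # distrust: excess "fake" judgments
        "n": max(0, real_calls - 10),  # naivete: excess "real" judgments
    }
```

Note how two respondents can share the same V while sitting at opposite ends of d and n - which is exactly the second dimension (sureness vs. agreement) being argued about here.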
A rather interesting tidbit from the paper:
Emphasis mine. What is this passage actually saying? Pay special attention to the words “novel insights” and “theory development”, which sound a lot like filler. (Ctrl+F the article for context.)
It’s also worth keeping the ongoing replication crisis in mind - it affects psychology the most - when reading this article and the related paper.
Thank you for this!
I have to say though, it’s really interesting to see the reactions here, given the paper’s findings. In the study, while people got better at spotting fake news after the game/test, they got worse at identifying real news, and became more distrustful of news in general. I feel like that’s on display here - with people (somewhat correctly) mistrusting the misleading article, but also (somewhat incorrectly) mistrusting the research behind it.
That’s a very interesting observation, now that you mention it.