Online reviews serve as a guide for consumer choice. With advancements in large language models (LLMs) and generative AI, the fast and inexpensive creation of human-like text may threaten the feedback function of online reviews if neither readers nor platforms can differentiate between human-written and AI-generated content. In two experiments, we found that humans cannot recognize AI-written reviews. Even with monetary incentives for accuracy, both Type I and Type II errors were common: human reviews were often mistaken for AI-generated reviews, and even more frequently, AI-generated reviews were mistaken for human reviews. This held true across various ratings, emotional tones, review lengths, and participants’ genders, education levels, and AI expertise. Younger participants were somewhat better at distinguishing between human and AI reviews. An additional study revealed that current AI detectors were also fooled by AI-generated reviews. We discuss the implications of our findings for trust erosion, manipulation, regulation, consumer behavior, AI detection, market structure, innovation, and review platforms.
Yeah, reviews are relatively easy to fake with current technology. They’re short and most of them follow a fairly limited set of formats. This isn’t like generating hands where there are a ton of ways for an AI to give itself away. Not that most humans are very good at drawing hands.
I mean, just look at reddit. It’s full of whole fake threads of bots talking to bots using copied comments, and the only way you can guess it’s a bot is by going through its history.