As Stack Overflow moderator Machavity puts it, AI chatbots are like parrots. ChatGPT, for example, doesn’t understand the responses it gives you; it simply associates a given prompt with information it has access to and regurgitates plausible-sounding sentences. It has no way to verify that the responses it provides are accurate. ChatGPT is not a writer, a programmer, a scientist, a physicist, or any other kind of expert our network of sites depends upon for high-value content. When prompted, it is just stringing together words based on the data it was trained on; it does not understand what it’s saying. That lack of understanding yields unverified information presented in a way that sounds smart, or citations that fail to support the claims they’re attached to, when the citations aren’t wholly fictitious. Furthermore, the ease with which a user can copy and paste an AI-generated response simply moves the metaphorical “parrot” from the chatbot to the user: they don’t really understand what they’ve just copied and presented as an answer to a question.

Content posted without genuine domain understanding, but written in a “smart” way, is dangerous to the integrity of the Stack Exchange network’s goal: to be a repository of high-quality question-and-answer content.

AI-generated responses also present a serious honesty issue. Submitting AI-generated content without attributing its source, as commonly happens, is plagiarism. This makes AI-generated content eligible for deletion under the Stack Exchange Code of Conduct and the rules on referencing. However, for moderators to act on that, they must first identify content as AI-generated — something the private AI-generated content policy permits only in extremely narrow circumstances, which apply to only a very small percentage of the AI-generated content posted to the sites.

c/o https://social.coop/@lukem@kbin.social/110490440953441074