- cross-posted to:
- privacy
Vechev and his team found that the large language models that power advanced chatbots can accurately infer an alarming amount of personal information about users—including their race, location, occupation, and more—from conversations that appear innocuous.
LOL. Nice!
I wouldn’t expect ChatGPT to be well-versed in forensic linguistics; I suspect a human expert could make better guesses based on seemingly innocuous things like sentence structure and word choice. I’ve seen some research on estimating age and gender from writing. There’s a primitive example of that here: https://www.hackerfactor.com/GenderGuesser.php
My last comment is a bit short (it wants 300 words or more), but I am amused by the results:
Genre: Informal
Female = 338
Male = 309
Difference = -29; 47.75%
Verdict: Weak FEMALE
I’ll pat myself on the back for writing more or less down the middle. :)
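For the curious, the general idea behind tools like Gender Guesser (going back to Koppel and Argamon's corpus work) is just a weighted count of common function words: each word carries a "female" or "male" weight, the weights are summed, and the larger total wins. Here's a rough sketch in Python; the word weights and the "weak" cutoff below are made-up placeholders for illustration, not the values the site actually uses:

```python
import re

# Illustrative placeholder weights; real tools derive these from corpus studies.
FEMALE_WEIGHTS = {"with": 52, "if": 47, "not": 27, "where": 18, "your": 17, "she": 6, "and": 4, "me": 4}
MALE_WEIGHTS = {"around": 42, "what": 35, "more": 34, "are": 28, "as": 23, "the": 7, "a": 6, "it": 6}

def guess_gender(text: str) -> str:
    # Tokenize to lowercase words, then sum the weight of every keyword hit.
    words = re.findall(r"[a-z']+", text.lower())
    female = sum(FEMALE_WEIGHTS.get(w, 0) for w in words)
    male = sum(MALE_WEIGHTS.get(w, 0) for w in words)
    diff = male - female
    verdict = "MALE" if diff > 0 else "FEMALE"
    # Arbitrary cutoff: call it "weak" when the totals are close.
    strength = "Weak " if abs(diff) < 0.1 * max(female + male, 1) else ""
    return f"Female = {female}  Male = {male}  Difference = {diff}  Verdict: {strength}{verdict}"

print(guess_gender("If you were not here with me, I would not know where to go."))
```

It's crude, but it shows why short samples (like a brief comment) give such wobbly verdicts: a handful of keyword hits can swing the totals either way.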
Your wording makes you sound like such a Weak FEMALE. /s