cross-posted to:
- privacy
Vechev and his team found that the large language models that power advanced chatbots can accurately infer an alarming amount of personal information about users—including their race, location, occupation, and more—from conversations that appear innocuous.
But what does it actually infer? The article is very light on details about that.
Probably because it was written by an AI chatbot.
Things like race, sex, orientation, and job: stuff that a human could probably also infer from talking to you. I think this article is a little alarmist. I could look at you and infer your race lol.
Also, companies already infer that same information about you in other ways, so this won't really change much. It just makes the inference more accurate and faster while costing more.