- cross-posted to:
- privacy
Vechev and his team found that the large language models that power advanced chatbots can accurately infer an alarming amount of personal information about users—including their race, location, occupation, and more—from conversations that appear innocuous.
This is why I always filter all my ChatGPT conversations through the pirate filter:
https://pirate.monkeyness.com/translate
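For what it's worth, the joke points at a real idea: rewriting a prompt before it reaches the chatbot so the model sees less of your natural phrasing. Below is a minimal sketch of that idea using a handful of made-up word substitutions; it is not the actual transformation used by pirate.monkeyness.com/translate, just a hypothetical local stand-in.

```python
import re

# Hypothetical word-level substitutions; the real pirate translator's rules are unknown.
PIRATE_RULES = [
    (r"\bhello\b", "ahoy"),
    (r"\bmy\b", "me"),
    (r"\byou\b", "ye"),
    (r"\bis\b", "be"),
    (r"\bfriend\b", "matey"),
]

def to_pirate(text: str) -> str:
    """Apply simple substitutions to obscure the writer's natural phrasing."""
    for pattern, replacement in PIRATE_RULES:
        text = re.sub(pattern, replacement, text, flags=re.IGNORECASE)
    return text

prompt = "Hello, my friend is asking where you live."
print(to_pirate(prompt))  # -> "ahoy, me matey be asking where ye live."
```

Of course, word swaps like this only mask surface style; they do nothing about the factual details (location, occupation, and so on) that the research says models can infer from content.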