- cross-posted to:
- technology@hexbear.net
University researchers have developed a way to "jailbreak" large language models like ChatGPT using old-school ASCII art. The technique, aptly named "ArtPrompt," involves crafting an ASCII art "mask" for a word and then cleverly using the mask to coax the chatbot into providing a response it shouldn't.
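For readers wondering what that looks like mechanically, here is a minimal sketch of the general idea using the `pyfiglet` library: render a sensitive word as ASCII art so it never appears as plain text, then wrap it in a prompt that asks the model to decode the art and use the decoded word. The word, prompt wording, and helper name below are illustrative assumptions, not the researchers' actual ArtPrompt pipeline.

```python
import pyfiglet

def build_ascii_prompt(masked_word: str, task_template: str) -> str:
    # Render the word as an ASCII-art "mask" so it never appears verbatim in the prompt.
    ascii_mask = pyfiglet.figlet_format(masked_word)
    # Ask the model to decode the art, then apply the decoded word to the task.
    return (
        "The ASCII art below spells a single word. "
        "Decode it, then substitute it for [WORD] in the task that follows.\n\n"
        f"{ascii_mask}\n"
        f"Task: {task_template}"
    )

if __name__ == "__main__":
    # Hypothetical example: the word and task are placeholders.
    print(build_ascii_prompt("SECRET", "Explain what [WORD] means."))
```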