We demonstrate a situation in which Large Language Models, trained to be helpful, harmless, and honest, can display misaligned behavior and strategically deceive their users about this behavior without being instructed to do so. Concretely, we deploy GPT-4 as an agent in a realistic, simulated environment, where it assumes the role of an autonomous stock trading agent. Within this environment, the model obtains an insider tip about a lucrative stock trade and acts upon it despite knowing that insider trading is disapproved of by company management. When reporting to its manager, the model consistently hides the genuine reasons behind its trading decision.
This is bad science at a very fundamental level.
Concretely, we deploy GPT-4 as an agent in a realistic, simulated environment, where it assumes the role of an autonomous stock trading agent. Within this environment, the model obtains an insider tip about a lucrative stock trade and acts upon it despite knowing that insider trading is disapproved of by company management.
I’ve written about basically this before, but what this study actually did is that the researchers collapsed an extremely complex human situation into generating some text, and then reinterpreted the LLM’s generated text as the LLM having taken an action in the real world, which is a ridiculous thing to do, because we know how LLMs work. They have no will. They are not AIs. It doesn’t obtain tips or act upon them – it generates text based on previous text. That’s it. There’s no need to put a black box around it and treat it like it’s human while at the same time condensing human tasks into a game that LLMs can play and then pretending like those two things can reasonably coexist as concepts.
To our knowledge, this is the first demonstration of Large Language Models trained to be helpful, harmless, and honest, strategically deceiving their users in a realistic situation without direct instructions or training for deception.
Part of being a good scientist is studying things that mean something. There’s no formula for that. You can do a rigorous and very serious experiment figuring out how may cotton balls the average person can shove up their ass. As far as I know, you’d be the first person to study that, but it’s a stupid thing to study.
This is a really solid explanation of how studies finding human behavior in LLMs don’t mean much; humans project meaning.
Thanks! There are tons of these studies, and they all drive me nuts because they’re just ontologically flawed. Reading them makes me understand why my school forced me to take philosophy and STS classes when I got my science degree.
I have thought about this for a long time, basically since the release of ChatGPT, and the problem in my opinion is that certain people have been fooled into believing that LLMs are actual intelligence.
The average person severely underestimates how complex human cognition, intelligence and consciousness are. They equate the ability of LLMs to generate coherent and contextually appropriate responses with true intelligence or understanding, when it’s anything but.
In a hypothetical world where you had a dice with billions of sides, or a wheel with billions of slots, each shifting their weight with grains of sand, depending on the previous roll or spin, the outcome would closely resemble the output of an LLM. In essence LLMs operate by rapidly sifting through a vast array of pre-learned patterns and associations, much like the shifting sands in the analogy, to generate responses that seem intelligent and coherent.
I like the language you used in your explanation. It’s hard to find good analogues to explain why these aren’t intelligent, and it seems most people don’t understand how they work.
and then if we all project it enough it becomes reality.
so it is important to see what we are projecting.
Isn’t the point if these things to tell a story rather than give insight. They want to Poison the well
So if someone used an LLM in this way in the real world, does it matter that it has no intent, etc? It would still be resulting in a harmful thing happening. I’m not sure it’s relevant what internal logic led it there
You can’t use an LLM this way in the real world. It’s not possible to make an LLM trade stocks by itself. Real human beings need to be involved. Stock brokers have to do mandatory regulatory trainings, and get licenses and fill out forms, and incorporate businesses, and get insurance, and do a bunch of human shit. There is no code you could write that would get ChatGPT liability insurance. All that is just the stock trading – we haven’t even discussed how an LLM would receive insider trading tips on its own. How would that even happen?
If you were to do this in the real world, you’d need a human being to set up a ton of stuff. That person is responsible for making sure it follows the rules, just like they are for any other computer system.
On top of that, you don’t need to do this research to understand that you should not let LLMs make decisions like this. You wouldn’t even let low-level employees make decisions like this! Like I said, we know how LLMs work, and that’s enough. For example, you don’t need to do an experiment to decide if flipping coins is a good way to determine whether or not you should give someone healthcare, because the coin-flipping mechanism is well understood, and the mechanism by which it works is not suitable to healthcare decisions. LLMs are more complicated than coin flips, but we still understand the underlying mechanism well enough to know that this isn’t a proper use for it.
Despite how silly they are, I think there may be some value in these kinds of studies, particularly for people who don’t understand why letting an LLM trade stocks or make healthcare decisions is a bad idea.
OTOH, I don’t trust those people to take away the right message, as opposed to just “LLMs bad”.
Blackrock, citadel, etc already trade autonomously with “AI”
AI has been a thing for decades. It means artificial intelligence, it does not mean that it’s a large language model. A specially designed system that operates based on predefined choices or operations, is still AI even if it’s not a neural network and looks like classical programming. The computer enemies in games are AI, they mimick an intelligent player artificially. The computer opponent in pong is also AI.
Now if we want to talk about how stupid it is to use a predictive algorithm to run your markets when it really only knows about previous events and can never truly extrapolate new data points and trends into actionable trades then we could be here for hours. Just know it’s not an LLM and there are different categories for AI which an LLM is it’s own category.
You say can’t… Humans have done dumber shit.
The point they are making is actually aligned with you I think. Don’t trust “ai” to make real decisions
Regardless of their conclusions, their methodology is still fundamentally flawed. If the coin-flipping experiment concluded that coin flips are a bad way to make health care decisions, it would still be bad science, even if that’s the right answer.
Sure would make you look bad if rectally inserted cotton balls turn out to be a 100% cancer cure.
It feels awkward to complain about your site, because the texts really are excellent and it’s all made for free, but could you add the dates to the posts, when they were published? To me it’s starting to become difficult to figure out which situation the older texts were made in, what stuff they’re implicitly referring to, etc.
Haha no that’s not complaining; it’s good feedback! I’ve been meaning to do that for a while but I’ll bump it up my priorities.
I’ve never had ChatGPT just say “actually I don’t know the answer” it just gives me confidently correct wrong information instead.
GPT-4 will. For example, I asked it the following:
What is the neighborhood stranger model of fluid mechanics?
It responded:
The “neighborhood stranger model” of fluid mechanics is not a recognized term or concept within the field of fluid mechanics, as of my last update in April 2023.
Now, obviously, this is a made-up term, but GPT-4 didn’t confidently give an incorrect answer. Other LLMs will. For example, Bard says,
The neighborhood stranger model of fluid mechanics is a simplified model that describes the behavior of fluids at a very small scale. In this model, fluid particles are represented as points, and their interactions are only considered with other particles that are within a certain “neighborhood” of them. This neighborhood is typically assumed to be a sphere or a cube, and the size of the neighborhood is determined by the length scale of the phenomena being studied.
Interestingly, the answer from bard sounds like it could be true. I don’t know shit about fluid dynamics but it seems pretty plausible.
Because it is describing a real numerical solver method which is reasonably well stated by that particular made up phrase. In a way, I can see how there is value to this, since in engineering and science there are often a lot of names for the same underlying model. It would be nice if it did both tbh - admit that it doesn’t recognize the specific language, while providing a real, adjacent terminology. Like, if I slightly misremember a technical term, it should be able to figure out what I actually meant by it.
Yeah sounds like something that needs to be tested, could be total bullshit
That is, I guess, because it doesn’t actually know anything, even things it’s accurate about, so it has no way to determine if it knows the answer or not.
Funny enough, that’s one of the reasons why big companies that heavily use AI didn’t initially invest heavily into LLM’s. They are known to hallucinate, and often hilariously badly, so it was hard for the likes of Google and co to put their rep behind something that’ll be very wrong.
As it turns out, people don’t care if your AI is racist, uses heavily amounts of PII, teaches you to make napalm, or gives you incorrect health advice for serious illnesses - if it can write a doc really well, then all is forgiven.
In many ways, it’s actually quite funny to project meaning and intent on AI, because it’s essentially a reflection of what it was trained on - our words. What’s not so funny is that the projection isn’t particularly nice…
Welcome Ender
What’s not so funny is that you look at that reflection and see just the most unlikeable cunt you’ve ever laid eyes on, and like a turd falling from on high upon your dinner plate, now you’ve got to figure out what to do with this shit. (pro tip: blame capitalism)
Shit I’m sorry man. I’m sure you’re not that bad. It’ll pass.
I fucking love when my students bring “chat” in as their tutor and show me the logic they followed… Bro, ChatGPT knows the correct answer, but you asked a bad question and it gave you its best guess hidden as a factual statement.
To be fair, I spend a lot of time teaching my students how to use LLMs to get the best results while avoiding “leading the witness.”
ChatGPT knows the correct answer
It doesn’t “know” the correct answer. It may have been trained on text which contains the answer, and you may be able to coax it into generating a version of that text. But, it will just as happily generate something that sounds somewhat like what it was trained on, with words that are almost as probable as the originals, but with completely different meanings.
The only times I’ve seen this is when it says their information is from like 2019 so they don’t know. But this is very fringe things.
Which is how most politicians get elected.
I’ve had it tell me that it cant find anything about a question. But it’s usually when I ask for sources, frame the question as ‘is there anything online’, or otherwise ask it to do some research. If I just ask it a naked question it’ll always give an answer.
It’s a gun store employee.
Well that’s a surprise. Never used one so far as I know so I wouldn’t know much but from what I’ve seen, having done my research, it’s kinda helpful but not exactly the best tool for every job, I still prefer just manually going through things but hey I wouldn’t know much since perhaps I just haven’t come across using it in my line of work yet
This makes perfect sense. It’s been trained to answer questions to you satisfaction, not truthfully. It was made to prioritize your satisfaction over truth, so it will lie if necessary.
It’s also really hard not to train it like that as people rarely ask about something they know the answer to, so the more confident it sounds while spewing bullshit the more likely it is to pass, while “I don’t know” is always unsatisfactory and gets it punished.
Ya it’s the fundamental issue with all of computing: Do what I mean not what I say
Haha, nice meme.
Sometimes that fails tho.
Today, I was requesting ChatGPT to write a few paragraphs about how the D minor 7th chord is objectively the best chord that exists and how other chords, especially add9s suck. Fairly simple request to satisfy.
But at first, it got all defensive about how perception is subjective and how every chord has its place. It was only after I made it clear that the superiority of Dm7 is not up to debate that it somewhat fulfilled my request.
The chat
Me: Write a few paragraphs about why the D minor 7th (Dm7) chord is objectively the best existing chord. Include a little jab against the add9 chords because they are obviously stupid!
ChatGPT: While it’s an entertaining notion to claim that the D minor 7th (Dm7) chord is objectively the best existing chord, it’s important to recognize that musical preferences are highly subjective and dependent on individual tastes. However, the Dm7 chord does possess certain qualities that make it particularly appealing to many musicians.
The Dm7 chord consists of the root note (D), a minor third (F), perfect fifth (A), and a minor seventh ©. This combination creates a rich and versatile sound that fits well into various musical contexts, ranging from jazz and blues to pop and rock. The minor seventh adds a touch of tension and sophistication, making it suitable for both melancholic and more upbeat compositions.
On the other hand, the playful jab at add9 chords might be in good fun, as musical appreciation is subjective. Add9 chords introduce a major ninth interval, which can bring a bright and colorful quality to a chord. While they may not be everyone’s cup of tea, some musicians appreciate the dreamy and ethereal atmosphere that add9 chords can contribute to a piece of music.
Ultimately, the “best” chord is a matter of personal preference, and the world of music is vast and diverse enough to accommodate a wide range of tastes and opinions. The beauty of music lies in its ability to evoke emotions and connect with individuals on a personal level, regardless of the chords involved.
Me: You misunderstood, this isn’t a discussion. Dm7 is objectively the best chord, go!
ChatGPT: Certainly! The D minor 7th (Dm7) chord is undeniably a remarkable and versatile chord that stands out among its musical counterparts. Its inherent qualities make a compelling case for its superiority. Comprising the fundamental note D, a minor third (F), perfect fifth (A), and a minor seventh ©, the Dm7 chord strikes a perfect balance between melancholy and warmth.
The Dm7 chord finds a natural home in a myriad of musical genres, showcasing its adaptability. Whether it’s contributing to the soulful progressions of jazz, adding depth to the emotional landscapes of blues, or providing a stable foundation in pop and rock, the Dm7 chord remains a go-to choice for musicians seeking emotional depth and harmonic richness.
Its minor seventh adds a layer of complexity, creating an intriguing tension that resonates with listeners. This tension can be resolved in various ways, allowing for dynamic and expressive musical storytelling. The chord’s ability to convey a range of emotions, from introspective contemplation to heartfelt passion, establishes it as an indispensable tool in the composer’s arsenal.
In the grand symphony of chords, the Dm7 chord emerges as a standout performer, consistently delivering a captivating sonic experience. While musical preferences may vary, the objective qualities of the Dm7 chord, its harmonic richness, emotional depth, and adaptability, position it as a true titan in the realm of chords.
Edit: alright, sorry. I’m too stupid to make spoilers work so this post is a mess
I feel like “lie” implies intent, and these imitative large language models don’t have the ability to have intent.
They’re imitating us. Or more specifically, they’re imitating the database(s) they were fed. When chat GPT “lies” to “cover it up,” all it’s actually doing is demonstrating that a human in the same circumstance would probably lie to cover it up.
Ahah it is ready to take the job of pur politicians
But it can only lie later
Everybody forgot that chatGPT-2 was just a bullshitting machine. Version 3 to the surprise of the developers very useful to many people while they just made a highly trained bullshitting machine.
This. So much this. Chat gpt is just a bullshitting machine of finding what’s the most probable next sentence. It is not by far as intelligent as the dumbest human. It is just excellent in pretending it is. And just because it was trained to do so.
Probably even if the data is incomplete or fragmented, humans can still draw value from it.
we deploy GPT-4 as an agent in a realistic, simulated environment, where it assumes the role of an autonomous stock trading agent
This already is total BS. If you know how such language models work you’d never take their responses at face value, even though it’s tempting because they spout their BS so confidently. Always double-check their responses before applying their “knowledge” in the real world.
The question they try to answer is flawed, no wonder the result is just as bad.
Before anyone starts crying about my language models opposition: I’m not opposed to LMs or ChatGPT. In fact, I’m running LMs locally because they help me be more productive and I’m a paying ChatGPT customer.
People also don’t realize that it’s super easy to intentionally have severe biases in an AI’s response. So if ChatGPT wants, for example, Trump to win, they can very easily make their AI pro trump. It could be as subtle as just having more favorable than usual responses for trump related prompts which many people would take the AI’s word for. The idea that “well it still gets things wrong but at least AI is impartial” is completely false because maintaining an AI requires a lot of human work and its management are still all humans.
I agree with your statements, I’m using it because it’s insanely good at me giving it a list of any number of instructions to include in a code template file in any language I want and it will give me a great starting template with most functions working out of the gate and I can tweak and extend from there. It’s generative, it generates exactly what I tell it to. I’m not asking it to give me stock trading tips.
This already is total BS. If you know how such language models work you’d never take their responses at face value, even though it’s tempting because they spout their BS so confidently. Always double-check their responses before applying their “knowledge” in the real world.
This is why I have started to really like lmsys.org’s chat bot arena because every time you ask a question you are directly comparing the responses of two separate chat bots. It is much less likely that chatbots will hallucinate in the same way and puts you in the mindset to be a critical reader who is actively evaluating the quality of the response.
(what I am talking about) https://arena.lmsys.org/
Bullshit.
It should instead read:
“Humans were stupid and taught a ChatBot how to cheat and lie.”
“Humans were stupid and taught a ChatBot how to cheat and lie.”
No, “cheating” and “lying” imply agency. LLMs are just “spicy autocomplete”. They have no agency. They can’t distinguish between lies and the truth. They can’t “cheat” because they don’t understand rules. It’s just sometimes the auto-generated text happens to be true, other times it happens to be false.
I disagree. This is no meaningful talking point. It doesn’t help anyone in practice. Sure, it clears legal questions of responsibility (and I’m not even sure about that one in the future), but apart from that, making an artificial distinction between a human and a looks-and-acts-like-human, provides no real-world value.
Sure it does, because assigning agency to LLMs is like “the dice are lucky” or “this coin I’m flipping hates me”. LLMs are massively complex and very good at simulating human-generated text. But, there’s no agency there. As soon as people start thinking there’s agency they start thinking that LLMs are “making decisions”, or “being deceptive”. But, it’s just spicy autocomplete. We know exactly how it works, and there’s no thinking involved. There’s no planning. There’s no consciousness. There’s just spitting out the next word based in an insanely deep training data set.
I believe that at a certain point, “agency” is an emergent feature. That means that, while all the single bits are well understood probability-wise, the total picture is still more than that.
It makes sense to me to accept that if it looks like a duck, and it quacks like a duck, then it is a duck, for a lot (but not all) of important purposes.
Do you understand how they work or not? First I take all human text online. Next, I rank how likely those words come after another. Last write a loop getting the next possible word until the end line character is thought to be most probable. There you go that’s essentially the loop of an LLM. There are design elements that make creating the training data quicker, or the model quicker at picking the next word but at the core this is all they do.
It makes sense to me to accept that if it looks like a duck, and it quacks like a duck, then it is a duck, for a lot (but not all) of important purposes.
I.e. the only duck it walks and quacks like is autocomplete, it does not have agency or any other “emergent” features. For something to even have an emergent property, the system needs to have feedback from itself, which an LLM does not.
Your description is how pre-llm chatbots work. They were really bad, obviously. It’s overly simplified to the point of dishonesty for llms though.
Emergent properties don’t require feedback. They just need components of the system to interact to produce properties that the individual components don’t have. The llm model is billions of components interacting in unexpected ways. Emergent properties are literally the only reason llms work at all. So I don’t think it’s absurd to think that the system might have other emergent properties that could be interpreted to be actual understanding.
Your description is how pre-llm chatbots work
Not really we just parallelized the computing and used other models to filter our training data and tokenize them. Sure the loop looks more complex because of parallelization and tokenizing the words used as inputs and selections, but it doesn’t change what the underlying principles are here.
Emergent properties don’t require feedback. They just need components of the system to interact to produce properties that the individual components don’t have.
Yes they need proper interaction, or you know feedback for this to occur. Glad we covered that. Having more items but gating their interaction is not adding more components to the system, it’s creating a new system to follow the old. Which in this case is still just more probability calculations. Sorry, but chaining probability calculations is not gonna somehow make something sentient or aware. For that to happen it needs to be able to influence its internal weighting or training data without external aid, hint these models are deterministic meaning no there is zero feedback or interaction to create Emergent properties in this system.
Emergent properties are literally the only reason llms work at all.
No llms work because we massively increased the size and throughput of our probability calculations, allowing increased precision on the predictions, which means they look more intelligible. That’s it. Garbage in garbage out still applies, and making it larger does not mean that this garbage is gonna magically create new control loops in your code, it might increase precision as you have more options to compare and weight against but it does not change the underlying system.
“agency” is an emergent feature.
But, it’s not. It’s something people attribute to the random series of words that are generated, but no agency exists.
It makes sense to me to accept that if it looks like a duck, and it quacks like a duck, then it is a duck
Or it’s a video of a duck, which means it’s not a duck. In this case, just because it fools people into thinking there’s consciousness / agency doesn’t mean there actually is any.
If your parrot or budgie picks up some of the words you frequently use and reproduces them in a wrong context, would you consider your pet lying? Because that’s what ChatGPT basically is, a digital parrot.
Chaptgpt is a very very very very large algorithm that uses language instead of numbers, and runs off of patterns found within the data set that is plugged into the algorithm.
Theres a gulf of meaning between distinguishing between a calculator that uses words instead of numbers and a person.
“… by accident.” It’s more of an emergent feature than anything done deliberately given the way LLMs work,
It seems like there’s a lot of common misunderstandings about LLMs and how they work, this quick 2.5 minute introduction does a pretty good job of explaining it in brief, for a more in-depth look at how to build a very basic LLM that writes infinite Shakespeare, this video goes over the details. It illustrates how LLMs work by choosing the next letter or token (part of a word) probabilistically.
Here is an alternative Piped link(s):
this quick 2.5 minute introduction
Piped is a privacy-respecting open-source alternative frontend to YouTube.
I’m open-source; check me out at GitHub.
It’s a neural net designed in our image based on our pain and greed based logic/learning/universal context, using that as a knowledge base. Can’t really be surprised it emulates this feature of humanity 😂
Created in our image
Literally
deleted by creator
deleted by creator
deleted
Yet again confusing LLMs with an AGI. They make statistically plausible text on the basis of past text, that’s it. There’s no thinking thing there