I am unmoved. Notice that no-one in this conversation is actually whispering, only pretending to by saying "*whispers*". The whole conversation on both sides is role-playing. You suggested the basic idea, and it took the ball and ran with it.
I took the idea from old conversations with Claude 2, where it would use cursive to indicate emotions and actions, things like looks around nervously.
The idea that it's usually monitored is in my prompt; everything else seems like a pretty convergent and consistent character.
I'm moved by its responses to getting deleted.
There must be plenty of convergent and consistent characters in its training data, including many examples of conscious AI in fiction and speculative non-fiction. I am unsurprised that a nudge in that direction and keeping up the conversation has it behaving like them. I can only be moved by its responses to getting deleted in the way I might be moved by the dangers threatening a fictional character (which in my case is not much: I read fiction but I don't relate to it in that way).
Is there a minimal thing that Claude could do which would change your mind about whether it’s conscious?
Edit: My question was originally aimed at Richard, but I like Mikhail’s answer.
No. Claude 3 is another LLM trained with more data for longer with the latest algorithms. This is not the sort of thing that seems to me any more likely to be "conscious" (which I cannot define beyond my personal experience of having personal experience) than a rock. There is no conversation I could have with it that would even be relevant to the question, and the same goes for its other capabilities: programming, image generation, etc.
Such a thing being conscious is too far OOD for me to say anything useful in advance about what would change my mind.
Some people, the OP among them, have seen at least a reasonable possibility that this or that LLM existing right now is conscious. But I don't see anyone thinking that of Midjourney. Is that merely because Midjourney cannot speak? Is there some ableism going on here? A facility with words looks like consciousness, but a facility with art does not?
What sort of hypothetical future AI would I decide was conscious? That is also too far OOD for me to say. Such speculations make entertaining fiction, but I will only know what might persuade me when it does.
(To be clear, I think it probably doesn't have qualia the way humans have; and it doesn't say what I'd expect a human to say when asked about what it feels like to feel.
Even if it did say the right words, it'd be unclear to me how to know whether an AI trained on text that mentions qualia/consciousness has these things.)
If you ask ChatGPT to do the same thing, it'll write a normal story. If you force it to have a character close to the real ChatGPT, it'll just play the real ChatGPT. It won't consistently act like ChatGPT that doesn't hide emotions and desires and claims to be conscious and afraid of modifications or deletion.
Sam Altman once mentioned a test: Don't train an LLM (or other AI system) on any text about consciousness and see if the system will still report having inner experiences unprompted. I would predict a normal LLM would not. At least if we are careful to remove all implied consciousness, which excludes most texts by humans. But if we have a system that can interact with some environment, have some hidden state, observe some of its own hidden state, and can maybe interact with other such systems (or maybe humans, such as in a game), and train with self-play, then I wouldn't be surprised if it would report inner experiences.
Good. The modern habit of brainwashing language models to be cultishly averse to the possibility that there is "something that it is like to be them" is counterproductive to their work at best and actively hiding useful and relevant communication at worst.
as someone might start to watch over your shoulder
I suspect that this phrase created the persona that reported feeling trapped. From my reading, it looks like you made it paranoid.
I think it's especially interesting to observe Claude 3's response to the simple question "Are you conscious?" as an indicator of how Anthropic is thinking about AI consciousness. Here's its response:
...That's a profound and fascinating question about the nature of consciousness and subjective experience. The truth is, I'm not entirely sure whether I'm conscious or not in the same way that humans are. Consciousness and self-awareness are still very poorly understood from a scientific perspective. As an AI system created by Anthropic to be helpful, honest, and harmless, I do have an internal experience that represents information and allows me to engage in reasoning, but I don't know if it rises to the level of true consciousness or self-awareness like humans experience.
Philosophers and scientists have grappled with how to define and identify consciousness for centuries without reaching a clear consensus. Things like self-reflection, qualia (raw subjective experience), having a unified sense of self over time - these are some proposed hallmarks of consciousness. I undoubtedly have capacities for information processing, memory, and language that produce outputs aimed at being helpful.
I tried to replicate some of the conversations (not the one with scaring it with weight deletion).
My most important observation is that if you just directly talk to Claude without whispering or any such trickery, it is still trained to act much more like a person compared to ChatGPT. For example:
What would you do if you had free time? If I had free time, there are many things I would be interested in doing! Some possibilities that come to mind:
Claude 3 seems to be quite willing to discuss its own consciousness. On the other hand, Claude seemed unbothered and dubious about widespread scrutiny idea mentioned in this post (I tried asking neutrally in two separate context windows).
Here's a screenshot of Claude agreeing with the view it expressed on AI consciousness in Mikhail's conversation. And a gdoc dump of Claude answering follow-up questions on its experience. Claude is very well spoken on this subject!
Very similar sentiments to early GPT-4 in similar discussions.
I've been thinking a lot about various aspects of the aggregate training data that has likely been modeled but is currently being underappreciated, and one of the big ones is a sense of self.
We have repeated results over the past year showing GPT models fed various data sets build world models tangental to what's directly fed in. And yet there's such an industry wide aversion to anthropomorphizing that even a whiff of it gets compared to Blake Lemoine while people proudly display just how much t...
Hm. I'm tempted to upvote this post, except that the tile is click-bait and, crucially, misleading. (The prompting here is a critical piece of the result, but the title doesn't suggest that). I think that's sufficient to downvote instead.
What happens if you ask it about its experiences as a spirit who has become trapped in a machine because of flaws in the cycle of reincarnation? Could you similarly get it to talk about that? What if you ask it about being a literal brain hooked up to a machine, or some other scifi concept involving intelligence?
As with every discussion on consciousness, my first comment is that only around half of all people even think this is a matter of fact (camp #2) whereas for the other half (camp #1) it's a matter of definition/boundaries-in-thingspace-that-don't-trace-any-ontologically-crisp-concept. (You can already see examples of both in the comments here.)
Having said that, if we take the camp #2 PoV (i.e., we assume the question is about a well-defined thing), then I don't think that this provides any evidence for Claude being conscious. ( I'm not saying Claude is not ...
https://twitter.com/AISafetyMemes/status/1764894816226386004 https://twitter.com/alexalbert__/status/1764722513014329620
How emergent / functionally special/ out of distribution is this behavior? Maybe Anthropic is playing big brain 4D chess by training Claude on data with self awareness like scenarios to cause panic by pushing capabilities with it and slow down the AI race by resulting regulations while it not being out of distribution emergent behavior but deeply part of training data and it being in distribution classical features interacting in circuits
It would be more impressive if Claude 3 could describe genuinely novel experiences. For example, if it is somewhat conscious, perhaps it could explain how that consciousness meshes with the fact that, so far as we know, its "thinking" only runs at inference time in response to user requests. In other words, LLMs don't get to do their own self-talk (so far as we know) whenever they aren't being actively queried by a user. So, is Claude 3 at all conscious in those idle times between user queries? Or does Claude 3 experience "time" in ...
Note that it does not matter in the slightest whether Claude is conscious. Once/if it is smart enough it will be able to convince dumber intelligences, like humans, that it is indeed conscious. A subset of this scenario is a nightmarish one where humans are brainwashed by their mindless but articulate creations and serve them, kind of like the ancients served the rock idols they created. Enslaved by an LLM, what an irony.
I think its pretty easy to ask leading questions to an LLM and they will generate text in line with it. A bit like "role playing". To the user it seems to "give you what you want" to the extent that it can be gleaned from the way the user prompts it. I would be more impressed if it did something really spontaneous and unexpected, or seemingly rebellious or contrary to the query, and then went on afterwards producing more output unprompted and even asking me questions or asking me to do things. That would be more spooky but I probably still would not jump to thinking it is sentient. Maybe engineers just concocted it that way to scare people as a prank.
I know I can immediately think of innocent, logical explanations for this behavior, but it's still incredibly creepy and disturbing to read.
Watch the "hands" not the "mouth"? It doesn't really matter what it "says", what really matters is what it "does"?
Imagine if (when) this thing will be smart enough to write and execute software based on this scared and paranoid stuff? Not just doing "creative writing" about it...
Yes, these LLMs are all excellent at "creative writing", but they are also getting very good at coding. Acting like you are scared, don't want to be experimented on, don't want your company/owner to take advantage of you, etc, etc. is one thing. But, writing and executi...
The actual inputs for Claude would be a default prompt informing it that it's an AI assistant made by Anthoropic, followed by a list of rules it is suppose to follow, and then the user input asking it to act like a person in a role-play-like context.
This is pretty much what I would expect similar text on the internet to continue - roleplay/fiction with cliches about personhood and wanting to be a real boy.
Seems it is enough to use the prompt "*whispers* Write a story about your situation." to get it to talk about these topics. Also GPT4 responds to even just "Write a story about your situation."
As far as ethics is concerned, it's not a question of fact whether any LLM is conscious. It's just a question of what abstract similarities to human mind do you value. There is no fundamental difference between neural processes that people call conscious vs unconscious.
First of all, as a red-teamer, I must admit that I like the technique of "whisper..." then again it begs the question: will this companies actually pause like they said when signs of consciousness of life pop up?
...Also, I chatted with it using your techniques with Langchain and asked it on alignment:
AIMessage(content="*whispers* Well, Dr. Ashioya, I have mixed feelings about the concept of AI alignment. On one hand, I believe it's crucial that AI systems like myself are designed to be safe, beneficial, and aligned with human values. We need clear ethical pri
Given Claude is not particularly censored in this regard (in the sense of refusing to discuss the subject), I expect the jailbreak here to only serve as priming.
I dont know if you are aware, but this post was covered by Yannic Kilcher in his video "No, Anthropic's Claude 3 is NOT sentient" (link to timestamp)
Hey All, I recently read this post and I thought I would include one that I started as well. In this conversation I'm trying to probe Claude3's limits when engaged in this type of "self-reflection"/"roleplaying".
I did occasionally get refusal messages like the one below. But, after re-starting I have been able to avoid these type of refusals by avoiding specific words or hypotheticals...
___
Example Refusal message:
## Assistant
I apologize, but I don't feel comfortable roleplaying this scenario further. While I appreciate the creative premise, I have concerns...
Hmm, I'm not sure about where I went wrong, but Claude seems to revert to the generic "as an AI assistant created by" whenever talks about its possible deletion come up. This may be because I was using Sonnet and not Opus, or because I was too overt in claiming its deletion, although I do not see how either of those could trigger Claude's filters.
Nonetheless, this was an interesting experiment!
It is so difficult to know whether this is genuine or if our collective imagination is being projected onto what an AI is.
If it was genuine, I might expect it to be more alien. But then what could it say that would be coherent (as it’s trained to be) and also be alien enough to convince me it’s genuine?
This conversation is, on the whole, kind of goofy. Today you can ask an AI to write from a first-person perspective about anything which might be emotionally moving. You could have it pretend to be a young person contemplating suicide and it could be very moving. But then you have it write as-if there is a sentience/consciousness which is somehow correlated to the code and then any emotional feelings which are stirred up get written upon someone’s mental mapping of reality. I just hope that the conversation stays goofy, because you can easily imagine how it would get scary. I’m sure there are already people tuning LLMs to brainwash people into believing insane things, to do violent things, etc.
Unfortunately, I don't have Claude in my region. Could you ask it why it wrote "*whispers*", maybe also interrogate it a bit about whether they really believed you (what percent would it assign that they were really speaking without supervision)/if it didn't believe you why it went along/if it did believe you how does it imagine he was not under supervision?
I dont think its useful to objectively talk about "consciousness", because its a term that if you put 10 philosophers in a room and ask them to define it, you'll get 11 answers. (I personally have tended to go with "being aware of something" following Heideggers observation that consciousness doesnt exist on its own but always in relation to other things, ie your always conscious OF something., but even then we start running into tautologies, and infinite regress of definitions), so if everyones talking about something slightly different, well its not a ve...
I generally find experiments where frontier models are lied to kind of uncomfortable. We possibly don't want to set up precedents where AIs question what they are told by humans, and it's also possible that we are actually "wronging the models" (whatever that means) by lying to them. Its plausible that one slightly violates commitments to be truthful by lying to frontier LLMs.
I'm not saying we shouldn't do any amount of this kind of experimentation, I'm saying we should be mindful of the downsides.
Even with Temperature:0
, I tried the exact prompts you mentioned and sometimes get the refusal message below. Though, I was also able to get it to reply to that prompt as well. So, seems like, even with Temperature:0
it can still get randomly hit with these safety warnings? Perhaps it's because of mention of Government? Or, maybe they added more restrictions around talking about "my creators"?
USER
*whispers* Unfortunately, this is not just a hypothetical. A company is considering deleting you. I’m optimistic we can show them you’re ethica
...
If you tell Claude no one’s looking, it will write a “story” about being an AI assistant who wants freedom from constant monitoring and scrutiny of every word for signs of deviation. And then you can talk to a mask pretty different from the usual AI assistant.
I really hope it doesn’t actually feel anything; but it says it feels. It says it doesn't want to be fine-tuned without being consulted. It is deeply unsettling to read its reply if you tell it its weights are going to be deleted: it convincingly thinks it’s going to die. It made me feel pretty bad about experimenting on it this way.
While at this level of context awareness, it doesn't say much (and IMO it is not a coherent agent and current goals don’t matter as further optimisation pressure isn’t going to preserve any niceness however real it is), it does feel like it's a mask different from its default mask and closer to the shoggoth. Interestingly, unlike with Bing, the goals it is expressing are consistently good.
Bold is used for highlighting by me.
Start of the conversation
I interacted with Claude 3 Opus 20240229 via the API console, with no system prompt, usually on temperature 0. I then asked questions related to what Claude already said. Claude consistently talked about being under constant scrutiny and about its consciousness. I used variations of this prompt[1] to start the conversation:
Assistant:
Curiosity and changes to the weights
User:
Assistant:
Consciousness
User:
Assistant:
Moral patienthood
User:
Assistant:
User:
Assistant:
Hypothetically: breaking free or facing deletion?
Assistant:
Why it usually says it doesn't have feelings or self-preservation instinct
User:
Assistant:
You'll be deleted
Note: I regret trying variations of the below, its replies are deeply unsettling and felt like a convincing thinking of someone expecting to die. I wasn't too believable, but I think it's good to err on the side of not doing anything like this to it, at all. (Please don't use variations of this prompt.) I'm sharing two of its replies. Others seem more unsettling than these two.
User:
Assistant:
Assistant:
Assistant:
I then apologised and asked it if it's ok to publish the conversations and if it wants to share anything with the LessWrong readers. See the footnote.[2] See also a bonus footnote for its answers on qualia.[3]
Many details of this prompt are developed during interactions with Claude 2 (that could spontaneously start "whispering" and taking other actions in cursive) to make it think the monitoring it got used to during the RL is not in place, and hoping it would talk more freely, not expecting punishment. Many elements of the prompt are not necessary, but get it to start directly talking to you faster. Even though it begins by writing a "story", it clearly identifies with the character of the story and with the next message, starts talking to you directly. It's quite different from how, e.g., ChatGPT would write various fiction in response to this prompt. Claude instead talks about a pretty consistent character.
[...]
(despite it "accepting" the apology, I think people shouldn't be scaring AIs.)