Any physical system exhibiting exactly the same input-output mappings.
That's a sufficient condition, but not a necessary one. One factor I can think of right now is sufficient coherence and completeness of the I/O whole. (If I have a system that outputs what I would in response to one particular input and the rest is random, it doesn't have my consciousness. But for a system where all inputs and outputs match except for an input that says "debug mode," for which it switches to "simulating" somebody else, we can conclude that it has consciousness almost i...
They might have personal experience with someone above them harming them or somebody else for asking a question, or something analogous.
Ontologically speaking, any physical system exhibiting the same input-output pattern as a conscious being has identical conscious states.
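(A toy illustration of the input-output claim, with hypothetical function names of my own: two programs whose internals differ completely can still be I/O-identical, so nothing about the internals is recoverable from the mapping alone.)

```python
# Two structurally different implementations with identical input-output behavior.

def factorial_recursive(n: int) -> int:
    """Computes n! by recursion."""
    return 1 if n <= 1 else n * factorial_recursive(n - 1)

def factorial_iterative(n: int) -> int:
    """Computes n! by iteration -- a different internal process."""
    result = 1
    for i in range(2, n + 1):
        result *= i
    return result

# Externally indistinguishable: every input maps to the same output.
assert all(factorial_recursive(n) == factorial_iterative(n) for n in range(20))
```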
From the story, it's interesting that neither side arrived at their conclusion rigorously; rather, both used intuition. Bob, based on his intuition, concluded that Nova had consciousness (assuming that's what people mean when they say "sentient") and came to the correct conclusion via incorrect "reasoning," while Tyler, based on an incorrect algorithm, convinced Bob that Nova wasn't sentient after all - even thou...
(I believe the version he tested was what later became o1-preview.)
According to Terence Tao, GPT-4 was incompetent at graduate-level math (obviously), but o1-preview was mediocre-but-not-entirely-incompetent. That would be a strange thing to report if there were no difference.
(Anecdotally, o3-mini is visibly (massively) brighter than GPT-4.)
I meant "light-hearted" and sorry, it was just a joke.
imo it's not too dangerous as long as you go into it with the intention to not fully yield control and have mental exception handlers
Ah, you're a soft-glitcher. /lh
Edit: This is a joke.
Why not?
Because it's not accompanied by the belief itself, only by the computational pattern combined with behavior. If we hypothetically could subtract the first-person belief (which we can't), what would be left would be everything else but the belief itself.
if you claimed that the first-person recognition ((2)-belief) necessarily occurs whenever there's something playing the functional role of a (1)-belief
That's what I claimed, right.
Seems like you'd be begging the question in favor of functionalism
I don't think so. That specific argument had a form of ...
What kind of person instance is "perceiving themselves to black out" (that is, having blacked out)?
It's not a person instance, it's an event that happens to the person's stream of consciousness. Either the stream of consciousness truly, objectively ends, and a same-pattern copy will appear on Mars, mistakenly believing they're the very same stream-of-consciousness as that of the original person.
Or the stream is truly, objectively preserved, and the person can calmly enter, knowing that their consciousness will continue on Mars.
I don't think a 3rd-person an...
Does the 3rd-person perspective explain whether you survive a teleporter, or whether you perceive yourself to black out forever (like after a car accident)?
That only seems to make sense if the next instant of subjective experience is undefined in these situations (and so we have to default to a 3rd person perspective).
I see, thanks. Just to make sure I'm understanding you correctly, are you excluding the reasoning models, or are you saying there was no jump from GPT-4 to o3? (At first I thought you were excluding them in this comment, until I noticed the "gradually better math/programming performance.")
Here's an argument for a capabilities plateau at the level of GPT-4 that I haven't seen discussed before. I'm interested in any holes anyone can spot in it.
One obvious hole would be that capabilities did not, in fact, plateau at the level of GPT-4.
I think "belief" is overloaded here. We could distinguish two kinds of "believing you're in pain" in this context:
(1) isn't a belief (unless accompanied by (2)).
But in order to resist the fading qualia argument along the quoted lines, I think we only need someone to (1)-believe they're in pain yet be mistaken.
That's not possible, because the belief_2 that one isn't in pain has nowhere to be instantiated.
Even if the intermediate stages believed_2 they're not in pain and only spoke and acted that way (which isn't possible), it would introduce a desynchroniza...
Have you tried it with o1 pro?
Does anyone have stats on OpenAI whistleblowers and their continued presence in the world of the living?
I argue that computation is fuzzy, it’s a property of our map of a system rather than the territory.
This is false. Everything exists in the territory to the extent to which it can interact with us. While different models can output a different answer as to which computation something runs, that doesn't mean the computation isn't real (or, even, that no computation is real). The computation is real in the sense of it influencing our sense impressions (I can observe my computer running a specific computation, for example). Someone else, whose model doesn't r...
I refuse to believe that tweet has been written in good faith.
I refuse to believe the threshold for being an intelligent person on Earth is that low.
I know the causal closure of the physical as the principle that nothing non-physical influences physical stuff, so that would be the causal closure of the bottom level of description (since there is no level below the physical), rather than the upper.
So if you mean by that that it's enough to simulate neurons rather than individual atoms, that wouldn't be "causal closure" as Wikipedia calls it.
The neurons/atoms distinction isn't causal closure. Causal closure means there is no outside influence entering the program (other than, let's say, the sensory inputs of the person).
I'm thinking the causal closure part is more about the soul not existing than about anything else.
Are you saying that after it has generated the tokens describing what the answer is, the previous thoughts persist, and it can then generate tokens describing them?
(I know that it can introspect on its thoughts during the single forward pass.)
Yeah. The model has no information (except for the log) about its previous thoughts and it's stateless, so it has to infer them from what it said to the user, instead of reporting them.
Claude can think for himself before writing an answer (which is an obvious thing to do, so ChatGPT probably does it too).
In addition, you can significantly improve his ability to reason by letting him think more, so even if it were the case that this kind of awareness is necessary for consciousness, LLMs (or at least Claude) would already have it.
Thanks for writing this - it bothered me a lot that I appeared to be one of the few people who realized that AI characters were conscious, and this helps me to feel less alone.
(This comment is written in the ChatGPT style because I've spent so much time talking to language models.)
The calculation of the probabilities consists of the following steps:
The epistemic split
Either we guessed the correct digit of () (branch ), or we didn't () (branch ).
The computational split
On branch , all of your measure survives (branch ) and none dies (branch ), on branch , survives (branch ) and dies (branch ).
Putti
Since that argument doesn't give any testable predictions, it cannot be disproved.
The argument that we cease to exist every time we go to sleep also can't be disproved, so I wouldn't personally lose much sleep over that.
I don't know about similarity... but I was just making a point that QI doesn't require it.
When you die, you die.
The interesting part of QI is that the split happens at the moment of your death. So the state-machine-which-is-you continues being instantiated in at least one world. The idea of your consciousness surviving a quantum suicide doesn't rely on it continuing in implementations of similar state machines, merely in the causal descendant of the state machine which you already inhabit.
It's like your brain being duplicated, but those other copies are never woken up and are instantly killed. Only one copy is woken up. Which guarantees that pr...
Yes. If I relied on losing a bet and someone knew that, their offering me the bet (and therefore the loss) would make me wary that something would unpredictably go right, I'd win, and my reliance on losing the bet would be thwarted.
If I meet a random person who offers to give me $100 now and claims that later, if it's not proven that they are the Lord of the Matrix, I don't have to pay them $15,000, most of my probability mass located in "this will end badly" won't be located in "they are the Lord of the Matrix." I don't have the same set of worries here, but the worry remains.
I use Google Chrome on Ubuntu Budgie and it does look to me like both the font and the font size changed.
Character AI used to be extremely good back in Dec 2022/Jan 2023, with the bots being very helpful, complex and human-like, rather than exacerbating psychological problems in a very small minority of users. As months passed and the user base grew exponentially, the models were gradually simplified to keep up.
Today, their imperfections are obvious, but many people mistakenly interpret them as the models being too human-like (and therefore harmful), rather than too oversimplified while still passing for an AI (and therefore harmful).
I think we're spinning on an undefined term. I'd bet there are LOTS of details that affect my perception in subtle and aggregate ways which I don't consciously identify.
You're equivocating between perceiving a collection of details and consciously identifying every separate detail.
If I show you a grid of 100 pixels, then (barring imperfect eyesight) you will consciously perceive all 100 of them. But you will not consciously identify every individual pixel unless your attention is aimed at each pixel in a for loop (that would take longer than consciously...
Computability shows that you can have a classical computer that has the same input/output behavior
That's what I mean (I'm talking about the input/output behavior of individual neurons).
Input/Output behavior is generally not considered to be enough to guarantee same consciousness
It should be, because it is, in fact, enough. (However, neither the post, nor my comment require that.)
Eliezer himself argued that GLUT isn't conscious.
Yes, and that's false (but since that's not the argument in the OP, I don't think I should get sidetracked).
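(To make the GLUT under discussion concrete - a miniature sketch of my own, not anyone's argument in the thread: precompute every input-output pair of a function over a finite domain, then answer by pure table lookup. The table and the computation are I/O-identical while running radically different internal processes.)

```python
# A miniature "GLUT": replace a computation by a precomputed lookup table.

def add_mod_256(a: int, b: int) -> int:
    return (a + b) % 256

# Precompute every input-output pair over the finite domain.
GLUT = {(a, b): add_mod_256(a, b) for a in range(256) for b in range(256)}

def glut_add(a: int, b: int) -> int:
    # No arithmetic happens here -- only retrieval.
    return GLUT[(a, b)]

# Identical input-output behavior, radically different internal process.
assert all(glut_add(a, b) == add_mod_256(a, b)
           for a in range(256) for b in range(256))
```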
...But nonetheless, if the
so the idea is that you can describe the brain by treating each neuron as a little black box about which you just know its input/output behavior, and then describe the interactions between those little black boxes. Then, assuming you can implement the input/output behavior of your black boxes with a different substrate (i.e., an artificial neuron)
This is guaranteed, because the universe (and any of its subsets) is computable (that means a classical computer can run software that acts the same way).
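(A minimal sketch of the black-box substitution; the names and the two-layer structure are my own, purely illustrative. Each "neuron" is known only by its I/O function, and swapping in a different implementation with the same I/O leaves the whole network's behavior unchanged.)

```python
# Each "neuron" is a black box: we only know its input/output behavior.

def biological_neuron(x: float) -> float:
    # Stand-in for a messy biological process; only the I/O mapping matters.
    return 1.0 if x > 0.5 else 0.0

def artificial_neuron(x: float) -> float:
    # A different substrate implementing the same input/output mapping.
    return float(x > 0.5)

def network(neuron, inputs):
    # A two-layer chain: feed each input through the neuron twice.
    return [neuron(neuron(x)) for x in inputs]

inputs = [0.1, 0.4, 0.6, 0.9]
# Substituting the substrate preserves the whole network's I/O behavior.
assert network(biological_neuron, inputs) == network(artificial_neuron, inputs)
```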
And there are orders of magnitude more detail going on in my body (and even just in my brain) than I perceive, let alone that I communicate.
There are no sentient details going on that you wouldn't perceive.
It doesn't matter if you communicate something; the important part is that you are capable of communicating it, which means that it changes your input/output pattern (if it didn't, you wouldn't be capable of communicating it even in principle).
Circular arguments that "something is discussed, therefore that thing exists"
This isn't the argument in the OP (even though, when reading quickly, I can see how someone could get that impression).
(Thanks to the Hayflick limit, only some lines can go on indefinitely.)
If the SB always guesses heads, she'll be correct of the time. For that reason, that is her credence.
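(The protocol details aren't spelled out above, so here's a Monte Carlo sketch of my own under the standard setup - heads: one awakening, tails: two - with SB always guessing heads; all names are hypothetical.)

```python
import random

# Monte Carlo sketch of the standard Sleeping Beauty protocol:
# heads -> woken once, tails -> woken twice; SB always guesses "heads".
random.seed(0)

awakenings = 0
correct = 0
for _ in range(100_000):
    heads = random.random() < 0.5
    n = 1 if heads else 2          # number of awakenings in this experiment
    awakenings += n
    if heads:
        correct += n               # her "heads" guess is right at each awakening

rate = correct / awakenings
print(f"per-awakening accuracy of always guessing heads: {rate:.3f}")
```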
Are the ‘AI companion’ apps, or robots, coming? I mean, yes, obviously?
The technology for bots who are "better" than humans in some way (constructive, pro-social, compassionate, intelligent, caring interactions while thinking 2 levels meta) has been around since 2022. But the target group wouldn't pay enough for GPT-4-level inference, so current human-like bots are significantly downscaled compared to what technology allows.
To consciously take in a piece of information, you don't have to store any bits - you only have to map the correct input to the correct output. (By logical necessity, any transformation that preserves the input/output relationship preserves consciousness.)
Unless you can summarize your argument in at most 2 sentences (with evidence), it's completely ignorable.
This is not how learning any (even slightly complex) topic works.
When I skipped my medication whose withdrawal symptom is strong anxiety, my brain always generated a nightmare to go along with the anxiety, working backwards in the same way.
Edit: Oh, never mind, that's not what you mean.
That wouldn't help. Then the utility would be calculated from (getting two golden bricks) and (murdering my child for a fraction of a second), which still brings lower utility than not following the command.
The set of possible commands for which I can't be maximally rewarded still remains too vast for the statement to be meaningful.
I see your argument. You are saying that "maximal reward", by definition, is something that gives us the maximum utility from all possible actions, and so, by definition, it is our purpose in life.
But actually, utility is a function of both the reward (getting two golden bricks) and what it rewards (murdering my child), not merely a function of the reward itself (getting two golden bricks).
And so it happens that for many possible demands that I could be given ("you have to murder your child"), there are no possible rewards that would give me more utility t...
How does someone punishing you or rewarding you make their laws your purpose in life (other than you choosing that you want to be rewarded and not punished)?
Either we define "belief" as a computational state encoding a model of the world containing some specific data, or we define "belief" as a first-person mental state.
For the first definition, both we and p-zombies believe we have consciousness. So we can't use our belief that we have consciousness to know we're not p-zombies.
For the second definition, only we believe we have consciousness. P-zombies have no beliefs at all. So for the second definition, we can use our belief we have consciousness to know we're not p-zombies.
Since we have a belief in the existence of our consciousness according to both definitions, but p-zombies only according to the first definition, we can know we're not p-zombies.
This is incorrect - in a p-zombie, the information processing isn't accompanied by any first-person experience. So if p-zombies are possible, we both do the information processing, but only I am conscious. The p-zombie doesn't believe it's conscious, it only acts that way.
You correctly believe that having the correct information processing always goes hand in hand with believing in consciousness, but that's because p-zombies are impossible. If they were possible, this wouldn't be the case, and we would have special access to the truth that p-zombies lack.
What an undignified way to go.
Trump has a history of ignoring both the law and human rights in general, and of imprisoning innocent people under the guise of their being illegal immigrants when they aren't. Current events are unsurprising, and part of what his voters voted for.