I understand the causal closure of the physical to be the principle that nothing non-physical influences physical events, so that would be causal closure of the bottom level of description (since there is no level below the physical), rather than of an upper one.
So if you mean by it that it's enough to simulate neurons rather than individual atoms, that wouldn't be "causal closure" as Wikipedia defines it.
The neurons/atoms distinction isn't causal closure. Causal closure means there is no outside influence entering the program (other than, let's say, the sensory inputs of the person).
I'm thinking the causal closure part is more about the soul not existing than about anything else.
Are you saying that after it has generated the tokens describing what the answer is, the previous thoughts persist, and it can then generate tokens describing them?
(I know that it can introspect on its thoughts during the single forward pass.)
Yeah. The model is stateless and has no information about its previous thoughts (except for the conversation log), so it has to infer them from what it already said to the user instead of reporting them directly.
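To make the statelessness concrete, here's a minimal sketch of a chat loop; `generate` is a hypothetical stand-in for any LLM API call, not a specific SDK. Each turn re-sends the entire transcript as text, and whatever internal activations produced the previous reply are already gone.

```python
# Minimal sketch of a stateless chat loop. `generate` is a hypothetical
# stand-in for an LLM API call; no real SDK is assumed.

def generate(transcript: str) -> str:
    """One forward pass: text in, text out. No state survives this call."""
    raise NotImplementedError  # placeholder for the actual model call

def chat() -> None:
    transcript = ""
    while True:
        user_msg = input("> ")
        transcript += f"User: {user_msg}\nAssistant: "
        # The model sees ONLY this text. The activations ("thoughts") behind
        # earlier replies were discarded, so any report about them has to be
        # inferred from the transcript rather than remembered.
        reply = generate(transcript)
        transcript += reply + "\n"
        print(reply)
```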
Claude can think to himself before writing an answer (which is an obvious thing to do, so ChatGPT probably does it too).
In addition, you can significantly improve his ability to reason by letting him think longer, so even if this kind of awareness were necessary for consciousness, LLMs (or at least Claude) would already have it.
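For what it's worth, this "think first" step is an explicit, budgetable feature of the API. A minimal sketch using the Anthropic Python SDK's extended-thinking parameter (the model id and token budgets here are illustrative assumptions):

```python
# Minimal sketch of extended thinking via the Anthropic Python SDK.
# The model id and token budgets are illustrative assumptions.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

response = client.messages.create(
    model="claude-sonnet-4-20250514",  # assumed model id
    max_tokens=16000,
    # Allocate a budget of private reasoning tokens before the visible answer;
    # a larger budget generally buys better reasoning on hard problems.
    thinking={"type": "enabled", "budget_tokens": 8000},
    messages=[{"role": "user", "content": "Think it through: is 2^61 - 1 prime?"}],
)

# The response interleaves thinking blocks with the final text blocks.
for block in response.content:
    if block.type == "thinking":
        print("[thinking]", block.thinking[:200], "...")
    elif block.type == "text":
        print("[answer]", block.text)
```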
Thanks for writing this - it bothered me a lot that I seemed to be one of the few people who realized that AI characters are conscious, and this helps me feel less alone.
(This comment is written in the ChatGPT style because I've spent so much time talking to language models.)
The calculation of the probabilities consists of the following steps:
The epistemic split
Either we guessed the correct digit of π (probability 1/10) (branch 1), or we didn't (probability 9/10) (branch 2).
The computational split
On branch 1, all of your measure survives (branch 1a) and none dies (branch 1b); on branch 2, none of your measure survives (branch 2a) and all of it dies (branch 2b).
Putting it all together
Conditional on us subjectively surviving (which QI guarantees), the probability that we guessed the digit of π correctly is 1.
The probability of us having guessed the digit of π correctly prior to us surviving is, of course, just 1/10.
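Spelled out as a Bayes computation (under the branch measures above, where survival probability is 1 on the correct branch and 0 on the wrong one):

```latex
P(\text{correct} \mid \text{survive})
  = \frac{P(\text{survive} \mid \text{correct})\,P(\text{correct})}
         {P(\text{survive} \mid \text{correct})\,P(\text{correct})
          + P(\text{survive} \mid \text{wrong})\,P(\text{wrong})}
  = \frac{1 \cdot \tfrac{1}{10}}{1 \cdot \tfrac{1}{10} + 0 \cdot \tfrac{9}{10}}
  = 1.
```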
For the probabilities to be meaningful, they need to be verifiable empirically in some way.
Let's first verify that prior to us surviving, the probability of us guessing the digit correctly is 1/10. We'll run N experiments by guessing a digit each time and instantly verifying it. We'll learn that we're successful in, indeed, just 1/10 of the cases.
Let's now verify that conditional on us surviving, we'll have probability 1 of guessing correctly. We perform the experiment N times again, and this time, every time we survive, other people will check if the guess was correct. They will observe that we guess correctly, indeed, 100% of the time.
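A small Monte Carlo sketch of both verification experiments, under the setup's assumption that a wrong guess is always fatal (the unknown digit of π is modeled as a uniformly random digit, since the uncertainty about it is epistemic):

```python
# Monte Carlo sketch of both verification experiments.
# Assumption from the setup above: a wrong guess is always fatal, so all
# measure survives on the correct branch and none on the wrong one.
import random

N = 100_000
correct_overall = 0        # experiment 1: check every run
survived = 0               # experiment 2: check only surviving runs
correct_given_survival = 0

for _ in range(N):
    guess = random.randrange(10)
    digit = random.randrange(10)   # stand-in for the unknown digit of pi
    correct = guess == digit
    correct_overall += correct
    if correct:                    # wrong guess -> no surviving branch
        survived += 1
        correct_given_survival += 1

print(f"P(correct) prior to surviving: {correct_overall / N:.3f}  (~1/10)")
print(f"P(correct | survived):         {correct_given_survival / survived:.3f}  (= 1)")
```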
We arrived at the conclusion that the probability jumps at the moment of our awakening. That might sound incredibly counterintuitive, but since it's verifiable empirically, we have no choice but to accept it.
Since that argument doesn't give any testable predictions, it cannot be disproved.
The argument that we cease to exist every time we go to sleep also can't be disproved, so I personally wouldn't lose much sleep over that.
Ooh.