When I said "subjective existence" I meant a model where we don't need a list of minds or an exhaustive search for minds to make them real. After all, the brain has its own computing power, and requiring additional compute or data to make the subjective experiences associated with its computations real looks extraneous. Interactions of a mind with our world, on the other hand, seem crucial for our ability to determine its existence.
BTW, thank you for laying all this out in such detail. It makes the reasoning much more focused.
Step 2 codifies the objective existence of subjective states. But let's suppose that a homomorphic computation can be decrypted in two ways: one is what we encoded, whose output is something like "it feels real"; the other is a minimally conscious state that happened to exist when decoding with a different key, whose output is a noisy grunt expressing dissatisfaction with the noisy environment. Should the second one be included in M?
It seems that g and h cannot be efficiently computable if we decide to include the second state in M. On second thought, if we ...
What concrete fact about the physical world do you think you're missing? What are you ignorant of?
Let's flip very unfair quantum coin with 1:2^1000000 heads to tails chances (that would require quite an engineering feat to prepare such a quantum state, but it's theoretically possible). You shouldn't expect to see heads if the quantum state is prepared correctly, but the post-flip universe (in MWI) contains a branch where you see heads. So, by your logic, you should expect to see both heads and tails even if the state is prepared correctly.
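To put the odds in perspective, here's a quick back-of-the-envelope calculation in Python (the 1:2^1000000 odds are taken from the setup above; everything else is just arithmetic):

```python
import math

# P(heads) = 1 / (1 + 2**1000000) ~ 2**-1000000.
# This underflows any IEEE double (smallest positive is ~5e-324),
# so work with the base-10 logarithm instead.
log10_p_heads = -1_000_000 * math.log10(2)
print(round(log10_p_heads))  # -301030, i.e. ~300k zeros after the decimal point
```

Even flipping once per Planck time for the age of the universe (~1e62 flips) leaves the total chance of ever seeing heads at roughly 10^-300968, which is why "you shouldn't expect to see heads" is an understatement.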
What I do not kno...
"Thread of subjective experience" was an aside (just one of the mechanisms that explains why we "find ourselves" in a world that behaves according to the Born rule), don't focus too much on it.
The core question is which physical mechanism (everything should be physical, right?) ensures that you will almost never see a string of a billion tails after a billion quantum coin flips, while the universe contains a quantum branch with you staring in astonishment at a string of a billion tails. Why should you expect that it will almost certainly not happen, when...
I haven't fully understood your stance towards the many minds interpretation. Do you find it unnecessary?
I don’t think either of these Harrys is “preferred”.
And simultaneously you think that the existence of future Harries who observe events with probabilities approaching zero is not a problem, because the current Harry will almost never find himself to be one of those future Harries. I don't understand what that means exactly.
Harries who observe those rare events exist, and they wonder how they found themselves in those unlikely situations. Harries who hadn't found an...
For example: “as quantum amplitude of a piece of the wavefunction goes to zero, the probability that I will ‘find myself’ in that piece also goes to zero”
What I really don't like about this formulation is the extreme vagueness of "I will find myself", which implies that there's some preferred future "I" out of many, defined not only by the observations he receives, but also by being a preferred continuation of subjective experience defined by an unknown mechanism.
It can be formalized as the many minds interpretation, incurring additional complexity penalty a...
First, a factual statement that is true to the best of my knowledge: the LLM state that is used to produce the probability distribution for the next token is completely determined by the contents of its input buffer (plus a bit of nondeterminism due to parallel processing and the non-associativity of floating-point arithmetic).
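The non-associativity bit is easy to demonstrate; a minimal Python sketch (not tied to any particular LLM implementation):

```python
# Floating-point addition is not associative: changing the reduction
# order (as parallel kernels may do) changes the rounded result.
a, b, c = 0.1, 0.2, 0.3
left = (a + b) + c   # 0.6000000000000001
right = a + (b + c)  # 0.6
print(left == right)  # False
```

In an LLM, the same effect shows up when parallel reductions sum the same logits in a different order across runs.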
That is, an LLM can pass only a single token (around 2 bytes) to its future self. That follows from the above.
What comes next is a plausible (to me) speculation.
For humans, what's passed to our future self is most likely much more than a single token. ...
Expanding a bit on the topic.
Exhibit A: flip a fair coin, then move a suspended robot into a green or red room using a second coin, with (green, red) probabilities of (99%, 1%) for heads and (1%, 99%) for tails.
Exhibit B: flip a fair coin and create 99 copies of the robot in green rooms and 1 copy in a red room for heads; reverse the colors otherwise.
What causes the robot to see red instead of green in exhibit A? The physical processes that brought about a world where the robot sees red.
What causes a robot to see red instead of green in exhibit B? The fact that it sees red, not...
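A quick simulation of exhibit A (the 50/50 and 99/1 parameters come straight from the setup above; the code structure is my own sketch) confirms that the marginal chance of seeing red is 50%, even though each branch is heavily skewed:

```python
import random

def exhibit_a(rng):
    """One run: fair coin, then a biased room assignment."""
    heads = rng.random() < 0.5
    # Heads -> green with 99%, red with 1%; tails -> the reverse.
    p_red = 0.01 if heads else 0.99
    return "red" if rng.random() < p_red else "green"

rng = random.Random(0)
n = 100_000
red_fraction = sum(exhibit_a(rng) == "red" for _ in range(n)) / n
print(red_fraction)  # ~ 0.5
```

Exhibit B has the same marginal: across both branches, exactly half of the robot-copies are in red rooms; what differs is which fact does the causal work.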
I have a solution that is completely underwhelming, but I can see no flaws in it, apart from the complete lack of a definition of which part of the mental state should be preserved to still count as you, and its rejection of MWI (also, I can't see how it yields useful insights into why we have what looks like continuous subjective experience).
Do you think the exploited flaw is universal or, at least, common?
Excellent story. But what about the "pull the plug" option? ALICE found a way to run itself efficiently on traditional datacenters that aren't packed with backprop and inference accelerators? And shutting them down would have required a stronger political will than the government could muster at the time?
Citing https://arxiv.org/abs/cond-mat/9403051: "Furthermore if a quantum system does possess this property (whatever it may be), then we might hope that the inherent uncertainties in quantum mechanics lead to a thermal distribution for the momentum of a single atom, even if we always start with exactly the same initial state, and make the measurement at exactly the same time."
Then the author proceeds to demonstrate that this is indeed the case. I guess it partially answers the question: the quantum state thermalises and you'll get a classical thermal distribution o...
we are assuming that without random perturbation, you would get 100% accuracy
That is, the question is not about real argon gas, but about a billiard-ball model? That should be stated in the question.
There are creatures in the possible mind space[3] whose intuition works in the opposite way. They are surprised specifically by the sequence of and do not mind the sequence of
That is, creatures who aren't surprised by outcomes of lower Kolmogorov complexity, or who aren't surprised because the language they use for estimating Kolmogorov complexity has a special compact case for producing "HHTHTTHTTH".
Looks possible, but not probable.
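Compressed length is a crude, computable stand-in for Kolmogorov complexity; a small Python illustration (using zlib as the "description language" is my choice of proxy):

```python
import random
import zlib

uniform = b"H" * 1000                                   # all heads: highly regular
rng = random.Random(0)
mixed = bytes(rng.choice(b"HT") for _ in range(1000))   # pseudo-random flips

c_uniform = len(zlib.compress(uniform))
c_mixed = len(zlib.compress(mixed))
print(c_uniform < c_mixed)  # True: the regular sequence has the shorter description
```

A language with a special compact case for "HHTHTTHTTH" would be like a compressor with that exact string hard-coded into its dictionary: perfectly legal, but a strange prior to carry around.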
For returns below $2000, I'd use the 50/50 quantum-random strategy just for the fun of dropping Omega's stats.
what happens if we automatically evaluate plans generated by superhuman AIs using current LLMs and then launch plans that our current LLMs look at and say, "this looks good".
The obvious failure mode is that the LLM is not powerful enough to predict the consequences of the plan. The obvious fix is to include a human-relevant description of the consequences. The obvious failure modes: a manipulated description of the consequences, optimizing for LLM jail-breaking. The obvious fix: ...
I won't continue, but shallow rebuttals are not that convincing, while deep ones are close to capability research, so I don't expect to find interesting answers.
What if all I can assign is a probability distribution over probabilities? Like in the question of extraterrestrial life. All that can be said is that extraterrestrial life is sufficiently rare that we haven't found evidence of it yet. Our observation of our own existence is conditioned on our existence, so it doesn't provide much evidence one way or another.
Should I sample the distribution to give an answer, or maybe take the mode, or the mean, or the median? I've chosen a value that is far from both extremes, but I might have done something else, with no clear justification for any of the choices.
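For a concrete sense of how much the choice matters, take a hypothetical right-skewed belief distribution, Beta(2, 40) (my arbitrary pick, not from the original question), and compare the candidate summaries:

```python
import random
import statistics

a, b = 2.0, 40.0                      # hypothetical Beta(2, 40) belief distribution
mean = a / (a + b)                    # ~ 0.0476
mode = (a - 1) / (a + b - 2)          # = 0.025
rng = random.Random(42)
median = statistics.median(rng.betavariate(a, b) for _ in range(100_000))  # ~ 0.040

print(mode < median < mean)  # True: the three summaries disagree noticeably
```

On a skewed distribution the mode, median, and mean can differ by a factor of two, and sampling would add variance on top; none of them is obviously "the" probability to report.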
This means that LLMs can inadvertently learn to replicate these biases in their outputs.
Or the network learns to place more trust in tokens that were already "thought about" during generation.
Suppose when you are about to die [...] Omega shows up
Suppose something pertaining more to the real world: if you think that you are here and now because there will not be significantly more people in the future, then you are more likely to become depressed.
Also, why does Omega use 95% and not 50%, 10%, or 0.000001%?
ETA: Ah, Omega in this case is an embodiment of the litany of Tarski. Still, if there is no catastrophe, we are those 5% who violate the litany. And that's not even mentioning that the litany comes as close to useless as it can get when we are talking about a belief in an inevitable catastrophe you can do nothing about.
If we are talking about the real world (to the best of our current knowledge, yada, yada) and not its classical approximation, we have the universal wavefunction as the world model, which is independent of the agent's actions, as it encompasses them all.
Interacting with the world (by generating a specific pattern) allows us to narrow down the indexical uncertainty to the set of agents that generate that pattern with non-zero probability.