All of Mergimio H. Doefevmil's Comments + Replies

I suspect that the paradigm of computation one chooses plays an important role here. The paradigm of a deterministic Turing machine leads to what I described in the post: one-dimensional sequences and guaranteed solipsism. The paradigm of a nondeterministic Turing machine allows for multi-dimensional sequences. I will edit the post to reflect on this.

2JBlack
Solomonoff induction is about computable models that produce conditional probabilities for an input symbol (which can represent anything at all) given a previous sequence of input symbols. The models are initially weighted by representational complexity, and for any given input sequence are further weighted by the probability assigned to the observed sequence. The distinction between deterministic and non-deterministic Turing machines is not relevant, since the same functions are computable by both. The distinction I'm making is between models and input. They are not the same thing; this part of your post confuses the two. The input is a sequence of states. World-models are any computable structures at all that provide predictions as output. Not even the predictions are sequences of states - they're conditional probabilities for the next input given the previous input, and so can be viewed as a distribution over all finite sequences.
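The weighting scheme described above can be sketched in miniature. The hypothesis class, the complexity values, and the model names below are all made up purely for illustration - real Solomonoff induction ranges over all computable models and is incomputable:

```python
from fractions import Fraction

# Toy Solomonoff-style induction over a tiny hypothetical model class.
# Each "model" is (complexity, function mapping a history tuple of bits
# to P(next bit = 1)). Complexities are arbitrary stand-ins for
# description length.
models = {
    "always_zero": (1, lambda h: Fraction(0)),
    "always_one":  (1, lambda h: Fraction(1)),
    "alternate":   (2, lambda h: Fraction(1) if len(h) % 2 == 0 else Fraction(0)),
    "uniform":     (3, lambda h: Fraction(1, 2)),
}

def posterior(observed):
    """Weight each model by 2^-complexity times the likelihood it
    assigns to the observed bit sequence, then normalize."""
    weights = {}
    for name, (k, p1) in models.items():
        w = Fraction(1, 2 ** k)  # complexity prior
        for i, bit in enumerate(observed):
            p = p1(tuple(observed[:i]))
            w *= p if bit == 1 else 1 - p
        weights[name] = w
    total = sum(weights.values())
    return {n: (w / total if total else w) for n, w in weights.items()}

def predict_next_one(observed):
    """Posterior-weighted probability that the next bit is 1."""
    post = posterior(observed)
    return sum(w * models[n][1](tuple(observed)) for n, w in post.items())
```

After observing `[1, 1, 1]`, the models inconsistent with the data get weight zero, and prediction is dominated by the simplest surviving model, `always_one` - which is the point of the complexity prior.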

Solomonoff induction doesn’t say anything about larger world models that contain the one-dimensional sequences that form the Solomonoff distribution. You appear to be saying that although the predicted sequence is always solipsistic from the point of view of the inductor, there can be a larger reality that contains that sequence - but that is an extra add-on that doesn’t appear anywhere in the original Solomonoff induction.

2JBlack
A Solomonoff hypothesis can be any computable model that predicts the sequence, including any model that also happens to predict a larger reality if queried in that way. There are always infinitely many such "large world" models that are compatible with the input sequence up to any given point, and all of them are assigned nonzero probability. It is possible that there may be a simpler model that predicts the same sequence and does not model the existence of any other reality in any meaningful sense, but I suspect that a general universe model plus a fixed-size "you are here" will, in a universe with computable rules, remain pretty close to optimal.

Real men wear pink shirts that say "REAL MEN WEAR PINK".

Realer men wear pink shirts that don’t say anything.

Even realer men break the spell and wear what they actually like.

And the realest men of them all wear fedoras.

Interesting. What is the difference, then, between illusionism and eliminativism? Is eliminativism the even more "hard-core" position, whereby illusionism only denies the existence of phenomenal properties but not of experience, while eliminativism denies the existence of any experience altogether?

1Benjy Forstadt
Hmm. It gets tricky because we get into, like, what does the English word “experience” mean. “Phenomenal properties” is supposed to pick out the WOW! aspect of experiences, that thing that’s really obvious and vivid that makes us speculate about dualism and zombies. I think Frankish uses “experience” basically to mean whatever neural events cause us to talk about pain, hunger, etc., so I don’t think an eliminativist would deny those exist. But I’m not sure.

The referent of the label 2+2 is strictly identical to the referent of the label 4, but the labels 2+2 and 4 themselves are obviously not identical.
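In Python terms - a toy illustration of the label/referent distinction, not an argument about consciousness - evaluating the two labels yields one and the same referent, while the labels themselves differ:

```python
# The labels "2+2" and "4" are distinct strings, but they refer to
# the identical number.
left = eval("2+2")   # referent of the label "2+2"
right = eval("4")    # referent of the label "4"

assert left == right   # the referents are identical
assert "2+2" != "4"    # the labels are not

# In CPython (an implementation detail: small integers are cached),
# the two labels even pick out literally the same object in memory:
assert left is right
```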

What would the statement of illusionism be then? That 🟩 is an illusion? Surely, yes, but digging deeper, you would get to some form of brain activity.

2TAG
Illusionists think the illusion is brain activity, yes.

It is my impression that certain people think that illusionists deny that there is any 🟩 even in the map, and I have never heard any illusionist make that argument (maybe I just haven’t been paying enough attention though). The conversation seems to be getting stuck somewhere at the level of misunderstandings concerning labels and referents. The key insight that I am trying to communicate here is that when we say that A is B, we generally do not mean that A is strictly identical to B - which it clearly isn’t. This applies even when we say things like 2+2 ...

4Benjy Forstadt
This seems to mix up labels and referents. 2+2 is strictly identical to 4. The statement “2+2=4” is not the same as the statement “‘2+2’=‘4’”.

In that case, perhaps I could leave it as an exercise for the reader to deduce what was there originally. Maybe it could be a good intelligence test for GPT-4...

More seriously, the reason I am reluctant to use the heart is that a heart shape is usually mentally associated with all kinds of things that are entirely irrelevant to this discussion, and could generate confusion. When writing the post, I chose the green square in a deliberate manner as the shape least likely to be distracting. 

Of course, if most readers see a placeholder box, it will generate even more confusion, so... To edit or not to edit, that is the question.

I am beginning to think that I should not have used the quotes at all. I used them more or less as a highlighting tool, perhaps a different font style would accomplish this better. I might edit the post. Regarding existence, I am using the verb "to exist" as synonymous with the verb "to be". And it is my impression that illusionists generally do not deny that (not going to use any quotes this time) 🟩 is. If some here do, I would be interested in hearing their arguments. 

5Shmi
This is a rabbit hole, but existence of something is far from a clear-cut settled question. Do numbers exist? Do fairies exist? Does nothing(ness) exist? 

Yes, we would be even worse off if we randomly pulled a superintelligent optimizer out of the space of all possible optimizers. That would, with almost absolute certainty, cause swift human extinction. The current techniques are somewhat better than taking a completely random shot in the dark. However, especially given point No. 2, that can offer only very little comfort to us.

All optimizers have at least one utility function. At any given moment in time, an optimizer is behaving in accordance with some utility function. It might not be explicitly...
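As a toy illustration of "behaving in accordance with some utility function" - a hypothetical greedy optimizer over made-up integer states, in no way a model of a real AI system:

```python
def optimize(state, actions, transition, utility, steps=10):
    """Greedy optimizer: at each step, take the action whose resulting
    state maximizes the given utility function."""
    for _ in range(steps):
        state = max((transition(state, a) for a in actions), key=utility)
    return state

# Illustrative setup: states are integers, actions nudge the state by
# -1, 0, or +1, and utility rewards closeness to 100.
actions = [-1, 0, 1]
transition = lambda s, a: s + a
utility = lambda s: -abs(s - 100)

final = optimize(95, actions, transition, utility, steps=10)
```

Whatever the internals, the behavior of this loop is fully described by the utility function it climbs - which is the sense of "having a utility function" used above.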

1TAG
That's definitional. It doesn't show that there are any optimisers, that all AIs are optimisers, etc.

I took it as self-evident that a superintelligent optimizer with a utility function whose optimum does not contain any humans would put the universe in a state which does not contain any humans. Hence, if such an optimizer is developed, the entire human population will end and there will be no second chances.

One point deserving of being stressed is that this hypothetical super-optimizer would be more incentivized to exterminate humanity in particular than to exterminate (almost) any other existing structure occupying the same volume of space. In oth...

6TAG
Most theoretically possible UFs don't contain humans, but that doesn't mean that an AI we construct will have such a UF, because we are not taking a completely random shot into mindspace ... We could not, even if we wanted to. That's one of the persistent holes in the argument. (Another is the assumption that an AI will necessarily have a UF.) The argument doesn't need summarising: it needs to be rendered valid by closing the gaps.