I've heard that point of view from several people. It's a natural extension of LW-style beliefs, but I'm not sure I buy it yet. There are several lines of attack; the most obvious one is trying to argue that coinflips still behave as coinflips even when the person betting on them is really stupid and always bets on heads. But we've already explored that line a little bit, so I'm gonna try a different one:
Are you saying that evolution has equipped our minds with a measure of caring about all possible worlds according to simplicity? If yes, can you guess which of our ancestor organisms were already equipped with that measure, and which ones weren't? Monkeys, fishes, bacteria?
(As an alternative, there could be some kind of law of nature saying that all minds must care about possible worlds according to simplicity. But I'm not sure how that could be true, given that you can build a UDT agent with any measure of caring.)
A: Wait, if simplicity changes preferences, but does not change the level of existence, how do you explain the fact that we appear to be in a world that is simple? Isn’t that a priori extremely unlikely?
B: This is where it gets a little bit fuzzy, but I do not think that question makes sense. Unlikely by what measure? You are presupposing an existence measure on the collection of theoretical worlds just to ask that question.
I saw a good explanation of this point (in the #lesswrong IRC sometime last month), which I'll try to reproduce here.
Imagine that there is a specific flavor of Universal Turing Machine which runs everything in Tegmark IV; all potential-universe objects are encoded as tapes for this UTM to simulate.
Take a simple set of rules. There is only one tape that corresponds to these precise rules. Now add an incredibly-specific edge case which applies in exactly one situation that could possibly arise, but otherwise have the same simple set of rules hold; this adds another tape that will produce precisely the same observations from the inside as the simple rules, except in one case that is grotesquely uncommon and very likely will never occur (because it happens to req...
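One way to make the counting precise (my own sketch of where I think this was going, assuming infinite tapes with the rules encoded as a finite prefix): under a uniform measure over infinite tapes, the set of tapes beginning with a fixed n-bit rule set has measure 2^{-n}, so a variant that needs k extra bits to specify the edge case receives only

2^{-(n+k)} / 2^{-n} = 2^{-k}

of the simple rules' weight; every extra bit of rule-complexity halves how much that variant counts.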
My personal objection to this seems to be: "But what if some things DO exist?" I.e., what if existence actually does turn out to be a thing, in limited supply?
Let me try to unfold this gut reaction into something which makes more sense.
I think what I'm "really" saying is that, for me, this view does not add up to normality. If I were to discover that "existence" is meaningless as a global predicate, and (to the extent that it's meaningful) all possibilities exist equally, I would change the way I care about things.
I think I valu...
Thank you all so much for all of your comments.
Three separate comment threads simultaneously led to the objection that I seem to be unfairly biased towards agents that happen to be born in simple universes. I think this is a good point, so I am starting a new comment thread to discuss that issue. Here is my counterpoint.
First, notice that our finite intuitions do not carry over nicely here. In saying that beings in universes with 20 fewer bits are a million times as important, I am not saying that the happiness of this one person is more important tha...
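To spell out the arithmetic behind "a million": with a 2^{-K} weighting, where K is the number of bits needed to describe a world, describing a world with 20 fewer bits multiplies its weight by

2^{20} = 1,048,576 ≈ 10^6,

i.e. roughly a million.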
As I understand it, your attempted solution to the Problem of Induction is this:
(a) Deny that there is a fact of the matter about what our future experiences will be like
(b) Care about things with weight proportional to 2^{-K}, where K is the Kolmogorov complexity of the structure in which they are embedded.
This is why it all adds up to normality. Without (a), people could say: Go ahead, care about whatever you want, but under your belief system you ought to expect the world to dissolve into high-complexity chaos immediately! And without (b), people could say: Go ahead, deny th...
Interesting approach. The way I would put it, the number you want to maximize is the expectation of U(X), where U is a utility function and X is a random universe drawn from a Solomonoff ensemble. The way you put it, the number is the same, but you don't interpret the sum over universes as an expectation value; you just take it to be part of your utility function.
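To spell out the equivalence as I read it (a sketch): write M(x) for the Solomonoff weight of universe x, i.e. the sum of 2^{-length(p)} over programs p that compute x. Both formulations maximize the same number,

sum over x of M(x) * U(x),

the only difference being whether M is read as a prior probability (so the sum is an expectation E[U(X)] with X drawn from M) or folded into the utility function as a caring weight.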
What I feel is missing in your approach is that U by its nature is arbitrary / complex, whereas the Solomonoff prior is simple and for some reason has to be there. I.e. something would go wrong on the phil...
So why haven't you tried walking into a casino and caring really hard about winning? I'm not just being a prick here; this is the most concise way I could think of to frame my core objection to your thesis.
This is an incorrect interpretation of Coscott's philosophy. "Caring really hard about winning" = preferring winning to losing. The correct analogy would be "Caring about [whatever] only in case I win". The losing scenarios are not necessarily assigned low utilities: they are assigned similar utilities. This philosophy is not saying: "I will win because I want to win". It is saying: "If I lose, all the stuff I normally care about becomes unimportant, so when I'm optimizing this stuff I might just as well assume I'm going to win". More precisely, it is saying "I will both lose and win but only the winning universe contains stuff that can be optimized".
Ever since learning the Anselmian ontological argument for the existence of god, I have tended towards the Kantian idea that existence is not a predicate. A ball that is not red still bounces; a ball that does not exist... doesn't exist. I'll say right up front I am not a philosopher, but rather a physicist and an engineer. I discard ideas that I can't use to build things that work; I decide that they are not worth spending time on. I don't know what I can build using the idea that a ball that does not exist is still a ball. And if I can build somethi...
What are the semantics of "exist" on this view? When people say things like "Paris exists but Minas Tirith doesn't" are they saying something meaningful? True? It seems like such statements do convey actual information to someone who knows little about both a French history book and a Tolkien book. Why not just exercise some charitable interpretation upon your fellow language users when it comes to "exist"? We use "existence" concepts to explain our experiences, without any glaring difficulties that I can see (so a charitable interpretation is likely possible). But maybe I've missed some glaring difficulties.
Reminds me of some discussions I had long ago.
The general principle was: Take some human idea or concept and drive it to its logical conclusion. To its extreme. The result: It either became trivial or stopped making sense.
The reason for this is that sometimes you can't separate out components of a complex system and try to optimize them in isolation without losing important details.
You can't just optimize happiness. It leads to wireheading and that's not what you want.
You can't just optimize religious following. It leads to crusades and witch hunts.
You ...
Nice dialogue.
It's true that probability and importance are interchangeable in an expected-utility calculation, but if you weight A twice as much as B because you care twice as much about it, that implies equal probabilities between the two. So if you use a Solomonoff-style prior based on how much you care, that implies a uniform prior on the worlds themselves. Or maybe you're saying expected utility is the sum of caring times value, with no probabilities involved. But in that case your probability is just how much you care.
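To make that renormalization explicit (a sketch, restricted to a finite set of candidate worlds so everything normalizes): each world w contributes P(w) * c(w) * U(w) to expected utility, and only the product P(w) * c(w) ever enters a decision. So if the entire 2^{-K(w)} factor is put into the caring weight c(w), the implied probabilities P(w) come out uniform; and if probabilities are dropped altogether, c(w) is simply doing the job a probability normally does.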
If we were in a complex world, i...
I'm now beginning to think I identified a real flaw in this approach.
The usual formulation of UDT assumes the decision algorithm to be known. In reality, the agent doesn't know its own decision algorithm. This means there is another flavor of uncertainty w.r.t. which the values of different choices have to be averaged. I call this "introspective uncertainty". However, introspective uncertainty is not independent: it is strongly correlated with indexical uncertainty. Since introspective uncertainty can't be absorbed into the utility function, indexical uncertainty cannot either.
I have a precise model of this kind of UDT in mind. Planning to write about it soon.
Instead of saying that you care about simpler universes more, couldn't a similar preference arise out of pure utilitarianism? Simpler universes would be more important because things that happen within them will be more likely to also happen within more complicated universes that end up creating a similar series of states. For every universe, isn't there an infinite number of more complicated universes that end up with the simpler universe existing within part of it?
Cross-posted on By Way of Contradiction
My current beliefs say that there is a Tegmark 4 (or larger) multiverse, but there is no meaningful “reality fluid” or “probability” measure on it. We are all in this infinite multiverse, but there is no sense in which some parts of it exist more or are more likely than any other part. I have tried to illustrate these beliefs as an imaginary conversation between two people. My goal is to either share this belief, or more likely to get help from you in understanding why it is completely wrong.
A: Do you know what the game of life is?
B: Yes, of course, it is a cellular automaton. You start with a configuration of cells, and they update following a simple deterministic rule. It is a simple kind of simulated universe.
A: Did you know that when you run the game of life on an initial condition of a 2791 by 2791 square of live cells, and run it for long enough, creatures start to evolve? (Not true.)
B: No. That’s amazing!
A: Yeah, these creatures have developed language and civilization. Time step 1,578,891,000,000,000 seems like it is a very important era for them. They have developed much technology, and someone has developed the theory of a doomsday device that will kill everyone in their universe and replace the entire thing with emptiness, but at the same time, many people are working hard on developing a way to stop him.
B: How do you know all this?
A: We have been simulating them on our computers. We have simulated up to that crucial time.
B: Wow, let me know what happens. I hope they find a way to stop him.
A: Actually, the whole project is top secret now. The simulation will still be run, but nobody will ever know what happens.
B: That’s too bad. I was curious, but I still hope the creatures live long, happy, interesting lives.
A: What? Why do you hope that? It will never have any effect on you.
B: My utility function includes preferences between different universes even if I never get to know the result.
A: Oh, wait, I was wrong. It says here the whole project is canceled, and they have stopped simulating.
B: That is too bad, but I still hope they survive.
A: They won’t survive, we are not simulating them any more.
B: No, I am not talking about the simulation; I am talking about the simple set of mathematical laws that determine their world. I hope that those mathematical laws, if run long enough, do interesting things.
A: Even though you will never know, and it will never even be run in the real universe?
B: Yeah. It would still be beautiful if it never gets run and no one ever sees it.
A: Oh, wait. I missed something. It is not actually the game of life. It is a different cellular automaton they used. It says here that it is like the game of life, but the actual rules are really complicated, and take millions of bits to describe.
B: That is too bad. I still hope they survive, but not nearly as much.
A: Why not?
B: I think information-theoretically simpler things are more important and more beautiful. It is a personal preference. It is much more desirable to me to have a complex, interesting world come from simple initial conditions.
A: What if I told you I lied, and none of these simulations were run at all and never would be run? Would you have a preference over whether the simple configuration or the complex configuration had the life?
B: Yes, I would prefer the simple configuration to have the life.
A: Is this some sort of Solomonoff probability measure thing?
B: No actually. It is independent of that. If the only existing thing were this universe, I would still want the laws of math to have creatures with long, happy, interesting lives arise from simple initial conditions.
A: Hmm, I guess I want that too. However, that is negligible compared to my preferences about things that really do exist.
B: That statement doesn’t mean much to me, because I don’t think this existence you are talking about is a real thing.
A: What? That doesn’t make any sense.
B: Actually, it all adds up to normality.
A: I see why you can still have preferences without existence, but what about beliefs?
B: What do you mean?
A: Without a concept of existence, you cannot have Solomonoff induction to tell you how likely different worlds are to exist.
B: I do not need it. I said I care more about simple universes than complicated ones, so I already make my decisions to maximize utility weighted by simplicity. It comes out exactly the same: I do not need to believe simple things exist more, because I already believe simple things matter more.
A: But then you don’t actually anticipate that you will observe simple things rather than complicated things.
B: I care about my actions more in the cases where I observe simple things, so I prepare for simple things to happen. What is the difference between that and anticipation?
A: I feel like there is something different, but I can’t quite put my finger on it. Do you care more about this world than that game of life world?
B: Well, I am not sure which one is simpler, so I don’t know, but it doesn’t matter. It is a lot easier for me to change our world than it is for me to change the game of life world. I will therefore make choices that roughly maximize my preferences about the future of this world under the simplest models.
A: Wait, if simplicity changes preferences, but does not change the level of existence, how do you explain the fact that we appear to be in a world that is simple? Isn’t that a priori extremely unlikely?
B: This is where it gets a little bit fuzzy, but I do not think that question makes sense. Unlikely by what measure? You are presupposing an existence measure on the collection of theoretical worlds just to ask that question.
A: Okay, it seems plausible, but kind of depressing to think that we do not exist.
B: Oh, I disagree! I am still a mind with free will, and I have the power to use that will to change my own little piece of mathematics — the output of my decision procedure. To me that feels incredibly beautiful, eternal, and important.