Followup to: L-zombies! (L-zombies?)
Reply to: Coscott's Preferences without Existence; Paul Christiano's comment on my l-zombies post

In my previous post, I introduced the idea of an "l-zombie", or logical philosophical zombie: A Turing machine that would simulate a conscious human being if it were run, but that is never run in the real, physical world, so that the experiences that this human would have had, if the Turing machine were run, aren't actually consciously experienced.

One common reply to this is to deny the possibility of logical philosophical zombies, just as one might deny the possibility of physical philosophical zombies: to say that every mathematically possible conscious experience is in fact consciously experienced, and that there is no kind of "magical reality fluid" that makes some of these be experienced "more" than others. In other words, we live in the Tegmark Level IV universe, except that, contrary to what Tegmark argues in his paper, there's no objective measure on the collection of all mathematical structures according to which some mathematical structures somehow "exist more" than others (or, although IIRC that's not part of Tegmark's argument, according to which the conscious experiences in some mathematical structures could be "experienced more" than those in other structures). All mathematically possible experiences are experienced, and to the same "degree".

So why is our world so orderly? There's a mathematically possible continuation of the world that you seem to be living in, where purple pumpkins are about to start falling from the sky. Or one where the light we observe coming in from outside our galaxy is suddenly replaced by white noise. Why don't you remember ever seeing anything as obviously disorderly as that?

And the answer to that, of course, is that among all the possible experiences that get experienced in this multiverse, there are orderly ones as well as non-orderly ones, so the fact that you happen to have orderly experiences isn't in conflict with the hypothesis; after all, the orderly experiences have to be experienced as well.

One might be tempted to argue that it's somehow more likely that you will observe an orderly world if everybody who has conscious experiences at all, or if at least most conscious observers, see an orderly world. (The "most observers" version of the argument assumes that there is a measure on the conscious observers, a.k.a. some kind of magical reality fluid.) But this requires the use of anthropic probabilities, and there is simply no (known) system of anthropic probabilities that gives reasonable answers in general. Fortunately, we have an alternative: Wei Dai's updateless decision theory (which was motivated in part exactly by the problem of how to act in this kind of multiverse). The basic idea is simple (though the details do contain devils): We have a prior over what the world looks like; we have some preferences about what we would like the world to look like; and we come up with a plan for what we should do in any circumstance we might find ourselves in that maximizes our expected utility, given our prior.
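(To make that recipe concrete, here is a minimal toy sketch in Python. The worlds, payoffs, and prior are made-up assumptions for illustration only, not part of UDT's actual formal treatment; the point is just that the policy is chosen once, from the prior, rather than after updating on what you see.)

```python
# Toy UDT-style policy selection: choose, in advance, the mapping from
# observations to actions that maximizes prior-weighted expected utility.

worlds = ["simple", "pumpkin-rain"]
prior = {"simple": 0.9, "pumpkin-rain": 0.1}
# In both worlds, your observations so far look orderly.
observation = {"simple": "orderly", "pumpkin-rain": "orderly"}

payoffs = {
    ("simple", "act-as-if-simple"): 10, ("simple", "hedge"): 6,
    ("pumpkin-rain", "act-as-if-simple"): -1, ("pumpkin-rain", "hedge"): 2,
}

def expected_utility(policy):
    # Evaluated from the prior -- never updated on any observation;
    # that is the "updateless" part.
    return sum(prior[w] * payoffs[(w, policy[observation[w]])] for w in worlds)

policies = [{"orderly": a} for a in ("act-as-if-simple", "hedge")]
best = max(policies, key=expected_utility)
print(best, expected_utility(best))  # {'orderly': 'act-as-if-simple'} 8.9
```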

*

In this framework, Coscott and Paul suggest, everything adds up to normality if, instead of saying that some experiences objectively exist more, we happen to care more about some experiences than about others. (That's not a new idea, of course, or the first time this has appeared on LW -- for example, Wei Dai's What are probabilities, anyway? comes to mind.) In particular, suppose we just care more about experiences in mathematically really simple worlds -- or more precisely, places in mathematically simple worlds that are mathematically simple to describe (since there's a simple program that runs all Turing machines, and therefore all mathematically possible human experiences, always assuming that human brains are computable). Then, even though there's a version of you that's about to see purple pumpkins rain from the sky, you act in a way that's best in the world where that doesn't happen, because that world has so much lower K-complexity, and because you therefore care so much more about what happens in that world.
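(One standard way to cash this out formally -- my notation, not something spelled out in Coscott's or Paul's posts -- is to give each world $w$ the weight $2^{-K(w)}$, where $K(w)$ is the length of the shortest program computing $w$ on one's chosen universal Turing machine, so that a plan $\pi$ is evaluated as

$$U(\pi) \;=\; \sum_{w} 2^{-K(w)}\, u_w(\pi),$$

with $u_w(\pi)$ measuring how well things go in world $w$ if everyone follows $\pi$. The pumpkin world then contributes almost nothing to the sum, not because it "exists less", but because it is weighted less.)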

There's something unsettling about that, which I think deserves to be mentioned, even though I do not think it's a good counterargument to this view. This unsettling thing is that on priors, it's very unlikely that the world you experience arises from a really simple mathematical description. (This is a version of a point I also made in my previous post.) Even if the physicists had already figured out the simple Theory of Everything -- say, a super-simple cellular automaton that accords really well with experiments -- you don't know that this cellular automaton, if you ran it, would really produce you. After all, imagine that somebody intervened in Earth's history so that orchids never evolved, but otherwise left the laws of physics the same; there might still be humans, or something like humans, and they would still run experiments and find that they match the predictions of the simple cellular automaton, so they would assume that if you ran that cellular automaton, it would compute them. Except it wouldn't: it would compute us, with orchids and all. Unless, of course, it does compute them, and a special intervention is required to get the orchids.

So you don't know that you live in a simple world. But, goes the obvious reply, you care much more about what happens if you do happen to live in the simple world. On priors, it's probably not true; but it's best, according to your values, if all people like you act as if they live in the simple world (unless they're in a counterfactual mugging type of situation, where they can influence what happens in the simple world even if they're not in the simple world themselves), because if the actual people in the simple world act like that, that gives the highest utility.

You can adapt an argument that I was making in my l-zombies post to this setting: Given these preferences, it's fine for everybody to believe that they're in a simple world, because this will increase the correspondence between map and territory for the people that do live in simple worlds, and that's who you care most about.

*

I mostly agree with this reasoning. I agree that Tegmark IV without a measure seems like the most obvious and reasonable hypothesis about what the world looks like. I agree that there seems no reason for there to be a "magical reality fluid". I agree, therefore, that on the priors that I'd put into my UDT calculation for how I should act, it's much more likely that true reality is a measureless Tegmark IV than that it has some objective measure according to which some experiences are "experienced less" than others, or not experienced at all. I don't think I understand things well enough to be extremely confident in this, but my odds would certainly be in favor of it.

Moreover, I agree that if this is the case, then my preferences are to care more about the simpler worlds, making things add up to normality; I'd want to act as if purple pumpkins are not about to start falling from the sky, precisely because I care more about the consequences my actions have in more orderly worlds.

But.

*

Imagine this: Once you finish reading this article, you hear a bell ringing, and then a sonorous voice announces: "You do indeed live in a Tegmark IV multiverse without a measure. You had better deal with it." And then it turns out that it's not just you who's heard that voice: Every single human being on the planet (who didn't sleep through it, isn't deaf etc.) has heard those same words.

On this hypothesis, this is of course about to happen to you, though only in one of those high-K-complexity worlds that you don't care about very much.

So let's consider the following possible plan of action: You could act as if there is some difference between "existence" and "non-existence", or perhaps some graded degree of existence, until you hear those words and confirm that everybody else has heard them as well, or until you've experienced one similarly obviously "disorderly" event. So until that happens, you do things like invest time and energy into trying to figure out what the best way to act is if it turns out that there is some magical reality fluid, and into trying to figure out what a non-confused version of something like a measure on conscious experience could look like, and you act in ways that don't kill you if we happen to not live in a measureless Tegmark IV. But once you've had a disorderly experience, just a single one, you switch over to optimizing for the measureless mathematical multiverse.

If the degree to which you care about worlds really falls off exponentially with their K-complexity (i.e., is proportional to 2^-K(w)), with respect to what you and I would consider a "simple" universal Turing machine, then this would be a silly plan; there is very little to be gained from being right in worlds that have that much higher K-complexity. But when I query my intuitions, it seems like a rather good plan:

  • Yes, I care less about those disorderly worlds. But not as much less as I would if I discounted them according to their K-complexity. I seem to be willing to tap into my complex human intuitions to refer to the notion of a "single obviously disorderly event", and to assign the worlds with a single such event, and otherwise low K-complexity, not that much lower importance than the worlds with actually low K-complexity.
  • And if I imagine that the confused-seeming notions of "really physically exists" and "actually experienced" do have some objective meaning independent of my preferences, then I care much more about the difference between "I get to 'actually experience' a tomorrow" and "I 'really physically' get hit by a car today" than I care about the difference between the world with true low K-complexity and the worlds with a single disorderly event.

In other words, I agree that on the priors I put into my UDT calculation, it's much more likely that we live in measureless Tegmark IV; but my confidence in this isn't extreme, and if we don't, then the difference between "exists" and "doesn't exist" (or "is experienced a lot" and "is experienced only infinitesimally") is very important; much more important than the difference between "simple world" and "simple world plus one disorderly event" according to my preferences if we do live in a Tegmark IV universe. If I act optimally according to the Tegmark IV hypothesis in the latter worlds, that still gives me most of the utility that acting optimally in the truly simple worlds would give me -- or, more precisely, the utility differential isn't nearly as large as it would be if there is something else going on that I should be doing something about, and I'm not doing it.

This is the reason why I'm trying to think seriously about things like l-zombies and magical reality fluid. I mean, I don't even think that these are particularly likely to be exactly right even if the measureless Tegmark IV hypothesis is wrong; I expect that there would be some new insight that makes even more sense than Tegmark IV, and makes all the confusion go away. But trying to grapple with the confused intuitions we currently have seems at least a possible way to make progress on this, if it should be the case that there is in fact progress to be made.

*

Here's one avenue of investigation that seems worthwhile to me, and that wouldn't without the above argument. One thing I could imagine finding, that could make the confusion go away, would be that the intuitive notion of "all possible Turing machines" is just wrong, and leads to outright contradictions (e.g., to inconsistencies in Peano Arithmetic, or something similarly convincing). Lots of people have entertained the idea that concepts like the real numbers don't "really" exist, and that only the behavior of computable functions is "real"; perhaps not even that is real, and true reality is more restricted? (You can reinterpret many results about real numbers as results about computable functions, so maybe you could reinterpret results about computable functions as results about these hypothetical weaker objects that would actually make mathematical sense.) So it wouldn't be the case after all that there is some Turing machine that computes the conscious experiences you would have if pumpkins started falling from the sky.

Does the above make sense? Probably not. But I'd say that there's a small chance that maybe yes, and that if we understood the right kind of math, it would seem very obvious that not all intuitively possible human experiences are actually mathematically possible (just as obvious as it is today, with hindsight, that there is no Turing machine which takes a program as input and outputs whether that program halts). Moreover, it seems plausible that this could have consequences for how we should act. This, together with my argument above, makes me think that this sort of thing is worth investigating -- even if my priors are heavily on the side of expecting that all experiences exist to the same degree, and ordinarily this difference in probabilities would make me think that our time would be better spent investigating other, more likely hypotheses.
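(For reference, the hindsight-obvious fact alluded to above is Turing's diagonalization -- the standard textbook argument, nothing specific to this post. Suppose a machine $h$ decided halting, with $h(p,x) = 1$ if program $p$ halts on input $x$ and $0$ otherwise. Define

$$d(p) = \begin{cases} \text{loop forever} & \text{if } h(p,p) = 1,\\ \text{halt} & \text{if } h(p,p) = 0. \end{cases}$$

Then $d(d)$ halts if and only if $h(d,d) = 0$, i.e., if and only if $d(d)$ does not halt; so no such $h$ can exist. The hope expressed above is that some similarly crisp argument might one day rule out some intuitively possible experiences.)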

*

Leaving aside the question of how I should act, though, does all of this mean that I should believe that I live in a universe with l-zombies and magical reality fluid, until such time as I hear that voice speaking to me?

I do feel tempted to try to invoke my argument from the l-zombies post that I prefer the map-territory correspondences of actually existing humans to be correct, and don't care about whether l-zombies have their map match up with the territory. But I'm not sure that I care much more about actually existing humans being correct, if the measureless mathematical multiverse hypothesis is wrong, than I care about humans in simple worlds being correct, if that hypothesis is right. So I think that the right thing to do may be to have a subjective belief that I most likely do live in the measureless Tegmark IV, as long as that's the view that seems by far the least confused -- but continue to spend resources on investigating alternatives, because on priors they don't seem unlikely enough to outweigh the potential great importance of getting this right.

*

Comments

I think the original motivation for Solomonoff Induction wasn't so much that the universal prior is the right prior (which is hard to justify given that the universal prior is parametrized by a universal Turing machine, the choice of which seems arbitrary), but that whatever the right prior is, the universal prior isn't too different from it in some sense (as long as it is in the class of priors that the universal prior is "universal" over, i.e., those computed by Turing machines in the standard formulation of SI). This "not too different" allows Solomonoff Induction to "sum to normality" - after updating on enough observations, its predictions converge to the predictions made by the right prior, whatever that is.
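(For reference, the standard definition being gestured at here -- not spelled out in the comment -- is the universal prior: for a universal prefix machine $U$,

$$M(x) \;=\; \sum_{p\,:\,U(p) = x*} 2^{-|p|},$$

summing over programs $p$ whose output starts with the sequence $x$. The "not too different" property is multiplicative dominance: for every computable prior $\mu$ there is a constant $c_\mu > 0$, independent of $x$, with $M(x) \ge c_\mu\,\mu(x)$; this bound is what forces the convergence of predictions described above.)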

Consider an analogy to this in the caring/values interpretation of probability. It's not so much that we "like simplicity", but rather that given that our brains contain a finite amount of information, it's impossible for us to distribute our care over the multiverse in a way that's too different from some sort of universal prior, which makes an agent using that universal prior look sort of reasonable to us, even though it's actually using the wrong values from our perspective.

So I'd be wary about adopting Solomonoff Induction as a normative standard, or saying that we should or do care more about worlds that are simpler (or that we should or do care about all worlds equally in some model which is equivalent to caring more about worlds that are simpler). At this point, it seems just as plausible that we have (or should use) some other distribution of care, and that Solomonoff Induction / the universal prior is just a distraction from finding the ultimate solution. My guess is that in order to make progress on this question (assuming the "caring" interpretation is the right approach in the first place), we first need to better understand meta-ethics or meta-philosophy, so we can say what it means for us to have certain values (given that we are not decision-theoretic agents with built-in utility functions), or what it means for certain values to be the ones that we "should" have, or in general what it means for a solution to a philosophical problem to be correct.

I think some confusion goes away if you get back to thinking about the details of the process that motivates these questions (i.e. thinking and decision-making performed using a brain), instead of reifying the informal concepts perceived during that process (e.g. "the world", "experience", etc.). What you have when thinking or making decisions, is a map, some kind of theory that might talk about events (e.g. specify/estimate their utility and probability). Decision-making and map-updating only have the map to work with. (When formalizing ideas that you are more or less able to reliably think about, semantic explanations can be used to capture and formulate the laws of that thinking, but they can be misleading when working on ideas that are too confused, trying to capture the laws of thinking that are not there.)

In this setting, "worlds" or programs that somehow describe them are unnecessary. Since different worlds whose agents have the same map will receive the same actions and map updates, it's not useful to distinguish (or introduce) them when considering the agent's reasoning. (Separately, the mysterious "Kolmogorov complexity of worlds" is used without there being any clarity about what it means for a program to describe a world, so in avoiding its use we get rid of another mystery.)

If caring (probability) compares events in the agent's map, anticipated simplicity reflects a fact about how the map is updated (the agent's prior), giving "simple" events more probability. This is probably caused by how evolution built the map-updating algorithms, killing off various anti-inductive priors that would give more probability to weird events that are unlike related events of high probability (i.e. believing that something will most certainly happen because it never happened before, selected among such possibilities in some way). (When I point to evolution acting in simple worlds and selecting minds with simplicity-favoring priors, and not to magic acting in weird worlds and selecting minds with simplicity-favoring priors, I'm using my own mind's simplicity-favoring prior to select that explanation.)

From this point of view, I don't feel like the issues discussed in the post point to mysteries that are not accounted for. "Mathematical multiverse" corresponds to the language of the agent's map, perhaps only mentioning events (propositions) and not their probabilities/utilities (judgements maintained by a particular agent). "Reality fluid" or the prior/caring about worlds of the multiverse correspond to the probabilities (or something) that the map assigns/estimates for the events. These are "subjective" in that different agents have different maps, and "objective" in that they are normative for how that agent thinks (they give an idealized map that the agent's thinking aspires to understand). A (measureless) mathematical multiverse could also be "more objective" than priors, if descriptions of events could be interpreted between maps of different agents, even if they are assigned different degrees of caring (this is analogous to how the same propositions of a logical language can be shared by many theories with different axioms, which disagree about the truth of propositions, but talk about the same propositions).

> Imagine this: Once you finish reading this article, you hear a bell ringing, and then a sonorous voice announces: "You do indeed live in a Tegmark IV multiverse without a measure. You had better deal with it." And then it turns out that it's not just you who's heard that voice: Every single human being on the planet (who didn't sleep through it, isn't deaf etc.) has heard those same words.

Suppose instead the same voice says "You do not live in a Tegmark IV multiverse". Wouldn't you still conclude that you do, anyway? It is still a "disorderly experience", isn't it?

Yup, sure.

If you are in a Tegmark IV multiverse, you are exactly as likely to hear that you live in a Tegmark IV multiverse as to hear that you do NOT live in one. Hearing either is still evidence that you do, though, because hearing anything of the sort would be unlikely under some alternative forms of multiverse (such as a Christian one), whose probabilities would correspondingly decrease. Hearing that you do not live in a Tegmark IV multiverse should also somewhat decrease the probability that you do, because you are more likely to be told that you don't in some other multiverses. So the evidence provided by the statement probably still points in the expected direction, but it probably isn't as strong as it seems.
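(A toy Bayes calculation of this point -- all priors and likelihoods below are made-up numbers for illustration: Tegmark IV makes the two messages equally likely, while a rival hypothesis makes the "pro" message very unlikely and the "anti" message merely unlikely.)

```python
# Hypotheses: measureless Tegmark IV vs. some rival multiverse.
prior = {"tegmark4": 0.7, "rival": 0.3}

# Made-up likelihoods of hearing each voice-from-the-sky message:
# Tegmark IV makes both messages equally (un)likely; the rival world
# would more often tell you that you do NOT live in Tegmark IV.
like_pro = {"tegmark4": 1e-9, "rival": 1e-13}   # "you DO live in Tegmark IV"
like_anti = {"tegmark4": 1e-9, "rival": 1e-11}  # "you do NOT live in ..."

def posterior(likelihood):
    joint = {h: prior[h] * likelihood[h] for h in prior}
    total = sum(joint.values())
    return {h: joint[h] / total for h in joint}

print(posterior(like_pro)["tegmark4"])   # ~0.99996
print(posterior(like_anti)["tegmark4"])  # ~0.996 -- still up from 0.7
# Either message raises the probability of Tegmark IV, because the message
# happening at all carries most of the evidence; the "anti" message just
# raises it by less.
```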

My original point was that the message content is barely relevant compared to the fact of the message happening, which also means that it is evidence for the MUH as much as for any other extra-physical model, such as God, simulation, hallucination or a prank. (I'd heavily bet on the last two.)

That's a good point, thank you for the elaboration.

I might care more about the complex universes than I would if a single universe existed, randomly selected from a probability measure; I attribute this to risk aversion. I think that in my model I care similarly to the way I would care if there were "reality fluid," but differently from the way I would care if there were one universe randomly selected from the probability measure.

So, I can see that you would care similarly to the way you would in a multiverse with magical reality fluid that's distributed in the same proportions as your measure of caring; and if your measure of caring is based on K-complexity with respect to a universal Turing machine (UTM) we would consider simple, it's at least one plausible possibility that the true magical reality fluid is distributed in roughly those proportions. But given the state of our confusion, I think that conditional on there being a true measure, any single hypothesis as to how that measure is distributed should have significantly less than 50% probability, so "conditional on there being a true measure, I would act the same way as according to my K-complexity-based preferences" sounds wrong to me. (One particularly salient other possibility is that we could have magical reality fluid due to Tegmark I -- infinite space -- and Tegmark III -- many-worlds -- but not due to all mathematically possible universes existing, in which case we surely wouldn't get weightings close to K-complexity with a simple UTM. I mean, this is a case of one single universe, but with all possible experiences existing, to different degrees.)

Conditional on there being a true measure, I would think it is reasonably likely that that measure is concentrated entirely on one possible universe.

I also have the confused flag all over my philosophy. I was surprised that, after 99 comments' worth of conversation on Less Wrong, there was no significant rebuttal to convince me I was obviously wrong.

If anyone knows of discussion of this topic that I can read, besides "What are probabilities, anyway?" please let me know.

When I have more time, I intend to give you a more complete response. I really appreciate your line of inquiry and I hope we can make progress on it together. For now:

My main issue with the strategy "Deny that there is any sort of measure over the multiverse, and then arrange your values so that it all adds up to normality anyway" is this:

For any given theory, you can make your actions (not your beliefs) add up to normality by suitably picking your values. So the fact that you can do this for measureless multiverse theory is not special. In this case, you don't really value simplicity--you just wish you did, for the sake of having an elegant theory. But that's a fully general defense of any theory, pretty much: If the theory makes awkward predictions, then say "What is probability anyway?" and change your values to balance out whatever predictions the theory makes.

I think that you are lying to yourself if you think that you have managed to make measureless multiverse theory add up to normality by suitably rearranging your values. First of all, I don't think it is plausible that these were really your values all along--before you knew about the measure problem, you probably believed that every world in the multiverse was equally important. You would have decried as "physics racism" anyone who decided that simpler worlds were more valuable. Secondly, I'm willing to bet that if someone were to come along and prove to you that there was an objective measure over the multiverse, and it did favor simplicity in the way that we want it to, you would rejoice and go back to valuing each world equally. So I think that you don't really value simplicity, you just wish you did, because that would be one way to resolve the inconsistency in your theory.

I don't mean to sound harsh. Two years ago I was trying to make myself believe in measureless multiverse theory, pretty much as you described, and this was the argument I couldn't escape. If you can convince me this argument doesn't work, I'll join you, and scratch the Problem of Induction (the measure problem) off my list. :)

I actually think I can convince you, because I think if something does go wrong with this, it will not be that argument.

I also will write more when I have time.

I do not think I am being a "physics racist": I am measuring all worlds in the multiverse equally. However, there are an infinite number of them, and in order to do that I have to choose a measure. I am choosing the K-measure, because it is the most natural to me, and honestly feels to me like the closest I can get to "measuring all worlds equally." Just saying "uniform distribution" does not mean anything.

I believe there is no objective measure on the multiverse, so I am putting a subjective measure on it. If there were a proof that there was an objective measure, and that I should value worlds according to that measure, I would update my subjective measure to be that one. I would not double count.

Basically, I think that I care according to K-complexity because that is the nicest measure I can think of to put on the multiverse.

Good point. I look forward to hearing more.

True, now that I think of it, there are more things that could go wrong with this as well. I'm glad I found LW, where people are interested in talking about this stuff. (Two years ago I hadn't heard of LW.)

So, I think the essence of your point is in the following sentence:

> Secondly, I'm willing to bet that if someone were to come along and prove to you that there was an objective measure over the multiverse, and it did favor simplicity in the way that we want it to, you would rejoice and go back to valuing each world equally.

It seems that if I have my caring measure, and there is also an objective "reality fluid" measure, then they should stack and cause me to care "twice over" about simplicity.

However, my caring measure is just my subjective assignment of how important each world is. If I learned that there was an objective assignment, that would trump my subjective assignment. It is not like there are two variables, subjective weight and objective weight. There is one variable, weight, and it can get either a subjective or an objective flag.

It is similar to objective and subjective morality. If I had a code of morality that I thought was subjective, and learned that was actually an objective morality that happened to be exactly the same, I would continue following it the same way. The only difference is that I might expect others to follow it more.

I do not know what the correct measure to put on the multiverse is. I believe that there is no correct measure. I therefore have to put my own on. The measure that I put on is the one that feels "uniform" to me. If I learned that there is a correct measure, my intuition about what is "uniform" would change with it.

I think that is part of my point, but my main point was that many theories can receive this treatment.

For example, suppose you believe in a Big World such that every physically possible thing happens somewhere/somewhen in it.

And suppose you believe that there is a teapot in orbit between Mars and Jupiter.

Couldn't you "prove" your belief by saying "what is probability anyway," pointing out that there are infinitely many copies of you which live in solar systems with teapots between Mars and Jupiter, and saying that you value those copies more than everyone else? Not because you value any one person more than any other, of course--you value everybody equally--but because of the measure you assign over all copies of you in the Big World.

Do you think there is a principled difference between the scenario I just described, and what you are doing with Measureless Multiverse theory? If you say no, you aren't sunk--after all, perhaps MMtheory is more plausible for other reasons than the Big World theory I described.

My answer is no, at least objectively. There is a little caveat here that is related to Eliezer's theory of metaethics. It is exactly the same as the way I say that no, there is no principled reason why killing is bad. From my point of view, killing really is bad, and the fact that I think it is bad is not what causes it to be bad. Similarly, from my point of view simple things are more important, and if I were to change my mind about that, they would not stop being more important.

Okay. Well, this seems to me to be a bad mark against Measureless Multiverse theory.

If it can only be made to add up to normality by pulling a move that could equally well be used to make pretty much any arbitrary belief system add up to normality... then the fact that it adds up to normality is not something that counts in favor of the theory. Perhaps you say, fair enough--there are plenty of other things which count in favor of the theory. But I worry. This move makes adding up to normality a cheap, plentiful feature that many many theories share, and that seems dangerous.

Suppose our mathematical abilities advance to the point where we can take measures/languages and calculate the predictions they make, at least to some approximation. It might turn out that society is split on which simplicity prior to use, and thus society is split about which predictions to make in some big hypothetical experiment. (I'm imagining a big collider.) Under MMtheory, this would just be an ethical disagreement, one that in fact would not be resolved, or influenced in any way, by performing the experiment. The people who turned out to be "wrong" would simply say "Oh, so I guess I'm in a more complicated world after all. But this doesn't conflict with my predictions, since I didn't make any predictions."

What do you think about this issue? Do you think I made a mistake somewhere?

EDIT: Or was I massively unclear? Rereading, I think that might be the case. I'd be happy to rewrite if you like, but since I'm busy now I'll just hope that it is comprehensible to you.

I'm not sure what to think about your defense here. I think that it probably wouldn't work if we were talking about valuing people/worlds directly instead of assigning a measure over the space of worlds.

> So you don't know that you live in a simple world. But, goes the obvious reply, you care much more about what happens if you do happen to live in the simple world.

You kind of seem to jump around there. Our world looks simple; that's why we're worrying so much about why our world looks so simple in the first place! Sure, our world might not actually be simple, but we simply have no sufficient reason to distrust the copious simplicity our scientific inquiry seems to yield.

If I live in a simple world, I want to believe I live in a simple world. If I live in a complex, interventionist world, I want to believe I live in a complex, interventionist world. The way to find out what sort of world I live in is to look at the world. It looks simple.

Underdetermination of theory by data. Our world looks like it could be simple, but it could be complex too.

To summarize that part of the post: (1) The view I'm discussing there argues that the reason we find ourselves in a simple-looking world is that all possible experiences are consciously experienced, including the ones where the world looks simple, and we just happen to experience the latter. (2) If this is correct, then you cannot use the fact that you look around and see a simple-looking world to infer that you live in a world that actually is simple, because there are plenty of complex interventionist worlds that look deceptively simple. In fact, the prior probability that the particular world you see is actually simple is extremely low. (3) However, if you value the things that happen in actually simple worlds more than the things that happen in complex worlds, then it's still correct to act as if your simple-looking world is in fact simple, despite the fact that prior probability says this is possibly wrong (or to put this differently, even though most of the equally-existing mathematically possible humans reasoning like this will be wrong).

> the reason we find ourselves in a simple-looking world is that all possible experiences are consciously experienced, including the ones where the world looks simple, and we just happen to experience the latter. (2) If this is correct, then you cannot use the fact that you look around and see a simple-looking world to infer that you live in a world that actually is simple, because there are plenty of complex interventionist worlds that look deceptively simple. In fact, the prior probability that the particular world you see is actually simple is extremely low.

But most worlds aren't "complex worlds appearing simple"; most worlds are just "complex worlds", right? So the fact that we find ourselves in a simple world should still enormously surprise us. And any theory that causes us to "naturally" expect simple worlds would seem to have an enormous advantage.

I'm confused by your use of "priors". On a Tegmark IV sort of view, all meaningful sentences are true (in some universe). So the usefulness of the term "prior probability" turns on one's having at least some doubt about Tegmark IV, yes? I'm not accusing you of making any mistake over this; I just want reassurance or correction about my (mis)understanding of your probability talk.

It's priors over logical states of affairs. Consider the following sentence: "There is a cellular automaton that can be described in at most 10 KB in programming language X, plus a computable function f() which can be described in another 10 KB in the same programming language, such that f() returns a space/time location within the cellular automaton corresponding to Earth as we know it in early 2014." This could be false even if Tegmark IV is true, and prior probability (i.e., probability without trying to do an anthropic update of the form "I observe this, so it's probably simple") says it's probably false.

Thanks. But how can I even think the concept "corresponding to Earth as we know it" without relying on a large body of empirical knowledge that influences my probability assignments? I'm having trouble understanding what the prior is prior to. Of course I can refrain from explicitly calculating the K-complexity, say, of the theory in a physics textbook. But even without doing such a calculation, I still have some gut level sense of the simplicity/complexity of physics, very much based on my concrete experiences. Does that not count as anthropic?

What exactly is the 'potential great importance of getting this right'? Please be specific.

Thank you for this post! It helped me to have a better grasp on some questions I was thinking about after the L-zombies post.

One point of clarification though -- in the paragraph that begins, "In other words, I agree that on the priors..." is Tegmark IV being used interchangeably with measureless Tegmark IV or does the first refer to the MUH, measure included?

I feel like my first reaction was like yours -- that I do not care about simplicity THAT much -- but reflecting on it has made me think that maybe I really do care about simplicity that much.

First, let's remodel the problem. There is a collection of universes, each one an infinite string of bits which encodes a Turing machine. Let's say the actual states of the TM are encoded on some finite subset of the infinite string, and the rest of the string is random bits that the TM can read if it chooses to.

The first obstacle to our intuition is that caring based on K-complexity is the same as caring about all of these different TMs equally, so what seems unfair in one model seems very fair in another model. This might be enough to convince you to care THAT much, but I imagine you have the following rebuttal:

Many of these simple TMs never even read the infinite string of random bits at the end. They are all exactly the same. I have this vague feeling of diminishing returns. A million of the same good thing and a million different bad things does not feel as good as a million of the same bad thing and a million different good things.

I feel this intuition myself, but maybe this is just a fallacy of projecting intuitions about diminishing returns from within one universe to questions about multiple universes that do not communicate with each other.
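(One standard way to formalize the "fair in another model" point above -- the usual coin-flipping reading of the universal prior, not something spelled out in this thread -- is: put the uniform measure on the infinite bit strings, i.e., flip a fair coin for each bit of the tape, and feed the tape to a universal prefix machine $U$. The measure a world $w$ then receives is

$$P(w) \;=\; \sum_{p\,:\,U(p) = w} 2^{-|p|} \;\approx\; 2^{-K(w)}$$

up to a multiplicative constant, since the sum is dominated by the shortest program. So weighting worlds by $2^{-K}$ is exactly what "all infinite bit strings count equally" looks like once you group the strings by the world they encode.)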

I don't feel like considering these different ways to approach K-complexity addresses the point I was trying to make. The rebuttal seems to be arguing that we should weigh the TMs that don't read the end of the tape equally, rather than weighing TMs more if they read less of the tape. But my point isn't that I don't want to weigh complex TMs as much as simple TMs; it is (1) that I seem to be willing to consider TMs with one obviously disorderly event "pretty simple", even though I think they have high K-complexity; and (2) given this, disregarding the possibility of magical reality fluid only in worlds where I've seen a single obviously disorderly event doesn't seem to cost me all that much utility if measureless Tegmark IV is true, compared to the utility I may lose if there actually is magical reality fluid or something like it, and I ignore this possibility and, because of this, act in a way that is very bad.

(If there aren't any important ways in which I'd act differently if measureless Tegmark IV is false, then this argument has no pull, but I think there may be; for example, if the ultrafinitist hypothesis from the end of my post were correct, that might make a difference to FAI theory.)

> So why is our world so orderly? There's a mathematically possible continuation of the world that you seem to be living in, where purple pumpkins are about to start falling from the sky. Or one where the light we observe coming in from outside our galaxy is suddenly replaced by white noise. Why don't you remember ever seeing anything as obviously disorderly as that?

Who says all of this is mathematically possible? I've read this idea before, and I think it's wrong.

First of all, I think it's very difficult to guess what is mathematically possible. We experience the universe at a level which is already extremely evolved. For example, I imagine that mathematical possibility resulted in an incredibly complex structure that eventually mapped to the rules of physics (string theory maybe, but certainly eventually quantum mechanics and then atoms, etc.). Then the universe we experience is just the manifestation of that physics.

Secondly, another way to look at it is that counterfactual "possibility" is not the same thing as mathematical possibility. For example, I could have chosen not to compose this comment (counterfactually), but it wasn't actually possible that I wouldn't, because I'm computing a program which -- certainly at this scale -- is deterministic.

Oh, I see you already considered this:

> But I'd say that there's a small chance that maybe yes, and that if we understood the right kind of math, it would seem very obvious that not all intuitively possible human experiences are actually mathematically possible.

I think this is very likely, and in fact we don't need to compute what is possible ... What we experience is exactly what is mathematically possible.