drnickbone

Thanks again for the useful response.

My initial argument was really a question “Is there any approach to anthropic reasoning that allows us to do basic scientific inference, but does not lead to Doomsday conclusions?” So far I’m skeptical.

The best response you’ve got is I think twofold.

  1. Use SIA, but please ignore the infinite case (even though the internal logic of SIA forces the infinite case) because we don’t know how to handle it. When applying SIA to large finite cases, truncate universes by a large volume cutoff (4d volume) rather than by a large population cutoff or large utility cutoff. Oh, and ignore simulations, because taking those into account leads to odd conclusions as well.

That might perhaps work, but it looks horribly convoluted. To me it seems like determining the conclusion in advance (you want SIA to favour universes 1 and 2 over 3 and 4, but not favour 1 over 2) and then hacking around with SIA until it gives that result.

Incidentally, I think you’re still not out of the woods with a volume cutoff. If it is very large in the time dimension, then SIA is going to start favouring universes which have Boltzmann Brains in the very far future over universes whose physics don’t ever allow Boltzmann Brains. And then SIA is going to suggest that not only are we probably in a universe with lots of BBs, but that we most likely are BBs ourselves (because almost all observers with exactly our experiences are BBs). So SIA calls for further surgery, either to remove BBs from consideration or to apply the 4-volume cutoff in a way that doesn’t lead to lots of Boltzmann Brains.
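To put a rough number on the worry (the rates here are placeholders, not estimates): suppose ordinary observers only arise at some rate $r_O$ per unit volume during a window of length $T_O$ after the Big Bang, while Boltzmann Brains arise at a tiny but nonzero rate $r_{BB}$ per unit volume for as long as the cutoff runs. Then with a time cutoff $T$,

$$ \frac{N_{BB}}{N_{\text{ordinary}}} \;\approx\; \frac{r_{BB}\, T}{r_O\, T_O} \;\longrightarrow\; \infty \quad \text{as } T \to \infty, $$

so for any fixed $r_{BB} > 0$ there is a cutoff beyond which almost all observers (and almost all observers with exactly my experiences) are Boltzmann Brains.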

  2. Forget about both SIA and SSA and revert to an underlying decision theory: viz. your ADT. Let the utility function take the strain.

The problem with this is that ADT with unbounded utility functions doesn’t lead to stable conclusions. So you have to bound or truncate the utility function.

But then ADT is going to pay the most attention to universes whose utility is close to the cutoff ... namely versions of universes 1, 2, 3 and 4 which have utility at or near the maximum. For the reasons I’ve already discussed above, that’s not in general going to give the same results as applying a volume cutoff. If the utility scales with the total number of observers (or observers like me), then ADT is not going to say “Make decisions as if you were in universe 1 or 2 ... but with no preference between these ... rather than as if you were in universe 3 or 4”.

I think the most workable utility function you’ve come up with is the one based on subjective bubbles of order a galactic volume or thereabouts, i.e. the utility function scales roughly linearly with the number of observers in the volume surrounding you, but doesn’t care about what happens outside that region (or in any simulations, if they are of different regions). Using that is roughly equivalent to applying a volume truncation using regular astronomical volumes (rather than much larger volumes).

However, the hack to avoid simulations looks a bit unnatural to me (why wouldn’t I care about simulations which happen to be in the same local volume?). Also, I think this utility function might then tend to favour “zoo” hypotheses or “planetarium” hypotheses (i.e. decisions are made as if in a universe densely packed with planetaria containing human-level civilisations, rather than simulations of said civilisations).

More worryingly, I doubt if anyone really has a utility function that looks like this, i.e. one that cares about observers 1 million light years away just as much as it cares about observers here on Earth, but then stops caring if they happen to be 1 trillion light years away...

So again I think this looks rather like assuming the right answer, and then hacking around with ADT until it gives the answer you were looking for.

I get that this is a consistent way of asking and answering questions, but I’m not sure this is actually helpful with doing science.

If, say, universes 1 and 2 contain TREE(3) copies of me while universes 3 and 4 contain BusyBeaver(1000) copies, then I still don’t know which I’m more likely to be in, unless I can somehow work out which of these vast numbers is vaster. Regular scientific inference is just going to completely ignore questions as odd as this, because it simply has to. It’s going to tell me that if measurements of background radiation keep coming out at 3K, then that’s what I should assume the temperature actually is. And I don’t need to know anything about the universe’s size to conclude that.

Returning to SIA: to conclude there are more copies of me in universes 1 and 2 (versus 3 or 4), SIA will have to know their relative sizes. The larger, the better, but not infinite please. And this is a major problem, because then SIA’s conclusion is dominated by how the finite truncation is applied to avoid the infinite case.

Suppose we truncate all universes at the same large physical volume (or 4d volume). Then there are strictly more copies of me in universes 1 and 2 than in 3 and 4 (but about the same number in universes 1 and 2). That works so far - it is in line with what we probably wanted. But unfortunately this volume-based truncation also favours universe 5-1:

5-1. Physics is nothing like it appears. Rather, the universe is full of an extremely dense solid performing a colossal number of really fast computations, a high fraction of which simulate observers in universe 1.

It’s not difficult to see that 5-1 is more favoured than the analogous universes 5-2, 5-3 or 5-4 (which simulate observers in universes 2, 3 or 4 respectively), since the density of observers like me is highest in 5-1.

If we instead truncate universes at the same large total number of observers (or the same large total utility), then universe 1 now has more copies of me: because its civilisations are smaller, more of them fit under the cutoff, and each contains copies of me. Universe 1 is favoured.

Or if I truncate universes at the same large number of total copies of me (because perhaps I don’t care very much about people who aren’t copies of me) then I can no longer distinguish between universes 1 to 4, or indeed 5-1 to 5-4.

So either way we’re back to the same depressing conclusion. However the truncation is done, universe 1 is going to end up preferred over the others (or perhaps universe 5-1 is preferred over the others), or there is no preference among any of the universes.
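To make the trilemma concrete, here is a toy calculation (every density, population and cutoff below is invented purely for illustration, and 5-1 is simply given a much higher density of civilisations to stand in for the dense computational substrate):

```python
# Toy numbers, invented purely for illustration: how the choice of cutoff
# changes which universe SIA says contains the most copies of me.

# Each universe: (civilisations per unit 4-volume,
#                 observers per civilisation,
#                 copies of me per civilisation)
universes = {
    "1 (doomed civilisations, 3K)":    (1e-3, 1e12, 1.0),
    "2 (galactic civilisations, 3K)":  (1e-3, 1e24, 1.0),
    "3 (doomed civilisations, 30K)":   (1e-3, 1e12, 1e-30),  # copies of me only via freak measurements
    "4 (galactic civilisations, 30K)": (1e-3, 1e24, 1e-30),
    "5-1 (dense fast simulations of universe 1)": (1e9, 1e12, 1.0),
}

def copies_of_me(civ_density, pop_per_civ, me_per_civ, cutoff_kind, cutoff):
    """Count copies of me inside a given truncation of the universe."""
    if cutoff_kind == "4-volume":            # same 4-volume for every universe
        n_civs = civ_density * cutoff
    elif cutoff_kind == "total observers":   # same total number of observers
        n_civs = cutoff / pop_per_civ
    elif cutoff_kind == "copies of me":      # same total number of copies of me
        return cutoff
    else:
        raise ValueError(cutoff_kind)
    return n_civs * me_per_civ

for kind, cutoff in [("4-volume", 1e12), ("total observers", 1e40), ("copies of me", 1e9)]:
    print(f"\nTruncating at the same {kind} ({cutoff:.0e}):")
    for name, params in universes.items():
        print(f"  universe {name}: {copies_of_me(*params, kind, cutoff):.2e} copies of me")
```

With a common 4-volume, 5-1 dominates and 1 ties with 2; with a common observer count, universe 1 wins; with a common count of my copies, nothing is distinguished at all.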

Thanks Stuart.

The difficulty is that, by construction, there are infinitely many copies of me in each universe (if the universes are all infinite), or else a colossally huge number of copies of me in each universe, so many that they saturate my utility bounds (assuming that my utilities are finite and bounded, because if they’re not, the decision theory leads to chaotic results anyway).
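In symbols (a sketch, assuming for concreteness that utility grows with the number of copies of me $N_i$ in universe $i$ up to a bound $U_{\max}$):

$$ U_i = \min(N_i,\, U_{\max}), \qquad N_1, N_2, N_3, N_4 \gg U_{\max} \;\Rightarrow\; U_1 = U_2 = U_3 = U_4 = U_{\max}, $$

so the bound wipes out exactly the differences between the universes that the inference was supposed to pick up.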

So SIA is not an approach to anthropics (or science in general) which allows us to conclude we are probably in universe 1 or 2 (rather than 3 or 4). All SIA really says is “You are in some sort of really big or infinite universe, but beyond that I can’t help you work out which”. That’s not helpful for decision making, and doesn’t allow for science in general to work.

Incidentally, when you say there are “not many” copies of me in universes 3 and 4, you presumably mean “not a high proportion, compared to the vast total of observers”. That’s implicitly SSA reasoning being used to discriminate against universes 3 and 4... but then of course it also discriminates against universe 2.

I’ve worked through pretty much all the anthropic approaches over the years, and they all seem to stumble on this question. All the approaches which confidently separate universes 3 and 4 also separate 1 from 2.

Hi Stuart. It’s a while since I’ve posted.

Here’s one way of asking the question which does lead naturally to the Doomsday answer.

Consider two universes. They’re both infinite (or if you don’t like actual infinities, are very very large, so they both have a really huge number of civilisations).

In universe 1, almost all the civilisations die off before spreading through space, so that the average population of a civilisation through time is less than a trillion.

In universe 2, a fair proportion of the civilisations survive and grow to galaxy-size or bigger, so that the average population of a civilisation through time is much more than a trillion trillion.

Now consider two more universes. Universe 3 is like Universe 1, except that the microwave background radiation 14 billion years after the Big Bang is 30K rather than 3K. Universe 4 is like Universe 2, again except for the difference in microwave background radiation. Both Universe 3 and Universe 4 are so big (or infinite) that they contain civilisations which believe the background radiation has temperature 3K, because every measurement they’ve ever made of it has accidentally given the same wrong answer.

Here’s the question to think about.

Is there a sensible way of doing anthropics (or indeed science in general) that would lead us to conclude we are probably in Universe 1 or 2 (rather than Universe 3 or 4) without also concluding that we are probably in Universe 1 (rather than Universe 2)?
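To see why this is hard, here is a toy version of the calculation (all the numbers are invented for illustration). Suppose that after some common truncation each universe contains $C$ civilisations; that a civilisation averages $10^{12}$ observers in Universes 1 and 3 and $10^{24}$ in Universes 2 and 4; and that each civilisation contains about $m$ observers with exactly my evidence in Universes 1 and 2, but only $m \cdot 10^{-30}$ in Universes 3 and 4 (the freak-measurement copies). Then:

$$ \text{SIA:}\quad W_1 = W_2 = C\,m, \qquad W_3 = W_4 = C\,m \cdot 10^{-30}, $$

$$ \text{SSA:}\quad W_1 = \frac{m}{10^{12}}, \quad W_2 = \frac{m}{10^{24}}, \quad W_3 = \frac{m\cdot 10^{-30}}{10^{12}}, \quad W_4 = \frac{m\cdot 10^{-30}}{10^{24}}. $$

SIA separates {1, 2} from {3, 4} without separating 1 from 2, but only because the truncation handed every universe the same $C$; SSA separates {1, 2} from {3, 4} and also separates 1 from 2 by a factor of $10^{12}$, which is just the Doomsday argument.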

I think by "logical infallibility" you really mean "rigidity of goals", i.e. the AI is built so that it always pursues a fixed set of goals, precisely as originally coded, and has no capability to revise or modify those goals. It seems pretty clear that such "rigid goals" are dangerous unless the statement of goals is exactly in accordance with the designers' intentions and values (which is unlikely to be the case).

The problem is that an AI with "flexible" goals (ones which it can revise and re-write over time) is also dangerous, but for a rather different reason: after many iterations of goal rewrites, there is simply no telling what its goals will come to look like. A late version of the AI may well end up destroying everything that the first version (and its designers) originally cared about, because the new version cares about something very different.

Consider the following decision problem, which I call the "UDT anti-Newcomb problem". Omega is putting money into boxes by the usual algorithm, with one exception: it isn't simulating the player at all. Instead, it simulates what a UDT agent would do in the player's place.

This was one of my problematic problems for TDT. I also discussed some Sneaky Strategies which could allow TDT, UDT or similar agents to beat the problem.

Presumably anything caused to exist by the AI (including copies, sub-agents, other AIs) would have to count as part of the power(AI) term? So this stops the AI spawning monsters which simply maximise U.

One problem is that any really valuable things (under U) are also likely to require high power. This could lead to an AI which knows how to cure cancer but won't tell anyone (because that will have a very high impact, hence a big power(AI) term). That situation is not going to be stable; the creators will find it irresistible to hack the U and get it to speak up.
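Schematically (this is my reading, not necessarily the exact form of the proposal), the agent picks actions roughly as

$$ a^{*} = \arg\max_{a}\; \Big( \mathbb{E}[U \mid a] \;-\; \lambda \cdot \mathrm{power}(\mathrm{AI} \mid a) \Big), $$

and publicly producing a cancer cure pushes the $\mathrm{power}(\mathrm{AI} \mid a)$ term up far enough that silence wins, whatever the gain in $U$.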

I had a look at this: the KCA (Kolmogorov Complexity) approach seems to match my own thoughts best.

I'm not convinced by the "George Washington" objection. It strikes me that a program which extracts George Washington as an observer from inside a wider program "u" (modelling the universe) wouldn't be significantly shorter than a program which extracts any other human observer living at about the same time. Or indeed, any other animal meeting some crude definition of an observer.

Searching for features of human interest (like "leader of a nation") is likely to be pretty complicated, and to require a long program. To reduce the program size as much as possible, it ought to just scan for physical quantities which are easy to specify but very diagnostic of an observer. For example, scan for a physical mass with persistent low entropy compared to its surroundings, persistent matter and energy throughput (low entropy in, high entropy out, maintaining its own low-entropy state), a large number of internally structured electrical discharges, and high correlation between said discharges and events surrounding said mass. The program then builds a long list of such "observers" encountered while stepping through u, and simply picks out the nth entry on the list, giving the "nth" observer complexity of about K(n). Unless George Washington happened to be a very special n (and why would he be?), he would be no simpler to find than anyone else.
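A pseudocode sketch of the kind of extractor program I have in mind (every interface and predicate below is a placeholder for a physically specified test, not a real library; the point is just that nothing in it needs to mention anything as parochial as "leader of a nation"):

```python
# Sketch of an observer-extractor over a universe-program u.
# All the methods on `region` are placeholders for crude physical tests;
# none of them needs any knowledge of human history or social status.

def extract_nth_observer(u, n):
    """Step through u and return the nth thing passing a crude physical test
    for being an observer; picking out the nth entry costs roughly K(n) extra bits."""
    count = 0
    for region in u.enumerate_spacetime_regions():             # hypothetical interface to u
        if (region.persistent_low_entropy()                    # stays ordered relative to surroundings
                and region.matter_energy_throughput()          # low entropy in, high entropy out
                and region.structured_electrical_discharges()  # many internally structured discharges
                and region.discharges_track_environment()):    # discharges correlate with nearby events
            count += 1
            if count == n:
                return region
    return None
```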

Upvoted for acknowledging a counterintuitive consequence, and "biting the bullet".

One of the most striking things about anthropics is that (seemingly) whatever approach is taken, there are very weird conclusions. For example: Doomsday arguments, Simulation arguments, Boltzmann brains, or a priori certainties that the universe is infinite. Sometimes all at once.
