Comment author: Irgy 11 August 2015 06:56:05AM 0 points [-]

I think this shows how the whole "language independent up to a constant" thing is basically just a massive cop-out. It's very clever for demonstrating that complexity is a real, definable thing, with properties which at least transcend representation in the infinite limit. But as you show it's useless for doing anything practical.

My personal view is that there's a true universal measure of complexity which AIXI ought to be using, and which wouldn't have these problems. It may well be unknowable, but AIXI is intractable anyway so what's the difference? In my opinion, this complexity measure could give a real, numeric answer to seemingly stupid questions like "You see a number. How likely is it that the number is 1 (given no other information)?". Or it could tell us that 16 is actually less complex than, say, 13. I mean really, it's 2^2^2, spurning even a need for brackets. I'm almost certain it would show up in real life more often than 13, and yet who can even show me a non-contrived language or machine in which it's simpler?
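One can at least gesture at this numerically. Below is a toy brute force of my own construction (nothing like a true universal complexity measure, and the atom set {1, 2, 3} and operators are arbitrary choices): count the fewest atoms, combined with +, * and **, needed to express a number. Under that made-up measure 16 does come out simpler than 13.

```python
from collections import defaultdict

def shortest_descriptions(max_val=1000, max_cost=5):
    """best[n] = fewest atoms from {1, 2, 3}, combined with +, * and **,
    needed to express n (operator and bracket costs ignored)."""
    by_cost = defaultdict(set)
    best = {}
    for a in (1, 2, 3):
        by_cost[1].add(a)
        best[a] = 1
    for cost in range(2, max_cost + 1):
        for c1 in range(1, cost):
            for x in by_cost[c1]:
                for y in by_cost[cost - c1]:
                    results = [x + y, x * y]
                    if y <= 20:  # keep powers from exploding
                        results.append(x ** y)
                    for v in results:
                        if v <= max_val and v not in best:
                            best[v] = cost
                            by_cost[cost].add(v)
    return best

best = shortest_descriptions()
print(best[16], best[13])  # 16 = 2**2**2 needs 3 atoms; 13 needs 4 (e.g. 3*3 + 2*2)
```

Of course this only shows that *some* non-contrived language ranks 16 below 13, which is exactly the kind of example the paragraph above is asking for.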

Incidentally, the "hell" scenario you describe isn't as unlikely as it at first sounds. I remember an article here a while back lamenting the fact that, left unmonitored, AIXI could easily kill itself with exploration, the result of which would have a very similar reward profile to what you describe as "hell". It seems like AIXI is both too cautious and not cautious enough in even just this one scenario.

Comment author: ksvanhorn 05 August 2015 04:48:02PM *  1 point [-]

Actually, no, improper priors such as you suggest are not part of the foundations of Bayesian probability theory. It's only legitimate to use an improper prior if the result you get is the limit of the results you get from a sequence of progressively more diffuse priors that tend to the improper prior in the limit. The Marginalization Paradox is an example where just plugging in an improper prior without considering the limiting process leads to an apparent contradiction. My analysis (http://ksvanhorn.com/bayes/Papers/mp.pdf) is that the problem there ultimately stems from non-uniform convergence.
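A minimal sketch of that limiting requirement, using a toy conjugate-normal example of my own (not from the paper): with n observations from N(theta, 1) and a proper prior N(0, tau^2), the flat improper prior is the tau -> infinity limit, and the posterior mean converges smoothly to the sample mean, so here the improper-prior answer really is the limit of proper-prior answers.

```python
def posterior_mean(xbar, n, tau):
    """Conjugate-normal posterior mean for theta, given n observations
    with sample mean xbar from N(theta, 1) and a proper prior N(0, tau^2).
    As tau -> infinity this tends to xbar, the flat-prior answer."""
    return n * xbar / (n + 1 / tau**2)

for tau in (1, 10, 1000):
    print(tau, posterior_mean(2.0, 5, tau))  # tends to xbar = 2.0
```

The Marginalization Paradox cases are precisely the ones where this kind of convergence fails to be uniform, so the plug-in improper answer and the limit disagree.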

I've had some email discussions with Scott Aaronson, and my conclusion is that the Dice Room scenario really isn't an appropriate metaphor for the question of human extinction. There are no anthropic considerations in the Dice Room, and the existence of a larger population from which the kidnap victims are taken introduces complications that have no counterpart when discussing the human extinction scenario.

You could formalize the human extinction scenario with unrealistic parameters for growth and generational risk as follows:

  • Let n be the number of generations for which humanity survives.

  • The population in each generation is 10 times as large as the previous generation.

  • There is a risk 1/36 of extinction in each generation. Hence, P(n = N | n >= N) = 1/36.

  • You are a randomly chosen individual from the entirety of all humans who will ever exist. Specifically, P(you belong to generation g | n) = 10^g / S, where S is the sum of 10^t for 1 <= t <= n.

Analyzing this problem, I get

P(extinction occurs in generation t | extinction no earlier than generation t) = 1/36

P(extinction occurs in generation t | you are in generation t) = about 9/10

That's a vast difference depending on whether or not we take into account anthropic considerations.
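Those two numbers can be checked directly. The sketch below is my own verification, not ksvanhorn's: it puts the geometric prior P(n = k) = (35/36)^(k-1) * (1/36) on the extinction generation, applies the stated within-history weighting, and truncates the infinite sum at a large cutoff.

```python
from fractions import Fraction

P, Q = Fraction(1, 36), Fraction(35, 36)

def S(k):
    """Total population through generation k: sum of 10^j for j = 1..k."""
    return Fraction(10 * (10**k - 1), 9)

def p_doom_given_your_generation(t, max_n=200):
    """P(extinction occurs in generation t | you are in generation t).
    Posterior over n >= t, weighting each n by the chance that a person
    drawn uniformly from all of history lands in generation t."""
    w = [Q**(k - 1) * P * Fraction(10**t) / S(k) for k in range(t, max_n + 1)]
    return w[0] / sum(w)

print(float(p_doom_given_your_generation(5)))  # ~0.90
```

The non-anthropic 1/36 is immediate from the memorylessness of the geometric prior; the anthropic answer comes out near 0.90 because, conditional on your being in generation t, almost all the posterior weight sits on n = t.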

The Dice Room analogy would be if the madman first rolled the dice until he got snake-eyes, then went out and kidnapped a bunch of people, randomly divided them into n batches, each 10 times larger than the previous, and murdered the last batch. This is a different process than what is described in the book, and results in different answers.

Comment author: Irgy 10 August 2015 07:49:07AM 0 points [-]

Thanks, interesting reading.

Fundamental or not, I think my point still stands that "the prior is infinite so the whole thing's wrong" isn't quite enough of an argument, since you still seem to conclude that improper priors can be used if handled carefully enough. A more satisfying argument would be to demonstrate that the 9/10 case can't be made without incorrect use of an improper prior. Though I guess it's still showing where the problem most likely is, which is helpful.

As far as being part of the foundations goes, I was just going by the fact that it's in Jaynes, but you clearly know a lot more about this topic than I do. I would be interested to know your answer to the following questions though: "Can a state of ignorance be described without the use of improper priors (or something mathematically equivalent)?", and "Can Bayesian probability be used as the foundation of rational thought without describing states of ignorance?".

On the Doomsday argument, I would only take the Dice Room as a metaphor, not a proof of anything, but it does help me realise a couple of things. One is that the setup you describe, of a potentially endless exponentially growing population, is not a reasonable model of reality (irrespective of the parameters themselves). The growth has to stop, or at least converge, at some point, even without a catastrophe.

It's interesting that the answer changes if he rolls the dice first. I think ultimately the different answers to the Dice Room correspond to different ways of handling the infinite population correctly - i.e. taking limits of finite populations. For any finite population there needs to be an answer to "what does he do if he doesn't roll snake-eyes in time?" and different choices, for all that you might expect them to disappear in the limit, lead to different answers.
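That limit-dependence is easy to exhibit with exact arithmetic. The sketch below is my own construction: it caps the madman at a fixed number of batches and compares two made-up answers to "what does he do if he doesn't roll snake-eyes in time?" — release everyone, or murder the final batch anyway — computing the probability that a uniformly random kidnapped person, drawn from the whole ensemble of runs, is murdered.

```python
from fractions import Fraction

P, Q = Fraction(1, 36), Fraction(35, 36)

def total(k):
    """People kidnapped through batch k, where batch j has 10^j people."""
    return Fraction(10 * (10**k - 1), 9)

def death_prob(cap, if_no_snake_eyes):
    """P(a uniformly random kidnapped person is murdered), computed as
    E[murdered] / E[kidnapped] over the whole ensemble of runs."""
    murdered = sum(Q**(k - 1) * P * 10**k for k in range(1, cap + 1))
    kidnapped = (sum(Q**(k - 1) * P * total(k) for k in range(1, cap + 1))
                 + Q**cap * total(cap))  # runs that exhaust the cap
    if if_no_snake_eyes == "murder_last_batch":
        murdered += Q**cap * 10**cap
    return float(murdered / kidnapped)

print(death_prob(30, "release"), death_prob(30, "murder_last_batch"))
```

With the release convention the answer stays near the per-roll odds (about 0.03); murdering the final batch anyway pushes it to about 0.9. Same dice, different limiting choices, different answers.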

If the dice having already been rolled is the best analogy for the Doomsday argument, then it's making quite particular statements about causality and free will.

Comment author: Irgy 30 July 2015 05:02:10AM *  1 point [-]

To my view, the 1/36 is "obviously" the right answer, what's interesting is exactly how it all went wrong in the other case. I'm honestly not all that enlightened by the argument given here nor in the links. The important question is, how would I recognise this mistake easily in the future? The best I have for the moment is "don't blindly apply a proportion argument" and "be careful when dealing with infinite scenarios even when they're disguised as otherwise". I think the combination of the two was required here, the proportion argument failed because the maths which normally supports it couldn't be used without at some point colliding with the partly-hidden infinity in the problem setup.

I'd be interested in more development of how this relates to anthropic arguments. It does feel like it highlights some of the weaknesses in anthropic arguments. It seems to strongly undermine the doomsday argument in particular. My take on it is that it highlights the folly of the idea that population is endlessly exponentially growing. At some point that has to stop regardless of whether it has yet already, and as soon as you take that into account I suspect the maths behind the argument collapses.

Edit: Just another thought. I tried harder to understand your argument and I'm not convinced it's enough. Have you heard of ignorance priors? They're the prior you use, in fact the prior you need to use, to represent a state of no knowledge about a measurement other than an invariance property which identifies the type of measurement it is. So an ignorance prior for a position is constant, for a scale is 1/x, and for a probability has been at least argued to be 1/(x(1-x)). These all have the property that their integral is infinite, but they work because as soon as you add some knowledge and apply Bayes' rule the result becomes integrable. These are part of the foundations of Bayesian probability theory. So while I agree with the conclusion, I don't think the argument that the prior is unnormalisable is sufficient proof.
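The "becomes integrable once you add data" property can be checked in a toy case of my own (the single observation x = 1 and the grid bounds are arbitrary choices): for a scale parameter sigma with the improper 1/sigma prior, the prior has infinite mass, but the posterior kernel after one observation from N(0, sigma^2) is sigma^-2 * exp(-1/(2 sigma^2)), which integrates to sqrt(pi/2).

```python
import math

def kernel(s):
    """Unnormalised posterior for a scale s: improper prior 1/s times
    the N(0, s^2) likelihood of a single observation x = 1."""
    return s**-2 * math.exp(-1 / (2 * s**2))

# Trapezoid rule on a log grid from 1e-8 to 1e6 (tail beyond is ~1e-6).
n, lo, hi = 200_000, math.log(1e-8), math.log(1e6)
h = (hi - lo) / n
vals = []
for i in range(n + 1):
    s = math.exp(lo + i * h)
    vals.append(kernel(s) * s)  # change of variables: ds = s d(log s)
mass = h * (sum(vals) - 0.5 * (vals[0] + vals[-1]))
print(mass, math.sqrt(math.pi / 2))  # both ~1.2533
```

So one data point is already enough to tame the 1/x scale prior, which is exactly the sense in which these priors "work".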

Comment author: Irgy 21 July 2015 04:24:02AM 1 point [-]

My prior expectation would be: a long comment from a specific user has more potential to be interesting than a short one, because it has more content; but a concise commenter has more potential to write interesting comments of a given length than a verbose commenter.

So while long comments might on average be rated higher, a shorter version of a given comment may well rate higher than a longer version would have. It seems like this result does nothing to contradict that view, but in the process it seems to suggest people should write longer comments. The problem is that verbosity is per-person while information content is per-comment. Also, verbosity in general can't be separated from other personal traits that lead to better comments.

You could test this by having people write both long and short versions of comments that appear to different pools of readers and comparing the ratings.

In response to comment by Irgy on Crazy Ideas Thread
Comment author: Lumifer 16 July 2015 02:24:47PM 2 points [-]

there's a difference between thinking briefly and abstractly of the idea of something and indulging in fantasy about it.

Yes, of course, there is a whole range of, let's say, involvement in these thoughts. But if I understand mainstream Catholicism correctly, even a brief lustful glance at the neighbor's wife is a sin. Granted, a lesser sin than constructing a whole porn movie in your head, but still a sin.

In response to comment by Lumifer on Crazy Ideas Thread
Comment author: Irgy 16 July 2015 11:29:45PM 1 point [-]

Well that's why I called it steel-manning, I can't promise anything about the reasonableness of the common interpretation.

In response to comment by Val on Crazy Ideas Thread
Comment author: Lumifer 10 July 2015 02:22:27PM 3 points [-]

you actually cannot prevent yourself from thinking about robbing a bank

But you think you can prevent desire from sneaking into your thinking about sinful things..? ;-)

In response to comment by Lumifer on Crazy Ideas Thread
Comment author: Irgy 16 July 2015 07:05:13AM 2 points [-]

In the interest of steel-manning the Christian view; there's a difference between thinking briefly and abstractly of the idea of something and indulging in fantasy about it.

If you spend hours imagining the feel of the gun in your hand, the sound of the money sliding smoothly into the bag, the power and control, the danger and excitement, it would be fair to say that there's a point where you could have made the choice to stop.

Comment author: Irgy 30 January 2014 05:43:27AM 3 points [-]

Another small example. I have a clock near the end of my bed. It runs 15 minutes fast. Not by accident: it's been reset many times and then set back to 15 minutes fast. I know it's fast, we even call it the "rocket clock". None of this knowledge diminishes its effectiveness at getting me out of bed sooner, and making me feel more guilty for staying up late. Works very well.

Glad to discover I can now rationalise it as entirely rational behaviour and simply the dark side (where "dark side" only serves to increase perceived awesomeness anyway).

Comment author: Irgy 11 December 2013 10:25:27PM 2 points [-]

Daisy isn't in a loop at all. There's apparently evidence for Dark, and that is tempered by the fact that its existence indicates a failing on Dark's part.

For Bob, to make an analogy, imagine Bob is wet. For you, that is evidence that it is raining. It could be argued that being wet is evidence that it's raining for Bob as well. But generally speaking Bob will know why Bob is wet. Given the knowledge of why Bob is wet, the wetness itself is masked off and no longer relevant. If Bob has just had a bath, then being wet no longer constitutes any evidence of rain. If Bob was outside and water fell on him from the sky, it probably did rain, but his being wet no longer constitutes any additional evidence in that case either (well, ok, it has some value still as confirmation of his memory, but it's orders of magnitude less relevant).
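That "masking off" is just conditional independence, and it can be checked with made-up numbers (mine, purely illustrative): rain and bath as independent causes, Bob wet iff at least one occurred. Being wet raises the probability of rain, but once the bath is known, wetness tells you nothing more.

```python
# Toy numbers (made up) for Bob's situation.
P_RAIN, P_BATH = 0.3, 0.5

def prob(rain, bath):
    """Joint probability of one (rain, bath) world; causes independent."""
    return (P_RAIN if rain else 1 - P_RAIN) * (P_BATH if bath else 1 - P_BATH)

worlds = [(r, b) for r in (0, 1) for b in (0, 1)]
wet = lambda r, b: r or b  # wet iff rained on or bathed

p_rain_given_wet = (sum(prob(r, b) for r, b in worlds if wet(r, b) and r)
                    / sum(prob(r, b) for r, b in worlds if wet(r, b)))
# Conditioning on the known cause screens the wetness off:
p_rain_given_wet_bath = (sum(prob(r, b) for r, b in worlds if wet(r, b) and b and r)
                         / sum(prob(r, b) for r, b in worlds if wet(r, b) and b))
print(round(p_rain_given_wet, 3), round(p_rain_given_wet_bath, 3))  # 0.462 0.3
```

Given the bath, P(rain) drops back to its prior 0.3: the wetness carries no evidence beyond its known cause, which is Bob's situation with respect to his belief in Bright.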

Similarly Bob should ask "Why do I believe in Bright?". The answer to that question contains all the relevant evidence for Bright's existence, and given that answer Bob's actual belief no longer constitutes evidence either way. With that answer, there is no longer a loop for Bob either.

One final point, you have to consider the likelihood of belief in case 4. If you would expect some level of belief in sorcerers in Faerie even when there are no sorcerers, then case 4 doesn't fall behind as much as you might think. Once you've got both Bob and Daisy, case 4 doesn't just break even, it's actually way ahead.

Comment author: Irgy 22 November 2013 05:24:40AM 38 points [-]

I found myself genuinely confused by the question "You are a certain kind of person, and there's not much that can be done either way to really change that" - not by the general vagueness of the statement (which I assume is all part of the fun) but by a very specific issue, the word "you". Is it "you" as in me? Or "you" as in "one", i.e. a hypothetical person essentially referring to everyone? I interpreted it the first way then changed my mind after reading the subsequent questions, which seemed to be more clearly using it the second way.

Comment author: Ishaan 19 November 2013 04:50:48AM *  1 point [-]

you appear to have missed the point of my reply.

Let's check: "I can only have preferences over things that exist. The ship probably exists, because my memory of its departure is evidence. The parallel worlds have no similar evidence for their existence." Is that correct paraphrasing?

Before the ship leaves, you know that sometime in the future there will be a future-ship in a location where it cannot interact with future-you.

By the same token, you can observe the laws of physics and the present-state of the universe. If, for some reason, your interpretation of those laws involves Many Worlds splitting off from each other, then, before the worlds split, you know that sometime in the future there will be a future-world unable to interact with future you.

For future-you, the existence of the future-ship is not a testable theory, but the fact that you have a memory of the ship leaving counts as evidence.

For future-you, the existence of the Other-Worlds is not a testable theory, but if Many-Worlds is your best model, then your memory of the past-state of the universe, combined with your knowledge of physics, counts as evidence for the existence of certain specific other worlds.

In your Faeries example, the Faeries do not merit consideration because it is impossible to get evidence for their existence. That's not true in the quantum bomb scenario - if we accept Many Worlds, then for the survivors of the quantum bomb, the memory of the existence of a quantum bomb is evidence that there exist many branches with Other Worlds in which everyone was wiped out by the bomb.

So, the actual question should be:

1) Does Many-Worlds fit in our ontology - as in, do universes on other branches constructed in the Many-World format even fit within the definition of "Reality" or not? (For example, if you told me there was a parallel universe which never interacted with us in any way, I'd say that your universe wasn't Real by definition. Many Worlds branches are a gray area because they do interact, but current Other Worlds only interact with the past and the present only interacts with future Other Worlds, not current ones )

2a) If we decide that the Other Worlds from Many Worlds qualify as "Real", can Many Worlds ever be a hypothesis which is Parsimonious enough to not be Pascal-Wager-ish? The Faeries qualify as "Real" because they do cause the raindrops to fall, but because of the nature of that hypothesis it can never be parsimonious enough to rise above Pascal-Wager-thresholds. Is Many-Worlds the same way? (From your answer, I gathered that your answer is "yes", but I disagreed with your reason - see paragraph that begins with "In your Faeries example..." which is why I pointed out that if you accept Many Worlds then you can have evidence that points to certain sorts of worlds existing in my first reply.)

2b) If we decide that the other branches do not qualify as Real, can we make a definition of reality that does not exclude light-cone-leaving-spaceships?

3) And how do we construct our preferences, in relation to what we have defined as "Real"? (For example, we could simply say that despite having an ontology that acknowledges all the branches of Many Worlds as Real, our preferences only care about the world that we end up in.)

Comment author: Irgy 20 November 2013 06:38:03AM *  -1 points [-]

Let's check: "I can only have preferences over things that exist. The ship probably exists, because my memory of its departure is evidence. The parallel worlds have no similar evidence for their existence." Is that correct paraphrasing?

No, not really. I mean, it's not that far from something I said, but it's departing from what I meant and it's not in any case the point of my reply. The mistake I'm making is persisting in trying to clarify a particular way of viewing the problem which is not the best way and which is leading us both down the garden path. Instead, please forget everything else I said and consider the following argument.

Theories have two aspects: testable predictions, and descriptive elements. I would (and I think the sequences support me) argue that two theories which make the same predictions are not different theories, they are the same theory with different flavour. In particular, you should never make a different decision under one theory than under the other. Many Worlds is a flavour of quantum mechanics, and if that choice of flavour affects ethical decisions then you are making different decisions according to the flavour rather than the content of the theory, and something has gone wrong.

Everything else I said was intended solely to support that point, but somewhere along the way we got lost arguing about what's observable, what constitutes evidence and meta-ethics. If you accept that argument then I have no further point to make. If you do not accept it, then please direct comments at that argument directly rather than at anything else I've said.

I'll try to address the rest of your reply with this in mind in the hopes that it's helpful.

If ... your interpretation of those laws involves Many Worlds

You could equally have said "If your interpretation of the physics of raindrops involves fairies". My point is that no-one has any justification for making that assumption. Quantum physics is a whole bunch of maths that models the behaviour of particles on a small scale. Many Worlds is one of many possible descriptions of that maths that help us understand it. If you arbitrarily assume your description is a meaningful property of reality then sure, everything else you say follows logically, but only because the mistake was made already.

You compare Many Worlds to fairies in the wrong place, in particular post-arbitrary-assumption for Many Worlds and pre-arbitrary-assumption for fairies. I'll give you the analogous statements for a correct comparison:

the Faeries do not merit consideration because it is impossible to get evidence for their existence

The people of other worlds do not merit consideration because it is impossible to get evidence of their existence.

if we accept Many Worlds...

If we accept fairies...

... the memory of the existence of a quantum bomb is evidence that there exist many branches with Other Worlds in which everyone was wiped out by the bomb

... the sight of a raindrop falling is evidence that there exists a fairy a short distance away.
