Less Wrong is a community blog devoted to refining the art of human rationality.

Comment author: Eliezer_Yudkowsky 11 August 2013 07:26:15PM 2 points [-]

Maybe you should just quickly glance at http://lesswrong.com/lw/q7/if_manyworlds_had_come_first/. The mysterious force that eats all of the wavefunction except one part is something to which I assign a probability similar to the one I assign to God - there is just no reason to believe in it except poorly justified elite opinion, and I don't believe in elite opinions that I think are poorly justified.

Comment author: OccamsTaser 11 August 2013 07:42:02PM 5 points [-]

But the main (if not only) argument you make for many worlds in that post and the others is the ridiculousness of collapse postulates. Now I'm not disagreeing with you, collapses would defy a great deal of convention (causality, relativity, CPT-symmetry, etc.), but even with 100% confidence in this (as a hypothetical), you still wouldn't be justified in assigning 99+% confidence to many worlds. There exist single-world interpretations without a collapse, against which you haven't presented any arguments. Bohmian mechanics would seem to be the most plausible of these (given the LW census). Do you still assign <1% likelihood to this interpretation, and if so, why?

Comment author: fractalman 21 July 2013 05:38:54AM *  -2 points [-]

How much do you know about many worlds, anyway? My alternate self very much does exist; the technical term is a possibility-cloud, which will eventually diverge noticeably but which for now is just barely distinguishable from me.

There you go.

Comment author: OccamsTaser 21 July 2013 07:19:13AM 1 point [-]

My alternate self very much does exist

Given that many-worlds is true, yes. Invoking it kind of defeats the purpose of the decision theory problem though, as it is meant as a test of reflective consistency (i.e. you are supposed to assume you prefer $100>$0 in this world regardless of any other worlds).

Comment author: [deleted] 14 July 2013 08:37:33PM *  1 point [-]

MWI distinguishes itself from Copenhagen by making testable predictions. We simply don't have the technology yet to test them to a sufficient level of precision as to distinguish which meta-theory models reality.

See: http://www.hedweb.com/manworld.htm#unique

In the meantime, there are strong metaphysical reasons (Occam's razor) to trust MWI over Copenhagen.

In response to comment by [deleted] on Probability is in the Mind
Comment author: OccamsTaser 14 July 2013 09:14:21PM 3 points [-]

In the meantime, there are strong metaphysical reasons (Occam's razor) to trust MWI over Copenhagen.

Indeed there are, but this is not the same as strong metaphysical reasons to trust MWI over all alternative explanations. In particular, EY argued quite forcefully (and rightly so) that collapse postulates are absurd, as they would be the only "nonlinear, non CPT-symmetric, acausal, FTL, discontinuous..." part of all physics. He then argued that since all single-world QM interpretations are absurd (a non sequitur on his part, as not all single-world QM interpretations involve a collapse), many-worlds wins as the only multi-world interpretation (which is also slightly inaccurate, not that many-minds is taken that seriously around here). Ultimately, I feel that LW assigns too high a prior to MW (and too low a prior to Bohmian mechanics).

Comment author: DSherron 28 June 2013 10:53:07PM 1 point [-]

Yes, 0 is no more a probability than 1 is. You are correct that I do not assign 100% certainty to the idea that 100% certainty is impossible. The proposition is of precisely that form, though - that it is impossible: I would expect to find that it was simply not true at all, rather than expect to see it almost always hold true but sometimes break down. In any case, yes, I would be willing to make many such bets. I would happily accept a bet of one penny, right now, against a source of effectively limitless resources, for one example.

As to what probability you assign; I do not find it in the slightest improbable that you claim 100% certainty in full honesty. I do question, though, whether you would make literally any bet offered to you. Would you take the other side of my bet; having limitless resources, or a FAI, or something, would you be willing to bet losing it in exchange for a value roughly equal to that of a penny right now? In fact, you ought to be willing to risk losing it for no gain - you'd be indifferent on the bet, and you get free signaling from it.

Comment author: OccamsTaser 29 June 2013 12:16:38AM *  0 points [-]

Would you take the other side of my bet; having limitless resources, or a FAI, or something, would you be willing to bet losing it in exchange for a value roughly equal to that of a penny right now? In fact, you ought to be willing to risk losing it for no gain - you'd be indifferent on the bet, and you get free signaling from it.

Indeed, I would bet the world (or many worlds) that (A→A) to win a penny, or even to win nothing but reinforced signaling. In fact, refusal to use 1 and 0 as probabilities can lead to being money-pumped (or at least exploited; I may be misusing the term "money-pump"). Let's say you assign a 1/10^100 probability that your mind has a critical logic error of some sort, causing you to bound probabilities to the range [1/10^100, 1-1/10^100]. You can now be Pascal's mugged if the payoff offered is greater than the amount asked for by a factor of at least 10^100. If you claim the probability is less than 1/10^100 due to a leverage penalty or any other reason, you are admitting that your brain is capable of being more certain than the aforementioned bound allows (and such a scenario can be set up for any such number).
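The exploit described above can be sketched numerically. This is a toy illustration with invented numbers (the floor of 1/10^100, the one-dollar stake, and the mugger's payout are all assumptions for the example): an agent whose credences are clamped away from 0 must compute a positive expected value for any sufficiently extravagant offer.

```python
from fractions import Fraction

# Assumed credence floor: the agent refuses to assign any probability
# below FLOOR or above 1 - FLOOR.
FLOOR = Fraction(1, 10**100)

def expected_value(p_win, payout, stake):
    """Expected value of paying `stake` for a `payout` received with probability `p_win`."""
    return p_win * payout - stake

stake = 1             # one dollar handed to the mugger
payout = 2 * 10**100  # promised payout, more than 10^100 times the stake

# The bounded agent cannot assign less than FLOOR to the mugger's claim,
# so the computed expected value comes out positive and it "should" pay.
ev = expected_value(FLOOR, payout, stake)
print(ev > 0)  # True
```

Exact rational arithmetic (`Fraction`) is used so the astronomically small probability isn't rounded to zero by floating point, which would mask exactly the effect at issue.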

Comment author: DSherron 28 June 2013 06:48:15PM 0 points [-]

Sure, it sounds pretty reasonable. I mean, it's an elementary facet of logic, and there's no way it's wrong. But, are you really, 100% certain that there is no possible configuration of your brain which would result in you holding that A implies not A, while feeling the exact same subjective feeling of certainty (along with being able to offer logical proofs, such that you feel like it is a trivial truth of logic)?

Remember that our brains are not perfect logical computers; they can make mistakes. Trivially, there is some probability of your brain entering into any given state for no good reason at all due to quantum effects. Ridiculously unlikely, but not literally 0. Unless you believe with absolute certainty that it is impossible to have the subjective experience of believing that A implies not A in the same way you currently believe that A implies A, then you can't say that you are literally 100% certain. You will feel 100% certain, but this is a very different thing than actually literally possessing 100% certainty.

Are you certain, 100%, that you're not brain damaged and wildly misinterpreting the entire field of logic? When you posit certainty, there can be literally no way that you could ever be wrong. Literally none. That's an insanely hard thing to prove, and subjective experience cannot possibly get you there. You can't be certain about what experiences are possible, and that puts some amount of uncertainty into literally everything else.

Comment author: OccamsTaser 28 June 2013 07:29:39PM 2 points [-]

So by that logic I should assign a nonzero probability to ¬(A→A). And if something has nonzero probability, you should bet on it if the payout is sufficiently high. Would you bet any amount of money or utilons at any odds on this proposition? If not, then I don't believe you truly believe 100% certainty is impossible. Also, 100% certainty can't be impossible, because impossibility implies that it is 0% likely, which would be a self-defeating argument. You may find it highly improbable that I can truly be 100% certain. What probability do you assign to me being able to assign 100% probability?

Comment author: DSherron 28 June 2013 04:39:49PM 3 points [-]

"Exist" is meaningful in the sense that "true" is meaningful, as described in EY's The Simple Truth. I'm not really sure why anyone cares about saying something with probability 1 though; no matter how carefully you think about it, there's always the chance that in a few seconds you'll wake up and realize that even though it seems to make sense now, you were actually spouting gibberish. Your brain is capable of making mistakes while asserting that it cannot possibly be making a mistake, and there is no domain on which this does not hold.

Comment author: OccamsTaser 28 June 2013 06:13:01PM 1 point [-]

Your brain is capable of making mistakes while asserting that it cannot possibly be making a mistake, and there is no domain on which this does not hold.

I must raise an objection to that last point: there is at least one domain on which this does not hold. For instance, my belief that A→A is easily 100%, and there is no way for this to be a mistake. If you don't believe me, substitute A="2+2=4". Similarly, I can never be mistaken in saying "something exists", because for me to be mistaken about it, I'd have to exist.
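As an aside, the tautology in question is machine-checkable; a minimal sketch of both it and the suggested substitution, written in Lean, would be:

```lean
-- A → A holds for any proposition A: the identity function is the proof term.
example (A : Prop) : A → A := fun a => a

-- The suggested substitution A = "2+2=4" is likewise provable by computation.
example : 2 + 2 = 4 := rfl
```

Of course, this only relocates the commenters' dispute: the proof checker is itself a physical system, so accepting its verdict with probability 1 presupposes the very reliability being debated.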

Comment author: ygert 28 June 2013 02:38:23PM 1 point [-]

That doesn't answer it. You still had the thought, even with some time lapse. But even if you somehow say that doesn't count, a trivial fix that the supposition cannot answer would be: "There is some entity [even if only a simulation] that is having at least a portion of this thought."

In response to comment by ygert on Infinite Certainty
Comment author: OccamsTaser 28 June 2013 03:03:42PM 2 points [-]

If the goal here is to make a statement to which one can assign probability 1, how about this: something exists. That would be quite difficult to contradict (although it has been done by non-realists).

Comment author: gwern 23 June 2013 03:36:12AM -1 points [-]

However, freedom to deviate creates diversity (among possibly other advantageous traits) and over-adaptation to one's environment can cause a species to "put all its eggs in one basket" and eventually become extinct.

You seem to be ascribing magical properties to one source of randomness. What special 'diversity' is being caused by 'free will' that one couldn't get by, say, cutting back a little bit on DNA repair and error-checking mechanisms? Or by amplifying thermal noise? Or by epileptic fits?

(Bonus points: energy and resource savings. Free will and no DNA error checking, two great flavors that go great together!)

Comment author: OccamsTaser 23 June 2013 04:28:36AM -1 points [-]

You seem to be ascribing magical properties to one source of randomness.

Free will is not the same as randomness.

What special 'diversity' is being caused by 'free will' that one couldn't get by, say, cutting back a little bit on DNA repair and error-checking mechanisms? Or by amplifying thermal noise? Or by epileptic fits?

Diversity that each individual agent is free to optimize.

Comment author: leplen 23 June 2013 01:24:03AM 2 points [-]

Slightly off-topic, but I don't want to start another free-will thread...

Would free-will represent an evolutionary cost?

If free will means that your decisions are not driven solely by your stimulus inputs, then it seems to me that a creature with free will is by definition less responsive to its environment than a creature without it. A creature that is less responsive to its environment should be out-competed by one that is more responsive, ceteris paribus.

Even assuming that free-will is possible, is it likely, or would we expect "free-will" genes to get eliminated from the gene pool?

Comment author: OccamsTaser 23 June 2013 02:25:41AM 1 point [-]

If we assume being reactive to one's environment is purely advantageous (with no negative effects when taken to the extreme), then yes, it would have died out (theoretically). However, freedom to deviate creates diversity (among possibly other advantageous traits) and over-adaptation to one's environment can cause a species to "put all its eggs in one basket" and eventually become extinct.

Comment author: OccamsTaser 18 June 2013 08:49:14PM 5 points [-]

Ultimately, I think what this question boils down to is whether to expect "a sample" or "a sample within which we live" (i.e. whether or not the anthropic argument applies). Under MWI, anthropics would be quite likely to hold. On the other hand, if there is only a single world, it would be quite unlikely to hold (as your not living is a possible outcome, whether you could observe it or not). In the former case, we've received no evidence that MAD works. In the latter, however, we have received such evidence.
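The distinction can be made concrete with a toy Bayesian update. All the numbers below are invented purely for illustration (the prior and the two survival likelihoods are assumptions, not estimates): on the single-world reading our survival is ordinary evidence and raises the posterior that MAD deters war, while on the anthropic reading survival is guaranteed for any observer and so carries no information.

```python
# Assumed toy numbers for illustration only.
p_mad_works = 0.5          # prior that MAD actually deters nuclear war
p_survive_if_works = 0.99  # chance we'd have survived the Cold War if it does
p_survive_if_fails = 0.2   # chance we'd have survived if it doesn't

# Single-world reading: survival is ordinary evidence, so apply Bayes' rule.
p_survive = (p_mad_works * p_survive_if_works
             + (1 - p_mad_works) * p_survive_if_fails)
posterior_single = p_mad_works * p_survive_if_works / p_survive

# Anthropic reading: observers only exist conditional on survival,
# so observing survival leaves the prior unchanged.
posterior_anthropic = p_mad_works

# The single-world posterior rises above the prior; the anthropic one doesn't.
print(round(posterior_single, 3), posterior_anthropic)
```

With these made-up inputs the single-world posterior climbs to roughly 0.83 while the anthropic posterior stays at the 0.5 prior, which is exactly the "evidence vs. no evidence" split the comment describes.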
