All of OccamsTaser's Comments + Replies

But the main (if not only) argument you make for many worlds in that post and the others is the ridiculousness of collapse postulates. Now I'm not disagreeing with you; collapses would defy a great deal of convention (causality, relativity, CPT-symmetry, etc.), but even with 100% confidence in this (as a hypothetical), you still wouldn't be justified in assigning 99+% confidence in many worlds. There exist single-world interpretations without a collapse, against which you haven't presented any arguments. Bohmian mechanics would seem to be the most plausible of these (given the LW census). Do you still assign <1% likelihood to this interpretation, and if so, why?

1Eliezer Yudkowsky
Obvious rationalizations of single-world theories have no more evidence in their favor, no more reason to be believed; it's like Deism vs. Jehovah. Sure, the class 'Deism' is more probable but it's still not credible in an absolute sense (and no, Matrix Lords are not deities, they were born at a particular time, have limited domains and are made of parts). You can't start with a terrible idea and expect to find >1% rationalizations for it. There's more than 100 possible terrible ideas. Single-world QM via collapse/Copenhagen/shut-up was originally a terrible idea and you shouldn't expect terrible ideas to be resurrectable on average. Privileging the hypothesis.

(Specifically: Bohm has similar FTL problems and causality problems and introduces epiphenomenal pointers to a 'real world' and if the wavefunction still exists (which it must because it is causally affecting the epiphenomenal pointer, things must be real to be causes of real effects so far as we know) then it should still have sentient observers inside it. Relational quantum mechanics is more awful amateur epistemology from people who'd rather abandon the concept of objective reality, with no good formal replacement, than just give up already. But most of all, why are we even asking that question or considering these theories in the first place? And again, simulated physics wouldn't count because then the apparent laws are false and the simulations would presumably be of an original universe that would almost certainly be multiplicitous by the same reasoning; also there'd presumably be branches within the sim, so not single-world which is what I specified.)

If you can assign <1% probability to deism (the generalized abstracted class containing Jehovahism) then there should be no problem with assigning <1% probability to all single-world theories.

My alternate self very much does exist

Given that many-worlds is true, yes. Invoking it kind of defeats the purpose of the decision theory problem though, as it is meant as a test of reflective consistency (i.e. you are supposed to assume you prefer $100>$0 in this world regardless of any other worlds).

In the mean time, there are strong metaphysical reasons (Occam's razor) to trust MWI over Copenhagen.

Indeed there are, but this is not the same as strong metaphysical reasons to trust MWI over all alternative explanations. In particular, EY argued quite forcefully (and rightly so) that collapse postulates are absurd as they would be the only "nonlinear, non CPT-symmetric, acausal, FTL, discontinuous..." part of all physics. He then argued that since all single-world QM interpretations are absurd (a non-sequitur on his part, as not all single-w…

2[anonymous]
It's not just about collapse - every single-world QM interpretation either involves extra postulates, non-locality or other surprising alterations of physical law, or yields falsified predictions. The FAQ I linked to addresses these points in great detail. MWI is simple in the Occam's razor sense - it is what falls out of the equations of QM if you take them to represent reality at face value. Single-world meta-theories require additional restrictions which are at this time completely unjustified by the data.

Would you take the other side of my bet; having limitless resources, or a FAI, or something, would you be willing to bet losing it in exchange for a value roughly equal to that of a penny right now? In fact, you ought to be willing to risk losing it for no gain - you'd be indifferent on the bet, and you get free signaling from it.

Indeed, I would bet the world (or many worlds) that (A→A) to win a penny, or even to win nothing but reinforced signaling. In fact, refusal to use 1 and 0 as probabilities can lead to being money-pumped (or at least exploited, …

2wedrifid
The usage of "money-pump" is correct. (Do note, however, that using 1 and 0 as probabilities when you in fact do not have that much certainty also implies the possibility for exploitation, and unlike the money pump scenario you do not even have the opportunity to learn from the first exploitation and self correct.)
4DSherron
That's not how decision theory works. The bounds on my probabilities don't actually apply quite like that. When I'm making a decision, I can usefully talk about the expected utility of taking the bet, under the assumption that I have not made an error, and then multiply that by the odds of me not making an error, adding the final result to the expected utility of taking the bet given that I have made an error. This will give me the correct expected utility for taking the bet, and will not result in me taking stupid bets just because of the chance I've made a logic error; after all, given that my entire reasoning is wrong, I shouldn't expect taking the bet to be any better or worse than not taking it.

In shorter terms: EU(action) = EU(action & ¬error) + EU(action & error); also EU(action & error) = EU(anyOtherAction & error), meaning that when I compare any 2 actions I get EU(action) - EU(otherAction) = EU(action & ¬error) - EU(otherAction & ¬error). Even though my probability estimates are affected by the presence of an error factor, my decisions are not.

On the surface this seems like an argument that the distinction is somehow trivial or pointless; however, the critical difference comes in the fact that while I cannot predict the nature of such an error ahead of time, I can potentially recover from it iff I assign >0 probability to it occurring. Otherwise I will never ever assign it anything other than 0, no matter how much evidence I see. In the incredibly improbable event that I am wrong, given extraordinary amounts of evidence I can be convinced of that fact. And that will cause all of my other probabilities to update, which will cause my decisions to change.
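A minimal numeric sketch of the decomposition above (the error probability, the utilities, and the bet itself are made-up illustrative values, not anything from the thread), showing that the error term cancels out of any comparison between actions:

    # Sketch of EU(action) = P(no error)*EU(action | no error) + P(error)*EU(action | error),
    # with the assumption (as in the comment) that given an error, all actions look alike.
    p_error = 1e-6            # assumed chance my whole analysis is broken
    eu_given_error = 0.0      # assumed value of any action, given broken reasoning

    def expected_utility(eu_if_correct):
        return (1 - p_error) * eu_if_correct + p_error * eu_given_error

    eu_take_bet = expected_utility(0.01)   # e.g. win a penny if the reasoning holds
    eu_decline  = expected_utility(0.0)    # status quo

    # The ranking of actions depends only on the "no error" terms:
    assert (eu_take_bet > eu_decline) == (0.01 > 0.0)

The error factor shifts every action's expected utility by the same constant, so it never changes which action wins, exactly as the comment argues.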

So by that logic I should assign a nonzero probability to ¬(A→A). And if something has nonzero probability, you should bet on it if the payout is sufficiently high. Would you bet any amount of money or utilons at any odds on this proposition? If not, then I don't believe you truly believe 100% certainty is impossible. Also, 100% certainty can't be impossible, because impossibility implies that it is 0% likely, which would be a self-defeating argument. You may find it highly improbable that I can truly be 100% certain. What probability do you assign to me being able to assign 100% probability?

2DSherron
Yes, 0 is no more a probability than 1 is. You are correct that I do not assign 100% certainty to the idea that 100% certainty is impossible. The proposition is of precisely that form though, that it is impossible - I would expect to find that it was simply not true at all, rather than expect to see it almost always hold true but sometimes break down. In any case, yes, I would be willing to make many such bets. I would happily accept a bet of one penny, right now, against a source of effectively limitless resources, for one example.

As to what probability you assign: I do not find it in the slightest improbable that you claim 100% certainty in full honesty. I do question, though, whether you would make literally any bet offered to you. Would you take the other side of my bet; having limitless resources, or a FAI, or something, would you be willing to bet losing it in exchange for a value roughly equal to that of a penny right now? In fact, you ought to be willing to risk losing it for no gain - you'd be indifferent on the bet, and you get free signaling from it.
1Watercressed
When I say 100% certainty is impossible, I mean that there are no cases where assigning 100% to something is correct, but I have less than 100% confidence in this claim. It's similar to the claim that it's impossible to travel faster than the speed of light.
0Kawoomba
If any agent within a system were able to assign a 1 or 0 probability to any belief about that system being true, that would mean that the map-territory divide would have been broken. However, since that agent can never rule out being mistaken about its own ontology or its reasoning mechanism (following an invisible, if vanishingly unlikely, internal failure), it can never gain final certainty about any feature of the territory, although it can get arbitrarily close.
2ialdabaoth
A lot of this is a framing problem. Remember that anything we're discussing here is in human terms, not (for example) raw Universal Turing Machine tape-streams with measurable Kolmogorov complexities. So when you say "what probability do you assign to me being able to assign 100% probability", you're abstracting a LOT of little details that otherwise need to be accounted for.

I.e., if I'm computing probabilities as a set of propositions, each of which is a computable function that might predict the universe and a probability that I assign to whether it accurately does so, and in all of those computable functions my semantic representation of 'probability' is encoded as log odds with finite precision, then your question translates into a function which traverses all of my possible worlds, looks to see if one of those probabilities that refers to your self-assigned probability is encoded as the number 'INFINITY', multiplies that by the probability that I assigned that world being the correct one, and then tabulates. Since "encoded as log odds with finite precision" and "encoded as the number 'INFINITY'" are not simultaneously possible given certain encoding schemes, this really resolves itself to "do I encode floating-point numbers using a mantissa notation or other scheme that allows for values like +INF/-INF/+NaN/-NaN?" Which sounds NOTHING like the question you asked, but the answers do happen to perfectly correlate (to within the precision allowed by the language we're using to communicate right now).

Did that make sense?
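For concreteness, a toy sketch of the encoding point (the log-odds scheme here is an assumption chosen for illustration, not ialdabaoth's actual representation): probability 1 corresponds to log odds of +infinity, so whether "assigning 100%" is even expressible reduces to whether the number format supplies a special infinite value.

    import math

    def prob_to_log_odds(p):
        # log(p / (1 - p)); p = 1 has no finite log odds
        return math.log(p / (1 - p))

    def log_odds_to_prob(l):
        return 1.0 / (1.0 + math.exp(-l))

    print(prob_to_log_odds(0.999999))   # ~13.8, an ordinary finite value
    # prob_to_log_odds(1.0) divides by zero: "certainty" exists in this encoding
    # only if the float format happens to supply +inf.
    print(log_odds_to_prob(math.inf))   # 1.0 -- IEEE floats do allow it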

Your brain is capable of making mistakes while asserting that it cannot possibly be making a mistake, and there is no domain on which this does not hold.

I must raise an objection to that last point: there is at least one domain on which this does not hold. For instance, my belief that A→A is easily 100%, and there is no way for this to be a mistake. If you don't believe me, substitute A="2+2=4". Similarly, I can never be mistaken in saying "something exists" because for me to be mistaken about it, I'd have to exist.

0Kawoomba
You could be mistaken about logic; a demon might be playing tricks on you, etc. You can say "Sherlock Holmes was correct in his deduction." That does not rely on Sherlock Holmes actually existing; it's just noting a relation between one concept (Sherlock Holmes) and another (a correct deduction).
0DSherron
Sure, it sounds pretty reasonable. I mean, it's an elementary facet of logic, and there's no way it's wrong. But are you really 100% certain that there is no possible configuration of your brain which would result in you holding that A implies not A, while feeling the exact same subjective feeling of certainty (along with being able to offer logical proofs, such that you feel like it is a trivial truth of logic)? Remember that our brains are not perfect logical computers; they can make mistakes. Trivially, there is some probability of your brain entering into any given state for no good reason at all due to quantum effects. Ridiculously unlikely, but not literally 0.

Unless you believe with absolute certainty that it is impossible to have the subjective experience of believing that A implies not A in the same way you currently believe that A implies A, you can't say that you are literally 100% certain. You will feel 100% certain, but this is a very different thing from actually literally possessing 100% certainty. Are you certain, 100%, that you're not brain damaged and wildly misinterpreting the entire field of logic? When you posit certainty, there can be literally no way that you could ever be wrong. Literally none. That's an insanely hard thing to prove, and subjective experience cannot possibly get you there. You can't be certain about what experiences are possible, and that puts some amount of uncertainty into literally everything else.

If the goal here is to make a statement to which one can assign probability 1, how about this: something exists. That would be quite difficult to contradict (although it has been done by non-realists).

0Decius
What evidence convinces you now that something exists? What would the world look like if it were not the case that something existed? Imagine yourself as a brain in a jar, without the brain and the jar. Would you remain convinced that something existed if confronted with a world that had evidence against that proposition?
3lavalamp
Is "exist" even a meaningful term? My probability on that is highish but no where near unity.

You seem to be ascribing magical properties to one source of randomness.

Free will is not the same as randomness.

What special 'diversity' is being caused by 'free will' that one couldn't get by, say, cutting back a little bit on DNA repair and error-checking mechanisms? Or by amplifying thermal noise? Or by epileptic fits?

Diversity that each individual agent is free to optimize.

[This comment is no longer endorsed by its author]
0gwern
In your usage, there is nothing distinguishing free will from randomness. Huh? How is your link at all related? What freedom to optimize does 'free will' give you that a RNG or PRNG of any kind, from thermal fluctuations to ionizing or cosmic radiation, does not?

If we assume being reactive to one's environment is purely advantageous (with no negative effects when taken to the extreme), then yes it would have died out (theoretically). However, freedom to deviate creates diversity (among possibly other advantageous traits) and over-adaptation to one's environment can cause a species to "put all its eggs in one basket" and eventually become extinct.

-1gwern
You seem to be ascribing magical properties to one source of randomness. What special 'diversity' is being caused by 'free will' that one couldn't get by, say, cutting back a little bit on DNA repair and error-checking mechanisms? Or by amplifying thermal noise? Or by epileptic fits? (Bonus points: energy and resource savings. Free will and no DNA error checking, two great flavors that go great together!)

Ultimately, I think what this question boils down to is whether to expect "a sample" or "a sample within which we live" (i.e. whether or not the anthropic argument applies). Under MWI, anthropics would be quite likely to hold. On the other hand, if there is only a single world, it would be quite unlikely to hold (as you not living is a possible outcome, whether you could observe it or not). In the former case, we've received no evidence that MAD works. In the latter, however, we have received such evidence.

1ThisSpaceAvailable
I don't see what your reasoning is (and I find "anthropics would hold" to be ambiguous). Can you explain? Suppose half the worlds adopt a strategy that is certain to avoid war, and half adopt one that has a 50% chance. Of the worlds without war, 2/3 have adopted a strategy that is certain to avoid war. Therefore, anyone in a world without war should have their confidence that they are in a world that has adopted a strategy that is certain to avoid war go from 1/2 to 2/3 upon seeing war fail to develop.
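For what it's worth, the arithmetic in the comment above checks out; here is the Bayes computation spelled out (the priors and likelihoods are exactly the ones stated there, nothing new):

    # P(sure-thing strategy | no war observed), starting from a 1/2 prior
    prior_sure, prior_coin = 0.5, 0.5
    p_no_war_given_sure = 1.0
    p_no_war_given_coin = 0.5

    posterior_sure = (prior_sure * p_no_war_given_sure) / (
        prior_sure * p_no_war_given_sure + prior_coin * p_no_war_given_coin)

    print(posterior_sure)   # 0.666..., i.e. confidence moves from 1/2 to 2/3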
5Mestroyer
That is an excellent username. Welcome to LessWrong.

I propose a variation of FairBot; let's call it two-tiered FairBot (TTF).

If the opponent cooperates iff I cooperate, cooperate
else, if the opponent cooperates iff (I cooperate iff the opponent cooperates), check to see if the opponent cooperates and cooperate iff he/she does*
else, defect.

It seems to cooperate against any "reasonable" agents, as well as itself (unless there's something I'm missing), while defecting against CooperateBot. Any thoughts?

*As determined by proof check.

3orthonormal
It cooperates with CooperateBot, for the same Löbian reason that FairBot does. The substatement "if I defected, then CooperateBot will defect" is actually true because you cooperate (and "if false, then false" is a tautology).
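A tiny propositional check of that point (purely illustrative bookkeeping; it stands in for, and is much weaker than, the actual provability-logic argument):

    def implies(p, q):
        # material conditional: "if p then q"
        return (not p) or q

    i_cooperate = True          # TTF ends up cooperating, per the Löbian argument
    opponent_cooperates = True  # CooperateBot always cooperates

    # "The opponent cooperates iff I cooperate", split into two conditionals.
    forward  = implies(i_cooperate, opponent_cooperates)          # trivially true
    backward = implies(not i_cooperate, not opponent_cooperates)  # vacuously true: "if false, then false"

    assert forward and backward

Because the agent does in fact cooperate, the "if I defected, then CooperateBot will defect" branch is never exercised and holds vacuously, which is all the biconditional test needs.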