Comment author: Luke_A_Somers 06 December 2011 06:13:18AM *  2 points [-]

I see two ways of resolving this. Both are valid, as far as I can tell. The first assumes nothing, but may not satisfy. The second only assumes that we even expect the theory to speak of probability.

1

Well, QM says what's real. It's out there. There are many ways of interpreting this thing. Among those ways is the Born Rule. If you take that way, you may notice our world, and in turn, us. If you don't look at it that way, you won't notice us - much as you won't notice the mind in a computer implementing a GAI if you only ever use it as a cup holder. Yet, that interpretation can be made, and moreover it's compact and yields a lot.

So, since that interpretation can be made, apply the generalized anti-zombie principle - if it acts like a sapient being, it's a sapient being... And it'll perceive the universe only under interpretations under which it is a sapient being. So the Born Rule isn't a general property of the universe. It's a property of our viewpoint.

2

Just from decoherence, without bringing in Born's rule, we get the notion that sections of configuration space are splitting up and never coming back together again. If we're willing to take from that the notion that this splitting should map onto probabilities, then there is exactly one way of mapping from relative weights of splits onto probabilities, such that the usual laws of probability apply correctly. In particular:

1) probabilities are not always equal to zero.

2) the probability of a decoherent branch doesn't change after its initial decoherence (if it could change, it wouldn't be decoherent), and the rules are the same everywhere, and in every direction, and at every speed, and so on.

The simplest way to achieve this is to go with 'unitary operations don't shift probabilities, just change their orientation in Hilbert space'. If we require that the probability rule be simpler than the physical theory it's to apply to (i.e. quantum mechanics itself), it's the only one, since all of the other candidates effectively take QM, nullify it, and replace it with something else. Being able to freely apply unitary operations implies that the probability is a function only of component amplitude, not orientation in Hilbert space.

3) given exclusive possibilities A and B, P(A or B) = P(A) + P(B).

These three are sufficient.

Given a labeling b on states, we have |psi> = sum_b A(b) |b>

Define for brevity the capital letters J, K, and M as the vector components of |psi> in particular dimensions j, k, or m. For example, K = A(k) |k>

It is possible (and natural, in the language of decoherence) to choose the labeling b such that each decoherent branch gets exactly one dimension (at some particular moment - it will propagate into some other dimension later, even before it decoheres again). Now, consider two recently decohered components, K' and M'. By running time backwards to before the split, we get the original K and M. Back at that time, we would have seen this as a different, single coherent component, J = K + M.

P(J) = P(K + M) must be equal to P(K) + P(M)

This could have occurred in any dimension, so we make this requirement general.

So, consider instead the ways of projecting a vector J into two orthogonal vectors, K and M. As seen above, the probability of J must not be changed by this re-projection. Let theta be the angle between J and M.

K = sin(theta) A(j) |k>

M = cos(theta) A(j) |m>

By condition (2), P(x) is a function of amplitude, not the vectors, so we can simplify the P ( J ) statement to:

P(A(j)) = P(sin(theta) A(j)) + P(cos(theta) A(j))

This must hold as a function of theta, and for any A(j). The Pythagorean identity sin^2(theta) + cos^2(theta) = 1 picks out the one way to achieve this:

P(x) = C x* x for some constant C.

Since the probabilities are not identically zero, we know that C is not zero.

This, you may note, is the Born Probability Rule.
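A quick numerical sanity check (my own sketch, not part of the original derivation) that P(x) = C x* x satisfies the splitting equation above for every angle and amplitude, while a rival candidate like P(x) = |x| does not:

```python
import math
import random

def born_p(x, C=1.0):
    # Candidate rule: P(x) = C * conj(x) * x = C * |x|^2
    return C * (x.conjugate() * x).real

random.seed(0)
for _ in range(1000):
    A = complex(random.uniform(-2, 2), random.uniform(-2, 2))  # amplitude A(j)
    theta = random.uniform(0, 2 * math.pi)                     # projection angle
    lhs = born_p(A)
    rhs = born_p(math.sin(theta) * A) + born_p(math.cos(theta) * A)
    assert abs(lhs - rhs) < 1e-12  # sin^2 + cos^2 = 1 does the work

# By contrast, P(x) = |x| already fails at theta = pi/4:
# abs(A) = 1 but sin(pi/4)*1 + cos(pi/4)*1 = sqrt(2)
assert abs(1.0 - 2 * math.sin(math.pi / 4)) > 0.4
```

The check only probes the functional equation; the uniqueness claim (that no other sufficiently simple rule works) is the surrounding argument's, not the code's.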

Comment author: GDC3 31 March 2012 02:19:28AM 0 points [-]

1 and 2 together are pretty convincing to me. The intuition runs like this: it seems pretty hard to construct anything like an observer without probabilities, so there are only observers inasmuch as one is looking at the world according to the Born Rule view. So an easy anthropic argument says that we should not be surprised to find ourselves within that interpretation.

Comment author: Will_Sawin 29 December 2010 10:46:56PM 7 points [-]

This is a misinterpretation. The argument goes like this:

True statement: There is lots of evidence for cells. P(Evidence|Cells)/P(Evidence|~Cells) >> 1.

False statement: Without intelligent design, cells could only be produced by random chance. P(Cells|~God) is very very small.

Debatable statement: P(Cells|God) is large.

Conclusion: We update massively in favor of God and against ~God, because of, not in opposition to, the massive evidence in favor of the existence of cells.

This is valid Bayesian updating, it's just that the false statement is false.
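The odds form of Bayes' theorem makes the structure of this update explicit. A minimal sketch with made-up numbers (the specific probabilities below are illustrative assumptions, not claims from the comment):

```python
def posterior_odds(prior_odds, p_e_given_h, p_e_given_not_h):
    # Odds-form Bayes: posterior odds = prior odds * likelihood ratio
    return prior_odds * (p_e_given_h / p_e_given_not_h)

# Take the (false) premise P(Cells|~God) = 1e-9 and the debatable
# P(Cells|God) = 0.5 at face value, with assumed prior odds of 0.01:
post = posterior_odds(0.01, 0.5, 1e-9)
# The huge likelihood ratio drives a massive update toward God.
# The updating step itself is valid Bayes; the flaw lies entirely
# in the premise that P(Cells|~God) is tiny.
```

With an honest P(Cells|~God) (i.e. one that accounts for non-random processes like chemistry and selection), the likelihood ratio collapses and so does the update.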

Comment author: GDC3 31 March 2012 12:29:47AM *  1 point [-]

Upvoted for successfully correcting my confusion about this example and helping me get updating a little better.

Edit: wow, this was a really old comment reply. How did I just notice it...

Comment author: Eliezer_Yudkowsky 24 March 2012 12:02:45AM 13 points [-]

Cleverness-related failure mode (that actually came up in the trial unit):

One shouldn't try too hard to rescue non-consequentialist reasons. This probably has to be emphasized especially with new audiences who associate "rationality" with Spock and university professors, or audiences who've studied pre-behavioral economics and think they score extra points if they come up with amazingly clever ways to rescue bad ideas.

Any decision-making algorithm, no matter how stupid, can be made to look like expected utility maximization through the transform "Assign infinite negative utility to departing from decision algorithm X". This in essence is what somebody is doing when they say, "Aha! But if I stop my PhD program now, I'll have the negative consequence of having abandoned a sunk cost!" (Sometimes I feel like hitting people with a wooden stick when they do this, but that act just expresses an emotion rather than having any discernible positive consequences.) This is Cleverly Failing to Get the Point if "not wanting to abandon a sunk cost", i.e., the counterintuitive feel of departing from the brain's previous decision algorithm, is treated as an overriding consideration, i.e., an infinite negative utility.

It's a legitimate future consequence only if the person says, "The sense of having abandoned a sunk cost will make me feel sick to my stomach for around three days, after which I would start to adjust and adapt a la the hedonic treadmill". In this case they have weighed the intensity and the duration of the future hedonic consequence, rather than treating it as an instantaneous infinite negative penalty, and are now ready to trade that off against other and probably larger considerations like the total amount of work required to get a PhD.
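The distinction above amounts to weighing a bounded, time-limited hedonic cost against other considerations, instead of assigning it infinite weight. A toy sketch (every number below is hypothetical, purely to show the shape of the comparison):

```python
# Bounded hedonic cost of quitting: "sick to my stomach for around
# three days" at some made-up disutility rate, then hedonic adaptation.
days_of_regret = 3
regret_per_day = 10.0  # disutility units per day (assumed)
cost_of_quitting = days_of_regret * regret_per_day

# Cost of staying: hypothetical remaining years of unwanted PhD work.
years_remaining = 2
dislike_per_day = 5.0  # disutility units per day (assumed)
cost_of_staying = years_remaining * 365 * dislike_per_day

# The regret is weighed as a finite quantity, not a veto:
should_quit = cost_of_quitting < cost_of_staying
```

The Clever failure mode is equivalent to setting cost_of_quitting to infinity, at which point no comparison can ever recommend departing from the previous decision.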

Comment author: GDC3 25 March 2012 02:33:29AM 6 points [-]

I think it's important to try to convert the reason to a consequentialist reason every time, actually; it's just that one isn't done at that point - you have to step back and decide if the reason is enough. As in the murder example, one needs to avoid dismissing reasons merely for being in the wrong format.

"I don't want to tell my boyfriend because he should already know" translates to: in the universe in which I tell my boyfriend he learns to rely on me to tell him these things a little more and his chance of doing this sort of thing without my asking decreases in the future. You then have to ask if this supposed effect is really true and if the negative consequence is strong enough, which depends on things like the chances that he'll eventually figure it out. But converting the reason gets you answering the right questions.

Sunk cost fallacy could be a sign that you don't trust your present judgement compared to when you made the original decision to put the resources in. The right question to ask is why you changed your mind so strongly that the degree isn't worth it even at significantly less additional cost. Is it because of new information, new values, new rationality skills, or just being in a bad mood right now?

An advantage is that you feel just as clever for coming up with the right questions whatever you decide, which ought to make it a bit easier to motivate yourself to implement this.

Comment author: GDC3 09 March 2012 09:08:57AM 0 points [-]

I have the same 5, except in place of 1 I have something linguistic but not auditory. I can break it down into a stream of "words" in an order but there isn't sound (nor visible words). The stream follows English grammar basically, and the "words" have English parts of speech but do not always correspond easily to English (or any other language I know) words. Sometimes there's a translation but it's not obvious to me, nor do my thoughts slow down thinking of it.

I can usually convert most of these thoughts into words by a paraphrase or translation, but I remember when I was a kid having many thoughts that I could memorize and repeat to myself but not successfully express in external language. A few of the most important ones I can remember now and translate.

Comment author: GDC3 08 March 2012 12:24:51AM 0 points [-]

What if I have a strong emotional response to the existence of a creature that would make up such a thing as a religion? I suppose it feels more poignant than transcendent, but I've always had strong tender feelings about other people's religious beliefs.

If the original mistake had never been made, it would not be referenced as a meme in fiction; but given that it is, mightn't I just as well enjoy God as a fictional character or a cultural tradition to reference but not believe?

I agree that hymns to the nonexistence of God are bad, but that's indeed because they're imitative and not genuinely expressive. But there are genuine emotional expressions to the very real existence of the idea of God. And I think they prove that the "would not exist without the underlying mistake" is too broad.

In response to A Proposed Litany
Comment author: GDC3 29 December 2010 09:46:04PM 1 point [-]

I think that there is a use of the negative emotion of disillusionment that you are missing. When you switch to a more negative belief about a person based on new information for example, simply thinking about them differently in the future is not enough to adjust your emotional relationship to what you now think is appropriate. The time you spent believing the positive lie still counts in their favor instinctually. The pain of disillusion corrects for that.

If Santa isn't real I want to retroactively cancel all of my fondness for him so that my history of believing in him can no longer influence me. That happening all at once hurts a lot. The motivation to face this pain is not just the desire for more knowledge. It has to be balanced by feeling an appropriate amount of fuzzies if my belief in Santa is confirmed by the experiment of pointing a hidden web cam at the fireplace. If we weren't loss averse, the two would cancel, for the same reason you can only try to test hypotheses rather than to confirm them.

You can't try to be legitimately disillusioned or the opposite. You can only try to gain knowledge. So satisfied curiosity breaks the tie rather than replaces disillusionment.

Comment author: Dan_Moore 16 December 2010 03:26:31PM 1 point [-]

I speculate there are at least two problems with the creationism odds calculation. First, it looks like the person doing the calculation was working with maybe 60,000 protein molecules rather than zillions of protein molecules.

The second problem, which I'm having trouble putting precisely into words, concerns the use of the uniform distribution as a prior. Sometimes the use of the uniform distribution as a prior seems to me entirely justified - for example, where there is a well-constructed model of subsequent outcomes.

Other times, when the model for subsequent outcomes is sketchy, the uniform distribution is used as a prior simply as a default. Or, as in this case, it's clearly not an appropriate prior. In this case, the person is probably assuming that all combinations of proteins are equally likely (I suspect this assumption is false.)
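To put rough numbers on why such uniform-prior calculations always come out astronomically small (the figures below are my own illustration, not the original calculation's):

```python
import math

# Under a uniform prior over all chains of length k built from an
# alphabet of size a, any one specific chain gets probability a**(-k).
# With numbers loosely in the range of proteins (20 amino acids, a
# hypothetical 100-residue chain):
a, k = 20, 100
log10_p = -k * math.log10(a)  # roughly -130, i.e. p ~ 10**-130

# But this tiny number only reflects the uniform model itself: if
# protein combinations are not equally likely (chemistry and selection
# bias the outcomes heavily), the calculation tells us nothing.
```

So the astronomically small probability is an artifact of choosing "all combinations equally likely" as the model, which is exactly the assumption in dispute.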

Comment author: GDC3 29 December 2010 09:29:31PM 1 point [-]

Isn't the problem more like: they are ignoring the huge number of bits of evidence that say that cells in fact exist. They aren't comparing between hypotheses that say cells exist. They are comparing the uniform prior for cells existing to the prior for only random proteins existing. They sound more like they are trying to argue that all our experiences cannot be enough evidence that there are cells, which seems weird.

Comment author: GDC3 29 December 2010 09:22:37AM 9 points [-]

Hi, I'm GDC3. Those are my initials. I'm a little nervous about giving my full name on the internet, especially because my dad is googleable and I'm named after him. (Actually we're both named after my grandfather, hence the 3.) But I go by G.D. in real life anyway, so it's not exactly not my name. I'm primarily working on learning math in advance of returning to college right now.

Sorry if this is TMI, but you asked: I became an aspiring rationalist because I was molested as a kid and I knew that something was wrong, but not what it was or how to stop it, and I figured that if I didn't learn how the world really worked, instead of just what people told me, stuff like that might keep happening to me. So I guess my something to protect was me.

My something to protect is still mostly me, because most of my life is still dealing with the consequences of that. My limbic system learned all sorts of distorted and crazy things about how the world works that my neocortex has to spend all of its time trying to compensate for. Trying to be a functional human being is a hard enough goal for now. I also value and care about eventually using this information to help other people who've had similar stuff happen to them. I value this primarily because I've pre-committed to valuing that so that the narrative would motivate me emotionally when I hate myself too much to motivate myself selfishly.

So I guess I self-modified my utility function. I actually was pretty willing to hurt other people to protect myself as a kid. I've made myself more altruistic not to feel less guilty (which would mean that I wasn't really as selfish as I thought I was), but to feel less alone. Which is plausible I guess, because I wasn't exactly a standard moral specimen as a kid.

I hope that was more interesting than upsetting. I think I can learn a lot from you guys if I can speak freely. I hope that I can contribute or at least constitute good outreach.

View more: Prev