Comment author: Patrick_(orthonormal) 02 June 2008 05:12:05PM 4 points

Eliezer,

I also think that considering the particular topics is helpful here. In the math book, you were pretty confident the statement was wrong once you discovered a clear formal proof, because essentially there's nothing more to be said.

On the interpretation of quantum mechanics, since you believe we have almost all the relevant data we'll ever have (save for observed superpositions of larger and larger objects) and the full criteria to decide between these hypotheses given that information, you again think that disagreement is unfounded.

(I suggest you make an exception in your analysis for Scott Aaronson et al., whose view as I understand it is that progress in his research is more important than holding the Best Justified Interpretation at all times, so long as the different interpretations have no consequences for that research; he therefore uses whichever one seems most helpful at the moment. This is more like asking a different valid question than getting the wrong answer to the same one.)

But on the prospects for General AI in the next century, well, there's all sorts of data you don't yet have that would greatly help, and others might have it; and updating on that data according to Bayes is intractable without significant assumptions. I think that explains your willingness to hear out Daniel Dennett (albeit with some skepticism).

Finally, I think that when it comes to religion you may be implicitly using the same second-order evaluation I've come around to. I still ascribe a nonzero chance to my old religion being true—I didn't find a knockdown logical flaw or something completely impossible in my experience of the world. I just came to the conclusion I didn't have a specific reason to believe it above others.

However, I'd refuse to give any such religion serious consideration from now on unless it became more than 50% probable to my current self, because taking up a serious religion changes one's very practice of rationality by making doubt a disvalue. Spending too much thought on a religion can get you stuck there, and it was hard enough leaving the first time around. That's a second-order phenomenon different from the others: entertaining the Copenhagen interpretation as a hypothesis doesn't strongly prevent you from discarding it later.

My best chance of finding the truth lies in the space of nonreligious answers rather than within any particular religion, so I can't let myself get drawn in. I do form an object-level bias against religion (akin to your outright dismissal of Aumann), but it's one I think is justified on a meta-level.

Comment author: Patrick_(orthonormal) 01 June 2008 02:18:34AM 0 points

Caledonian, I think Eliezer's going off his distinction (in Knowability of AI and elsewhere) between "optimal" and "optimized", a distinction that the colloquial senses of those words don't capture. There may be more nearly optimal ways of achieving our goals, but that doesn't take away from the fact that we regularly achieve results that

(1) we explicitly set out to achieve, (2) we can distinguish clearly from other results, and (3) would be incredibly unlikely to come about through random effort.

For instance, this comment isn't close to optimal, but it's optimized enough, as a coherent reply in a conversation, that you'd ascribe a decent level of intelligence to whatever optimization process produced it. You wouldn't, say, suspect I was a spambot, let alone a random word generator.
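
(To put a rough number on "incredibly unlikely": here's a minimal Python sketch, where the vocabulary size, reply length, and the generous count of acceptable replies are all made-up assumptions, of the chance that a uniform random word generator produces a coherent reply.)

    import math

    # Rough numbers only (both figures are illustrative assumptions):
    # the chance that a uniform random word generator emits one
    # particular coherent 60-word reply.
    vocab_size = 50_000
    comment_length = 60

    log10_p = -comment_length * math.log10(vocab_size)
    print(f"log10 P(this exact reply) = {log10_p:.0f}")  # about -282

    # Even granting an absurdly generous 10^100 distinct replies that
    # would count as coherent in context, random effort stays hopeless:
    print(f"log10 P(any coherent reply) = {log10_p + 100:.0f}")  # about -182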

In response to That Alien Message
Comment author: Patrick_(orthonormal) 22 May 2008 10:12:05PM 2 points

Bambi,

The 'you gotta believe me technology' remark was probably a reference to the AI-Box Experiment.

Phillip,

None of the defenses you mentioned are safe against something that can out-think their designers, any more than current Internet firewalls are really secure against smart and determined hackers.

And blocking protein nanotech is as limited a defense against AGI as prohibiting boxcutters on airplanes is against terrorist attacks in general. Eliezer presented it as the first idea he imagined for getting into physical space, not the only avenue.

Comment author: Patrick_(orthonormal) 21 May 2008 09:01:00AM 0 points

Frank, I think you have the idea that many-worlds means a bunch of parallel universes, each with a single past and future, like parallel train tracks. That is most emphatically not what the interpretation means. Rather*, all of the universes with my current state in their history are actual futures that the current me will experience (weighted by the Born probabilities).

If there's an event which I might or might not witness (but which won't interfere with my existence), then that's really saying that there are versions of me that witness it and versions of me that don't. But when it comes to death, the only versions of me that notice anything are the ones that notice they're still alive. So I really should anticipate waking up alive, but my family should anticipate my being dead the next day, because most of their future versions live in worlds where I've passed on.
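
(A minimal Python sketch of that asymmetry; the branch weights are made-up numbers standing in for Born measures, not estimates.)

    # Branches after tonight, from the perspective of "the current me":
    branches = {
        "I wake up alive": 0.001,  # assumed tiny surviving measure
        "I die overnight": 0.999,
    }

    # My family's successors exist in every branch, so they should
    # anticipate each outcome at its raw weight:
    print("family:", branches)

    # Only branches containing a successor of me contribute to what I
    # should anticipate experiencing, so condition on survival:
    alive = {k: w for k, w in branches.items() if k == "I wake up alive"}
    total = sum(alive.values())
    print("me:", {k: w / total for k, w in alive.items()})  # -> 1.0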

The conclusion above is contentious even among those who believe the many-worlds interpretation; however, the rejection of the 'parallel tracks' analogy is not contentious in the least. If (as you indicate) you think that you have one future and that the version of you who will be miraculously cured overnight isn't the same you, then you have misunderstood the many-worlds interpretation.

*This is an oversimplification and falsification, of course, but it's a damn sight closer than the other image.

Comment author: Patrick_(orthonormal) 19 May 2008 11:39:00PM 0 points

Bad analogy, actually. If I have an incurable terminal illness today and fall asleep, I'll still have an incurable terminal illness in most of the worlds in which I wake up, so I should assign a very low subjective probability to finding myself cured tomorrow. (Or, more precisely, the vast majority of the configurations that contain someone with all my memories up to that point will be ones in which I'm waking up the next day with the illness.)

I'm not quite sure how it might play out subjectively at the very end of life sans cryonics; this is where the idea of quantum suicide gets weird, with one-in-way-more-than-a-million chances subjectively coming to pass. However, if I'm signed up for cryonics, and if there's a significant chance I'll be revived someday, that probability far outweighs those weird possibilities for continued consciousness: in the vast majority of worlds where someone has my memories up to that point, that someone will be a revived post-cryonic me. Thus I should subjectively assign a high probability to being revived.
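
(Here's a toy Python sketch of that comparison; both weights are illustrative assumptions of mine, not estimates.)

    # Two channels by which a future observer with my memories could exist:
    weights = {
        "revived from cryonics": 0.05,  # assumed chance revival happens at all
        "freak 'quantum suicide' survival": 1e-9,
    }

    # Condition on there being any future observer with my memories:
    total = sum(weights.values())
    for outcome, w in weights.items():
        print(f"P({outcome} | I experience anything) = {w / total:.10f}")
    # Even a modest revival probability swamps the exotic continuations.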

Or so I think.

Comment author: Patrick_(orthonormal) 18 May 2008 11:57:00AM -1 points

Sorry to be late to the party, but has nobody yet mentioned the effect that MWI has on assessing cryonics from a personal standpoint, i.e. that your subjective probability of being revived should very nearly equal your probability estimate that revival happens in some universe? If 9/10 of future worlds destroy all cryogenic chambers, and 9/10 of the ones left don't bother to revive you, then it doesn't matter to you: you'll still wake up and find yourself in the hundredth world. Such factors only matter if you think your revival would be a significant benefit to the rest of humanity (rather unlikely, in my estimation).
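
(Making the toy arithmetic explicit in a short Python sketch; the 9/10 figures are the pure assumptions above, nothing more.)

    weights = {
        "chambers destroyed": 9 / 10,
        "preserved but never revived": (1 / 10) * (9 / 10),
        "revived": (1 / 10) * (1 / 10),  # the hundredth world
    }

    # Only the "revived" branches contain a waking successor of you, so
    # they get all of the subjective probability mass:
    mass = weights["revived"]
    print("objective:", weights["revived"])          # ~0.01
    print("subjective:", weights["revived"] / mass)  # 1.0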

(Yes, there are quirks to be discussed in this idea. I've thought about some of them already, but I might have missed others. Anyhow, it's getting early.)

Comment author: Patrick_(orthonormal) 12 May 2008 10:39:29PM 3 points

Does anyone else suspect that the last full paragraph is meant to give us the assignment for tomorrow morning?

As for my answers, I think that the particulars of this paradigm shift have to enter into it on some level, because as Eliezer pointed out earlier, the Schrödinger's Cat thought experiment really should have suggested the possibility of superposed observers to someone, and from there the MWI doesn't seem too remote.

So I'd have to ascribe the delay in the MWI proposal in great part to the fact that it doesn't immediately cohere with our subjective experience of consciousness, and that the physicists were culturally separated from other disciplines (including even philosophy and literature) that were proposing less naive interpretations of consciousness.

Comment author: Patrick_(orthonormal) 12 May 2008 02:32:38AM 1 point

Well, now I think I understand why you chose to do the QM series on OB. As it stands, the series is a long explication of one of the most subtle anthropocentric biases out there— the bias in favor of a single world with a single past and future, based on our subjective perception of a single continuous conscious experience. It takes a great deal of effort before most of us are even willing to recognize that assumption as potentially problematic.

Oh, and one doesn't even have to assume the MWI is true to note this; the single-world bias is irrationally strong in us even if it turns out to correspond to reality.

In response to On Being Decoherent
Comment author: Patrick_(orthonormal) 29 April 2008 03:01:32AM 2 points

I just wanted to say I've benefited greatly from this series, and especially from the last few posts. I'd studied some graduate quantum mechanics, but bailed out before Feynman paths, decoherence, etc.; and from what I'd seen of it, I was beginning to think an intuitive explanation of (one interpretation of) quantum mechanics was nigh-impossible. Thanks for proving me wrong, Eliezer.

The argument (from elegance/Occam's Razor) for the many-worlds interpretation seems impressively strong, too. I'll be interested to read the exchanges when you let the one-world advocates have their say.

Comment author: Patrick_(orthonormal) 05 March 2008 06:33:14PM 0 points

More to the point: (P or ~P) isn't a theorem; it's an axiom. It is (so far as we can tell) consistent with our other axioms, and it's absolutely necessary for many important theorems: any proof by contradiction relies on it, and some theorems, like Brouwer's Fixed Point Theorem, don't (IIRC) seem to be provable any other way. So we accept a few counterintuitive but consistent consequences like (G or ~G) as the price of doing business. (The Axiom of Choice, with the Banach-Tarski Paradox, is the same way.)
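
(A minimal sketch in Lean 4, assuming only its core Classical axioms, of how excluded middle enters as an axiom rather than a constructive theorem, and how proof by contradiction leans on it.)

    -- `p ∨ ¬p` is not derivable for an arbitrary `p` in Lean's
    -- constructive core; it's supplied by the Classical axioms
    -- (derived there from choice, echoing the Banach-Tarski remark:
    -- strong axioms buy non-constructive theorems).
    example (p : Prop) : p ∨ ¬p := Classical.em p

    -- Proof by contradiction for a *positive* statement silently
    -- relies on excluded middle: from ¬¬p, recover p.
    example (p : Prop) (h : ¬¬p) : p := Classical.byContradiction h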

OK, I've said enough on that tangent.
