LauraABJ comments on Normal Cryonics - Less Wrong

58 Post author: Eliezer_Yudkowsky 19 January 2010 07:08PM


Comment author: MichaelVassar 22 January 2010 04:27:01AM 5 points

I would be very surprised if uploading were easier than AI, maybe slightly more surprised than I would be by cold fusion being real, but with the sort of broad probabilities I use that's still a bit over 1%. AGI is terribly difficult too. It's not FAI or uploading, but very high-caliber people have failed at it over and over.

The status quo points to AGI before FAI, but the status quo continually changes, both due to trends and due to radical surprises. The world wouldn't have to change more radically than it has numerous times in the past for the sanity waterline to rise far enough that people capable of making significant progress towards AGI reliably understood that they needed to aim for FAI or for uploading instead. Once, Newton could unsurprisingly be a Christian theist and an alchemist. By the mid-20th century the priors against Einstein being a theist were phenomenal, and in fact he wasn't one (his Spinozism is closer to what we call atheism than what most people call atheism is). I don't think that extreme low probabilities are self-defeating for me, though they might be for some people; I just disagree with them.

Comment author: LauraABJ 22 January 2010 09:29:25PM 1 point

Your argument is interesting, but I'm not sure whether you arrived at your 1% estimate by specific reasoning about uploading/AI, or by simply arguing that paradigmatic 'surprises' occur frequently enough that we should never assign more than a 99% chance to something (theoretically possible) not happening.

I can conceive of many possible worlds (given AGI does not occur) in which the individual technologies needed to achieve uploading are all in place, and yet are never put together for that purpose due to general human revulsion. I can also conceive of global-political reasons that will throw a wrench in tech-development in general. Should I assign each of those a 1% probability just because they are possible?

Also, no offense meant to you or anyone else here, but I frequently wonder how much bias there is in this in-group of people who like to think about uploading/FAI towards believing that it will actually occur. It's a difficult thing to gauge, since it seems the people best qualified to answer questions about these topics are the ones most excited by and invested in the positive outcomes. I mean, if someone looks at the evidence and becomes convinced that the situation is hopeless, they are much less likely to get involved in bringing about a positive outcome and more likely to rationalize all this away as either crazy or likely to occur so far in the future that it won't bother them. Where do you go for an outside view?

Comment author: MichaelVassar 25 January 2010 05:36:20AM 5 points

Paradigmatic surprises vary a lot in how dramatic they are. X-rays and the double slit deserved WAY lower probabilities than 1%. I'm basically going on how convincing I find the arguments for uploading first and trying to maintain calibrated confidence intervals. I would not bet 99:1 against uploading happening first. I would bet 9:1 without qualm; I would probably bet 49:1. I find it very easy to tell personally credible stories (no outlandish steps) where uploading happens first for good reasons. The probability of any one of those stories happening may be much less than 1%, but they probably constitute exemplars of a large class.
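[Editorial note: to make the betting numbers above concrete, here is a minimal sketch of how "N:1 against" odds map to implied probabilities; the function name is illustrative, not from the thread.]

```python
# Converting "N:1 against" betting odds to implied probabilities.
# An N:1 bet against an event is fair when the event has probability 1/(N+1).

def odds_against_to_prob(n: float) -> float:
    """Implied probability of an event from N:1 odds against it."""
    return 1.0 / (n + 1.0)

print(odds_against_to_prob(9))    # 0.1  -> betting 9:1 implies 10% for uploading first
print(odds_against_to_prob(49))   # 0.02 -> 49:1 implies 2%
print(odds_against_to_prob(99))   # 0.01 -> 99:1 is the 1% level he declines to bet at
```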

Assigning only a 1% probability to uploading not happening in a given decade when it could happen, due to politics and/or revulsion, seems much too low. Decade-to-decade correlations could be pretty high, but not plausibly near 1, so given civilization's long-term survival uploading is inevitable once the required tech is in place; but it's silly to assume civilization's long-term survival.
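[Editorial note: the compounding argument above can be sketched with illustrative numbers (not from the thread): if decades were independent, even a high per-decade failure probability vanishes over a long horizon, which is why only correlation near 1 could block uploading indefinitely.]

```python
# Probability that uploading *never* happens across n decades, assuming each
# decade independently fails with probability p_fail. (Illustrative sketch;
# real decades are correlated, which pushes the long-run answer upward.)

def never_happens(p_fail_per_decade: float, decades: int) -> float:
    """Chance of no uploading in any of `decades` decades, under independence."""
    return p_fail_per_decade ** decades

print(never_happens(0.50, 10))  # 0.0009765625 -- even 50% per decade ~vanishes over a century
# With correlation near 1, all decades behave like one draw, so the long-run
# failure probability stays close to the single-decade value instead.
```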

I don't really think that outside views are that widely applicable a methodology, and if there isn't an obvious place to look for one, there probably isn't one. The buck for judgment and decision-making has to stop somewhere, and stopping with deciding on reference classes seems silly in most situations. That said, I share your concern. I'm sure that there is a bias in the community of interested people, but I think that the community's most careful thinkers can and do largely avoid it. I certainly think bad outcomes are more likely than good ones, but I think that the odds are around 2:1 rather than 100:1.

Comment author: Eliezer_Yudkowsky 25 January 2010 06:30:13AM 6 points

double slit deserved WAY lower probabilities than 1%

I think that was probably the greatest single surprise in the entire history of time.

Comment author: MichaelVassar 25 January 2010 11:35:20AM 1 point

Outside of pure math at least. Irrational numbers were a big deal.

Comment author: Eliezer_Yudkowsky 25 January 2010 04:55:14PM 0 points

Measured in the prior probability that was assigned or could justly have been assigned beforehand, I don't think irrational numbers come close.

Comment author: LauraABJ 25 January 2010 04:38:19PM 3 points

I'd be interested in seeing your reasoning written out in a top-level post. 2:1 seems beyond optimistic to me, especially if you give AI-before-uploading 9:1, but I'm sure you have your reasons. Explaining a few of these 'personally credible stories,' and what classes you place them in such that they sum to 10% total, may be helpful. This goes for why you think FAI has such a high chance of succeeding as well.

Also, I believe I used the phrase 'outside view' incorrectly, since I didn't mean reference classes. I was interested to know whether there are people who are not part of your community who help you with number-crunching on the tech side. An 'unbiased' source of probabilities, if you will.

Comment author: MichaelVassar 26 January 2010 05:11:02AM 3 points

I think of my community as essentially consisting of the people who are willing to do this sort of analysis, so almost axiomatically no.

The simplest reason for thinking that FAI is (relatively) likely to succeed is the same reason for thinking that slavery ending or world peace are more likely than one might assume from psychology or from economics, namely that people who think about them are unusually motivated to try to bring them about.