Belief is pretty unambiguous: either certainty (100% probability, like cogito ergo sum) or strong trust (anything well short of ~90% probability isn't belief). So it seems we are in agreement: you don't believe in it, and neither do most Less Wrong readers. I agree that, on that argument, whether the probability is 10^-1000 or 75% is still up for debate.
I think only a tiny minority of LessWrong readers believe in cryopreservation. If people genuinely believed in it, they would not wait until they were dying to preserve themselves: the cumulative risk of death or serious mental debilitation before cryopreservation would be significant, and the consequence would be the loss of (almost) eternal life, whereas with early cryopreservation all they would have to lose is their current, finite life in the "unlikely" event that they are not successfully reanimated. If people were actually trying to preserve themselves early, there would be a legal debate. There is none (unless I'm mistaken).
Further evidence against this argument is the tiny sums that people are willing to pay. How much would you pay for eternal life? More or less than $8,219 (the present value of an annual payment of $300 in perpetuity)? It sounds too cheap to be genuine, and too expensive to waste my money on. If I genuinely believed in cryopreservation I would be spending my net worth, which for most Americans over 75 years old is > $150k. For Less Wrong readers, I would guess the median net worth at age 75 would be > $1m.
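For reference, here is a minimal sketch of that present-value figure. The ~3.65% discount rate is my back-calculation from the $8,219 figure, not a number from the original comparison:

```python
# Present value of a perpetuity: PV = payment / discount_rate
annual_payment = 300.0   # annual payment ($)
discount_rate = 0.0365   # assumed; back-calculated from the $8,219 figure

pv_perpetuity = annual_payment / discount_rate
print(f"PV of ${annual_payment:.0f}/year in perpetuity: ${pv_perpetuity:,.0f}")
# -> PV of $300/year in perpetuity: $8,219
```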
What is the real probability? I think humans' poor track record at long-term prediction suggests we should admit we simply don't know. Cryopreservation might work. I wouldn't stake my life, or my money, on it, and I think there are more important jobs to do first.
I agree FAI should certainly be able to outclass human scientists in the creation of scientific theories and new technologies. This in itself has great value (at the very least we could spend happy years trying to follow the proofs).
My issue is that I think it will be insanely difficult to produce an AI, and I do not believe it will produce a utopian "singularity" where people would actually be happy. The same could be said of the industrial revolution. Regardless, my original post is borked; I concede the point.
Yeah, I can see that applies much better to intelligence than to processing speed: one might think that a super-genius intelligence could achieve things that a human intelligence could not. Gladwell's Outliers (embarrassing source) seems to refute this; his analysis seemed to show that IQ in excess of 130 did not contribute further to success. Geoffrey Miller hypothesised that intelligence is actually an evolutionary signal of biological fitness, in which case intellect is simply a sexual display. So my view is that a basic level of intelligence is useful, but excess intelligence is usually wasted.
To directly address your point: what I mean is that if you have one computer that you never use, with a 200MHz processor, I'd think twice about buying a 1.6GHz computer, especially if the 200MHz machine is suffering from depression due to its feelings of low status and worthlessness.
I probably stole from The Economist too.
Yes - thank you for the cite.
There is already a vast surplus of unused intelligence in the human race, so working on generalized AI is a waste of time (90%)
Edit: "waste of time" is careless, wrong and a bit rude. I just mean a working generalized AI would not make a major positive impact on humankind's well-being. The research would be fun, so it's not wasted time. Level of disagreement should be higher too - say ~95%.
Yes - this is exactly the point I was about to make. Another way of putting it is that an argument from authority is not going to cut the mustard in a dialogue (i.e. in a scientific paper, you will be laughed at if your evidence for a theory is another scientist's say-so), but as a personal heuristic it can work extremely well. While people sometimes "don't notice" the 900-pound gorilla in the room (the Catholic sex abuse scandal being a nice example), 99% of the things that I hear this argument used for turn out to be total tosh (e.g. Santilli's Roswell Alien Autopsy film, Rhine's ESP experiments). As Feynman probably didn't say, "Keep an open mind, but not so open that your brains fall out".
jhuffman's point made me think of the following devil's advocacy: if someone is very confident of cryonics, say more than 99% confident, then they should have themselves preserved before death. In fact they should have themselves preserved immediately; otherwise there is a higher risk that they will die in a way that destroys their mind than there is that cryonics will fail. The amount they are willing to pay would also be irrelevant, since they won't need the money once they are preserved. I appreciate that there are probably laws against preserving healthy adults, so this is strictly a thought experiment.
As people get older, their risk of death or brain damage increases. This means that as someone gets older, the confidence level at which they should seek early preservation decreases. Also, as someone gets older, their expected "natural" survival time decreases, by definition, so the payoff for not seeking early preservation is shrinking all the time. This seems to bring some force to the argument: if there is a 10% probability that cryonics will succeed, then I really can't see why anyone would let themselves get within 6 years of likely death. They are putting a second lifetime at risk for 6 years of less and less healthy life.
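A rough sketch of that comparison, with illustrative numbers only (the annual hazard rate and the waiting period are my assumptions, not data):

```python
# Compare the cumulative probability of a mind-destroying death while waiting
# against the probability that cryonics simply fails. All inputs are assumptions.

annual_risk_bad_death = 0.005   # assumed yearly chance of death that destroys the brain
years_waited = 6                # years of "natural" life gained by waiting
p_cryonics_fails = 0.90         # i.e. a 10% chance of success, as in the example above

p_lost_while_waiting = 1 - (1 - annual_risk_bad_death) ** years_waited
print(f"P(mind lost while waiting {years_waited} years): {p_lost_while_waiting:.1%}")
print(f"P(cryonics fails anyway): {p_cryonics_fails:.0%}")
# The argument above is that the first number grows with age while the payoff
# for waiting (remaining healthy years) shrinks.
```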
Finally, the confidence level relates to cost. If people can be shown to have a low level of confidence in cryonics, then their willingness to pay should be lower. The figures I've seen quoted require a sum of $150,000. (Whether this is paid via life insurance or not is irrelevant: you must pay for it in the premiums since, if you keep the insurance until you die, the probability of the insurer paying out is 100%.) If the probability of cryonics working is 10%, then the average cost of a successful reanimation is $1.5 million. This is a pretty conservative estimate, I think; doubtless for some who read this blog it is small change. Not for me, sadly :)
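The expected-cost arithmetic behind that $1.5 million figure, as a one-liner (the 10% success probability is the same assumption as above):

```python
# Expected cost per successful reanimation = price paid / probability of success.
price = 150_000      # quoted cryopreservation cost ($)
p_success = 0.10     # assumed probability that cryonics works

cost_per_success = price / p_success
print(f"Average cost per successful reanimation: ${cost_per_success:,.0f}")
# -> Average cost per successful reanimation: $1,500,000
```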
"The main weakness comes from the fact that almost every single two-bit futurist feels a need to make predictions, almost every single one of which goes for narrative plausibility and thus has massive issues with burdensome details and the conjunction fallacy." - no. The most intelligent and able forecasters are incapable of making predictions (many of them worked in the field of AI). Your argument about updating my probability upwards because I don't understand the future is fascinating. Can you explain why I can't use the precise same argument to say there is a 50% chance that Arizona will be destroyed by a super-bomb on January 1st 2018?