dilaudid

Belief is pretty unambiguous: either being sure of something (100% probability, like cogito ergo sum) or a strong trust (anything well short of 90% probability is not belief). So it seems we are in agreement: you don't believe in it, and neither do most Less Wrong readers. I agree that, on that argument, whether the probability is 10^-1000 or 75% is still up for debate.
I think only a tiny minority of Less Wrong readers believe in cryopreservation. If people genuinely believed in it, they would not wait until they were dying to preserve themselves: the cumulative risk of death or serious mental debilitation before cryopreservation would be significant, and its consequence would be the loss of (almost) eternal life, while by early cryopreservation all they have to lose is their current, finite life, in the "unlikely" event that they are not successfully reanimated. If people were actually trying to preserve themselves early, there would be a legal debate. There is none (unless I'm mistaken).
Further evidence against this argument is the tiny sums that people are willing to...
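To make the cumulative-risk point concrete, here is a minimal sketch in Python. The annual risk figure is a made-up assumption for illustration, not actuarial data; the point is only that even a small yearly risk compounds into a significant cumulative one over a few decades of waiting.

```python
# Illustrative sketch: cumulative probability of dying in a way that
# destroys the mind before a planned end-of-life cryopreservation.
# The annual risk is an assumed placeholder, not a real actuarial figure.

ANNUAL_RISK = 0.002  # assumed yearly chance of mind-destroying death

def cumulative_risk(years: int, p_annual: float = ANNUAL_RISK) -> float:
    """Probability of at least one such event over `years` years."""
    return 1 - (1 - p_annual) ** years

for years in (10, 20, 40):
    print(f"{years} years of waiting: {cumulative_risk(years):.1%}")
# 10 years: ~2.0%, 20 years: ~3.9%, 40 years: ~7.7%
```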
I agree FAI should certainly be able to outclass human scientists in the creation of scientific theories and new technologies. This in itself has great value (at the very least we could spend happy years trying to follow the proofs).
My issue is that I think it will be insanely difficult to produce an AI, and I do not believe it would produce a utopian "singularity" in which people would actually be happy. The same could be said of the industrial revolution. Regardless, my original post is borked. I concede the point.
Yeah, I can see that applies much better to intelligence than to processing speed - one might think that a super-genius intelligence could achieve things that a human intelligence could not. Gladwell's Outliers (an embarrassing source) seems to refute this: his analysis suggested that IQ in excess of 130 did not contribute further to success. Geoffrey Miller hypothesised that intelligence is actually an evolutionary signal of biological fitness - on this view, intellect is simply a sexual display. So my view is that a basic level of intelligence is useful, but excess intelligence is usually wasted.
To directly address your point: what I mean is that if you have one computer you never use, with a 200MHz processor, I'd think twice about buying a 1.6GHz computer - especially if the 200MHz machine is suffering from depression due to its feelings of low status and worthlessness.
I probably stole from The Economist too.
Yes - thank you for the cite.
There is already a vast surplus of unused intelligence in the human race, so working on generalized AI is a waste of time (90%)
Edit: "waste of time" is careless, wrong and a bit rude. I just mean a working generalized AI would not make a major positive impact on humankind's well-being. The research would be fun, so it's not wasted time. Level of disagreement should be higher too - say ~95%.
Yes - this is exactly the point I was about to make. Another way of putting it is that an argument from authority is not going to cut the mustard in a dialogue (e.g. in a scientific paper, you will be laughed at if your evidence for a theory is another scientist's say-so), but as a personal heuristic it can work extremely well. While people sometimes "don't notice" the 800-pound gorilla in the room (the Catholic sex abuse scandal being a nice example), 99% of the things that I hear this argument used for turn out to be total tosh (e.g. Santilli's Roswell alien autopsy film, Rhine's ESP experiments). As Feynman probably didn't say, "Keep an open mind, but not so open that your brains fall out."
jhuffman's point made me think of the following devil's advocacy: if someone is very confident of cryonics, say more than 99% confident, then they should have themselves preserved before death. In fact they should have themselves preserved immediately - otherwise the risk that they die in a way that destroys their mind is higher than the risk that cryonics fails. The amount they would be willing to pay would also be irrelevant - they won't need the money once they are preserved. I appreciate that there are probably laws against preserving healthy adults, so this is strictly a thought experiment.
As people get older their risk of...
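A rough way to quantify the thought experiment, under hypothetical numbers (both probabilities below are assumptions chosen for illustration): find the waiting period after which the cumulative risk of mind-destroying death overtakes the assumed <1% chance that cryonics itself fails.

```python
import math

# Hypothetical numbers for the thought experiment, not real data.
P_CRYONICS_FAILS = 0.01        # assumed: >99% confident cryonics works
P_DESTROYED_PER_YEAR = 0.002   # assumed annual risk of mind-destroying death

# Years of waiting after which cumulative waiting risk exceeds the
# assumed probability that cryonics itself fails:
break_even = math.log(1 - P_CRYONICS_FAILS) / math.log(1 - P_DESTROYED_PER_YEAR)
print(f"Waiting risk overtakes failure risk after ~{break_even:.1f} years")
# ~5.0 years on these assumptions, so "preserve immediately" dominates.
```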
"The main weakness comes from the fact that almost every single two-bit futurist feels a need to make predictions, almost every single one of which goes for narrative plausibility and thus has massive issues with burdensome details and the conjunction fallacy." - no. The most intelligent and able forecasters are incapable of making predictions (many of them worked in the field of AI). Your argument about updating my probability upwards because I don't understand the future is fascinating. Can you explain why I can't use the precise same argument to say there is a 50% chance that Arizona will be destroyed by a super-bomb on January 1st 2018?