In a comment on his skeptical post about Ray Kurzweil, he writes,
Unfortunately, [Kurzweil's] technological forecasting is naive, and I believe it will also prove erroneous (and in that, he is in excellent company). That would be of no consequence to me, or to others in cryonics, were it not for the fact that it has had, and continues to have, a corrosive effect on cryonics and immortalist activists and activism. His idea of the Singularity has created expectations of entitlement and inevitability that are wholly unjustified, both on the basis of history, and on the basis of events that are playing out now in the world markets, and on the geopolitical stage....
The IEET poll [link; Sep 7, 2011] found that the majority of their readers aged 35 or older said that they expect to “die within a normal human lifespan”; no surprises there.
This was in contrast to an overwhelming majority (69%) of their readers under the age of 35 who believe that radical life extension will enable them to stay alive indefinitely, or “for centuries, at least.”
Where the data gets really interesting is when you look at the breakdown of just how these folks think they are going to be GIVEN practical immortality:
- 36% believe they will stay alive for centuries (at least) in their own (biological) bodies.
- 26% expect that they will continue to survive by having their “minds uploaded to a computer.”
- 7% expect to “die” but to eventually be resurrected by cryonics.
Only 7% think cryonics will be necessary? That is simply delusional, and it is a huge problem....
Nor are the 7% who anticipate survival via cryonics likely to be signed up. In fact, I’d wager that no more than one or two of them are. And why should they bestir themselves in any way to this end? After all, the Singularity is coming, it is INEVITABLE, and all they have to do is sit back and wait for it to arrive – presumably wrapped up in pretty paper and with bows on.
Young people anticipating practical immortality look at me like I’m some kind of raving mad Luddite when I try to convince them that if they are to have any meaningful chance at truly long-term survival, they are going to have to act, work very hard, and have a hell of a lot of luck in the bargain....
Kurzweil has been, without doubt or argument, THE great enabler of this madness by providing a scenario and a narrative that is far more credible than Santa Claus, and orders of magnitude more appealing.
I wonder how people on Less Wrong would respond to that poll.
Edit: (Tried to) fix formatting and typo in title.
Huh? No. I mean you can construct sets of priors that will result in them moving radically in one direction or another. Exercise: for any epsilon > 0, there is a set of priors and a hypothesis space containing a hypothesis H such that one can construct two Bayesians who start off with P(H) < epsilon and, after updating, have P(H) > 1 - epsilon.
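For concreteness, here is a minimal sketch of one such construction (the numbers are hypothetical, chosen only to make the point): give both Bayesians evidence whose likelihood ratio overwhelms their prior odds, and a single update carries each of them from below epsilon to above 1 - epsilon.

```python
# Minimal sketch of the exercise: a single Bayesian update moves P(H)
# from below epsilon to above 1 - epsilon whenever the likelihood ratio
# of the observed evidence is extreme enough relative to the prior odds.

def bayes_update(prior_h, p_e_given_h, p_e_given_not_h):
    """Posterior P(H | E) by Bayes' theorem."""
    joint_h = p_e_given_h * prior_h
    joint_not_h = p_e_given_not_h * (1.0 - prior_h)
    return joint_h / (joint_h + joint_not_h)

eps = 0.01
priors = [0.003, 0.008]          # two Bayesians, both starting with P(H) < eps
for prior in priors:
    posterior = bayes_update(prior, p_e_given_h=0.99, p_e_given_not_h=1e-6)
    print(prior, posterior)      # both posteriors exceed 1 - eps
```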
Well, we agree on the major things that pretty much everyone on LW would agree on. We seem to agree on slightly more than that. But on other issues, especially meta issues like "how good in general are humans at estimating technological progress", we do seem to disagree. We also apparently disagree on how likely it is for a large number of scientists and engineers to focus on an area and still not succeed in that area. The fusion power example I gave earlier seems relevant in that context.
This is not obvious to me. I think it is probably true for near-perfect Bayesians by some reasonable metric, but I'm not aware of a metric that measures how Bayesian non-Bayesians are. Moreover, there are many theorems in many areas where, if one weakens the premises a small amount, the results don't just fail but fail catastrophically. Without a better understanding of the underlying material, I don't think I can judge whether this is correct. My impression (this is very much not my area of math) is that very little has been done on how imperfect Bayesians should or will behave.
That seems like an extreme phrasing. The disagreement in question is whether substantial life extension in the next fifty years is merely likely, or so likely that every conceivable world-line from here that doesn't end in civilizational collapse includes substantial life extension within that window. Diametrically opposed, I think, would be something like your view versus the view that life extension is definitely not going to happen in the next fifty years barring something like aliens coming down and giving us advanced technology, or some sort of Friendly AGI going foom.
See also lessdazed's remark about the difficulty humans have in exchanging all relevant information.
If and only if their priors do not match one another. That's the whole point of Aumann's Agreement Theorem.
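To make the mechanism concrete, here is a minimal sketch (toy state space and partitions of my own choosing, not from the original discussion) in the style of Geanakoplos and Polemarchakis's "We can't disagree forever" dialogue, the constructive counterpart of Aumann's theorem: the two agents announce their posteriors back and forth, each announcement leaks information, and with a common prior the announcements are forced to converge.

```python
from fractions import Fraction

# Toy posterior-exchange dialogue: two agents share a common prior but
# hold different private information, and alternately announce their
# posteriors for an event E until the announcements agree.

states = {1, 2, 3, 4}
prior = {w: Fraction(1, 4) for w in states}    # common prior (uniform)
E = {1, 4}                                     # the event both agents estimate

# Each agent privately learns only which cell of its own partition
# contains the true state.
part_A = [{1, 2}, {3, 4}]
part_B = [{1, 2, 3}, {4}]

def cell(partition, w):
    return next(c for c in partition if w in c)

def posterior(info):
    return sum(prior[w] for w in E & info) / sum(prior[w] for w in info)

true_state = 1
# info_X[w] = what agent X would know if the true state were w
info_A = {w: cell(part_A, w) for w in states}
info_B = {w: cell(part_B, w) for w in states}

while True:
    p_A = posterior(info_A[true_state])
    # A's announcement tells B, at every state, which states are
    # consistent with what A just said; B refines its information.
    info_B = {w: info_B[w] & {v for v in states
                              if posterior(info_A[v]) == posterior(info_A[w])}
              for w in states}
    p_B = posterior(info_B[true_state])
    info_A = {w: info_A[w] & {v for v in states
                              if posterior(info_B[v]) == posterior(info_B[w])}
              for w in states}
    print(p_A, p_B)                # prints 1/2 1/3, then 1/2 1/2
    if p_A == p_B:                 # agreement, forced by the common prior
        break
```

Give the two agents different priors instead and the announcements can stabilize without ever agreeing, which is exactly the "if and only if" above.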