orthonormal comments on Reference class of the unclassreferenceable - Less Wrong

25 Post author: taw 08 January 2010 04:13AM




Comment author: Zack_M_Davis 08 January 2010 04:55:36AM 10 points [-]

Finding a convincing reference class in which cryonics, singularity, superhuman AI etc. are highly probable

I'll nominate hypotheses or predictions predicated on materialism, or maybe the Copernican/mediocrity principle. In an indifferent universe, there's nothing special about the current human condition; in the long run, we should expect things to be very different in some way.

Note that a lot of the people around this community who take radically positive scenarios seriously, also take human extinction risks seriously, and seem to try to carefully analyze their uncertainty. The attitude seems markedly different from typical doom/salvation prophecies.

(Yes, predictions about human extinction events have never come true either, but there are strong anthropic reasons to expect this: if there had been a human extinction event in our past, we wouldn't expect to be here to talk about it!)

Comment author: orthonormal 09 January 2010 07:30:57PM 6 points [-]

Feynman's anticipation of nanotechnology is another prediction that belongs to that reference class.