magfrump comments on Reference class of the unclassreferenceable - Less Wrong

Post author: taw 08 January 2010 04:13AM




Comment author: taw 08 January 2010 06:21:52AM 1 point

How do you get from "in the long run, we should expect things to be very different in some way" or "hypotheses or predictions predicated on materialism" or "Copernican/mediocrity principle" to cryonics, superhuman AIs, or foom-style singularity?

Comment author: Zack_M_Davis 08 January 2010 07:19:13AM 10 points

cryonics

The human brain is made out of matter (materialism). Many people's brains are largely intact at the time of their deaths. By preserving the brain, we give possible future advanced neuroscience and materials technology a chance at restoring the original person. There are certainly a number of good reasons to think that this probably won't happen, but it doesn't belong in the same reference class as "predictions promising eternal life," because most previous predictions about eternal life didn't propose technological means in a material universe. Cryonics isn't about rapturing people's souls up to heaven; it's about reconstruction of a damaged physical artifact. Conditional on continued scientific progress (which might or might not happen), it seems plausible. I do agree that "technology which isn't even remotely here" is a good reference class. Similarly ...

superhuman AIs

Intelligence doesn't require ontologically fundamental things that we can't create more of, only matter appropriately arranged (materialism). Humans are not the most powerful possible intelligences (mediocrity). Conditional on continued scientific progress, it's plausible that we could create superhuman AIs.

foom-style singularity

Human minds are not the fastest-thinking or the fastest-improving possible intelligences (mediocrity). Faster processes outrun slower ones. Conditional on our creating AIs, some of them might think much faster than us, and faster minds probably have a greater share in determining the future.

Comment author: taw 08 January 2010 11:11:00AM 1 point

These are fine arguments, but they all take the inside view — focusing on the particulars of a situation rather than finding big, robust reference classes to which the situation belongs.

And in any case, you seem to be arguing that such inventions are not prohibited by the laws of physics, rather than that they will happen with very high probability in the near future, as many here believe. As a reference class, things which are merely not prohibited by the laws of physics almost never happen anyway — this class is just too huge.

Comment author: MichaelVassar 10 January 2010 04:39:43AM 7 points

Things not prohibited by physics that humans want to happen don't eventually happen? Very far from clear.