In response to High Challenge
Comment author: steven 19 December 2008 05:01:20PM 1 point [-]

That one bothered me too. Perhaps you could say bodies are much more peripheral to people's identities than brains, so that in the running case what is being tested is meat that happens to be attached to you, and in the robot case it's you yourself. On the other hand, I'd still be me with some minor brain upgrades.

In response to High Challenge
Comment author: steven 19 December 2008 04:05:28PM 0 points [-]

Computer games are the devil, but I agree strongly with Hyphen: the good ones are like sports, not work.

Comment author: steven 15 December 2008 11:09:09AM 1 point [-]

I'm not sure global diversity, as opposed to local diversity or just sheer quantity of experience, is the only reason I prefer there to be more (happy) people.

Comment author: steven 14 December 2008 07:27:02PM 0 points [-]

And where I just said "universe" I meant a 4D thing, with the dials each referring to a 4D structure and time never entering into the picture.

Comment author: steven 14 December 2008 07:24:14PM 1 point [-]

Eliezer, I don't think your reality fluid is the same thing as my continuous dials, which were intended as an alternative to your binary check marks. I think we can use algorithmic complexity theory to answer the question "to what degree is a structure (e.g. a mind-history) implemented in the universe?" and then just make sure valuable structures are implemented to a high degree and disvaluable structures are implemented to a low degree. The reason most minds should expect to see ordered universes is that it's much easier to specify an ordered universe and then locate a mind within it than it is to specify a mind from scratch. If this commits me to believing funny stuff, like that people with arrows pointing at them are more alive than people without arrows pointing at them, I'm inclined to say "so be it".

Comment author: steven 14 December 2008 05:49:06PM 1 point [-]

Also "standard model" doesn't mean what you think it means and "unpleasant possibility" isn't an argument.

Comment author: steven 14 December 2008 05:45:12PM 13 points [-]

I'm completely not getting this. If all possible mind-histories are instantiated at least once, and their being instantiated at least once is all that matters, then how does anything we do matter?

If you became convinced that people had not just little checkmarks but little continuous dials representing their degree of existence (as measured by algorithmic complexity), how would that change your goals?

Comment author: steven 12 December 2008 08:45:12PM 0 points [-]

Hal, it also requires that you see each other as seeing each other that way, that you see each other as seeing each other as seeing each other that way, that you see each other as seeing each other as seeing each other as seeing each other that way, and so on.

In response to You Only Live Twice
Comment author: steven 12 December 2008 08:35:06PM 1 point [-]

I agree that a future world with currently-existing people still living in it is more valuable than one with an equal number of newly-created people living in it after the currently-existing people died. But to show that cryonics is a utilitarian duty, you'd need to show not just that this is a factor, but that it's an important enough factor to outweigh whatever people are sacrificing for cryonics (normalcy capital!). Lots of people are dead already, so whether any single person lives to see the future can constitute at most a tiny part of the future's value.

Comment author: steven 17 November 2008 12:54:04PM 1 point [-]

Russell, I think the point is that we can't expect Friendliness theory to take less than 30 years.
