I'm surprised nobody has put this problem in terms of optimization and "steering the future" (including Eliezer, though I suppose he may have been trying to make a different point in his post).
As I see it, robots are a special case of machines intended to steer things in their immediate vicinity towards some preferred future. (The special case is that their acting parts and steering parts are housed in the same object, which is not terribly important, except that the subsumption architecture implies it.)
"Smart" robots have a component analogue ...
OK, I see I got a bit long-winded. The interesting part of my question is whether you'd make the same decision if it were about you instead of others. The answer is obvious, of course ;-)
The other details/versions I mentioned are only intended to explore the "contour of the value space" of the other posters. (I'm sure Eliezer has a term for this, but I forget it.)
I know you're all getting a bit bored, but I'm curious what you think about a different scenario:
What if you had to choose between (a) for the next 3^^^3 days you get one speck more in your eye per day than normal, and you spend 50 of those years in stasis, or (b) you get the normal amount of specks in your eyes, but at some point during the next 3^^^3 days you go through 50 years of atrocious torture?
Everything else is held equal between the two cases, including the facts that (i) your total lifespan will be the same in both (more than 3^^^3 days), (ii) th...
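(For scale, since the number matters here: 3^^^3 is Knuth's up-arrow notation, the same number from Eliezer's original post. A quick expansion, just to convey how large it is:

3^^^3 = 3^^(3^^3) = 3^^(3^(3^3)) = 3^^(3^27) = 3^^7625597484987

i.e., a tower of exponentiated 3s that is 7,625,597,484,987 levels tall.)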
Will Pearson: First of all, it's not at all clear to me that your wish is well-formed, i.e., it's not obvious that it's possible to be informed about the many (infinite?) aspects of the future and not regret it. (As a minor consequence, it's not obvious from your phrasing that "kill you before you know it" isn't a valid answer; depending on what the genie believes about the world, it may consider that the "future" stops when you stop thinking.)
Second, there might be futures that you would not regret but _everybody else_ does. (...