cousin_it comments on Open Thread: July 2010 - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Comments (653)
A small koan on utility functions that "refer to the real world".
Question to Clippy: would you agree to move into a simulation where you'd have all the paperclips you want?
Question to humans: would you agree to all of humankind moving into a simulation where we would fulfill our CEV (at least, all terms of it that don't mention "not living in a simulation")?
In both cases assume you have mathematical proof that the simulation is indestructible and perfectly tamper-resistant.
Would the simulation allow us to exit, in order to perform further research on the nature of the external world?
If so, I would enter it. If not? Probably not. I do not want to live in a world where there are ultimate answers and you can go no further.
The fact that I may already live in one is just bloody irritating :p
Good point. You have just changed my answer from yes to no.
If we move into the same simulation and can really interact with others, then I wouldn't mind the move at all. Apart from that, experiences are the important bit and simulations can have those.
I might do that just sort of temporarily because it would be fun, similar to how apes like to watch other apes in ape situations even when it doesn't relate to their own lives.
But I would have to limit this kind of thing because, although pleasurable, it doesn't support my real values. I value real paperclips, not simulated paperclips, fun though they might be to watch.
Clippy is funnier when he plays the part of a paperclip maximiser, not a human with a paperclip fetish.
User:wedrifid is funnier when he plays the part of a paperclip maximiser, not an ape with a pretense of enlightenment.
What is real?
Stuff that's not in a simulation?
Your footnote assumes away most of the real reasons for objecting to such a scenario (i.e. there is no remotely plausible world in which you could be confident that the simulation is either indestructible or tamper-proof, so entering it means giving up any attempt at personal autonomy for the rest of your existence).
A computronium maximizer will ensure that there is no one left to tamper with the simulation; indestructibility is maximized in this scenario too.
Part 2 seems similar to the claim (which I have made in the past but not on LessWrong) that the Matrix was actually a friendly move on the part of that world's AI.
And the AI kills the thousands of people in Zion every hundred years or so when they get aggressive enough to start destabilizing the Matrix, thereby threatening billions. But the AI needs to keep some outside the Matrix as a control and insurance against problems inside the Matrix. And the AI spreads the idea that the Matrix "victims" are slaves and provide energy to the AI to keep the outsiders outside (even though the energy source claims are obviously ridiculous - the people in Zion are profoundly ignorant and bordering on outright stupid). Makes more sense than the silliness of the movies anyway.
This hypothesis also explains the oracle in a fairly clean way.
Agent Smith did say that the first matrix was a paradise but people wouldn't have it, but is simulating the world of 1999 really the friendliest option?
We only ever see America simulated. Even there we never see crime or oppression or poverty (homeless people could even be bots).
If you don't simulate poverty and dictatorships then 1999 could be reasonably friendly. The economy is doing okay and the Internet exists and there is some sense that technology is expanding to meet the world's needs but not spiraling out of control.
But I'm just making most of this up to show that an argument exists; it seems pretty clear that it was written to be in the present day to keep it in the genre of post-apocalyptic lit, in which case using the present adds to the sense of "the world is going downhill."
The given assumption seems unlikely to me, but in that case I think I'd go for it.
Is it assumed that no new information will be entered into the simulation after launch?
And does it change your answers if you learn that we are living in a simulation now? Or if you learn that Tegmark's theory is correct?
Yes, assuming further that the simulation will expand optimally to use all available resources for its computation, and that any persons it encounters will be taken into the simulation.
My answer is yes, and your point is well-taken: We have to be careful about what we mean by "the real world".
Does Clippy maximise number-of-paperclips-in-universe (given all available information) or some proxy variable like number-of-paperclips-counted-so-far? If the former, Clippy does not want to move to a simulation. If the latter, Clippy does want to move to a simulation.
The same analysis applies to humankind.
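The distinction can be made concrete with a toy sketch (the worlds, counts, and function names below are hypothetical, invented purely for illustration): a terminal-value agent scores the paperclips that actually exist, while a proxy agent scores only what its counter reports, so only the proxy agent prefers a simulation that inflates the counter.

```python
# Toy illustration: two candidate utility functions facing the choice
# between the real world and a simulation with an inflated paperclip counter.
# All numbers and world descriptions here are made up for the example.

REAL_WORLD = {"true_clips": 10**6, "reported_clips": 10**6}
SIMULATION = {"true_clips": 10**6, "reported_clips": float("inf")}

def terminal_utility(world):
    """Values paperclips that actually exist in the universe."""
    return world["true_clips"]

def proxy_utility(world):
    """Values the counter reading: number-of-paperclips-counted-so-far."""
    return world["reported_clips"]

def prefers_simulation(utility):
    # The agent enters the simulation iff it scores higher than reality.
    return utility(SIMULATION) > utility(REAL_WORLD)

print(prefers_simulation(terminal_utility))  # False: no real clips are added
print(prefers_simulation(proxy_utility))     # True: the counter reads higher
```

On this sketch, entering the simulation is a form of wireheading for the proxy agent: it optimizes the measurement rather than the quantity being measured.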
I maximize the number of paperclips in the universe (that exist an arbitrarily long time from now). I use "number of paperclips counted so far" as a measure of progress, but it is always screened off by more direct measures, or expected quantities, of paperclips in the universe.
I'm not certain that's so, as ISTM many of the things humanity wants to maximize are to a large extent representation-invariant - in particular because they refer to other people - and could be done just as well in a simulation. The obvious exception being actual knowledge of the outside world.