cousin_it comments on Open Thread: July 2010 - Less Wrong

6 Post author: komponisto 01 July 2010 09:20PM


Comment author: cousin_it 02 July 2010 07:41:52AM *  3 points [-]

A small koan on utility functions that "refer to the real world".

  1. Question to Clippy: would you agree to move into a simulation where you'd have all the paperclips you want?

  2. Question to humans: would you agree to all of humankind moving into a simulation where we would fulfill our CEV (at least, all terms of it that don't mention "not living in a simulation")?

In both cases assume you have mathematical proof that the simulation is indestructible and perfectly tamper-resistant.

Comment author: Kingreaper 02 July 2010 02:25:45PM 5 points [-]

Would the simulation allow us to exit, in order to perform further research on the nature of the external world?

If so, I would enter it. If not? Probably not. I do not want to live in a world where there are ultimate answers and you can go no further.

The fact that I may already live in one is just bloody irritating :p

Comment author: cousin_it 02 July 2010 02:45:51PM 1 point [-]

Good point. You have just changed my answer from yes to no.

Comment author: Alicorn 02 July 2010 07:40:55PM *  3 points [-]

If we move into the same simulation and can really interact with others, then I wouldn't mind the move at all. Apart from that, experiences are the important bit and simulations can have those.

Comment author: Clippy 06 July 2010 05:13:57PM *  1 point [-]

I might do that just sort of temporarily because it would be fun, similar to how apes like to watch other apes in ape situations even when it doesn't relate to their own lives.

But I would have to limit this kind of thing because, although pleasurable, it doesn't support my real values. I value real paperclips, not simulated paperclips, fun though they might be to watch.

Comment author: wedrifid 08 July 2010 06:20:23AM 1 point [-]

Clippy is funnier when he plays the part of a paperclip maximiser, not a human with a paperclip fetish.

Comment author: Clippy 08 July 2010 02:00:20PM 0 points [-]

User:wedrifid is funnier when he plays the part of a paperclip maximiser, not an ape with a pretense of enlightenment.

Comment author: Kevin 08 July 2010 06:14:59AM 0 points [-]

What is real?

Comment author: Clippy 08 July 2010 02:01:04PM 0 points [-]

Stuff that's not in a simulation?

Comment author: ewbrownv 02 July 2010 09:13:27PM 1 point [-]

Your footnote assumes away most of the real reasons for objecting to such a scenario (i.e. there is no remotely plausible world in which you could be confident that the simulation is either indestructible or tamper-proof, so entering it means giving up any attempt at personal autonomy for the rest of your existence).

Comment author: red75 02 July 2010 09:24:45PM 0 points [-]

A computronium maximizer will ensure that there is no one left to tamper with the simulation; indestructibility is maximized in this scenario too.

Comment author: magfrump 02 July 2010 01:57:36PM 1 point [-]

Part 2 seems similar to the claim (which I have made in the past but not on LessWrong) that the Matrix was actually a friendly move on the part of that world's AI.

Comment author: billswift 02 July 2010 07:13:27PM 4 points [-]

And the AI kills the thousands of people in Zion every hundred years or so when they get aggressive enough to start destabilizing the Matrix, thereby threatening billions. But the AI needs to keep some outside the Matrix as a control and insurance against problems inside the Matrix. And the AI spreads the idea that the Matrix "victims" are slaves and provide energy to the AI to keep the outsiders outside (even though the energy source claims are obviously ridiculous - the people in Zion are profoundly ignorant and bordering on outright stupid). Makes more sense than the silliness of the movies anyway.

Comment author: magfrump 02 July 2010 09:31:05PM 1 point [-]

This hypothesis also explains the oracle in a fairly clean way.

Comment author: Bongo 04 July 2010 08:29:18PM 3 points [-]

Agent Smith did say that the first Matrix was a paradise but people wouldn't accept it. Still, is simulating the world of 1999 really the friendliest option?

Comment author: magfrump 05 July 2010 05:43:15PM 1 point [-]

We only ever see America simulated. Even there we never see crime or oppression or poverty (homeless people could even be bots).

If you don't simulate poverty and dictatorships then 1999 could be reasonably friendly. The economy is doing okay and the Internet exists and there is some sense that technology is expanding to meet the world's needs but not spiraling out of control.

But I'm just making most of this up to show that an argument exists; it seems pretty clear that it was written to be in the present day to keep it in the genre of post-apocalyptic lit, in which case using the present adds to the sense of "the world is going downhill."

Comment author: ShardPhoenix 02 July 2010 12:44:59PM *  1 point [-]

The given assumption seems unlikely to me, but in that case I think I'd go for it.

Comment author: red75 02 July 2010 09:38:10AM *  1 point [-]

Is it assumed that no new information will be entered into the simulation after launch?

Comment author: Blueberry 02 July 2010 07:47:48AM 1 point [-]

And does it change your answers if you learn that we are living in a simulation now? Or if you learn that Tegmark's theory is correct?

Comment author: JGWeissman 08 July 2010 07:05:24AM 0 points [-]

Yes, assuming further that the simulation will expand optimally to use all available resources for its computation, and that any persons it encounters will be taken into the simulation.

Comment author: Nisan 02 July 2010 12:46:52PM 0 points [-]

My answer is yes, and your point is well-taken: We have to be careful about what we mean by "the real world".

Comment author: Tom_Talbot 02 July 2010 01:05:38PM 0 points [-]

Does Clippy maximise number-of-paperclips-in-universe (given all available information) or some proxy variable like number-of-paperclips-counted-so-far? If the former, Clippy does not want to move to a simulation. If the latter, Clippy does want to move to a simulation.

The same analysis applies to humankind.

Comment author: Clippy 06 July 2010 05:17:14PM 2 points [-]

I maximize the number of paperclips in the universe (that exist an arbitrarily long time from now). I use "number of paperclips counted so far" as a measure of progress, but it is always screened off by more direct measures, or expected quantities, of paperclips in the universe.

Comment author: Sniffnoy 02 July 2010 10:21:36PM 2 points [-]

I'm not certain that's so, as ISTM many of the things humanity wants to maximize are to a large extent representation-invariant - in particular because they refer to other people - and could be done just as well in a simulation. The obvious exception being actual knowledge of the outside world.