RomeoStevens comments on Discussion: Which futures are good enough? - Less Wrong

Post author: WrongBot 24 February 2013 12:06AM


Comment author: Elithrion 24 February 2013 02:27:52AM 4 points

It seems like most other commenters so far don't share my opinion, but I view the above scenario as basically equivalent to wireheading, and consequently see it as only very slightly better than the destruction of all earth-originating intelligence (assuming the AI doesn't do anything else interesting). "Affecting-the-real-world" is actually the one value I would not want to trade off (well, obviously, I'd still trade it off, but only at a prohibitively expensive rate).

I'm much more open to trading off other things, however. For example, if we could get a merely good future much more easily than the successful utopia, I'd say we should go for it. Which specific values are best to throw away in pursuit of something workable isn't really clear, though. While I don't agree that if we lose one, we lose them all, I'm also not sure that anything in particular can be meaningfully isolated.

Perhaps the best (meta-)value we could trade off is "optimality": if we see a way to design something stable that's clearly not the best we can do, we should nonetheless go with it if it's considerably easier than the better options. For example, if you see a way to specify a particular pretty good future and have the AI build it without falling into some failure mode, it might be better to just use that future instead of trying to have the AI design the best possible future.

Comment author: RomeoStevens 24 February 2013 03:16:55AM 5 points

If believing you inhabit the highest level floats your boat, be my guest; just don't mess with the power plug on my experience machine.

Comment author: Elithrion 24 February 2013 04:12:26AM 0 points

From an instrumental viewpoint, I hope you plan to figure out how to make everyone sitting around on a higher level credibly precommit to not messing with the power plug on your experience machine; otherwise it probably won't last very long. (Other than that, I see no problems with us not sharing some terminal values.)

Comment author: Lightwave 24 February 2013 10:59:25AM 0 points

"figure out how to make everyone sitting around on a higher level credibly precommit to not messing with the power plug"

That's MFAI's job. Living on the "highest level" has the same problem: you have to protect your region of the universe from anything that could "de-optimize" it, and FAI will (attempt to) make sure this doesn't happen.

Comment author: RomeoStevens 24 February 2013 06:16:45AM 0 points

I just have to ensure that the inequality (Amount of damage I cause if outside my experience machine > Cost of running my experience machine) holds.

Comment author: RichardKennaway 24 February 2013 10:01:20AM 1 point

Translating that back into English, I get "unplug me from the Matrix and I'll do my best to help Skynet kill you all".

Comment author: Elithrion 24 February 2013 06:03:45PM 0 points

You also have to ensure that killing you outright isn't optimal.

Comment author: RomeoStevens 24 February 2013 09:30:01PM 0 points

I can't do much about scenarios in which it is optimal to kill humans. We're probably all screwed in such a case. "Kill some humans according to these criteria" is a much smaller target than vast swathes of futures that simply kill us all.