Sebastian_Hagen comments on Superintelligence 21: Value learning - Less Wrong Discussion

7 Post author: KatjaGrace 03 February 2015 02:01AM

Comment author: Sebastian_Hagen 03 February 2015 08:49:23PM 3 points

I agree: the actual local existence of other AIs shouldn't make a difference, and the approach could work equally well either way. As Bostrom says on page 198, no communication is required.

Nevertheless, for the process to yield a useful result, some possible civilization would have to build a non-HM AI. That civilization might be (locally speaking) hypothetical or simulated, but either way the HM-implementing AI needs to imagine it in order to delegate value selection to it. I believe that's what footnote 25 gets at: from a superrational point of view, if every possible civilization (or every one imaginable to the AI we build) at this point in time chooses an HM approach to value coding, the approach can't work.
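The regress above can be made concrete with a toy model (my own illustration, not from the thread): treat each civilization's value-coding choice as either a concrete specification or a delegation to another civilization's AI. If every civilization delegates, the lookup never bottoms out in concrete values.

```python
def resolve_values(civ, strategies, seen=None):
    """Follow delegation links; return concrete values, or None on a pure cycle."""
    if seen is None:
        seen = set()
    if civ in seen:
        return None  # everyone delegated: no values ever get coded
    seen.add(civ)
    action = strategies[civ]
    if action == "hail_mary":
        # Delegate to the next civilization in the ensemble (arbitrary pick
        # for illustration; the real proposal is far less mechanical).
        civs = sorted(strategies)
        target = civs[(civs.index(civ) + 1) % len(civs)]
        return resolve_values(target, strategies, seen)
    return action  # a concrete value specification

# Everyone uses the Hail Mary approach: resolution fails.
all_hm = {c: "hail_mary" for c in ("A", "B", "C")}
print(resolve_values("A", all_hm))  # None

# One civilization codes values directly: delegation bottoms out.
mixed = dict(all_hm, B="direct_spec_B")
print(resolve_values("A", mixed))  # direct_spec_B
```

The `strategies` dictionary and the delegation rule are hypothetical stand-ins; the point is only that at least one non-HM strategy must exist somewhere in the ensemble for delegation to terminate.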

Comment author: diegocaleiro 11 February 2015 07:30:00PM 0 points

If all civilizations HailMary their value-coding, they would all find out that the others did the same, and since the game doesn't end there, in round two they would decide to use a different approach. Possibly, just as undifferentiated blastula cells use an asymmetric environmental signal (gravity) to decide when to start differentiating, AGIs could use local information to decide whether to HailMary again in the second hypothetical round or to decide values for themselves (say, information about where they are located in their Hubble volume, or how much free energy remains in their light cone, or something similar).
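A toy sketch (my own, hypothetical, not from the comment) of this symmetry-breaking idea: on the second round, each AGI maps some asymmetric local observable to a deterministic strategy choice, so civilizations in different local circumstances end up on different sides of the split.

```python
import hashlib

def second_round_choice(local_observation: str, threshold: float = 0.5) -> str:
    """Deterministically map local information to a round-two strategy.

    `local_observation` is a stand-in for facts like position in the
    Hubble volume or remaining free energy in the light cone.
    """
    digest = hashlib.sha256(local_observation.encode()).digest()
    # Map the hash to [0, 1); different local facts land on different
    # sides of the threshold, breaking the symmetry between AGIs.
    score = int.from_bytes(digest[:8], "big") / 2**64
    return "hail_mary_again" if score < threshold else "decide_locally"

for obs in ("low_energy_region", "high_energy_region", "galactic_rim"):
    print(obs, "->", second_round_choice(obs))
```

The hash is just a stand-in for "any deterministic function of local information that all AGIs can compute but that varies across locations"; the specific observable names are invented for the example.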

Comment author: KatjaGrace 12 February 2015 05:49:06PM 0 points

Isn't it the civilization, not the AGI, that will need to decide what to do?

Comment author: diegocaleiro 16 February 2015 04:28:48PM 0 points

That depends on whether the AGI is told (and accepts) to HailMary once, to HailMary to completion, or something in between. It also depends, I believe, on which decision theory the AGI uses. For a large ensemble of decisions, there seems to be a one-round version of the many-round decision ("No Regrets", Arntzenius 2007; "TDT", Yudkowsky 2010; "UDT", Wei Dai 20xx).