FiftyTwo comments on Outline of possible Singularity scenarios (that are not completely disastrous) - Less Wrong

Post author: Wei_Dai 06 July 2011 09:17PM




Comment author: FiftyTwo 09 July 2011 01:58:52PM 1 point

I haven't seen that story before, but it is excellent and intriguing. Has there been any prior discussion of it you could link to?

Comment author: endoself 09 July 2011 03:11:17PM 2 points

I got it either here or here, but neither has a discussion. The links in Wei Dai's reply cover the same subject matter but do not make direct reference to the story.

Comment author: FiftyTwo 10 July 2011 01:53:17AM 0 points

Since I see nowhere better to put it, here's a thought I had about the agent in the story, specifically whether the proposed system works if not all other entities subscribe to it.

There is a non-zero probability that there exists, or could exist, an AI that does not subscribe to the outlined system of respecting other AIs' values. It is equally probable that this AI was created before me or after me. Given this, if it already exists I can have no defence against it. If it does not yet exist I am safe from it, but I must act as much as possible to prevent it being created, as it would prevent my values being established. Therefore I should eliminate all other potential sources of AI.

[I may retract this after reading up on some of the acausal game theory stuff if I haven't understood it correctly. So apologies if I have missed something obvious]

Comment author: endoself 10 July 2011 05:42:31AM 1 point

I think you might be right; it is very unlikely that all civilizations get AI right enough for all the AIs to understand acausal considerations. I don't know why you were downvoted.

Comment author: jhuffman 11 July 2011 06:20:22PM 0 points

Does the fact of our present existence tell us anything about the likelihood that a human-superior intelligence would remain ignorant of acausal game theory?

Comment author: endoself 11 July 2011 09:59:27PM 1 point

Anthropically, UDT suggests that a variant of SIA should be used [EDIT - depending on your ethics]. I'm not sure what exactly that implies in this scenario. It is very likely that humans could program a superintelligence that is incapable of understanding acausal reasoning. I trust that far more than I trust any anthropic argument with this many variables. The only reasonably likely loophole here is if anthropics could point to humanity being different from most species, so that no other species in the area would be as likely to create a bad AI as we are. I cannot think of any such argument, so it remains unlikely that all superhuman AIs would understand acausal game theory.

Comment author: CarlShulman 11 July 2011 10:31:01PM 5 points

Anthropically, UDT suggests that a variant of SIA should be used.

Depending on your preferences about population ethics, and the version of the same issues applying to copies. E.g. if you are going to split into many copies, do you care about maximizing their total or their average welfare? The former will result in SIA-like decision making, while the latter will result in SSA-like decision making.