jhuffman comments on Outline of possible Singularity scenarios (that are not completely disastrous) - Less Wrong

Post author: Wei_Dai 06 July 2011 09:17PM




Comment author: jhuffman 11 July 2011 06:20:22PM 0 points

Does the fact of our present existence tell us anything about the likelihood for a human-superior intelligence to remain ignorant of acausal game theory?

Comment author: endoself 11 July 2011 09:59:27PM 1 point

Anthropically, UDT suggests that a variant of SIA should be used [EDIT: depending on your ethics]. I'm not sure what exactly that implies in this scenario. It is very likely that humans could program a superintelligence that is incapable of understanding acausal game theory, and I trust that far more than I trust any anthropic argument with this many variables. The only reasonably likely loophole is an anthropic argument showing that humanity differs from most species, so that no other species in the area would be as likely as we are to create a bad AI. I cannot think of any such argument, so it remains unlikely that all superhuman AIs would understand acausal game theory.

Comment author: CarlShulman 11 July 2011 10:31:01PM 5 points

Anthropically, UDT suggests that a variant of SIA should be used.

Depending on your preferences about population ethics, and on the version of the same issues applied to copies. E.g. if you are going to split into many copies, do you care about maximizing their total welfare or their average welfare? The former will result in SIA-like decision making, while the latter will result in SSA-like decision making.
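The total-vs-average distinction can be made concrete with a small worked example. The following sketch (my own illustration, not from the thread) uses a Sleeping-Beauty-style setup: a fair coin creates one copy of you on heads and two copies on tails, and each copy faces a bet paying +1 per copy if tails and -1 per copy if heads. The specific stakes and copy counts are assumptions chosen for illustration; the point is only that summing welfare over copies weights tails-worlds by their copy count (SIA-like), while averaging does not (SSA-like).

```python
# Illustrative sketch: a fair coin yields 1 copy (heads) or 2 copies (tails).
# Each copy can accept a bet worth -1 per copy on heads, +1 per copy on tails.

def expected_welfare(aggregate, payoff_per_copy):
    """Expected welfare of accepting the bet under a given aggregation rule.

    aggregate: 'total' sums welfare over copies; 'average' averages it.
    payoff_per_copy: dict mapping outcome -> payoff received by each copy.
    """
    worlds = [
        {"p": 0.5, "copies": 1, "payoff": payoff_per_copy["heads"]},
        {"p": 0.5, "copies": 2, "payoff": payoff_per_copy["tails"]},
    ]
    ev = 0.0
    for w in worlds:
        welfare = w["copies"] * w["payoff"]  # total welfare in that world
        if aggregate == "average":
            welfare /= w["copies"]           # average welfare per copy
        ev += w["p"] * welfare
    return ev

bet = {"heads": -1.0, "tails": +1.0}
print(expected_welfare("total", bet))    # 0.5*(-1) + 0.5*(2*1) = +0.5: accept (SIA-like)
print(expected_welfare("average", bet))  # 0.5*(-1) + 0.5*(1)   =  0.0: indifferent (SSA-like)
```

A total maximizer accepts even-odds bets on the outcome with more copies, mirroring SIA's "thirder" weighting by observer count; an average maximizer is indifferent at even odds, mirroring SSA's weighting of worlds irrespective of how many copies they contain.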