Wei_Dai comments on Outline of possible Singularity scenarios (that are not completely disastrous) - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
A fascinating thought. Do you assign more than negligible probability to this being true? Do you plan to elaborate on this point and others? I assume that it would be stupid to hope for some sort of "emergent friendliness", but it is nonetheless a very interesting speculation.
It's not my original idea. See comments by Carl Shulman and Vladimir Nesov. Gary Drescher also mentioned in conversation a different way in which acausal considerations might lead superintelligent AIs to treat us ethically. I'm not sure if he has written about it anywhere. (ETA: See page 287 of his book.)