JoshuaZ comments on Public Choice and the Altruist's Burden - Less Wrong

19 [deleted] 22 July 2010 09:34PM




Comment author: JoshuaZ 28 July 2010 04:39:28AM 0 points [-]

It-just-so-happens that "solving" uFAI risk would most likely solve all other problems by triggering a friendly Singularity

This seems unlikely to me. Even if you completely solve the problem of Friendly AI, you might lack the processing power to implement it. Or it might turn out that there are fundamental limits which prevent a Singularity event from taking place. The first problem seems particularly relevant given that, to someone concerned about uFAI, the goal presumably is to solve the Friendliness problem well before we're anywhere near actually having functional general AI. No one wants this to be cut close, and there's no a priori reason to think it would be cut close. (Indeed, if it did seem to be getting cut close, one could arguably use that as evidence that we're in a simulation and that this is a semifictionalized account with a timeline specifically engineered to create suspense and drama.)