Vladimir_Nesov comments on Changing accepted public opinion and Skynet - Less Wrong

15 [deleted] 22 May 2009 11:05AM




Comment author: Vladimir_Nesov 22 May 2009 03:12:35PM 3 points [-]

Given the stakes, if you already accept the expected utility maximization decision principle, it's enough to become convinced that there is even a nontrivial probability of this happening. The paper seems to be adequate for snapping the reader's mind out of conviction in the absurdity and impossibility of dangerous AI.

Comment author: whpearson 22 May 2009 04:00:04PM 1 point [-]

The stakes on the other side of the equation are also the survival of the human race.

Refraining from developing AI unless we can formally prove it is safe may also lead to extinction, if it reduces our ability to cope with other existential threats.

Comment author: Nick_Tarleton 23 May 2009 01:49:32AM 2 points [-]

"Enough" is ambiguous; your point is true, but it doesn't affect Vladimir's if he meant "enough to justify devoting a large amount of your attention (given the current distribution of allocated attention) to the risk of UFAI hard takeoff".