MugaSofer comments on Tactics against Pascal's Mugging - Less Wrong
Which mission? The FAI mission? The GiveWell mission? I am confused :(
I don't suppose he said this somewhere linkable?
Here he claims that the default outcome of AI is very likely safe, but that attempts at Friendly AI are very likely deadly if they accomplish anything (although I would argue this neglects the correlation between which AI approaches are workable, both in general and for would-be FAI efforts, and which are dangerous for both, as well as assuming some silly behaviors and that competitive pressures aren't severe):