kingmaker comments on Are there really no ghosts in the machine? - Less Wrong

0 Post author: kingmaker 13 April 2015 07:54PM


Comments (20)


Comment author: kingmaker 14 April 2015 05:02:04PM 1 point

That wasn't what I claimed. I proposed that the current, most promising methods of producing an FAI are far too likely to produce a UFAI to be considered safe.

Comment author: SolveIt 15 April 2015 01:12:06AM 2 points

Why do you think the whole website is obsessed with provably friendly AI? The whole point of MIRI is that pretty much any superintelligence that is anything other than provably safe is going to be unfriendly! This site is littered with examples of how terribly almost-friendly AI would go wrong! We don't merely consider current methods "too likely" to produce a UFAI; we think they're almost certain to produce a UFAI (conditional on creating a superintelligence at all, of course).

So as much as I hate asking this question because it's alienating, have you read the sequences?