Wei_Dai comments on Top 9+2 myths about AI risk - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Comments (45)
I'm not a Friendliness researcher, but I did once consider whether trying to slow down AI research might be a good idea. My current thinking is that it's probably not, but only because we're forced to live in a third-best world:
First best: Do AI research until just before we're ready to create an AGI. Either Friendliness is already solved by then, or else everyone stops and waits until Friendliness is solved.
Second best: Friendliness looks a lot harder than AGI, and we can't expect everyone to resist the temptation of fame and fortune when the possibility of creating AGI is staring them in the face. So stop or slow down AI research now.
Third best: Don't try to stop or slow down AI research because we don't know how to do it effectively, and doing it ineffectively will just antagonize AI researchers and create PR problems.
Why is this so ridiculous as to be unimaginable? Isn't the second-best world above actually better than the third-best, if only it were feasible?
I meant that I can't imagine Friendliness researchers seriously taking that stance, for the same reason you subscribe to the third-best choice.