Dagon comments on The AI, the best human advisor - Less Wrong

7 Post author: Stuart_Armstrong 13 July 2015 03:33PM



Comment author: Dagon 15 July 2015 12:24:41AM 1 point

Why do we think the WBE is "safe"? Natural intelligence is unfriendly in exactly the same way as a naively created AI is.

The human is likely to be less effective than an AI, which makes it safer. But I don't see how you can assert that a human, given the same power, is less likely to intentionally or accidentally destroy the universe.

Comment author: Stuart_Armstrong 15 July 2015 09:49:12AM 0 points

We think that WBEs are safe in the sense that they are unlikely to be able to produce a single message that starts an optimisation process which takes over the universe.

Comment author: alicey 16 July 2015 10:11:40PM 0 points

You do have to be careful not to give it too much computation time: http://lesswrong.com/lw/qk/that_alien_message/

Comment author: Stuart_Armstrong 17 July 2015 09:35:24AM 0 points

Indeed! That's why I only give them three subjective weeks.