
JoshuaZ comments on Friendly AI research news: FriendlyAI.tumblr.com - Less Wrong Discussion

Post author: lukeprog 18 September 2011 07:08PM 2 points


Comments (20)


Comment author: JoshuaZ 20 September 2011 02:41:26AM 2 points

Sorry, did he ever actually propose a smiley tiler? I thought that was just used as an example of a simple way one could easily go wrong.

Comment author: pedanterrific 20 September 2011 02:49:07AM 4 points

Well, he sort of did, actually.

Comment author: JoshuaZ 20 September 2011 02:56:39AM 3 points

That's just... wow. That's frighteningly stupid. It's about as bad as someone saying they aren't worried about a nuclear reactor undergoing a meltdown because they had their local clergy bless it, except that the potential negative payoff here is orders of magnitude higher. I don't assign a high probability to an AI-triggered singularity, but this is just... wow. One thing seems pretty clear: if an AGI does do a hard takeoff to control its light cone, the result is likely to be really bad, simply because so many people are being stupid about it.

Part of me worries that the fact that the SIAI people are thinking much more carefully about some of these issues should maybe suggest that recursively self-improving AI is much more likely than I estimate.

Comment author: pedanterrific 20 September 2011 03:11:47AM 6 points

To me, the frightening thing isn't the original mistake (though it is egregious); it's that the response to having it pointed out was "You should not assume such a poor implementation of my idea that it cannot make discriminations that are trivial to current humans" rather than "Oops!"