
turchin comments on Open thread, Mar. 14 - Mar. 20, 2016 - Less Wrong Discussion

3 Post author: MrMind 14 March 2016 08:02AM


Comment author: turchin 15 March 2016 09:39:07PM *  4 points [-]

Probably everybody has seen it already, but EY wrote a long post on FB about AlphaGo which got about 400 shares. The post overestimates AlphaGo's power, and in general it seems to me that EY drew too many conclusions from very little available information (3:0 at the time of the post, and 10 pages of conclusions). The post's comment section includes a contribution from Robin Hanson on the usual topic of foom speed and type. EY later updated his predictions based on Sedol's win in game 4, and stated that even a superhuman AI could make dumb mistakes, which may result in a new type of AI failure.

https://www.facebook.com/yudkowsky/posts/10154018209759228?pnref=story

Comment author: CellBioGuy 15 March 2016 10:31:48PM 1 point [-]

So, what's the difference between 'superhuman with dumb mistakes', 'dumb with some superhuman skills', and 'better at some things and worse at others'?

Comment author: turchin 15 March 2016 11:09:12PM 1 point [-]

I think the difference here is distribution.

'Superhuman with dumb mistakes': four brilliant games, one stupid loss.

'Dumb with some superhuman skills': dumb in one game, unbeatable in another.

'Better at some things and worse at others': different performance in different domains.
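The three profiles can be sketched as different distributions over the same per-game performance components. A toy, purely illustrative Python simulation (all skill numbers and probabilities are invented for illustration, not taken from AlphaGo):

```python
import random

random.seed(0)

def play_games(skill_sampler, n=1000):
    """Win rate against a fixed opponent of skill 1.0 (toy model)."""
    wins = sum(1 for _ in range(n) if skill_sampler() > 1.0)
    return wins / n

def superhuman_buggy():
    # 'Superhuman with dumb mistakes': mostly far above human, rare collapse.
    return 0.1 if random.random() < 0.2 else 2.0

def dumb_with_flashes():
    # 'Dumb with some superhuman skills': the same components, weights flipped.
    return 2.0 if random.random() < 0.2 else 0.1

def mixed_domains():
    # 'Better at some things and worse at others': skill varies by domain.
    return random.choice([0.5, 1.5])

print(play_games(superhuman_buggy))   # ≈ 0.8
print(play_games(dumb_with_flashes))  # ≈ 0.2
print(play_games(mixed_domains))      # ≈ 0.5
```

Note that the first two profiles use the exact same skill components and differ only in the mixture weights, which is the sense in which the difference between them is purely a matter of distribution.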

I think that if a superhuman AI with bugs starts to self-improve, the bugs will accumulate. This will ruin either the AI's power or the AI's goal system; the first outcome is good and the second is bad. I would also suggest that the first AI to attempt self-improvement will still have some bugs. The open question is whether the AI will be able to debug itself. Some bugs may prevent the AI from seeing them as bugs, so they are recurrent. The closest human analogue is the bias of overconfidence: an overconfident person can't understand that there is something wrong with him.
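The self-masking-bug point can be made concrete with a hypothetical toy: an agent whose overconfidence bug deflates the very error estimate its self-audit consults, so the audit certifies the bug as absent. All names and numbers here are invented for illustration:

```python
TRUE_ERROR = 0.3          # the agent's actual error rate (hypothetical)
CONFIDENCE_BIAS = 0.1     # the bug: the agent underweights its own errors

def perceived_error():
    # Every internal process, including the self-audit, can only consult
    # this biased perception of the agent's error rate.
    return TRUE_ERROR * CONFIDENCE_BIAS

def self_audit(threshold=0.1):
    # The audit uses the agent's own (deflated) estimate, so the
    # overconfidence bug passes its own inspection.
    return perceived_error() <= threshold

print(self_audit())  # True, even though TRUE_ERROR exceeds the threshold
```

The bug here is recurrent in exactly the sense described: any debugging routine that relies on the agent's own self-model inherits the distortion it is trying to detect.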