
ChristianKl comments on AlphaGo versus Lee Sedol - Less Wrong Discussion

Post author: gjm 09 March 2016 12:22PM


Comments (183)


Comment author: turchin 11 March 2016 12:52:18PM 0 points

It is also interesting to consider the size of AlphaGo.

Wikipedia says: "The distributed version in October 2015 was using 1,202 CPUs and 176 GPUs" (and it was developed by a team of 100 scientists). Assuming these were the best GPUs on the market in 2015, each delivering around 1 teraflop, the total power of AlphaGo was around 200 teraflops or more. (I would estimate 100 teraflops to 1 petaflop with 75% probability.) I also think the size of the program is on the order of terabytes, but I conclude that only from the number of computers in use.
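The commenter's back-of-envelope estimate can be sketched in a few lines. The per-GPU throughput figure is an assumption from the comment, not AlphaGo's actual published specs:

```python
# Sketch of the comment's estimate. The ~1 teraflop per GPU figure is
# the commenter's assumption about a high-end 2015 GPU, not a measured value.
NUM_GPUS = 176          # from the Wikipedia figure quoted in the comment
TFLOPS_PER_GPU = 1.0    # assumed throughput of a 2015-era GPU

total_tflops = NUM_GPUS * TFLOPS_PER_GPU
print(total_tflops)  # 176.0 -- same order of magnitude as "around 200 teraflops"
```

This ignores the 1,202 CPUs entirely, which is why the comment rounds up to "200 teraflops or more".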

This could give us a minimal size for an AI at the current level of technology. Fooming for such an AI would not be easy, as it would require sizeable new resources and a rewrite of its complicated inner structure.

And it is also not computer-virus-sized yet, so it can't run away. A private researcher probably doesn't have such computational resources, but a hacker could use a botnet.

But if such an AI is used to create more effective master algorithms, it may foom.

Comment author: ChristianKl 14 March 2016 06:17:53PM 1 point

Demis said that AlphaGo also works on a single computer. The distributed version has a 75% winning chance against the single-computer version. The hardware they used seems to be near the point of diminishing returns from adding additional hardware.
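As a rough sketch of what a 75% win rate means in strength terms, the standard Elo logistic model (not anything stated in the comment itself) converts a win probability into a rating gap:

```python
import math

def elo_gap(p_win: float) -> float:
    """Elo rating difference implied by win probability p_win,
    using the standard logistic model: p = 1 / (1 + 10**(-d/400))."""
    return -400 * math.log10(1 / p_win - 1)

# Under this model, a 75% win rate is roughly a 190-point Elo gap.
print(round(elo_gap(0.75)))  # 191
```

So by this model the distributed version is meaningfully, but not overwhelmingly, stronger than the single-computer version, consistent with the diminishing-returns point.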