ChristianKl comments on AlphaGo versus Lee Sedol - Less Wrong Discussion
You are viewing a comment permalink. View the original post to see all comments and the full post content.
It is also interesting to consider the size of AlphaGo.
Wiki says: "The distributed version in October 2015 was using 1,202 CPUs and 176 GPUs" (and it was developed by a team of about 100 scientists). Assuming those were the best GPUs on the market in 2015, each with roughly 1 teraflop of throughput, the total power of AlphaGo was around 200 teraflops or more. (I would estimate 100 teraflops to 1 petaflop with 75% probability.) I also think the program occupies terabytes of storage, but I conclude that only from the number of computers in use.
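To make the arithmetic behind that estimate explicit, here is a quick back-of-the-envelope calculation. The per-GPU figure of ~1 teraflop is an assumption about 2015-era hardware, and the CPU contribution is ignored:

```python
# Back-of-the-envelope estimate of AlphaGo's distributed GPU compute.
# Assumed figure: ~1 TFLOPS per 2015-era GPU (the 176 GPU count is from
# the quote above; CPU contribution is ignored).
n_gpus = 176
tflops_per_gpu = 1.0  # rough throughput assumption for one 2015 GPU

total_tflops = n_gpus * tflops_per_gpu
print(f"~{total_tflops:.0f} TFLOPS from GPUs alone")  # ~176 TFLOPS
```

Which lands in the same ballpark as the "around 200 teraflops or more" figure quoted above.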
This could give us a minimal size for an AI at the current level of technology. Fooming would not be easy for such an AI, as it would require sizeable new resources and rewriting of its complicated inner structure.
And it is also not computer-virus-sized yet, so it can't run away. A private researcher probably doesn't have such computational resources, but a hacker could use a botnet.
But if such an AI is used to create more effective master algorithms, it might foom.
Demis said that AlphaGo also works on a single computer. The distributed version has a 75% winning chance against the single-computer version. The hardware they used seems to be near the point of diminishing returns from adding additional hardware.
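For intuition, a 75% win rate can be translated into an approximate Elo gap using the standard logistic Elo model (this conversion is my own; DeepMind did not state the gap in Elo terms here):

```python
import math

# Convert a 75% head-to-head win rate into an approximate Elo gap
# using the standard logistic Elo model: p = 1 / (1 + 10^(-d/400)).
# Solving for d gives d = 400 * log10(p / (1 - p)).
p_win = 0.75  # distributed version's win rate vs. single-machine version
elo_gap = 400 * math.log10(p_win / (1 - p_win))
print(f"Elo advantage of distributed over single-machine: ~{elo_gap:.0f}")
```

So multiplying the hardware many times over buys roughly a 190-Elo improvement, which is consistent with the diminishing-returns point.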