
Houshalter comments on AIFoom Debate - conclusion? - Less Wrong Discussion

11 Post author: Bound_up 04 March 2016 08:33PM




Comment author: entirelyuseless 07 March 2016 04:54:20PM 0 points

Did Google actually say how long it took to train AlphaGo? In any case, even if it took a week or less, that is not strong evidence that an AGI could go from knowing nothing to knowing a reasonable amount in a week. It could easily take months, even if it would learn faster than a human being. You need to learn far more for general intelligence than for playing Go.

Comment author: Houshalter 08 March 2016 07:07:59PM 0 points

First, at least it establishes a minimum. If an AI can learn the basics of English in a day, it still has that much of a head start over humans. Even if it takes longer to master the rest of the language, you can cut at least three years off the training time, and presumably the rest can be learned at a similarly rapid rate.

It also establishes that AI can teach itself specialized skills very rapidly. Today it learns the basics of language, tomorrow the basics of programming, the day after that vision, then engineering, nanotechnology, and so on. This ability is far beyond anything humans can do, and it would give the AI a huge advantage.

Finally, even if it takes months, that's still FOOM. I don't know where the cutoff point is, but anything that advances at a pace that rapid is dangerous. It's very different from the alternative "slow takeoff" scenarios, where AI takes years and years to advance to a superhuman level.