John_Maxwell_IV comments on Reply to Holden on 'Tool AI' - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
My summary (now with endorsement by Eliezer!):
How do we know that the problem is finite? Proving a computer program safe from being hacked is considered NP-hard. Google Chrome was recently hacked by chaining 14 different bugs together. A working AGI is probably at least as complex as Google Chrome, so proving it safe will likely also be NP-hard.
Google Chrome doesn't even self modify.
This point seems missing:
A system that undertakes extended processes of research and thinking, generating new ideas and writing new programs for internal experiments, seems both much more effective and much more potentially risky than something like a chess program with a simple, fixed algorithm that searches over a fixed, narrow representation of the world (a chess board).
Looks pretty good, actually. Nice.
So you wrote 10x too much then?
I'm not really sure what's meant by this.
For example, in computer vision, you can input an image and get a classification as output. The input is supplied by a human. The computation doesn't involve the human. The output is well defined. The same could be true of a tool AI that makes predictions.
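The tool-AI pattern described in that comment can be sketched as a stateless, pure function: a human supplies the input, the system returns a well-defined output, and no actions are taken in the world. This is a minimal illustration only; the classifier below is a hypothetical toy rule standing in for a trained vision model.

```python
def classify_image(pixels: list[list[int]]) -> str:
    """Map a grayscale pixel grid to a label, tool-AI style.

    The computation keeps no state and performs no side effects:
    the human supplies the input, reads the prediction, and decides
    what (if anything) to do with it.
    """
    # Toy stand-in for a real model: bright images are labeled "day",
    # dark images "night".
    flat = [p for row in pixels for p in row]
    mean_brightness = sum(flat) / len(flat)
    return "day" if mean_brightness > 127 else "night"


print(classify_image([[200, 210], [190, 220]]))  # -> day
print(classify_image([[10, 20], [5, 15]]))       # -> night
```

The point of the sketch is the interface shape, not the classification rule: prediction-only tools of this form have a well-defined output and leave the decision to act with the human.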
Both Andrew Ng and Jeff Hawkins think that tool AI is the most likely approach.
I would consider 3 to be a few.
That is about how I read it.