timtyler comments on Safety Culture and the Marginal Effect of a Dollar - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
The "key insight" model seems deeply flawed. We know that the technical side of the problem involves performing inductive inference, which is a close cousin of stream compression. So progress is very likely to look like progress with stream compression: some low-hanging fruit, and then gradually diminishing returns, rather like digging a big hole in the ground.
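The diminishing-returns pattern is easy to see in ordinary stream compression. A minimal sketch using Python's standard zlib module (the sample data and the interpretation of "effort" as zlib's compression level are just illustrative, not anything from the comment):

```python
import zlib

# Repetitive sample data: the "low-hanging fruit" case.
data = b"the quick brown fox jumps over the lazy dog " * 200

# Compressed size at every effort level zlib offers (1 = fastest, 9 = best).
sizes = {level: len(zlib.compress(data, level)) for level in range(1, 10)}

first_gain = len(data) - sizes[1]  # raw stream -> cheapest effort
extra_gain = sizes[1] - sizes[9]   # cheapest effort -> maximum effort
print(len(data), sizes[1], sizes[9])
```

On typical repetitive input, almost all of the size reduction comes from the first, cheapest pass; cranking the level up to 9 buys comparatively little, so the hole gets deeper much more slowly the longer you dig.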
Here's Bob Mottram making much the same point as I just made:
How confident should we be that general AI involves solely hard work on existing problems like performing inductive inference? I agree that if there are no more Key Insights, and instead just a bunch of insights that some researcher will eventually have, then most of the gains from the proposal can't be realized. Next steps: somehow estimate the probability that there are 0, 1, or several Key Insights remaining before general AI is "just" a matter of tons of hard research/experimentation, and estimate the gains from the 100-paper-strategy for the scenarios in which there are 0 or several Key Insights remaining.
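Those next steps amount to a simple expected-value calculation over the Key Insight scenarios. A sketch with made-up numbers (the probabilities and per-scenario gains below are placeholders for illustration, not estimates from the post):

```python
# Hypothetical scenario probabilities: how many Key Insights remain
# before general AI is "just" a matter of hard research/experimentation.
p = {"zero": 0.3, "one": 0.4, "several": 0.3}

# Hypothetical relative gain of the 100-paper strategy in each scenario
# (arbitrary units; the strategy helps least when no Key Insights remain).
gain = {"zero": 0.1, "one": 1.0, "several": 0.5}

expected_gain = sum(p[k] * gain[k] for k in p)
print(expected_gain)
```

The point of the exercise is only that the proposal's value is dominated by the scenarios you assign probability to, so the estimation step cannot be skipped.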
I didn't really claim that. There's also the whole issue of what utility function to use, and some other things as well (tree pruning strategies, for instance). My claim was just that inductive inference is the key technology for the technical side of the problem: the part not to do with values.
Much has been written about the link between induction and intelligence: Hutter, Mahoney, me.