
Stuart_Armstrong comments on The metaphor/myth of general intelligence - Less Wrong Discussion

Post author: Stuart_Armstrong 18 August 2014 04:04PM




Comment author: Stuart_Armstrong 19 August 2014 12:09:02PM 2 points

The challenge is not to combine different algorithms within the same area, but across different areas. A social bot and a stock market predictor - how should they best interface? And how would you automate the construction of such interfaces?

Comment author: Cyan 19 August 2014 08:35:33PM 3 points

Meh. That's only a problem in practice, not in principle. In principle, all prediction problems can be reduced to binary sequence prediction. (By which I mean that, in principle, there's only one "area".)
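[Editor's note: an illustrative sketch of the reduction gestured at here, not anything from the thread. Once a prediction task's inputs and outputs are serialized to bits, a single next-bit predictor covers every "area"; the Laplace rule-of-succession predictor below is a made-up minimal example.]

```python
def laplace_predict(bits):
    """P(next bit = 1) given the bits seen so far (Laplace's rule of succession)."""
    return (sum(bits) + 1) / (len(bits) + 2)

# Any prediction problem whose data can be serialized becomes a bit stream;
# the same predictor then applies regardless of the original domain.
stream = [1, 0, 1, 1, 1, 0, 1, 1]
p_one = laplace_predict(stream)  # (6 + 1) / (8 + 2) = 0.7
```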

Comment author: Stuart_Armstrong 20 August 2014 10:40:55AM 2 points

And is thinking in terms of that principle leading us astray in practice? After all, humans don't learn social interactions by reducing them to bit sequence prediction...

Comment author: Cyan 20 August 2014 02:33:30PM 1 point

No, we totally do... in principle.

Comment author: djm 20 August 2014 03:30:49AM 1 point

Automatic construction of general interfaces would be tricky, to say the least. It would surely depend on why agent A needs to interface with agent B in the first place. For general data transfer (location, status, random data) it would be fine, but unless both agents understood each other's internal models/goals/thought processes, it seems unlikely they would benefit from a transfer except at an aggregate level.

Comment author: roystgnr 19 August 2014 03:48:04PM 1 point

The theorem in Cyan's link assumes that the output of each predictor is a single prediction. If each predictor instead output a probability distribution over predictions, could we again find an optimal algorithm? If so, the only remaining trick would seem to be getting specialized algorithms to output higher-uncertainty predictions when facing questions further from their "area".

Comment author: Stuart_Armstrong 20 August 2014 10:37:11AM 1 point

Say you need to plan an expedition, like Columbus. How much time should you spend schmoozing with royalty to get more money, how much inspecting the ships, how much testing the crew, etc. - and how do these all interact? The narrow predictors would answer domain-specific questions, but you need to be able to meld and balance the information in some way.
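[Editor's note: one toy way to picture the "melding" step - the editor's framing, not Armstrong's proposal. Treat each narrow predictor as returning a success probability as a function of time invested in its domain, and have the planner search allocations of a fixed time budget. All three curves below are invented illustrations.]

```python
from itertools import product

def p_funding(hours):   # schmoozing with royalty
    return min(1.0, 0.1 + 0.08 * hours)

def p_ships(hours):     # inspecting the ships
    return min(1.0, 0.3 + 0.07 * hours)

def p_crew(hours):      # testing the crew
    return min(1.0, 0.2 + 0.06 * hours)

def best_allocation(budget):
    """Brute-force search over integer splits of the time budget,
    maximising the joint success probability (domains assumed independent)."""
    best, best_p = None, -1.0
    for a, b in product(range(budget + 1), repeat=2):
        c = budget - a - b
        if c < 0:
            continue
        p = p_funding(a) * p_ships(b) * p_crew(c)
        if p > best_p:
            best, best_p = (a, b, c), p
    return best, best_p

alloc, p = best_allocation(10)
```

The interesting part is exactly what the comment flags: the independence assumption in `best_allocation` is doing the melding, and a real planner would need the cross-domain interactions that no single narrow predictor supplies.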