timtyler comments on The Irrationality Game - Less Wrong

38 Post author: Will_Newsome 03 October 2010 02:43AM


Comment author: timtyler 22 June 2011 02:05:43PM *  1 point [-]

There is no such thing as general intelligence, i.e. an algorithm that is "capable of behaving intelligently over many domains" if not specifically designed for these domain(s).

Sure there is - see:

The only assumption about the environment is that Occam's razor applies to it.

Comment author: SimonF 22 June 2011 02:24:54PM *  3 points [-]

Of course you're right in the strictest sense! I should have included something along the lines of "an algorithm that can be efficiently computed"; this was already discussed in other comments.

Comment author: timtyler 22 June 2011 02:33:14PM *  1 point [-]

IMO, it is best to think of power and breadth as two orthogonal dimensions - like this:

  • narrow <-> broad;
  • weak <-> powerful.

The idea that general intelligence is not practical for resource-limited agents apparently mixes up these two dimensions, which are best seen as orthogonal. Or maybe the idea is that if you are broad, you can't also be deep and quickly computable. I don't think that idea is correct.

I would compare the idea to saying that we can't build a general-purpose compressor. However: yes we can.
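A minimal sketch of that claim, using Python's standard `zlib` (a DEFLATE implementation) as a stand-in for a "general-purpose compressor" - one algorithm, no domain-specific tuning, applied to inputs from unrelated domains:

```python
import zlib

# One general-purpose compressor (DEFLATE, via zlib) applied to inputs
# from unrelated "domains" -- prose, tabular numbers, markup -- with no
# per-domain design work. The sample data below is purely illustrative.
samples = {
    "prose": b"the quick brown fox jumps over the lazy dog " * 40,
    "table": b"1,2,3,4,5,6,7,8,9,10\n" * 40,
    "markup": b"<row><a>1</a><b>2</b></row>" * 40,
}

for name, data in samples.items():
    compressed = zlib.compress(data)
    ratio = len(compressed) / len(data)
    print(f"{name}: {len(data)} -> {len(compressed)} bytes (ratio {ratio:.2f})")
    assert zlib.decompress(compressed) == data  # lossless in every domain
```

The same algorithm shrinks all three inputs, because each contains regularity it can exploit - which is the sense in which a compressor can be "general-purpose" without being designed for any particular domain.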

I don't think the idea that "there is no such thing as general intelligence" can be rescued by invoking resource limitation. It is best to abandon the idea completely and label it as a dud.

Comment author: [deleted] 18 April 2012 04:16:21PM 1 point [-]

That is a very good point, with breadth orthogonal to power.

Evolution is broad but weak. Humans (and presumably AGI) are broad and powerful. Expert systems are narrow and powerful. Anything weak and narrow can barely be called intelligent.

Comment author: SimonF 22 June 2011 06:09:27PM 0 points [-]

I'm not wedded to that specific formulation of the idea; maybe Robin Hanson's version - that there exists no "grand unified theory of intelligence" - is clearer? (link)

Comment author: timtyler 22 June 2011 07:29:54PM *  0 points [-]

Clear - but also clearly wrong. Robin Hanson says:

After all, we seem to have little reason to expect there is a useful grand unified theory of betterness to discover, beyond what we already know. “Betterness” seems mostly a concept about us and what we want – why should it correspond to something out there about which we can make powerful discoveries?

...but the answer seems simple. A big part of "betterness" is the ability to perform inductive inference, which is not a human-specific concept. We already have a powerful theory about that, discovered within the last 50 years. It doesn't immediately suggest an implementation strategy - which is what we need. So more discoveries relating to this seem likely.
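The comment doesn't name the theory, but the standard formalization of Occam-style inductive inference from that period is Solomonoff's universal prior, which weights every hypothesis (program) by its length:

```latex
% Solomonoff's universal prior (an assumption: the comment does not name
% the theory, but this is the usual candidate). U is a universal prefix
% machine; the sum ranges over programs p whose output begins with x.
M(x) \;=\; \sum_{p \,:\, U(p) = x*} 2^{-|p|}
```

Shorter programs dominate the sum, so simpler hypotheses get more prior weight - a formal Occam's razor. The prior is incomputable, which fits the remark that the theory "doesn't immediately suggest an implementation strategy."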

Comment author: SimonF 23 June 2011 10:31:15AM 0 points [-]

Clearly, I do not understand how this data point should influence my estimate of the probability that general, computationally tractable methods exist.

Comment author: timtyler 23 June 2011 08:08:08PM 0 points [-]

To me it seems a lot like the question of whether general, computationally tractable methods of compression exist.

Provided you are allowed to assume that the expected inputs obey some vaguely-sensible version of Occam's razor, I would say that the answer is just "yes, they do".
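A sketch of that caveat, again using Python's `zlib` for illustration: a general-purpose compressor runs quickly and succeeds exactly where the input has regularity; incompressible (random) input is the case the Occam's-razor assumption excludes.

```python
import os
import zlib

# Regular input: plenty of structure for a general method to exploit.
regular = b"abab" * 1000

# Random input: violates the Occam's-razor assumption -- no structure,
# so no general compressor can shrink it (zlib falls back to "stored"
# blocks, which add a few bytes of overhead).
random_bytes = os.urandom(4000)

assert len(zlib.compress(regular)) < len(regular) // 10
assert len(zlib.compress(random_bytes)) >= len(random_bytes)
```

Both calls complete in well under a millisecond on ordinary hardware, which is the "computationally tractable" half of the claim; the two assertions show that success depends on the input obeying some version of Occam's razor, not on domain-specific design.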