SimonF comments on The Irrationality Game - Less Wrong

38 Post author: Will_Newsome 03 October 2010 02:43AM


Comment author: SimonF 05 October 2010 04:17:18PM *  36 points [-]

There is no such thing as general intelligence, i.e. an algorithm that is "capable of behaving intelligently over many domains" without being specifically designed for those domains. As a corollary, AI will not go FOOM. (80% confident)

EDIT: Quote from here

Comment author: wedrifid 05 October 2010 05:02:03PM 3 points [-]

Do you apply this to yourself?

Comment author: SimonF 05 October 2010 05:13:49PM *  3 points [-]

Yes!

Humans are "designed" to act intelligently in the physical world here on Earth; we have complex adaptations for this environment. I don't think we are capable of acting effectively in "strange" environments, e.g. we are bad at predicting quantum mechanical systems, programming computers, etc.

Comment author: RomanDavis 06 October 2010 06:31:46AM 1 point [-]

But we can recursively self-optimize ourselves for understanding mechanical systems or programming computers. Not infinitely, of course, but with different hardware it seems extremely plausible to smash through whatever ceiling a human might have, using the brute force of many calculated iterations of whatever humans are using.

And this is before the computer uses its knowledge to reoptimize its optimization process.
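The "reoptimize its optimization process" step can be sketched with a toy two-level optimizer: a base-level hill climber whose own step-size parameter is tuned by a meta-level search. This is only an illustrative sketch; the objective function, parameter grid, and iteration budget are arbitrary choices, not anything claimed in the thread:

```python
import random

def hill_climb(f, x0, step, iters=200, seed=0):
    """Base-level optimizer: greedy random local search with a fixed step size."""
    rng = random.Random(seed)
    x, best = x0, f(x0)
    for _ in range(iters):
        cand = x + rng.uniform(-step, step)
        if f(cand) > best:
            x, best = cand, f(cand)
    return best

def meta_optimize(f, step_choices):
    """Meta-level: search over the base optimizer's own step-size parameter."""
    return max(step_choices, key=lambda s: hill_climb(f, x0=0.0, step=s))

f = lambda x: -(x - 3.0) ** 2               # objective with its peak at x = 3
best_step = meta_optimize(f, [0.01, 0.1, 1.0])
```

With the tiny 0.01 step the climber cannot even reach the peak within its budget, so the meta-level reliably picks a better parameter than the worst one; the point is only that "optimizing the optimizer" is itself an ordinary search problem.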

Comment author: SimonF 06 October 2010 09:24:17AM *  1 point [-]

I understand the concept of recursive self-optimization and I don't consider it to be very implausible.

Yet I am very sceptical: is there any evidence that algorithm-space has enough structure to allow an effective search for such an optimization?

I'm also not convinced that the human mind is a good counterexample, e.g. I do not know how much I could improve on the source code of a simulation of my brain, even once the simulation itself runs effectively.

Comment author: wedrifid 06 October 2010 10:25:40AM *  2 points [-]

Yet I am very sceptical: is there any evidence that algorithm-space has enough structure to allow an effective search for such an optimization?

I count "algorithm-space is really really really big" as at least some form of evidence. ;)

Mind you, by "is there any evidence?" you really mean "does the evidence lead to a high assigned probability?" That being the case, "No Free Lunch" must also be considered. Even so, NFL in this case mostly suggests that a general intelligence algorithm will be systematically bad at being generally stupid.
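The NFL point can be made concrete on a toy scale: over a small finite domain you can enumerate every possible objective function and check that every fixed search order needs the same average number of probes to first find the maximum. A minimal sketch (the 3-point domain and {0, 1, 2} codomain are arbitrary choices):

```python
from itertools import product, permutations

# Toy domain of 3 points; the objectives are ALL functions from it to {0, 1, 2}.
domain = (0, 1, 2)
objectives = list(product((0, 1, 2), repeat=3))   # f[i] = value at point i

def probes_to_max(order, f):
    """Probes a fixed, non-repeating search order needs to first hit f's maximum."""
    best = max(f)
    for n, x in enumerate(order, 1):
        if f[x] == best:
            return n

# Averaged over *all* objectives, every search order performs identically.
averages = {order: sum(probes_to_max(order, f) for f in objectives) / len(objectives)
            for order in permutations(domain)}
assert len(set(averages.values())) == 1           # no free lunch: all orders tie
```

The tie only appears because the average runs over every function, structured and unstructured alike; restrict the objectives to a structured subset and the tie breaks, which is the usual reply to NFL in this context.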

Considerations that lead me to believe that a general intelligence algorithm is likely include the observation that we can already see progressively more general problem-solving processes in evidence just by looking at mammals. I also take more evidence from humanity than you do. Not because I think humans are good at general intelligence. We suck at it; it's something that has been tacked on to our brains relatively recently, and it is far less efficient than our more specific problem-solving faculties. But the point is that we can do general intelligence, of a form, eventually, if we dedicate ourselves to the problem.

Comment author: Risto_Saarelma 06 October 2010 06:12:10AM 1 point [-]

I don't think we are capable of acting effectively in "strange" environments, e.g. we are bad at predicting quantum mechanical systems, programming computers, etc.

You're putting 'effectively' here in place of 'intelligently' in the original assertion.

Comment author: SimonF 06 October 2010 08:56:01AM 0 points [-]

I understand "capable of behaving intelligently" to mean "capable of achieving complex goals in complex environments", do you disagree?

Comment author: Risto_Saarelma 06 October 2010 09:04:42AM *  0 points [-]

I don't disagree. Are you saying that humans aren't capable of achieving complex goals in the domains of quantum mechanics or computer programming?

Comment author: SimonF 06 October 2010 09:16:22AM 1 point [-]

This is of course a matter of degree, but basically yes!

Comment author: Risto_Saarelma 06 October 2010 09:34:37AM 0 points [-]

Can you give any idea what these complex goals would look like? Or conversely, describe some complex goals humans can achieve which are fundamentally beyond an entity that has abstract reasoning capabilities similar to humans', but lacks humans' native capabilities for dealing more efficiently with certain types of problems?

The obvious examples are problems where a slow reaction time will lead to failure, but these don't seem to tell that much about the general complexity handling abilities of the agents.

Comment author: SimonF 06 October 2010 09:53:07AM *  2 points [-]

I'll try to give examples:

For computer programming: Given a simulation of a human brain, improve it so that the simulated human is significantly more intelligent.

For quantum mechanics: Design a high-temperature superconductor from scratch.

Are humans better than brute-force at a multi-dimensional version of chess where we can't use our visual cortex?

Comment author: wedrifid 06 October 2010 10:03:03AM 0 points [-]

Are humans better than brute-force at a multi-dimensional version of chess where we can't use our visual cortex?

We have a way to use brute force to achieve general optimisation goals? That seems like a good start to me!
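Brute force as a fully general optimisation method can be sketched with a game-independent minimax search: the routine below knows nothing about the game beyond the rule callbacks it is handed, so it needs no "visual cortex". One-heap Nim is used here as a hypothetical stand-in domain (multi-dimensional chess would only change the callbacks, not the search):

```python
def minimax(state, moves, is_terminal, score, maximizing=True):
    """Generic brute-force game search: it knows nothing about the game
    beyond the rule callbacks it is handed."""
    if is_terminal(state):
        return score(state, maximizing), None
    best_val, best_move = None, None
    for m in moves(state):
        val, _ = minimax(m(state), moves, is_terminal, score, not maximizing)
        if best_move is None or (val > best_val if maximizing else val < best_val):
            best_val, best_move = val, m
    return best_val, best_move

# Stand-in domain: one-heap Nim, take 1-3 stones per turn, last stone wins.
moves = lambda n: [lambda s, k=k: s - k for k in (1, 2, 3) if k <= n]
is_terminal = lambda n: n == 0
# At a terminal state the player *about to move* lost: the opponent just took the last stone.
score = lambda n, maximizing: -1 if maximizing else 1

val, move = minimax(7, moves, is_terminal, score)   # 7 is a won position: take 3
```

Of course the cost of this generality is exponential search, which is exactly the tractability worry raised elsewhere in the thread.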

Comment author: timtyler 22 June 2011 02:05:43PM *  1 point [-]

There is no such thing as general intelligence, i.e. an algorithm that is "capable of behaving intelligently over many domains" if not specifically designed for these domain(s).

Sure there is - see:

The only assumption about the environment is that Occam's razor applies to it.

Comment author: SimonF 22 June 2011 02:24:54PM *  3 points [-]

Of course you're right in the strictest sense! I should have included something along the lines of "an algorithm that can be efficiently computed"; this was already discussed in other comments.

Comment author: timtyler 22 June 2011 02:33:14PM *  1 point [-]

IMO, it is best to think of power and breadth being two orthogonal dimensions - like this.

  • narrow <-> broad;
  • weak <-> powerful.

The idea that general intelligence is not practical for resource-limited agents apparently mixes up these two dimensions, whereas it is best to see them as orthogonal. Or maybe the idea is that if you are broad, you can't also be very deep and still be computable quickly. I don't think that idea is correct.

I would compare the idea to saying that we can't build a general-purpose compressor. However: yes we can.
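The compressor analogy is easy to demonstrate with an off-the-shelf general-purpose compressor: it was not designed for any of these inputs in particular, yet it shrinks anything with regularity, while (as NFL requires) near-random data does not shrink. A sketch using Python's standard zlib; the sample inputs are arbitrary illustrations:

```python
import zlib, random

structured = [
    b"abcabcabc" * 100,                        # pure repetition
    bytes(i % 7 for i in range(900)),          # periodic numeric pattern
    ("the quick brown fox " * 45).encode(),    # natural-language-like text
]
random.seed(0)
noise = bytes(random.getrandbits(8) for _ in range(900))   # ~incompressible

for data in structured:
    assert len(zlib.compress(data)) < len(data)       # any regularity compresses
assert len(zlib.compress(noise)) > len(noise) * 0.95  # random data does not
```

The single algorithm handles all three structured inputs without being told which kind it is getting, which is the sense in which "general-purpose" survives resource limits.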

I don't think the idea that "there is no such thing as general intelligence" can be rescued by invoking resource limitation. It is best to abandon the idea completely and label it as a dud.

Comment author: [deleted] 18 April 2012 04:16:21PM 1 point [-]

That is a very good point, with breadth orthogonal to power.

Evolution is broad but weak. Humans (and presumably AGI) are broad and powerful. Expert systems are narrow and powerful. Anything weak and narrow can barely be called intelligent.

Comment author: SimonF 22 June 2011 06:09:27PM 0 points [-]

I don't care about that specific formulation of the idea; maybe Robin Hanson's formulation that there exists no "grand unified theory of intelligence" is clearer? (link)

Comment author: timtyler 22 June 2011 07:29:54PM *  0 points [-]

Clear - but also clearly wrong. Robin Hanson says:

After all, we seem to have little reason to expect there is a useful grand unified theory of betterness to discover, beyond what we already know. “Betterness” seems mostly a concept about us and what we want – why should it correspond to something out there about which we can make powerful discoveries?

...but the answer seems simple. A big part of "betterness" is the ability to perform inductive inference, which is not a human-specific concept. We already have a powerful theory about that, discovered in the last 50 years. It doesn't immediately suggest an implementation strategy, which is what we need. So more discoveries relating to this seem likely.

Comment author: SimonF 23 June 2011 10:31:15AM 0 points [-]

Clearly, I do not understand how this data point should influence my estimate of the probability that general, computationally tractable methods exist.

Comment author: timtyler 23 June 2011 08:08:08PM 0 points [-]

To me it seems a lot like the question of whether general, computationally tractable methods of compression exist.

Provided you are allowed to assume that the expected inputs obey some vaguely-sensible version of Occam's razor, I would say that the answer is just "yes, they do".

Comment author: whpearson 05 October 2010 06:50:52PM 1 point [-]

Can you unpack algorithm and why you think an intelligence is one?

Comment author: SimonF 05 October 2010 07:22:33PM *  1 point [-]

I'm not sure what your point is; I don't think I use the term "algorithm" in a non-standard way.

Wikipedia says: "Thus, an algorithm can be considered to be any sequence of operations that can be simulated by a Turing-complete system."

When talking about "intelligence" I assume we are talking about a goal-oriented agent, controlled by an algorithm as defined above.

Comment author: whpearson 05 October 2010 07:51:09PM *  2 points [-]

Does it make sense to describe the computer system in front of you as being controlled by a single algorithm? If so, that would have to be the fetch-execute cycle, which may not halt or be a finite sequence. This kind of system is sometimes called an interaction machine or persistent Turing machine, so some may say it is not an algorithm.

The fetch-execute cycle tells you very little about what problems your computer might be able to solve, since it can download code from all over the place. Similarly, if you think of an intelligence as this sort of system, you cannot bound what problems it might be able to solve. At any given time it won't have the programming to solve all problems well, but it can modify the programming it does have.
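The "can modify the programming it does have" point can be sketched with a toy fetch-execute loop whose instruction table is extensible at run time, so the loop itself places no bound on what the machine may later compute. Everything here (the instruction set, the "def" opcode) is a made-up illustration, not a real machine's design:

```python
# A toy fetch-execute loop: each opcode handler gets (stack, arg, pc) and may
# return a new pc; the instruction table itself can grow while the machine runs.
def run(program, table):
    pc, stack = 0, []
    while pc < len(program):
        op, arg = program[pc]
        pc = table[op](stack, arg, pc) or pc + 1
    return stack

table = {
    "push": lambda stack, arg, pc: stack.append(arg),
    "add":  lambda stack, arg, pc: stack.append(stack.pop() + stack.pop()),
    # "def" installs a brand-new opcode into the running machine's own table.
    "def":  lambda stack, arg, pc: table.update({arg[0]: arg[1]}),
}

prog = [("push", 2), ("push", 3), ("add", None),
        ("def", ("double", lambda stack, arg, pc: stack.append(stack.pop() * 2))),
        ("double", None)]          # an opcode the machine did not start with
result = run(prog, table)          # [10]
```

The fixed dispatch loop executes an instruction that did not exist when it started, which is exactly why the loop alone does not bound the machine's capabilities.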

Comment author: ata 05 October 2010 05:08:18PM 0 points [-]

Do you behave intelligently in domains you were not specifically designed(/selected) for?

Comment author: SimonF 05 October 2010 05:33:03PM 0 points [-]

No, I don't think I would be capable of that if the domain were sufficiently different from the EEA (environment of evolutionary adaptedness).