SimonF comments on The Irrationality Game - Less Wrong

38 Post author: Will_Newsome 03 October 2010 02:43AM


Comments (910)


Comment author: SimonF 06 October 2010 08:56:01AM 0 points [-]

I understand "capable of behaving intelligently" to mean "capable of achieving complex goals in complex environments", do you disagree?

Comment author: Risto_Saarelma 06 October 2010 09:04:42AM *  0 points [-]

I don't disagree. Are you saying that humans aren't capable of achieving complex goals in the domains of quantum mechanics or computer programming?

Comment author: SimonF 06 October 2010 09:16:22AM 1 point [-]

This is of course a matter of degree, but basically yes!

Comment author: Risto_Saarelma 06 October 2010 09:34:37AM 0 points [-]

Can you give any idea what these complex goals would look like? Or, conversely, describe some complex goals humans can achieve that are fundamentally beyond an entity with abstract reasoning capabilities similar to humans', but lacking some of humans' native capabilities for dealing more efficiently with certain types of problems?

The obvious examples are problems where a slow reaction time leads to failure, but these don't seem to say much about the agents' general ability to handle complexity.

Comment author: SimonF 06 October 2010 09:53:07AM *  2 points [-]

I'll try to give examples:

For computer programming: Given a simulation of a human brain, improve it so that the simulated human is significantly more intelligent.

For quantum mechanics: Design a high-temperature superconductor from scratch.

Are humans better than brute-force at a multi-dimensional version of chess where we can't use our visual cortex?

Comment author: wedrifid 06 October 2010 10:03:03AM 0 points [-]

Are humans better than brute-force at a multi-dimensional version of chess where we can't use our visual cortex?

We have a way to use brute force to achieve general optimisation goals? That seems like a good start to me!

Comment author: SimonF 06 October 2010 10:08:39AM 0 points [-]

Not a good start if we are facing exponential search spaces! If brute force worked, I imagine the AI problem would already be solved?

Comment author: wedrifid 06 October 2010 10:23:11AM 0 points [-]

Not a good start if we are facing exponential search spaces!

Not particularly. :)

But it would constitute an in-principle method of bootstrapping a more impressive kind of general intelligence. I actually didn't expect you to concede the ability to brute-force 'general optimisation' - the ability to recognise the brute-forced solution is more than half the problem. From there it is just a matter of time until we discover an algorithm that can do the search efficiently.

If brute force worked, I imagine the AI problem would already be solved?

Not necessarily. Biases could easily have made humans worse than brute-force.

Comment author: SimonF 06 October 2010 10:31:11AM 0 points [-]

Please give evidence that "a more impressive kind of general intelligence" actually exists!

Comment author: wedrifid 06 October 2010 11:02:38AM *  4 points [-]

Nod. I noticed your other comment after I wrote the grandparent. I replied there, and I do consider your question there interesting, even though my conclusions are far different from yours.

Note that I've tried to briefly answer what I consider a much stronger variation of your fundamental question. I think the question you actually asked is relatively trivial compared to what you could have asked, so I would be doing you and the topic a disservice by responding only to the question itself. Some notes for reference:

  • Demands of the general form "Where is the evidence for X?" are something of a hangover from traditional rational 'debate' mindsets where the game is one of social advocacy of a position. Finding evidence for something is easy, but it isn't the sort of habit I like to encourage in myself. Advocacy is bad for thinking (but good for creating least-bad justice systems given human limitations).
  • "More impressive than humans" is a ridiculously low bar. It would be absolutely dumbfoundingly surprising if humans just happened to be the best 'general intelligence' achievable in the local area. We haven't had a chance even to reach a local optimum in optimising DNA- and protein-based mammalian general intelligences. Selection pressures only superficially favour creating general intelligence, and in any case the flourishing of human civilisation and intellectual enquiry began basically as soon as we reached the minimum level able to support it. Civilisation didn't wait until our brains reached the best level DNA could support before it kicked in.
  • A more interesting question is whether it is possible to create a general intelligence algorithm that can in principle handle almost any problem, given unlimited resources and time, as opposed to progressively more complex problems requiring algorithms of progressively greater complexity even to solve in principle.
  • Being able to 'brute force' a solution to any problem is itself a significant step towards general intelligence. Even being able to construct ways to brute-force a problem, and to tell whether a brute-forced candidate is in fact a solution, may be harder to find in algorithm space than optimisations thereof.
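The "brute force plus verifier" idea running through this exchange can be sketched in a few lines. This is a toy illustration, not anything proposed in the thread: the problem and its verifier are hypothetical stand-ins. Given only a function that recognises a solution, exhaustive enumeration is an in-principle general optimiser, but the candidate count doubles with every added bit, which is exactly the exponential blow-up SimonF points to.

```python
from itertools import product

def brute_force_search(is_solution, n_bits):
    """Enumerate all 2**n_bits bit-tuples; return the first one the
    verifier accepts, or None if no candidate is a solution."""
    for bits in product((0, 1), repeat=n_bits):
        if is_solution(bits):
            return bits
    return None

# Toy verifier: accept any candidate whose bits sum to exactly 3.
target = lambda bits: sum(bits) == 3

print(brute_force_search(target, 5))  # first 5-bit tuple with three 1s
print(2 ** 20, 2 ** 40)               # candidate counts grow exponentially
```

Note the asymmetry both commenters rely on: `is_solution` only needs to check a candidate, which can be cheap even when finding one by enumeration is hopeless at realistic sizes.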