SimonF comments on The Irrationality Game - Less Wrong
There is no such thing as general intelligence, i.e. an algorithm that is "capable of behaving intelligently over many domains" without being specifically designed for those domains. As a corollary, AI will not go FOOM. (80% confident)
EDIT: Quote from here
Do you apply this to yourself?
Yes!
Humans are "designed" to act intelligently in the physical world here on Earth; we have complex adaptations for this environment. I don't think we are capable of acting effectively in "strange" environments: e.g. we are bad at predicting quantum-mechanical systems, programming computers, etc.
But we can recursively self-optimize ourselves for understanding mechanical systems or programming computers. Not infinitely, of course, but with different hardware it seems extremely plausible to smash through whatever ceiling a human might have, with the brute force of many calculated iterations of whatever humans are using.
And this is before the computer uses its knowledge to reoptimize its optimization process.
I understand the concept of recursive self-optimization and I don't consider it to be very implausible.
Yet I am very sceptical: is there any evidence that algorithm-space has enough structure to allow for the kind of effective search such an optimization would require?
I'm also not convinced that the human mind is a good counterexample; e.g. I do not know how much I could improve on the source code of a simulation of my brain even once the simulation itself runs effectively.
I count "algorithm-space is really really really big" as at least some form of evidence. ;)
Mind you, by "is there any evidence?" you really mean "does the evidence lead to a high assigned probability?" That being the case, "No Free Lunch" must also be considered. Even so, NFL in this case mostly suggests that a general intelligence algorithm will be systematically bad at being generally stupid.
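The NFL intuition invoked here can be checked at toy scale. A minimal sketch, assuming the standard black-box-search formulation of the theorem: averaged over all possible objective functions on a finite domain, any two fixed search orders need the same expected number of queries to find the maximum, so no searcher beats another "in general".

```python
from itertools import product

DOMAIN_SIZE = 3
VALUES = range(3)

def queries_to_max(f, order):
    """Number of evaluations until the searcher first sees max(f)."""
    best = max(f)
    for i, x in enumerate(order, start=1):
        if f[x] == best:
            return i

# Two fixed (hypothetical) search strategies over the same domain.
orders = {"left-to-right": [0, 1, 2], "right-to-left": [2, 1, 0]}
for name, order in orders.items():
    # Average over ALL 3**3 = 27 functions from the domain to VALUES.
    total = sum(queries_to_max(f, order)
                for f in product(VALUES, repeat=DOMAIN_SIZE))
    print(f"{name}: average {total / 3 ** DOMAIN_SIZE:.4f} queries")
```

Both averages come out identical, which is exactly the NFL result: any advantage a searcher has on some functions is cancelled on others.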
Considerations that lead me to believe a general intelligence algorithm is likely include the observation that we can already see progressively more general problem-solving processes in evidence just by looking at mammals. I also take more evidence from humanity than you do. Not because I think humans are good at general intelligence. We suck at it; it's something that has been tacked on to our brains relatively recently, and it is far less efficient than our more specific problem-solving faculties. But the point is that we can do general intelligence of a form, eventually, if we dedicate ourselves to the problem.
You're putting 'effectively' here in place of 'intelligently' in the original assertion.
I understand "capable of behaving intelligently" to mean "capable of achieving complex goals in complex environments", do you disagree?
I don't disagree. Are you saying that humans aren't capable of achieving complex goals in the domains of quantum mechanics or computer programming?
This is of course a matter of degree, but basically yes!
Can you give any idea what these complex goals would look like? Or conversely, describe some complex goals humans can achieve, which are fundamentally beyond an entity with a similar abstract reasoning capabilities as humans have, but lack some of humans' native capabilities for dealing more efficiently with certain types of problems?
The obvious examples are problems where a slow reaction time leads to failure, but these don't seem to say much about the agents' general complexity-handling abilities.
I'll try to give examples:
For computer programming: Given a simulation of a human brain, improve it so that the simulated human is significantly more intelligent.
For quantum mechanics: Design a high-temperature superconductor from scratch.
Are humans better than brute-force at a multi-dimensional version of chess where we can't use our visual cortex?
We have a way to use brute force to achieve general optimisation goals? That seems like a good start to me!
Sure there is - see:
The only assumption about the environment is that Occam's razor applies to it.
Of course you're right in the strictest sense! I should have included something along the lines of "an algorithm that can be efficiently computed", this was already discussed in other comments.
IMO, it is best to think of power and breadth being two orthogonal dimensions - like this.
The idea that general intelligence is not practical for resource-limited agents apparently mixes up these two dimensions, which are best seen as orthogonal. Or maybe the idea is that if you are broad, you can't also be deep while remaining quick to compute. I don't think that idea is correct.
I would compare the idea to saying that we can't build a general-purpose compressor. However: yes, we can.
I don't think the idea that "there is no such thing as general intelligence" can be rescued by invoking resource limitation. It is best to abandon the idea completely and label it as a dud.
That is a very good point, with wideness orthogonal to power.
Evolution is broad but weak. Humans (and presumably AGI) are broad and powerful. Expert systems are narrow and powerful. Anything weak and narrow can barely be called intelligent.
I don't care about that specific formulation of the idea; maybe Robin Hanson's formulation that there exists no "grand unified theory of intelligence" is clearer? (link)
Clear - but also clearly wrong. Robin Hanson says:
...but the answer seems simple. A big part of "betterness" is the ability to perform inductive inference, which is not a human-specific concept. We do already have a powerful theory about that, which we discovered in the last 50 years. It doesn't immediately suggest implementation strategy - which is what we need. So: more discoveries relating to this seem likely.
Clearly, I do not understand how this data point should influence my estimate of the probability that general, computationally tractable methods exist.
To me it seems a lot like the question of whether general, computationally tractable methods of compression exist.
Provided you are allowed to assume that the expected inputs obey some vaguely-sensible version of Occam's razor, I would say that the answer is just "yes, they do".
Can you unpack "algorithm" and why you think an intelligence is one?
I'm not sure what your point is; I don't think I use the term "algorithm" in a non-standard way.
Wikipedia says: "Thus, an algorithm can be considered to be any sequence of operations that can be simulated by a Turing-complete system."
When talking about "intelligence" I assume we are talking about a goal-oriented agent, controlled by an algorithm as defined above.
Does it make sense to describe the computer system in front of you as being controlled by a single algorithm? If so, that would have to be the fetch-execute cycle, which may not halt or be a finite sequence. This kind of system is sometimes called an interaction machine or persistent Turing machine, so some would say it is not an algorithm.
The fetch-execute cycle tells you very little about what problems your computer might be able to solve, since it can download code from all over the place. Similarly, if you think of an intelligence as this sort of system, you cannot bound what problems it might be able to solve. At any given time it won't have the programming to solve all problems well, but it can modify the programming it does have.
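This point can be sketched with a toy interpreter (the instruction set is invented for illustration): the fetch-execute loop itself is a trivial, fixed algorithm, but because the program it runs can overwrite its own instructions, the loop alone tells you nothing about what the system will eventually compute.

```python
def run(program, fuel=100):
    """A minimal fetch-execute loop over a mutable program."""
    pc, acc = 0, 0
    while 0 <= pc < len(program) and fuel > 0:
        op, arg = program[pc]                 # fetch
        if op == "add":                       # execute
            acc += arg
        elif op == "store":
            # Self-modification: overwrite instruction `arg` with new code.
            program[arg] = ("add", acc)
        elif op == "halt":
            return acc
        pc += 1
        fuel -= 1
    return acc

# The program rewrites instruction 2 (a halt) before reaching it.
prog = [("add", 5), ("store", 2), ("halt", 0), ("halt", 0)]
print(run(prog))  # prints 10, not the 5 the static program suggests
```

Statically, the program looks like it halts with 5; at run time it executes code that did not exist when it started, which is the sense in which the fetch-execute cycle cannot bound what the machine solves.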
Do you behave intelligently in domains you were not specifically designed(/selected) for?
No, I don't think I would be capable of it if the domain is sufficiently different from the EEA (the environment of evolutionary adaptedness).