Summary of main point: I argue that there is a significant probability that creating de novo AGI is an intractable problem. Evolution only solved this problem because of anthropic reasons. Conclusions are drawn regarding priorities in AI risk research.

Sketch of main argument: There are suggestive relations between AGI and NP-completeness. These relations lead me to hypothesize that AGI programs possess large Levin-Kolmogorov complexity, which implies that producing them is a computationally intractable problem. The timing of events in the evolution of human intelligence seems to be consistent with the assumption that evolution's success was anthropic, if we postulate human intelligence as arising from a combination of two modules: an "easy" (low complexity) module and a "hard" (high complexity) module. Therefore, creating superhuman intelligence will require reverse engineering the human brain and will be limited to improving the "easy" module (since creating a better "hard" module is again computationally intractable).

AGI and P vs. NP

There are several arguments that the AGI problem is of a similar "flavor" to problems that are NP-complete.

The first argument is rather vague but IMO still compelling. Many class separations in complexity theory (P vs. NP, L vs. P, R vs. RE) hinge on the existence of a complete language. This means there is a single problem such that solving it under the stronger resource constraints would yield solutions to all problems in the larger class. Similarly, Goedel incompleteness means there is no single algorithm (a program which terminates on all inputs) for proving all provable theorems. It feels like there is a principle of mathematics which rules out algorithms that are "too good to be true": a single "magic wand" to solve all problems. In a similar way, AGI is a "magic wand": it solves "all" problems because you can simply delegate them to the AGI.

Another argument has to do with Solomonoff induction. Solomonoff induction is incomputable, but it becomes computable if we set a limit T on the run-time of the "hypotheses" (programs) we consider. However, the resulting computable induction carries an O(T·2^T) slow-down penalty (the time it takes to run all possible hypotheses). On the other hand, the problem is easy given a #P oracle and, under certain assumptions on the required probability accuracy, tractable given an oracle for an NP-complete problem.
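
To make the slow-down concrete, here is a toy sketch (in Python), not the actual construction: the toy "machine" below merely interprets a bit-string as a periodic pattern, standing in for running an arbitrary program for a bounded number of steps, but the exhaustive enumeration and the 2^(-length) weighting show where the exponential cost comes from.

```python
import itertools

def toy_run(program_bits, history, max_steps):
    # Hypothetical stand-in for "run program p for at most max_steps steps":
    # interpret the bits as a periodic pattern and predict the next bit.
    if len(history) + 1 > max_steps:
        return None  # ran out of the time budget
    predicted = [program_bits[i % len(program_bits)] for i in range(len(history) + 1)]
    if predicted[:-1] != list(history):
        return None  # hypothesis inconsistent with the observations so far
    return predicted[-1]

def bounded_induction(history, max_len, max_steps):
    # Enumerate all programs up to length max_len (2 + 4 + ... + 2^max_len of
    # them), run each within the step budget, weight survivors by 2^(-length).
    weights = {0: 0.0, 1: 0.0}
    for length in range(1, max_len + 1):
        for bits in itertools.product([0, 1], repeat=length):
            prediction = toy_run(bits, history, max_steps)
            if prediction is not None:
                weights[prediction] += 2.0 ** (-length)
    total = weights[0] + weights[1]
    return {bit: (w / total if total else 0.5) for bit, w in weights.items()}

print(bounded_induction([0, 1, 0, 1, 0], max_len=6, max_steps=100))
```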

Yet another argument goes through logical uncertainty. The latter is widely suspected to be an important component of AGI and there is a compelling relation between it and P vs. NP.

What does all of it mean? We certainly don't need an NP-oracle to construct an AGI since humans are "A"GIs and (presumably) there are no NP-oracles in our brain. To shed light on this, it is useful to take the quantitative point of view on AGI. Namely, there is a metric which rates programs according to how "intelligent" they are. From this point-of-view, an AGI is just a program which ranks high on this metric. The first such metric was suggested by Legg and Hutter and I improved on their construction by combining it with UDT.

This way the AGI design problem becomes an optimization problem: find a program with an intelligence metric as high as possible. The NP-connection now suggests the following conjecture: the AGI optimization problem is of exponential complexity in program length. Of course, we don't necessarily need the best program of a given length, but the impression remains that AGI design is hard in some rigorous complexity-theoretic sense. In particular, I'm guessing there should be a relation between the intelligence (in the precise quantitative sense) of a program and its Levin-Kolmogorov complexity.
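
As a caricature of what this optimization problem looks like, here is a sketch of brute-force search over program space (the `score` function is a hypothetical placeholder, nothing like a real intelligence metric); the only point is that the candidate set grows as 2^L in the program length L.

```python
import itertools

def score(program_bits):
    # Placeholder "intelligence metric": rewards balanced bit counts.
    # A real metric (Legg-Hutter or the updateless variant) would require
    # running the candidate program against an ensemble of environments.
    return -abs(sum(program_bits) - len(program_bits) / 2)

def best_program(max_len):
    best, best_score = None, float("-inf")
    for length in range(1, max_len + 1):
        # 2^length candidates at each length: exponential in program length.
        for bits in itertools.product([0, 1], repeat=length):
            s = score(bits)
            if s > best_score:
                best, best_score = bits, s
    return best, best_score

print(best_program(10))
```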

The anthropic scenario

If we buy into the conjecture above, a glaring problem appears: if AGI design is so hard, how come evolution succeeded in it? After all, evolution is also a process with bounded computing resources. The only explanation that seems to remain is the anthropic one: evolution's a priori probability of success was insanely low but in an infinite universe it still succeeds infinitely many times and we observe one of these times for the obvious reason.

This explanation produces probabilistic predictions regarding the timing of events. For example, if there were no cosmological upper bound on when intelligence can appear, we would expect it to appear extremely late. This is not the case in our universe (on a cosmological time scale). However, this is not difficult to explain, since there is a relatively short time window in the lifetime of the universe in which suitable planets revolving around suitable stars exist. In particular, on Earth in 0.6 billion years there won't be trees any more and in 1.1 billion years there won't be oceans.

As is well known, in scenarios with hard steps that are overcome anthropically, the hard steps are expected to be distributed approximately uniformly on the timeline. This seems to conflict with the most intuitive location of the intelligence hard step: somewhere between chimp and human. However, the apparent discrepancy goes away if we consider a model with two coupled "intelligence modules": an "easy" module E which is susceptible to non-anthropic evolutionary optimization and a "hard" module H which contains most of the Levin-Kolmogorov complexity and whose appearance is the hard step in question.
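
As a toy illustration of the uniformity claim (the numbers are arbitrary): a memoryless "hard step" whose expected waiting time is much longer than the available window is, conditional on occurring inside the window at all, distributed nearly uniformly over it.

```python
import random

random.seed(1)
WINDOW = 1.0       # available time window (normalized)
MEAN_WAIT = 100.0  # expected waiting time for the hard step, >> WINDOW

samples = []
while len(samples) < 5000:
    t = random.expovariate(1.0 / MEAN_WAIT)  # waiting time of the hard step
    if t < WINDOW:                           # condition on success (anthropics)
        samples.append(t)

samples.sort()
n = len(samples)
# Quartiles of the conditional distribution come out close to 0.25, 0.5, 0.75,
# i.e. approximately uniform over the window.
print([round(samples[int(n * q)], 3) for q in (0.25, 0.5, 0.75)])
```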

Before the hard step, an early version E1 of E co-evolves with a module h which performs a similar function to H but does it much worse (imagine a rough heuristic which works for many of the cases in a relatively narrow domain). During the hard step, H appears "out of the blue" due to sheer anthropic luck, after which the E1-h "wire" is replaced by an E1-H wire. After the hard step, natural selection proceeds to transform E1 into its final version E2. This picture seems to be consistent with the hard step happening to our chimp-like ancestor, after which natural selection rapidly transformed the result into homo sapiens sapiens.

This scenario would be undermined if there were an "E-like" property of our ancestors which evolved shortly before the presumed hard step. What could this property be? The best candidate I can think of is the evolution of hands. Apparently, hands evolved 100 million years ago. The ratio between this number and the remaining 600 million years doesn't seem to be small enough to rule out the anthropic scenario. The argument is made stronger if we take into account that there is an extinction event every 100 million years or so, which means we can't reasonably expect a much larger time difference.

Consequences for the future of mankind

If AGI is a computationally intractable problem, we won't be able to solve it "fairly" in the near future. However, we can use the existing solution: homo sapiens sapiens. This means reverse engineering the brain and either modifying it (improving module E) or extracting (emulating) H and writing E from scratch. It is not clear how much intelligence improvement to expect: on the one hand we're stuck with the current H; on the other hand, E might still have lots of room for improvement (which is intuitively likely). It is not clear whether the monopole (singleton) or multipole scenario is more likely. It feels to me that a singleton would require rewriting E, whereas it will be easier to start by tweaking it; therefore multipole superhuman intelligence will come first.

Reverse engineering and modifying the brain is a project which is likely to require considerable resources and encounter enormous legal barriers. As opposed to de novo AGI, it is difficult to imagine it accomplished by a small group or any private organization. The most likely scenario seems to be a major government project in the spirit of Manhattan, Apollo or the LHC. The currently prevailing culture / system of beliefs makes it extremely unlikely for the government of a liberal country to undertake such a project if the technology were available. If this circumstance doesn't change, the first government to try will be an authoritarian one like China. Such a government will ensure the resulting superhumans will have extreme built-in loyalty*, resulting in a world-wide superdictatorship. Therefore, the highest priority seems to be changing culture in a way that will ensure a supportive public opinion for a future friendly superintelligence project. Another high priority is continuing to develop the abstract mathematical theory to better understand the likelihood of this and other scenarios.

* I am assuming (or hoping) that no government will be stupid enough to try it before brain reverse engineering identifies the "utility function module".

 

EDIT: The treatment of anthropics in this post is unforgivably oversimplified. I'm hoping to write a UDT-based analysis later. Also, thanks to Mark Friedenbach for pointing out the extremely relevant paper by Shulman and Bostrom.

Comments

One comment I'd like to make is that one shouldn't confuse asymptotic complexity with practical complexity. It's possible for a problem to be NP-complete but still be easy to solve for all but extremely large cases. I realize you haven't made this mistake - it's just something that could trip up someone less familiar with this kind of reasoning.

Additionally, it seems many of your arguments would also equally apply to current narrow AI systems like search, object recognition, playing chess, and so on. Yet systems like these have been constructed. What special property does "what humans do" have that distinguishes it from these narrow AI systems? (Note that I'm not saying such a special property doesn't exist, I'm just saying it's not clear, through your arguments, what it is).

Thx for commenting!

Regarding asymptotic vs. practical complexity: you are absolutely right, it is entirely possible that AGI is of exponential complexity but human-level is still feasible given practically available resources. A definite answer would require much more work than the handwaving I'm doing here, but I still think my argument is important for assessing the likelihood of different scenarios.

None of my arguments regarding AGI and NP apply to narrow AI. Narrow AIs don't compute approximate Solomonoff induction in any sense, don't evaluate logical uncertainties and are not "magic wands": this is why they're narrow! On the other hand humans seem to be very good at finding patterns (which is very close to Solomonoff induction), seem to easily reason about logical propositions as uncertain and solve problems in an amazingly large variety of domains (compared to narrow AIs and animals).

Thing is, it's not clear to me that the human brain carries out anything theoretically similar to Solomonoff induction, in any part of the brain. I know that AIXI does, but the human brain is clearly not AIXI, otherwise it would be much smarter than it actually is (AIXIt is provably smarter than the human brain for general problem solving so it's clear that the human brain must be a narrow intelligence).

but the human brain is clearly not AIXI, otherwise it would be much smarter than it actually is

Not sure this logic works. AIXI with unlimited resources would be smarter. A version of AIXI that does some degree of an approximation though?

Yup, Marcus Hutter proves in 'Universal Artificial Intelligence' that AIXIt performs "at least as well as the provably best time t and space l limited agent", for general problems. He even offers a few concrete examples for this.

V_V:

...up to some additive and multiplicative constants, which are large enough to make any implementation impractical.

If it's that good why isn't it used? (Serious question).

edit: Based on your other comment I guess that it's only better than other programs when you can't include assumptions about the domain.

it's only better than other programs when you can't include assumptions about the domain.

Yes, and also when the domain-specific information doesn't exist. We don't yet precisely know how language works, for instance. If we did, we could code it, but we don't.

There's also the matter of having a large additive O(c) constant. That's not as big an issue as some people make it out to be (it can be remedied by clever choice of programming language - provided you know what programming language to use).

Thanks for pointing that out. I've skimmed some of the AIXI stuff before and obviously didn't read it carefully enough.

No problem; this is a common misconception. People often think general intelligence is harder than narrow intelligence. Actually the best general intelligent agent can be coded with a few hundred lines of C code. It's non-general intelligence that's hard. Especially, the kind of non-general intelligence that the human brain has (identifying objects visually, language, humor, status games) is extremely hard to code. The G in AGI means 'more general than a chess-playing robot, but less general than AIXI'

[anonymous]:

Ah, but that proof is trivial, uninteresting, and utterly useless.

AIXI(tl) is basically brute-force search. The proof you reference shows that no program can do better than AIXI(tl) at general problem solving. Let’s taboo “AIXI(tl)”: no program can do better than brute-force search at general problem solving.

If you have any computer science training at all, alarm bells should be going off. There are exceedingly few real world problem domains where brute force is really the best algorithm. They are so rare and esoteric in fact that we collect and categorize them, with immediate application to cryptographic methods. Even then nearly all such identified problems have better-than-brute-force solutions, with sqrt reduction of the search space being a very common lower bound. Even in crypto, a true no-better-than-brute-force-search problem is a rare beast.

So what is going on here? As expected, the problem is in the assumed underlying definitions: no program can do better than AIXI(tl)/brute-force-search at general problem solving. What is general problem solving? Why, exactly the mathematically appealing definition of course: solving problems for which the agent has no a priori knowledge about the problem domain or solution space.

Looked at in this light, the proof very nearly becomes a tautology. Assuming the probability distribution of the solution space to be entirely flat and unstructured – that is to say, every candidate solution has equal probability of being a real solution, and finding bad solutions teaches you nothing about where the good solution might lie – means that no other search except brute-force enumeration of the solution space is possible. Every form of search would either be a different enumeration of solutions (which has the same expected performance), or repeat solutions and therefore be less efficient. It's not the best technique – it's the only technique… assuming that there really is no information available about the problem domain and that the solution space is entirely unstructured.

However it is those assumptions that are ridiculous. The definition adopted by many AGI people is the ability of an agent to solve problems it was not specifically programmed to deal with. That in no way precludes using information about these new problems that can be extracted from the environment. And in real world use cases, there's a lot of information available. Just as a trivial example, if nothing else a real embodied problem exists in a universe with the 2nd law of thermodynamics, which means that questions about efficiency and resource competition matter. Agents that are programmed to understand and give preference to such considerations will do better than agents which are not programmed as such (e.g. AIXI(tl)).

Or, to summarize this whole post: no agent can do better than AIXI(tl) at problems which meet some mathematical platonic ideal of randomness, but it is trivial to create a better agent than AIXI(tl) at problems encountered in the real world.

Your premise is incorrect. AIXIt is not brute-force search of solutions. It is a true self-optimizing search; see arxiv.org/abs/cs/0004001. AIXI itself isn't a brute force search either. The basic AIXI model actually specifies no algorithmic implementation, so saying 'brute force search' is incorrect.

All your other arguments follow from this premise so they are also incorrect.

[anonymous]:

Solomonoff induction involves enumerating all possible program strings and testing for validity. There's another term for the class of algorithms which behave this way in the literature: it's called brute-force search.

(AIXI is not Solomonoff induction, it is reinforcement learning using Solomonoff induction to evaluate possible strategies. If I'm playing loose with terminology, it is here. But that technicality does not affect my argument.)

It's only brute-force search in the trivial sense that all search is brute-force search, at some level. See: No free lunch theorem.

[anonymous]:

AIXIt is provably smarter than the human brain for general problem solving so it's clear that the human brain must be a narrow intelligence

Um, no? Maybe AIXIt is smarter (I'd quibble over how you define 'smart'), but that doesn't mean the human mind is incapable of solving general problems...

I never said the human mind is incapable of solving general problems. However it cannot be better than AIXIt at doing so. It's either equal to AIXIt (which is very unlikely), or a narrow intelligence.

Solomonoff induction is essentially pattern matching and humans seem to be quite good at pattern matching. It is arguable that this ability is an important part of the human problem solving ability.

AIXIt is only "smarter than the human brain" in the ridiculous sense that it performs as well as the brain up to a gargantuan slow-down factor of 2^{length of program emulating human brain}. To seriously show it is smarter than the brain for general problem solving one would have to use an intelligence metric that takes resource bounds into account like a resource-bounded version of Legg-Hutter or my updateless metric.

I'm somewhat disappointed in this answer. Sure, it's pattern matching, but there are many, many kinds of pattern matching, of which Solomonoff induction is just one kind.

it performs as well as the brain up to a gargantuan slow-down factor of 2^{length of program emulating human brain}.

That is not true. There is a large multiplicative factor but it's not what you're describing.

I'm not sure what kinds of pattern matching you have in mind. Possibly you mean the sort of "pattern matching" typical of narrow AI, e.g. a neural network firing in response to a certain "pattern". Usually it is just clustering by dividing spaces with hypersurfaces constructed from some (very restricted) set of mathematical primitives. This sort of "pattern matching" is probably very important to the low-level working of the brain and the operation of unconscious systems such as visual processing; however, it is not what I mean by "pattern matching" in the current context. I am referring to patterns expressible by language or logic. For example, if you see all numbers in a sequence are prime, it is a pattern. If you see all shapes in a collection are convex, it is a pattern. This sort of pattern is of fundamental importance in conscious thought and IMO can only be modeled mathematically by constructions akin to Solomonoff induction and Kolmogorov complexity.

There is a large multiplicative factor but it's not what you're describing.

On each step, AIXIt loops over all programs of given length and selects the one with the best performance in the current scenario. This is exponential in the length of the programs.

About the first point on pattern matching, I suggest you look at statistical inference techniques like GMM, DPGMM, LDA, LSA, or BM/RBMs. None of these have to do with Solomonoff induction, and they are more than capable of learning 'patterns expressible by language or logic,' yet they seem to more closely correspond to what the brain does than Solomonoff induction.

On each step, AIXIt loops over all programs of given length and selects the one with the best performance in the current scenario

Not precisely, but anyway that doesn't lead to 2^{length of program emulating human brain} for human-level intelligence.

Kolmogorov complexity is not (closely) related to NP completeness. Random sequences maximize Kolmogorov complexity but are trivial to produce. 3-SAT solvers have tiny Kolmogorov complexity despite their exponential worst case performance.

I also object to thinking of intelligence as "being NP-Complete", unless you mean that incremental improvements in intelligence should take longer and longer (growing at a super-polynomial rate). When talking about achieving a fixed level of intelligence, complexity theory is a bad analogy. Kolmogorov complexity is also a bad analogy here because we want any solution, not the shortest solution.

Thx for commenting!

I'm talking about Levin-Kolmogorov complexity. The LK complexity of x is defined to be the minimum over all programs p producing x of log(T(p)) + L(p) where T(p) is the execution time of p and L(p) is the length of p.
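
In symbols, with U a fixed universal machine:

$$Kt(x) \;=\; \min_{p\,:\,U(p)=x} \bigl(\log T(p) + L(p)\bigr)$$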

Sorry for getting that one wrong (I can only say that it's an unfortunately confusing name).

Your claim is that AGI programs have large min-length-plus-log-of-running-time complexity.

I think you need more justification for this being a useful analogy for how AGI is hard. Clarifications, to avoid mixing notions of problems getting harder as they get longer for any program with notions of single programs taking a lot of space to specify, would also be good.

Unless we're dealing with things like the Ackermann function or Ramsey numbers, the log-of-running-time component of KL complexity is going to be negligible compared to the space component.

Even in the case of search problems, this holds. Sure it takes 2^100 years to solve a huge 3-SAT problem, but that contribution of ~160 time-bits pales in comparison to the several kilobytes of space-bits you needed when encoding the input into the program.

Or suppose we're looking at the complexity of programs that find an AGI program. Presumably high, right? Except that the finder can bypass the time cost by pushing the search into the returned AGI's bootstrap code. Basically, you replace "run this" with "return this" at the start of the program and suddenly AGI-finding's KL complexity is just its K complexity. (see also: the P=NP algorithm that brute force finds programs that work, and so only "works" if P=NP)

I think what I'm getting at is: just use length plus running time, without the free logarithm. That will correctly capture the difficulty of search, instead of making it negligible compared to specifying the input.

Plus, after you move to non-logarithmed time complexity, you can more appropriately appeal to things like the no free lunch theorem and NP-completeness as weak justification for expecting AGI to be hard.

The complexity I'm referring to is the complexity of the AGI code i.e. the length + log-running time of a hypothetical program producing the AGI code.

Or suppose we're looking at the complexity of programs that find an AGI program. Presumably high, right? Except that the finder can bypass the time cost by pushing the search into the returned AGI's bootstrap code.

This wouldn't quite work, since the "AGI" would then be penalized by the very long bootstrap. Since the measure of intelligence takes resource constraints into account, such an "AGI" wouldn't be very intelligent: it wouldn't count as an actual AGI.

This seems to conflict with the most intuitive location of the intelligence hard step: somewhere between chimp and human.

Chimpanzees seem to exhibit human traits such as self awareness, theory of mind, appreciation of beauty, language, problem solving, vendettas and so forth. I would think the hard step would be earlier than this. (and AlexMennen's comment throws more doubt on this).

If this circumstance doesn't change, the first government to try will be an authoritarian one like China. Such a government will ensure the resulting superhumans will have extreme built-in loyalty*, resulting in a world-wide superdictatorship.

I don't think China is a dictatorship, because it's the Chinese communist party in charge, rather than one specific individual. This isn't just an argument over semantics - while China is comparatively authoritarian, there isn't a single point of failure, one person with absolute power who can go insane and wreck the country.

Therefore, the highest priority seems to be changing culture in a way that will ensure a supportive public opinion for a future friendly superintelligence project.

Or, pursuing a form of government which is neither a dictatorship nor a democracy. I previously advocated the replacement of elections with examinations.

Thx for commenting!

Chimpanzees seem to exhibit human traits such as self awareness, theory of mind, appreciation of beauty, language, problem solving, vendettas and so forth. I would think the hard step would be earlier than this.

I admit I am not extremely confident about the precise location of the hard step. Obviously, if the hard step is really there, it is somewhere beyond the level of intelligence exhibited by modern AI software.

I don't think China is a dictatorship, because it's the Chinese communist party in charge, rather than one specific individual. This isn't just an argument over semantics - while China is comparatively authoritarian, there isn't a single point of failure, one person with absolute power who can go insane and wreck the country.

My impression is that the Politburo Standing Committee is in charge (7 people). It might be better than one person, but I still don't want those 7 to rule the world. Even less do I want them to rule the world with power no dictator has ever possessed before.

My impression is that the Politburo Standing Committee is in charge (7 people). It might be better than one person, but I still don't want those 7 to rule the world. Even less do I want them to rule the world with power no dictator has ever possessed before.

Well, 7 people ruling the world is better than 1, but I agree it's not ideal.

I previously advocated the replacement of elections with examinations.

That's an interesting idea. Care to elaborate?

There was a substantial discussion of it I started here:

http://lesswrong.com/lw/l5r/nonstandard_politics/bi1g

I might at some point write up a more in-depth discussion, including the (mostly helpful) feedback I received. Having thought some more about it, an especially interesting aspect is that part of the government is responsible for designing the examinations, which means we have a recursively self-improving system which needs to maintain its original purpose. Sound familiar?

All bureaucracies eventually wind up as psychopathic UFAIs; that's not news to me. But it's an interesting idea at any rate. A similar idea (but one that's not as radical) is to have voters take aptitude tests. That is, in order to be a full citizen capable of voting, you must take an examination that measures the very basics of knowledge about politics, business/industry, history, and psychology. The great thing about this is that the idea isn't to measure a very high level of intelligence, so the questions would be easier to design and could be made more unbiased. I imagine that even with very simple questions ("How many senators are there?", "What is the availability heuristic?"), at least 90% of people would fail the test. You must take the test every year and you are obviously allowed to train for the test. The goal of the test isn't to make sure the voter base is highly intelligent and rational. The goal of the test is to eliminate the people who are definitely lacking in intelligence.

I think this simple idea would improve the state of things a lot.

Jiro:

How about a counterproposal: If you want to subject some other citizen to laws made by people who are voted for, you need to grant them the right to vote (unless they are considered incompetent on a general level).

Failing an aptitude test will not get you out of having to pay taxes or going to jail, nor will it get your gay marriage recognized.

All bureaucracies eventually wind up as psychopathic UFAIs; that's not news to me.

If you are saying that my proposal will inevitably eventually go wrong, I think that's a little pessimistic. Caution is certainly required, but there are extra precautions that can be taken, such as the part of the government that decides the tests being non-self-modifying, or a constitution, or a supermajority of citizens being able to impeach the government. I'm not saying that these do not have their own downsides, but I don't think it's time to give up, and I also don't think it's more dangerous than a democratic bureaucracy.

A similar idea (but one that's not as radical) is to have voters take aptitude tests.

I would describe this as "representative technocracy" as opposed to "direct technocracy". I would however raise some concerns about taking the test every year - revision is a lot of work, and you select for people who really care strongly about politics, rather than just intelligence (I, for instance, would not be able to answer "How many senators are there?"). At first glance this might be a good thing, but it could also give a disproportionately large voice to people with extreme views, as perhaps centrists would not care as much.

Part of my proposal was that the first stage of exams would be administered as part of the exams people take when leaving school, like the SATs, and that universities and employers would also care about the results. Later stages would be given to people who had a real chance of pursuing a career in politics, so obviously they have a motive to work hard.

Unfortunately, tests for voting have been tried before, and were used to stop blacks voting in the US. Now, these tests were designed to be unfair, most of them were oral (and therefore not anonymous) and initially whites did not have to take the test. None of these things need apply to a future voting test, but nevertheless requiring voters to take tests seems to have gained a very bad press because of this.

But yes, broadly I approve of your proposal.

(I, for instance, would not be able to answer "How many senators are there?")

I tend to think that knowing the answer to a question like this is important when voting for your senator (and there are 100 senators - 2 for each state).

and you select for people who really care strongly about politics

If caring is paired with knowledge, this isn't necessarily bad.

tests for voting have been tried before, and were used to stop blacks voting in the US.

Yes, this the primary criticism I have received from people about this idea, and I agree with you that none of these things need apply to a future voting test.

If caring is paired with knowledge, this isn't necessarily bad.

I'm a little worried that the people who shout the loudest are not the wisest, but the most ideologically driven, therefore the most blinded by ideology. Maybe it's the extremists who really care; the centrists are often more ambivalent.

I tend to think that knowing the answer to a question like this is important when voting for your senator (and there are 100 senators - 2 for each state).

I should point out that I'm not actually American, and the equivalent situation is slightly more complex in the UK, because the number of MPs keeps changing.

Evolution only had to get it right once, so in an infinite universe, anthropic selection, etc. This overlooks a more significant fact: ontogeny gets it right over and over again, regularly, predictably, reproducibly. Somehow, starting from just a few tens of thousands of genes and their cellular support structures, in which no intelligence is discernible, brains develop that respond to the environment around them by organising themselves into thinking beings. The same thing happens on a lesser scale with chimpanzees, dogs, and even insects.

How does it happen? Nobody knows. That the mechanism was brought into existence at all may be an anthropically selected fluke, but once it exists, how it works is not a fluke but a repeatable process.

Reverse engineer the process, and you will be well on the way to creating an AGI.

ETA: It occurs to me that I don't know how often evolution has invented neurons. Just once, or more than that? It's certainly invented eyes more than once.

A way to test your theory is to take a smart creature whose common ancestor with mankind was stupid and see if we could put human-level or above intelligence into it by, say, minor gene changes or eliminating its mutational load.

There are many examples of animals not closely related to humans that appear to have nontrivial amounts of intelligence (dolphins, octopuses, some birds). This seems to already constrain the effect Squark hypothesized to acting in a fairly narrow range.

[anonymous]:

Have you read this paper by Shulman and Bostrom?

Thx for commenting!

To my great shame, I have not, even though I knew about its existence from Yudkowsky's "Microeconomics of Intelligence Explosion" paper. I read it yesterday and realized my treatment of anthropics is unforgivably naive: I so should have known better! However, I'm not thrilled by Shulman & Bostrom's approach either: they use SSA/SIA instead of UDT. It seems that a correct analysis (using UDT & taking my complexity theoretic considerations into account) supports my conclusion that human-based super-intelligence is the primary reference scenario rather than de novo superintelligence. Hopefully, I will write about it soon.

V_V:

I generally agree that computational complexity is probably relevant to the problem of AGI, but I can see several problems in your arguments:

There are several arguments that the AGI problem is of a similar "flavor" to problems that are NP-complete.

Many proposed AI algorithms solve problems which are NP-hard (or worse, PSPACE-hard) in the general case. Keep in mind, however, that NP-hardness and related notions are part of worst-case complexity theory, while average-case complexity w.r.t. some appropriate class of "natural" problem distributions is what is relevant for AI and AGI. The relationship between worst-case complexity and average-case complexity is poorly understood in the general case, and what would constitute a "natural" problem distribution is even less understood.

Similarly, Goedel incompleteness means there is no single algorithm for proving all provable theorems.

No. You can prove all provable theorems by just enumerating and checking all candidate proofs. Gödel's incompleteness theorem states that there are certain statements which are "true" (in a suitable technical sense which intuitively boils down to the fact that you will never be able to find a counterexample) but are unprovable, unless arithmetic is inconsistent.
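
(A minimal sketch of what I mean, with `is_valid_proof` standing in for the decidable proof checker of whatever formal system we fix: the search below halts with a proof whenever one exists, but never terminates on unprovable statements.)

```python
from itertools import count, product

SYMBOLS = "()~&|>=AExyz01 "  # some finite alphabet for the formal system

def is_valid_proof(candidate, statement):
    # Stub: checking whether a string is a valid proof of the statement is
    # decidable in any reasonable proof system; only the search is unbounded.
    raise NotImplementedError

def prove(statement):
    # Semi-decision procedure: enumerate all strings by increasing length
    # and return the first one that checks out as a proof.
    for length in count(1):
        for chars in product(SYMBOLS, repeat=length):
            candidate = "".join(chars)
            if is_valid_proof(candidate, statement):
                return candidate
```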

It feels like there is a principle of mathematics which rules out algorithms that are "too good to be true": a single "magic wand" to solve all problems. In a similar way, AGI is a "magic wand": it solves "all" problems because you can simply delegate them to the AGI.

It can be true that "magic wand" algorithms can't exist because of complexity issues, but this would not be an issue for a human-level AGI, since humans aren't "magic wands". A strongly super-human AGI would be much more like a "magic wand", and I would agree that the theoretical and practical possibility of this kind of intelligence is much more speculative.

Another argument has to do with Solomonoff induction.

I think that Solomonoff induction and the intelligence metrics based on it, even with the inclusion of resource penalties a la Levin complexity, are too strong and don't accurately capture the notion of effectively physically realizable intelligence.

Consider the bit sequence generated by a strong stream cipher with a (uniformly sampled) key. It has a very small Levin complexity, but, as per the one-way function conjecture, we don't expect any AI to ever be able to efficiently predict it, as that would amount to breaking the cipher.
And yet, according to intelligence metrics based on Kolmogorov or Levin complexity, a smart AI should easily solve this problem.

It seems plausible that the reason AGI theory has failed to produce practical algorithms so far is that it uses definitions of algorithmic complexity which are too coarse and classify as easy problems which are in fact very hard. That's why I think that a better understanding of complexity is probably relevant for AGI research, but given the current state of knowledge, I don't see complexity issues as fundamentally dooming the field.

Before the hard step, an early version E1 of E co-evolves with a module h which performs a similar function to H but does it much worse (imagine a rough heuristic which works for many of the cases in a relatively narrow domain). During the hard step, H appears "out of the blue" due to sheer anthropic luck, after which the E1-h "wire" is replaced by an E1-H wire. After the hard step, natural selection proceeds to transform E1 into its final version E2. This picture seems to be consistent with the hard step happening to our chimp-like ancestor, after which natural selection rapidly transformed the result into homo sapiens sapiens.

So your claim is that going from, say, worm to chimp is relatively easy while going from chimp to human is hard and can be only accomplished by anthropic brute forcing.

This should imply that replicating the level of intelligence of a chimp should be relatively easy, while replicating what sets apart humans from chimps should be hard. This is inconsistent with current achievements in narrow-AI/operations research:
Playing chess, playing Jeopardy, betting on the stock market, placing and routing a network of components in an electric grid, or a network of computers, or an integrated circuit, scheduling logistics for a large company or government agency, etc., are all things that chimps can't do at all, human without computers can do with some difficulty, and narrow-AIs can do better than humans.
On the other hand, visual object recognition in an unstructured environment like a jungle, precise 3D dynamic motion control using limbs, general planning in an unstructured environment full of uncertainty, non-eusocial coordination with other competitive agents enabling the formation of effective groups with cheater detection and punishment mechanics and non-trivial politics, are all things that wolves can do, chimps can do, humans can do but narrow-AIs can't do yet.
This suggests that worm-to-chimp cognitive abilities are harder, or at least it is harder to find effective algorithms for them, than chimp-to-human cognitive abilities.

EDIT:

If I had to guess, I would say that what sets humans apart from chimps is that humans became able to perform general-purpose computing: our brains are equipped with a sort of "universal Turing machine", except that it is not only very slow but also has a very short tape and is very noisy. Yet, it works well enough that, in combination with all the other heuristic and domain-specific modules in our brain, it allows us to perform complex cognitive tasks that other animals can't do.
The fact that (AFAIK) all human languages are able to express abstract thoughts, and in particular meta-statements about thinking and language itself, is evidence of this, and language is probably also the machinery on which this general-purpose computation is implemented in our brains.

The problem with AGI is that researchers are trying to build a human-like intelligence exactly backwards compared with how evolution did it: digital electronics allows us to build excellent general-purpose computers, much better than the one embedded in the human brain, but they lack all the domain-specific heuristic modules that humans acquired over millions of years of evolution. That's why, IMO, narrow-AI succeeds at hard, human-only cognitive tasks and fails at the kind of things that a dog does instinctively.

Many proposed AI algorithms solve problems which are NP-hard (or worse, PSPACE-hard) in the general case. Keep in mind, however, that NP-hardness and related notions are part of worst-case complexity theory, while average-case complexity w.r.t. some appropriate class of "natural" problem distributions is what is relevant for AI and AGI

There is a beautiful NP-completeness theory for average case complexity. See Goldreich section 10.2.

No. You can prove all provable theorems by just enumerating and checking all candidate proofs.

By "algorithm" I mean a program / Turing machine that halts on all inputs.

It can be true that "magic wand" algorithms can't exist because of complexity issues, but this would not be an issue for a human-level AGI, since humans aren't "magic wands".

The "magic wandness" here is quantitative rather than binary. I'm not claiming there is a discrete class of agents designated as "intelligent" producing which is hard. I'm claiming there is a quantitative measure of intelligence and the complexity of producing an agent of given intelligence grows very fast.

Consider the bit sequence generated by a strong stream cipher with a (uniformly sampled) key. It has a very small Levin complexity, but, as per the one-way function conjecture, we don't expect any AI to ever be able to efficiently predict it, as that would amount to breaking the cipher. And yet, according to intelligence metrics based on Kolmogorov or Levin complexity, a smart AI should easily solve this problem.

This is not the case, since intelligence is relative. There are problems so difficult that no agent can solve them. It doesn't mean including them in the problem distribution is an error. Unsolvable problems are problems no agent is going to solve, so they don't affect the relative intelligence of agents anyway.

...That's why, IMO, narrow-AI succeeds at hard, human-only cognitive tasks and fails at the kind of things that a dog does instinctively.

I don't agree that narrow AI succeeds at "hard cognitive tasks". I think narrow AI succeeds at tasks that are very narrow and fails at tasks that require creativity (diverse solutions) like proving mathematical theorems or formulating theories of physics.

The "magic wand" you're talking about is called the procrastination paradox in tiling agents.

V_V:

There is a beautiful NP-completeness theory for average case complexity. See Goldreich section 10.2.

Sure, there is a theory, but AFAIK it is not as developed as the theory for worst-case complexity.

I'm claiming there is a quantitative measure of intelligence and the complexity of producing an agent of given intelligence grows very fast.

Possibly true in general, but it is not obvious to me that the difference in complexity between a chimp and a human is large enough that it can be only overcome by anthropic brute-forcing.

This is not the case, since intelligence is relative. There are problems so difficult that no agent can solve them. It doesn't mean including them in the problem distribution is an error. Unsolvable problems are problems no agent is going to solve, so they don't affect the relative intelligence of agents anyway.

The point is that if you try to build an agent based on a theory of general intelligence whose problem space includes (with significant probability mass) intractable problems, then any agent which has quality-of-solution performance guarantees on that problem space will have impractical resource bounds.

One way of dealing with this is to assume that our theory of general intelligence is true a priori, and thus declare the construction of a practical general intelligent agent forever impossible, barring some freak random accident that just happens to embed in it many bits of "oracle" information relevant to problem-solving in our world.
Another way is to consider that our theory of general intelligence may not be well developed, after all, and therefore theoretically principled practical agents could be possible if we based them on a more refined theory that managed to remove or penalize intractable problems from its problem space.

I don't agree that narrow AI succeeds at "hard cognitive tasks". I think narrow AI succeeds at tasks that are very narrow and fails at tasks that require creativity (diverse solutions) like proving mathematical theorems or formulating theories of physics.

Narrow AI does not succeed at all hard cognitive tasks, but in general it does better at them than it does at chimp-like tasks.
I would say that proving mathematical theorems is a relatively narrow task once you write the theorem statement, axioms and inference rules in a precise formal way. Automatic theorem provers exist, but humans still have an edge over them for now.
Formulating theories of physics requires unstructured interaction with the world, or at the very least unstructured observation. If we can't make an AI reliably distinguish cat pictures from dog pictures, then it's not surprising that there is no AI physicist.

In general I think that what you call "creativity" is not some sort of "magic wand" or "oracle" that came from the sky, bestowed unto us by the god of infinite random trials; rather, it is the combined effect of all the heuristic modules in our brain, whose mechanics we can't readily access by introspection and verbalize.

For instance, mathematicians often say that when they reason about a theorem they "view" the mathematical structures with their mind's eye. This suggests that they are co-opting their spatial reasoning abilities, evolved for things like leaping from one branch to the next, as heuristics for solving a formal logic problem. And in fact there is empirical evidence that spatial reasoning ability and mathematical reasoning ability are positively correlated. Possibly the reason why automatic theorem provers still perform worse than humans is that they lack these kinds of heuristics.

The point is that if you try to build an agent based on a theory of general intelligence whose problem space includes (with significant probability mass) intractable problems, then any agent which has quality-of-solution performance guarantees on that problem space will have impractical resource bounds.

I analyze the problem in terms of continuous intelligence metrics, not in terms of binary guarantees. Agents with worse performance rate lower but it doesn't mean you have any guarantees in the picture. As a simplified analogy, think of an exam with a collection of problems and a time-limit for solving each. The overall score is a weighted sum over the scores of individual problems. What happens if we have intractable problems in the set (problems which cannot be solved in the given time limit)? Obviously, no examinee will get non-zero score for those. However this doesn't in any way impair the relative comparison of examinees based on the other problems.

In general I think that what you call "creativity" is not some sort of "magic wand" or "oracle" that came from the sky, bestowed unto us by the god of infinite random trials; rather, it is the combined effect of all the heuristic modules in our brain, whose mechanics we can't readily access by introspection and verbalize.

I think you have a false dichotomy in there :) My claim is that these heuristic modules are so heuristic that there is no practical way to invent them without copying the answer from the actual brain.

In general, my confidence that there is a hard step is significantly higher than my confidence than it is between chimp and human.

I have doubts regarding this:

It feels like there is a principle of mathematics which rules out algorithms that are "too good to be true": a single "magic wand" to solve all problems. In a similar way, AGI is a "magic wand": it solves "all" problems because you can simply delegate them to the AGI.

It seems to me that either human-level GI is impossible or it is not. But, we humans constitute an existence proof that it is not impossible. Therefore, Gödel does not apply; clearly human-level GI is not "too good to be true". And, Gödel's incompleteness theorems really have nothing to do with how hard it might be to come up with a solution to a computable problem.

I also have doubts regarding this:

This way the AGI design problem becomes an optimization problem: find a program with an intelligence metric as high as possible. The NP-connection now suggests the following conjecture: the AGI optimization problem is of exponential complexity in program length. Of course, we don't necessarily need the best program of a given length...

The bold portion of the above section is key - there are various NP-hard optimization problems for which good approximate solutions can be found in polynomial time. Since human or super-human AGI does not require a maximally intelligent program, even if you could show that the AGI optimization problem is NP-hard that would say nothing about the difficulty of finding a human or super-human level AGI.

It seems to me that either human-level GI is impossible or it is not. But, we humans constitute an existence proof that it is not impossible.

Of course. This is why I'm not saying human-level GI is impossible. I'm saying that designing it from scratch is impossible.

And, Gödel's incompleteness theorems really have nothing to do with how hard it might be to come up with a solution to a computable problem.

My argument is not in any sense a proof from Goedel incompleteness. Goedel incompleteness is just a suggestive analogy.

...there are various NP-hard optimization problems for which good approximate solutions can be found in polynomial time. Since human or super-human AGI does not require a maximally intelligent program, even if you could show that the AGI optimization problem is NP-hard that would say nothing about the difficulty of finding a human or super-human level AGI.

I mostly agree: except for the "nothing" part. I think it would definitely cause me to update towards "designing AGI is infeasible". I hinted at a possible relation between intelligence and LK-complexity. If such a relation can be proven and human intelligence can be estimated we would have a more definite answer.

I'm not very convinced. Relatively simple heuristics can perform very well on NP-complete problems. Machine learning methods can learn very accurate models of data, without doing Solomonoff induction.

I don't see any reason to believe an enormous amount of complexity is required to reach human abilities on these tasks.

Goedel incompleteness means there is no single algorithm for proving all provable theorems.

Are you sure you meant to say that? There is such an algorithm: enumerate all proofs. What doesn't exist is an algorithm for finding a proof if one exists and reporting the nonexistence of a proof if it doesn't.

By "algorithm" I meant a program that terminates on all inputs. Maybe I should have been more specific.

[anonymous]:

That is very atypical terminology.

According to Wikipedia, "an algorithm is an effective method expressed as a finite list of well-defined instructions for calculating a function. Starting from an initial state and initial input (perhaps empty), the instructions describe a computation that, when executed, proceeds through a finite number of well-defined successive states, eventually producing output and terminating at a final ending state." There is a linked quote of Knuth who says "A procedure which has all the characteristics of an algorithm except that it possibly lacks finiteness may be called a 'computational method'".

Also, even if the terminology is atypical the charitable reader should be able to understand the intent.

Intelligence doesn't operate on the basis of proving things. Intelligence operates on the basis of heuristics and so I find the parallel to NP-complete problems flawed.

For clarity, consider a human from 50,000 years ago and see if the way his intelligence operates bears any resemblance to solving problems in an NP-complete domain.

I don't quite follow your reasoning. Consider a Go-playing AI based on a Monte Carlo multi-armed bandit algorithm. A human player can analyze its moves and explain them from the point of view of regular human Go-theory. However this would reveal little about the inner operation of the algorithm.

Actual intelligence operates in the world of seriously incomplete data, fuzzy approximations, good-enough heuristics, and making a lot of mistakes. It is NOT the world of math proofs and hard analytical solutions.

So you're saying that complexity theory is useless in problems involving approximations, heuristics, etc.?

For one thing, there is a theory of approximate solutions and average case complexity as well. For another, the beauty of complexity theory is that it puts bounds on your abilities regardless of the methods you use: a complexity bound applies to any algorithm, whether analytical or heuristic.

No, I am not willing to make such a sweeping statement. However I will say that constraints and limitations that the hard complexity theory faces are not necessarily binding in the approximations/heuristics world.

On the other hand, note that we have conjectures which are stronger than P != NP which, taken together, do amount to saying that approximation can be tough and that many heuristics will also not be likely to succeed. P=BPP and the Unique Games Conjecture can both be thought of as variations on that theme.

many heuristics will also not be likely to succeed.

Succeed in which sense? Take the traveling salesman problem. You can't analytically solve it for N in the hundreds. But there are heuristics which give acceptable results and are widely used in practice.

Heuristics are not designed to get you to the optimum. They are, by definition, geared to produce a merely acceptable result.
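
For example, a plain nearest-neighbour heuristic for Euclidean TSP runs in O(n^2) time and comes with no optimality guarantee, yet the tours it produces are often acceptable in practice (a minimal sketch):

```python
import math, random

def nearest_neighbour_tour(points):
    # Greedy heuristic: always hop to the closest unvisited city.
    unvisited = list(range(1, len(points)))
    tour = [0]
    while unvisited:
        last = points[tour[-1]]
        nxt = min(unvisited, key=lambda i: math.dist(last, points[i]))
        unvisited.remove(nxt)
        tour.append(nxt)
    return tour

def tour_length(points, tour):
    return sum(math.dist(points[tour[i]], points[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

random.seed(0)
cities = [(random.random(), random.random()) for _ in range(300)]
print(round(tour_length(cities, nearest_neighbour_tour(cities)), 3))
```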

Succeed in which sense? Take the traveling salesman problem. You can't analytically solve it for N in the hundreds. But there are heuristics which give acceptable results and are widely used in practice.

It depends on what you mean by acceptable. For many problems, assuming that P != NP, one cannot get approximations beyond a certain amount. In particular, one cannot get approximations which asymptotically approach the optimum. See my earlier comment here for more details.

I feel you should give a strict definition of general intelligence (artificial or not). If you define it as "an intelligence that can solve all solvable problems with constrained resources", it's not clear that humans have a general intelligence. (For instance, you yourself argue that humans might be unable to solve the problem of creating AGI.) But if humans don't have a general intelligence, then creating an artificial human-level intelligence might be easier than creating an AGI.

Hi Daniel, thx for commenting!

How strict do you want the definition to be? :) A lax definition would be efficient cross domain optimization. A mathematically rigorous definition would be the updateless intelligence metric (once we solve logical uncertainty and make the definition truly rigorous).

Roughly, general intelligence is the ability to efficiently solve the average problem where the averaging is done over a Solomonoff ensemble.
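
Schematically, this is the Legg-Hutter measure (my updateless metric modifies the details): the intelligence of a policy $\pi$ is its expected performance $V^{\pi}_{\mu}$ summed over computable environments $\mu$, weighted by the Solomonoff prior,

$$\Upsilon(\pi) \;=\; \sum_{\mu} 2^{-K(\mu)}\, V^{\pi}_{\mu},$$

where $K(\mu)$ is the Kolmogorov complexity of the environment.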

As is well known, in scenarios with hard steps that are overcome anthropically, the hard steps are expected to be distributed approximately uniformly on the timeline. This seems to conflict with the most intuitive location of the intelligence hard step: somewhere between chimp and human.

This is basically the Doomsday problem. Any of its proposed resolutions would solve this problem and let us place a single hard (i.e. evolutionarily unlikely) step in our recent history.

As I said in a reply to Mark Friedenbach, I somewhat regret the naive treatment of anthropics in this post. Hopefully I'll write a UDT-based analysis later.

This earlier thread touched on some related issues.

Hi Joshua, thx for commenting!

There is an enormous difference between my claim and what 27chaos is saying. 27chaos is suggesting that "maximizing human happiness" is computationally intractable. However, this is obviously not the case since humans maximize human values, or at least they do it to an extent which would be impressive in AI. My claim is that designing an AGI (finding the right program in the space of all possible programs) is computationally intractable. Once designed, the AGI itself is efficient, otherwise it wouldn't count as an AGI.

Yes, I agree that what you are suggesting is different. I was pointing to the thread also because there's further discussion in the thread beyond 27chaos's initial point, and some of it is more relevant.

I see. I would be grateful if you can point to specific comments you think are the most relevant.

I would be grateful if you can point to specific comments you think are the most relevant.

At the risk of sounding egotistical, this bit seems relevant.

Thanks, this is definitely relevant and thought provoking!