timtyler comments on What if AI doesn't quite go FOOM? - Less Wrong

11 points - Post author: Mass_Driver - 20 June 2010 12:03AM


Comment author: timtyler 20 June 2010 07:18:26AM 2 points [-]

1 seems unlikely and 2 and 3 seem silly to me. An associated problem of unknown scale is the wirehead problem. Some think this won't be a problem - but we don't really know that yet. It probably would not slow down machine intelligence very much until well past the human level - but we don't yet know for sure what its effects will be.

Comment author: JoshuaZ 20 June 2010 02:35:23PM *  1 point [-]

1 seems unlikely and 2 and 3 seem silly to me

I'm curious why 3 seems silly.

If the complexity hierarchy does not exhibit major collapse (say P, NP, coNP, PSPACE, and EXP are all distinct - which, at this point, most theoretical computer scientists seem to believe), then many genuinely practical problems cannot be solved much more efficiently than we solve them today. For example, this would imply that factoring integers probably cannot be done in polynomial time. It also implies that the traveling salesman problem cannot be solved efficiently - a problem which shows up in many practical contexts, including circuit design. In that case, even if there are no Option 2 problems (that is, really good hardware is actually possible), designing such hardware might become increasingly difficult at a rate faster than the hardware improves. I consider that situation unlikely, but from what I know of the discipline it is plausible (possibly someone involved in the industries in question can comment on the plausibility; I think we have a few such people on LW). Graph coloring would also be intrinsically hard, and graph coloring comes up in memory design and memory management issues which would be very relevant to an AI trying to go FOOM.
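To make the scale of the difficulty concrete, here is a minimal brute-force sketch of graph coloring (the function and variable names are my own, for illustration). It examines up to k**n color assignments - and absent a collapse like P = NP, no known algorithm avoids this kind of exponential worst-case search:

```python
from itertools import product

def is_colorable(edges, n_vertices, k):
    """Brute-force k-coloring check: tries all k**n_vertices assignments.

    Exponential in the number of vertices -- illustrating why exact
    graph coloring is believed intractable in the general case.
    """
    for colors in product(range(k), repeat=n_vertices):
        if all(colors[u] != colors[v] for u, v in edges):
            return True
    return False

# A 5-cycle is an odd cycle, so it needs 3 colors, not 2.
cycle5 = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
print(is_colorable(cycle5, 5, 2))  # False
print(is_colorable(cycle5, 5, 3))  # True
```

Even this tiny instance requires scanning 32 assignments; the search space doubles with every added vertex.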

Even if the complexity hierarchy collapses, or exhibits partial collapse, there will still be bounds on all these practical problems beyond which they cannot be optimized. Those bounds will be polynomial, and so won't grow fast - which is good news for our AI - but absolute bounds will still exist.

It is possible that the entire hierarchy doesn't collapse but that there is some algorithm for solving standard NP-complete problems that is very efficient as long as the length of the input is less than, say, 10^80. In that case, even without a complexity collapse, the AI would still be able to go FOOM. But this possibility seems very unlikely.

Similarly, it is possible that someone will develop hardware that uses small wormholes to aid computation, in which case the physical laws of the universe would allow heavy collapse (see Scott Aaronson's remarks here), with everything up to PSPACE collapsing completely. But that is essentially getting around problem 3 by making ridiculously optimistic hardware assumptions. It is also possible that quantum computing will become very practical and that BQP = NP, or so close to it as to make no practical difference - analogous to our hypothetical algorithm that works well on inputs shorter than 10^80, one could conceive of a quantum algorithm that did the same for all small inputs even with BQP a proper subset of NP (as I understand it, whether BQP is even contained in NP is an open question). But that a) assumes quantum computing will be strongly practical and b) requires extremely strange and unlikely results about computational complexity.

The best argument I am aware of against an Option 3 failure is that if hardware takes off well enough (say, the hardware is possible and nanotech makes it fast to build), then software constraints become largely irrelevant. But if FOOMing requires improvement in both hardware and software, this remains a real concern.

Comment author: cousin_it 20 June 2010 05:10:45PM *  5 points [-]

To solve the NP-hard problems in hardware design and such, you don't necessarily need a solver that works on all ("random", "general position") NP-hard problems. You can find and exploit regularities in the problems that actually arise. We humans seem to do just that: someone who plays chess well can't easily convert their skill to other chess-like games.
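As a concrete illustration of exploiting such regularity (my own toy example, not from the thread): graph coloring is NP-hard in general, but on interval graphs - a structured subclass that arises in register allocation - coloring intervals greedily in order of left endpoint is provably optimal and runs in near-linear time:

```python
def greedy_interval_coloring(intervals):
    """Assign colors to (start, end) intervals so overlapping intervals
    differ. Processing in order of start point, the greedy choice of the
    smallest free color is optimal on interval graphs, even though
    coloring is NP-hard on general graphs."""
    order = sorted(range(len(intervals)), key=lambda i: intervals[i][0])
    colors = {}
    for i in order:
        si, ei = intervals[i]
        # Colors already taken by intervals that overlap interval i.
        used = {colors[j] for j in colors
                if intervals[j][0] < ei and si < intervals[j][1]}
        c = 0
        while c in used:
            c += 1
        colors[i] = c
    return colors

# Three mutually overlapping intervals force 3 colors; greedy finds them.
ivals = [(0, 3), (1, 4), (2, 5), (4, 6)]
coloring = greedy_interval_coloring(ivals)
print(max(coloring.values()) + 1)  # 3
```

The point is that the structured instances an AI would actually meet may admit exactly this kind of shortcut.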

Comment author: JoshuaZ 27 July 2010 03:01:01AM 1 point [-]

Replying separately to my earlier reply, since the long time gap means an edited remark would be unlikely to be noticed by you.

In a completely different context, I just ran across this paper on the average-case difficulty of NP-hard problems. The paper is highly technical and I don't think I follow all (or even most) of the details, but the upshot seems to be that, roughly speaking, the only way for most instances of fairly natural NP-complete problems to be as difficult as the worst case is if NP-complete problems are actually easy. This makes your objection all the more powerful: it suggests that, for the specific NP-complete problems an AI would need to solve, the AI would not just be able to take advantage of regularity in the specific instances it cares about, but could also rely on the fact that most random instances simply aren't that tough compared to the very worst case. This, combined with your remark, forces me to revise downward my estimate that software issues will interfere substantially with FOOMing; taken together, these issues undermine the arguments I gave for software barriers to FOOM.

Comment author: JoshuaZ 20 June 2010 05:46:52PM 1 point [-]

Yes, that seems to be a very good point. It is unlikely an AI is going to need to solve arbitrary instances of the traveling salesman problem or of graph coloring.

Comment author: Perplexed 02 October 2010 07:57:03PM 2 points [-]

It is important to realize that producing an agent capable of finding the optimum solution in a search space 1000 times as large is not the same thing as producing an agent capable of finding solutions that are 1000 times as good.

It sometimes seems to me that FOOM believers fail to take this distinction into account.

Comment author: timtyler 20 June 2010 04:44:19PM *  2 points [-]

3 says software will eventually run into optimality limits, which will slow growth. That is right - but we can see that those limits are far off - far enough away to allow machine intelligences to zoom far past human ones in all domains worth mentioning.

Comment author: JoshuaZ 20 June 2010 04:56:28PM 1 point [-]

software will eventually run into optimality limits, which will slow growth. That is right - but we can see that those limits are far off - far enough away to allow machine intelligences to zoom far past human ones in all domains worth mentioning.

How do we know this is far off? For some very useful processes we're already close to optimal. For example, linear programming is already close to the theoretical optimum, as are the improved versions of the Euclidean algorithm - and even the most efficient of those are not much faster than Euclid's original, which is around 2,000 years old. And again, if the complexity hierarchy turns out strongly not to collapse, then many algorithms we have today will turn out to be close to the best possible. So what makes you so certain that reaching optimality limits is far off?
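For reference, the algorithm in question is short enough to quote in full. By Lamé's theorem it uses O(log min(a, b)) division steps, and for ordinary word-sized integers no dramatically better method is known (asymptotically faster GCD algorithms exist for very large integers, but the gain is modest):

```python
def gcd(a, b):
    """Euclid's algorithm, essentially unchanged for ~2,000 years.
    Uses O(log min(a, b)) division steps (Lame's theorem)."""
    while b:
        a, b = b, a % b
    return a

print(gcd(1071, 462))  # 21
```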

Comment author: timtyler 20 June 2010 05:10:21PM *  1 point [-]

I was comparing with the human brain. That is far from optimal - due to its one-size-fits-all design, ancestral nutrient-availability issues (now solved), and other design constraints.

Machine intelligence algorithms are currently well behind human levels in many areas. They will eventually wind up far ahead - and so currently there is a big gap.

Comment author: JoshuaZ 20 June 2010 05:45:35PM *  1 point [-]

Comparing to the human brain is primarily connected to failure option 2, not option 3. We've had many years now to make computer systems and general algorithms that don't rely on human architecture. We know that machine intelligence is behind humans in many areas, but we also know that computers are well ahead of humans in others (I'm pretty sure no human on the planet can factor 100-digit integers in a few seconds unaided). FOOMing would likely require not just an AI that is much better than humans at many of the tasks humans are good at, but also an AI that is very good at tasks, like factoring, at which computers are already much better than humans. So pointing out that the human brain is very suboptimal doesn't make this a slam-dunk case, and I still don't see how you can label concerns about 3 as silly.

Cousin it's point (gah, making the correct possessive there looks really annoying because it looks like one has typed "it's" when one should have "its") that the NP-hard problems an AI would need to deal with may be limited to instances with high regularity seems like a much better critique.

Edit: Curious about the reason for the downvote.

Comment author: wedrifid 20 June 2010 05:53:49PM 1 point [-]

Cousin it's point (gah, making the correct possessive there looks really annoying because it looks like one has typed "it's" when one should have "its")

It feels a little better if I write cousin_it's. Mind you, I feel 'gah' whenever I write 'its'. It's a broken hack in English grammar.

Comment author: gwern 26 November 2011 06:45:15AM 0 points [-]

If linear programming is so close to the optimum, why did we see such massive speedups in it and in integer programming over the past few decades? (Or are you saying those speedups brought us almost to the optimum?)

Comment author: JoshuaZ 26 November 2011 08:14:32PM 0 points [-]

There are a variety of things going on here. One is that those speedups helped; a major part of this is imprecision on my part. This is actually a good example of an interesting (and, from the perspective of what is discussed above, potentially dangerous) phenomenon. Most of the time when we discuss the efficiency of algorithms we are looking at big-O bounds, but those by nature have constants built into them. In the case of linear programming, it turned out that we could drastically improve the constants. This is related to what I discussed above: even if P != NP, one could have efficient ways of solving all instances of 3-SAT with fewer than, say, 10^80 terms. That sort of situation could be about as bad from a FOOM perspective. For the immediate purpose discussed above, the Euclidean algorithm is probably the better example, since in that case we actually know that there's not much room for improvement in the constants.
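A small numeric illustration of how big-O hides constants (the cost models and constants here are invented for illustration, not measured from any real solver): an O(n^2) method with constant 1 beats an O(n log n) method with constant 10^6 for every input size up to a crossover in the tens of millions - exactly the kind of situation where the asymptotically "worse" algorithm is the efficient one for all inputs anyone actually encounters:

```python
import math

def cost_quadratic(n):
    """Hypothetical algorithm A: n**2 steps (tiny constant factor)."""
    return n * n

def cost_nlogn(n):
    """Hypothetical algorithm B: 1e6 * n * log2(n) steps (huge constant)."""
    return 1e6 * n * math.log2(n)

# Double n until the asymptotically better algorithm actually wins.
n = 2
while cost_quadratic(n) < cost_nlogn(n):
    n *= 2
print(n)  # 33554432 -- below ~3.4e7, the O(n^2) method is cheaper
```

The same logic, scaled up, gives the hypothetical "efficient below 10^80 terms" scenario: asymptotics only bite past a threshold, and the threshold can sit beyond every input that matters.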