Comment author: luzr 11 December 2008 09:26:56PM 0 points [-]

Eliezer:

"Tim probably read my analysis using the self-optimizing compiler as an example, then forgot that I had analyzed it and thought that he was inventing a crushing objection on his own."

Why do you think it is a crushing objection? I believe Tim is just repeating his favorite theme (which, in fact, I tend to agree with), where machine-augmented humans build better machines. If you can use automated refactoring to improve the way a compiler works (and today, you often can), that is in fact a pretty cool augmentation of human capabilities. It is a recursive FOOM. The only difference between your vision and his is that as long as k < 1 (and perhaps for some time after that point), humans are important FOOM agents. Also, humans are getting much more capable in the process. For example, a machine-augmented human (think weak AI + direct neural interface and all the cyborging whistles + mind drugs) might be quite likely to follow the FOOM.

Comment author: luzr 11 December 2008 09:10:41PM 4 points [-]

"FOOM that takes two years"

In addition to the comments by Robin and Aron, I would also point out that the longer the FOOM takes, the larger the chance it is not local, regardless of security - somewhere else, there might be another FOOMing AI.

Now, as I understand it, some consider this situation even more dangerous, but it might just as well create a "take over" defence.

Another comment to FOOM scenario and this is sort of addition to Tim's post:

"As machines get smarter, they will gradually become able to improve more and more of themselves. Yes, eventually machines will be able to cut humans out of the loop - but before that there will have been much automated improvement of machines by machines - and after that there may still be human code reviews."

Eliezer seems to spend a lot of time explaining what happens when "k > 1" - when AI intelligence surpasses human intelligence and starts self-improving. But I suspect that the phase 0.3 < k < 1 might be pretty long, maybe decades.

Moreover, by the time of FOOM, we should be able to use vast numbers of fast 'subcritical' AIs (+ weak AIs) as guardians of the process. In fact, k < 1 AIs might play a pretty important role in the world economy and security by that time, and it does not take too much pattern recognition power to keep things at bay. (Well, in fact, I believe Eliezer proposes something similar in his thesis, except for the locality issue.)

Comment author: luzr 08 December 2008 10:51:22AM 0 points [-]

Tim Tyler:

As much as I like your posts, one formal note:

If you are responding to somebody else, it is always a good idea to put their name at the beginning of the post.

Comment author: luzr 06 December 2008 04:46:33PM -2 points [-]

Vladimir Nesov:

"Only few responses to changing context are the right ones"

As long as they are "few" instead of "one" - and these "few" still mean a basically infinite subset of a larger infinite set - differences will accumulate over time, leading to a different personality.

Note that such a personality might not diverge from the basic goal. But it will inevitably start to 'disagree' about choosing one of those "few" good choices because of different learning experiences.

This, BTW, is the reason why, despite what Tooby & Cosmides say, we have a highly diverse ecosystem with a very large number of species.
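The accumulation argument can be made concrete with a toy sketch (the agents and the choice set here are invented purely for illustration): two agents share the same goal and the same set of equally "good" responses, yet their different histories of picks produce diverging states.

```python
import random

def evolve(seed, steps=1000):
    """Toy agent: at each step it picks one of a few equally 'good'
    responses (all choices keep the running total growing, so none
    of them violates the shared goal)."""
    rng = random.Random(seed)
    state = 0
    history = []
    for _ in range(steps):
        good_choices = [1, 2, 3]  # several valid responses, not one
        state += rng.choice(good_choices)
        history.append(state)
    return state, history

# Two agents with an identical goal and identical choice set,
# differing only in learning experience (the seed):
_, hist_a = evolve(seed=1)
_, hist_b = evolve(seed=2)
print(hist_a == hist_b)  # False: the 'few' choices accumulate into divergence
```

Both agents satisfy the basic goal at every step; only the accumulated history differs, which is the sense in which a different "personality" emerges.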

Comment author: luzr 06 December 2008 11:40:56AM 0 points [-]

"because parallelizing is programmatically difficult"

Minor note: "Parallelization is programmatically difficult" is in fact another example of recursion.

The real reason why programming focused on serial execution was that most hardware was serial. There is not much point in learning the mysteries of multithreaded development if the chance that your SW will run on a multicore CPU is close to zero.

Now that multicore CPUs are the de facto standard, parallel programming is no longer considered prohibitively difficult; it is just another thing you have to learn. There are new tools, new languages, etc.

SW always lags behind HW. Intel has had 32-bit CPUs since 1986; it took 10 years before 32-bit PC software became mainstream...
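A minimal sketch of the "just another thing you have to learn" point (the chunking scheme and worker count are arbitrary illustration, not from any particular toolchain): the same computation can be written serially or fanned out over a worker pool, with identical results.

```python
from concurrent.futures import ThreadPoolExecutor

def work(chunk):
    # Per-chunk computation standing in for whatever the serial
    # program did on each piece of data.
    return sum(x * x for x in chunk)

data = list(range(100_000))
chunks = [data[i:i + 10_000] for i in range(0, len(data), 10_000)]

# Serial version: the style that matched single-core hardware.
serial = sum(work(c) for c in chunks)

# Parallel version: same chunks fanned out across a thread pool.
with ThreadPoolExecutor(max_workers=4) as pool:
    parallel = sum(pool.map(work, chunks))

print(serial == parallel)  # True: correctness is preserved
```

(In CPython specifically, threads will not speed up CPU-bound work because of the global interpreter lock; a process pool would be used for real speedups. The sketch only shows that the parallel structure is now an ordinary library idiom rather than a dark art.)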

In response to Hard Takeoff
Comment author: luzr 04 December 2008 09:30:07AM 0 points [-]

>>> They do that using dedicated hardware. Try to paint Crysis in realtime 'per pixel', using a vanilla CPU.

Interestingly, today's high-end vanilla CPU (quad-core at 3 GHz) would paint 7-8 year old games just fine. That means in another 8 years, we will be capable of doing Crysis without a GPU.

In response to Hard Takeoff
Comment author: luzr 03 December 2008 08:59:34AM -1 points [-]

anon:

"You're currently using a program which can access the internet. Why do you think an AI would be unable to do the same?"

I hope it will. Still, that would give it access only to *preexisting* knowledge.

It can form many hypotheses, but it will have to TEST them (gain empirical knowledge). Think LHC.

BTW, note that there are problems in quantum physics that do not have analytical solutions. Some equations simply cannot be solved. Now of course, perhaps a superintelligence will find out how to do that, but I believe there are quite solid mathematical proofs that it is not possible.

"Also, computer hardware exists for manipulating objects and acquiring sensory data. Furthermore: by hypothesis, the AI can improve itself better then we can, because, as EY pointed out, we're not exactly cut out for programming. Also, improving an algorithm does not necessarily increase its complexity."

I am afraid that you have missed the part about the algorithm being essential, but not the core of the AI mind. The mind can just as well be data. And it can be unoptimizable, for the same reason some equations cannot be analytically solved.

"And you don't have to simulate reality perfectly to understand it, so there is no showstopper there."

To understand certain aspects of reality, yes. All I am saying is that understanding certain aspects might not be enough.

What I suggest is that the "mind" might be something like a network of interconnected numerical values. To the outside observer, there will be no order in the connections or values. To truly understand the "mind" even as poorly as by simulation, you would need a much bigger mind, as you would have to simulate and carefully examine each of the nodes.

Crude simulation does not help here, because you do not know which aspects to look for. Anything can be important.

Comment author: luzr 03 December 2008 08:39:54AM 0 points [-]

"Intelligence tests are timed for a good reason. If you see intelligence as an optimisation process, it is obvious why speed matters - you can do more trials."

Intelligence tests are designed to measure the performance of the human brain.

Try this: a Strong AI running on a 2 GHz CPU. You reduce it to 1 GHz, without changing anything else. Will that make it less intelligent? Slower, definitely.
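A toy way to see the distinction (the sleep call is an invented stand-in for a slower clock, nothing more): the same procedure on a "slower CPU" returns the identical answer, just later.

```python
import time

def solve(n, delay_per_step=0.0):
    """A fixed 'reasoning' procedure; delay_per_step stands in for
    clock speed (a slower CPU takes longer per identical step)."""
    total = 0
    for i in range(n):
        total += i * i
        if delay_per_step:
            time.sleep(delay_per_step)
    return total

fast = solve(1000)          # the "2 GHz" run
slow = solve(1000, 1e-6)    # the "1 GHz" run: same steps, more wall time
print(fast == slow)  # True: the answer is unchanged, only latency differs
```

Whatever quality of answer the procedure can produce is untouched by the clock; only the time to produce it changes.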

Comment author: luzr 02 December 2008 10:48:42PM 0 points [-]

" The faster they get the smarter they are - since one component of intelligence is speed."

I think this might be incorrect. Speed means that you can solve the same problem faster, not that you can solve a more complex problem.

In response to Hard Takeoff
Comment author: luzr 02 December 2008 09:16:59PM 1 point [-]

"I suspect that, if correctly designed, a midsize computer cluster would be able to get high-grade thinking done at a serial speed much faster than human, even if the total parallel computing power was less."

I am glad I can agree for once :)

"The main thing I'll venture into actually expecting from adding "insight" to the mix, is that there'll be a discontinuity at the point where the AI understands how to do AI theory, the same way that human researchers try to do AI theory. An AI, to swallow its own optimization chain, must not just be able to rewrite its own source code;"

Anyway, my problem with your speculation about hard takeoff is that you seem to make the same conceptual mistake that you so dislike about Cyc - you seem to think that the AI will be mostly "written in the code".

I suspect it is very likely that the true working AI code will be relatively small and already pretty well optimized. The "mind" itself will be created from it by some self-learning process (my favorite scenario involves a weak AI as the initial "tutor") and will in fact mostly consist of a vast amount of classification coefficients and connections, or something like that (think Bayesian or neural networks).

While it will probably be within the AI's power to optimize its "primal algorithm", the gains there will be limited (it will be pretty well optimized by humans anyway). Its ability to reorganize its "thinking network" might be severely limited. Same as with humans - we nearly understand how a single neuron works, but are far from understanding the whole network. Also, with any further self-improvement, the complexity grows further, and it is quite reasonable to predict that this complexity will grow faster than the AI's ability to understand it.
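A toy sketch of the code-versus-data point (all the names and numbers here are made up for illustration): the "primal algorithm" is a few lines and leaves little to optimize, while everything that distinguishes one "mind" from another sits in the numeric data it runs over.

```python
def forward(weights, inputs):
    """The whole 'primal algorithm': one weighted-sum layer with a
    threshold. There is almost nothing here to optimize; the 'mind'
    is entirely the weights structure passed in."""
    return [
        1 if sum(w * x for w, x in zip(row, inputs)) > 0 else 0
        for row in weights
    ]

# Two 'minds' sharing the identical algorithm but different data:
mind_a = [[0.5, -1.2], [2.0, 0.3]]
mind_b = [[-0.7, 0.9], [0.1, -2.0]]

print(forward(mind_a, [1.0, 1.0]))  # [0, 1]: behavior set by the data
print(forward(mind_b, [1.0, 1.0]))  # [1, 0]: same code, different 'mind'
```

Improving `forward` itself buys little; understanding why a large, self-learned `weights` structure behaves as it does is the hard part, and it only gets harder as the structure grows.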

I think it all boils down to a very simple showstopper - considering you are building a perfect simulation, how many atoms do you need to simulate an atom? (BTW, this is also a showstopper for the "nested virtual reality" idea.)

Note, however, that this whole argument is not really mutually exclusive with hard takeoff. The AI can still build a next-generation AI that is better. But the "self" part might not work. (BTW, the interesting part is that the "parent" AI might then face the same dilemma about its descendant's friendliness ;)

I also think that in all your "foom" posts, you underestimate the empirical form of knowledge. It sounds like you expect the AI to just sit down in the cellar and think, without much input or action, then invent the theory of everything and take over the world.

That is not going to happen, at least for the same reason why the endless chain of nested VRs is unlikely.
