komponisto comments on Existential Risk and Public Relations - Less Wrong

36 Post author: multifoliaterose 15 August 2010 07:16AM


Comment author: nhamann 16 August 2010 03:48:56AM *  3 points [-]

I don't think AGI in a few decades is very farfetched at all. There's a heck of a lot of neuroscience being done right now (the Society for Neuroscience has 40,000 members). While it's probably true that much of that research is concerned most directly with mere biological "implementation details" rather than with the "underlying algorithms" of intelligence, it is difficult for me to imagine that there will still be no significant insights into the AGI problem after 3 or 4 more decades of neuroscience research at this scale.

Comment author: komponisto 16 August 2010 04:53:11AM *  3 points [-]

Of course there will be significant insights into the AGI problem over the coming decades -- probably many of them. My point was that I don't see AGI as hard because of a lack of insights; I see it as hard because it will require vast amounts of "ordinary" intellectual labor.

Comment author: nhamann 16 August 2010 06:10:36AM 9 points [-]

I'm having trouble understanding how exactly you think the AGI problem is different from any really hard math problem. Take P != NP, for instance, and the attempted proof that's been making the rounds on various blogs. If you've skimmed any of the discussion, you can see that even this attempted proof piggybacks on "vast amounts of 'ordinary' intellectual labor," largely consisting of mapping out various complexity classes and their properties and relations. At least 30 years of complexity theory research were probably required to make that proof attempt even possible.

I think you might be able to argue that even if we had an excellent theoretical model of an AGI, the engineering effort required to actually implement it might be substantial and require several decades of work (e.g. if the Von Neumann architecture isn't suitable for AGI implementation, a great deal of computer engineering has to be done).

If this is your position, I think you might have a point, but I still don't see how the effort is going to take 1 or 2 centuries. A century is a loooong time. A century ago humans barely had powered flight.

Comment author: Daniel_Burfoot 18 August 2010 06:03:12PM 4 points [-]

but I still don't see how the effort is going to take 1 or 2 centuries. A century is a loooong time.

I think the following quote is illustrative of the problems facing the field:

After [David Marr] joined us, our team became the most famous vision group in the world, but the one with the fewest results. His idea was a disaster. The edge finders they have now using his theories, as far as I can see, are slightly worse than the ones we had just before taking him on. We've lost twenty years.

-Marvin Minsky, quoted in "AI" by Daniel Crevier.

Some notes and interpretation of this comment:

  • Most vision researchers, if asked who is the most important contributor to their field, would probably answer "David Marr". He set the direction for subsequent research in the field; students in introductory vision classes read his papers first.
  • Edge detection is a tiny part of vision, and vision is a tiny part of intelligence, but at least in Minsky's view, no progress (or reverse progress) was achieved in twenty years of research by the leading lights of the field.
  • There is no standard method for evaluating edge detector algorithms, so it is essentially impossible to measure progress in any rigorous way.

I think this kind of observation justifies AI-timeframes on the order of centuries.

Comment author: jacob_cannell 25 August 2010 03:27:22AM -1 points [-]

Edge detection is rather trivial. Visual recognition, however, is not, and there certainly are benchmarks and comparable results in that field. Have you browsed the recent publications of Poggio et al. at the MIT vision lab? There is lots of recent progress, with results matching human levels on quick recognition tasks.

Also, vision is not a tiny part of intelligence. It's the single largest functional component of the cortex, by far. The cortex uses the same essential low-level optimization algorithm everywhere, so understanding vision at a detailed level is a good step towards understanding the whole thing.

And finally, most relevantly for AGI, the higher visual regions also give us the capacity for visualization and are critical for higher creative intelligence. Literally all scientific discovery and progress depends on this system.

"visualization is the key to enlightenment" and all that

the visual system

Comment author: Daniel_Burfoot 26 August 2010 03:55:45AM 0 points [-]

Edge detection is rather trivial.

It's only trivial if you define an "edge" in a trivial way, e.g. as a set of points where the intensity gradient is greater than a certain threshold. This kind of definition has little use: given a picture of a tree trunk, this definition will indicate many edges corresponding to the ridges and corrugations of the bark, and will not highlight the meaningful edge between the trunk and the background.
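To make the complaint concrete, here is a minimal sketch of that naive gradient-threshold definition (illustrative only; the pixel values and threshold are invented):

```python
def gradient_edges(image, threshold):
    """Mark pixels where the horizontal intensity jump exceeds a threshold.

    This is the trivial definition criticized above: it fires on every
    strong local change, texture and meaningful boundaries alike.
    """
    rows, cols = len(image), len(image[0])
    edges = [[False] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols - 1):
            if abs(image[r][c + 1] - image[r][c]) > threshold:
                edges[r][c] = True
    return edges

# One row of a "tree trunk": bark texture jittering by 30, plus a single
# meaningful trunk/background boundary (a jump of 100).
row = [100, 130, 100, 130, 100, 200, 200, 200]
edges = gradient_edges([row], threshold=25)
# The detector marks the bark ridges and the real boundary identically.
```

On this input the detector cannot distinguish the bark ridges from the trunk/background edge, which is exactly the problem with the trivial definition.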

I don't believe that there is much real progress recently in vision. I think the state of the art is well illustrated by the "racist" HP web camera that detects white faces but not black faces.

Also, vision is not a tiny part of intelligence [...] The cortex uses the same essential low-level optimization algorithm everywhere,

I actually agree with you about this, but I think most people on LW would disagree.

Comment author: jacob_cannell 26 August 2010 04:25:23AM *  0 points [-]

Whether you are talking about Canny edge filters or Gabor-like edge detection (closer to what V1 self-organizes into), these are all still relatively simple: trivial compared to AGI. Trivial as in something you could code in a few hours for the screen-filter system in a modern game render engine.

The particular problem you point out with the tree trunk is a scale problem and is easily handled in any good vision system.

An edge detection filter is just a building block; it's not the complete system.

In the human visual system (HVS), initial edge preprocessing is done in the retina itself, which essentially applies on-center, off-surround Gaussian filters (similar to low-pass filters in Photoshop). The output of the retina is thus essentially a multi-resolution image set, similar to a wavelet decomposition. The image output at this stage becomes a series of edge differences (local gradients), but at numerous spatial scales.

The high frequency edges such as the ridges and corrugations of the bark are cleanly separated from the more important low frequency edges separating the tree trunk from the background. V1 then detects edge orientations at these various scales, and higher layers start recognizing increasingly complex statistical patterns of edges across larger fields of view.
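As a rough 1D toy illustration of that multi-scale separation (my own simplification: box filters stand in for Gaussians, and the signal values are invented):

```python
def blur(signal, radius):
    """Box-filter smoothing, a cheap stand-in for a Gaussian blur."""
    n = len(signal)
    out = []
    for i in range(n):
        lo, hi = max(0, i - radius), min(n, i + radius + 1)
        out.append(sum(signal[lo:hi]) / (hi - lo))
    return out

def band_pass(signal, fine_radius, coarse_radius):
    """Difference of two smoothings: keeps detail between the two scales,
    loosely mimicking on-center/off-surround (difference-of-Gaussians)
    filtering at one spatial frequency band."""
    fine = blur(signal, fine_radius)
    coarse = blur(signal, coarse_radius)
    return [f - c for f, c in zip(fine, coarse)]

# High-frequency "bark" texture riding on a low-frequency
# trunk/background step.
signal = [100 + (30 if i % 2 else 0) for i in range(8)] + [200] * 8
fine_band = band_pass(signal, 0, 1)    # responds to the bark texture
coarse_band = band_pass(signal, 2, 5)  # responds near the trunk edge only
```

The fine band lights up on the texture while staying flat on the background; the coarse band is quiet in flat regions and responds around the step, so the two kinds of "edge" land in different channels.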

Whether there is much real progress recently in computer vision is relative to one's expectations, but the current state of the art in research systems at least is far beyond your simplistic assessment. I have a layman's overview of HVS here. If you really want to know about the current state of the art in research, read some recent papers from a place like Poggio's lab at MIT.

In the product space, the HP web camera example is also very far from the state of the art, I'm surprised that you posted that.

There is free eye tracking software you can get (running on your PC) that can use your web cam to track where your eyes are currently focused in real time. That's still not even the state of the art in the product space - that would probably be the systems used in the more expensive robots, and of course that lags the research state of the art.

Comment author: komponisto 16 August 2010 07:35:04AM 7 points [-]

Take P != NP, for instance the attempted proof that's been making the rounds on various blogs. If you've skimmed any of the discussion you can see that even this attempted proof piggybacks on "vast amounts of 'ordinary' intellectual labor,

By no means do I want to downplay the difficulty of P vs NP; all the same, I think we have different meanings of "vast" in mind.

The way I think about it is: think of all the intermediate levels of technological development that exist between what we have now and outright Singularity. I would only be half-joking if I said that we ought to have flying cars before we have AGI. There are of course more important examples of technologies that seem easier than AGI, but which themselves seem decades away. Repair of spinal cord injuries; artificial vision; useful quantum computers (or an understanding of their impossibility); cures for the numerous cancers; revival of cryonics patients; weather control. (Some of these, such as vision, are arguably sub-problems of AGI: problems that would have to be solved in the course of solving AGI.)

Actually, think of math problems if you like. Surely there are conjectures in existence now -- probably some of them already famous -- that will take mathematicians more than a century from now to prove (assuming no Singularity or intelligence enhancement before then). Is AGI significantly easier than the hardest math problems around now? This isn't my impression -- indeed, it looks to me more analogous to problems that are considered "hopeless", like the "problem" of classifying all groups, say.

Comment author: Eliezer_Yudkowsky 18 August 2010 02:36:25PM 10 points [-]

By no means do I want to downplay the difficulty of P vs NP; all the same, I think we have different meanings of "vast" in mind.

I hate to go all existence proofy on you, but we have an existence proof of a general intelligence - accidentally sneezed out by natural selection, no less, which has severe trouble building freely rotating wheels - and no existence proof of a proof of P != NP. I don't know much about the field, but from what I've heard, I wouldn't be too surprised if proving P != NP is harder than building FAI for the unaided human mind. I wonder if Scott Aaronson would agree with me on that, even though neither of us understand the other's field? (I just wrote him an email and asked, actually; and this time remembered not to say my opinion before asking for his.)

Comment author: Eliezer_Yudkowsky 18 August 2010 04:33:21PM 8 points [-]

Scott says that he thinks P != NP is easier / likely to come first.

Comment author: XiXiDu 18 August 2010 05:56:50PM 5 points [-]

Here an interview with Scott Aaronson:

After glancing over a 100-page proof that claimed to solve the biggest problem in computer science, Scott Aaronson bet his house that it was wrong. Why?

Comment author: bcoburn 21 August 2010 04:11:01PM 2 points [-]

It's interesting that you both seem to think that your problem is easier, I wonder if there's a general pattern there.

Comment author: ciphergoth 21 August 2010 04:36:27PM 7 points [-]

What I find interesting is that the pattern nearly always goes the other way: you're more likely to think that a celebrated problem you understand well is harder than one you don't know much about. It says a lot about both Eliezer's and Scott's rationality that they think of the other guy's hard problems as even harder than their own.

Comment author: JoshuaZ 18 August 2010 03:19:30PM 3 points [-]

Well, I for one strongly hope that we resolve whether P = NP before we have AI, since a large part of my estimate for the probability of AI being able to go FOOM is based on how much of the complexity hierarchy collapses. If there's heavy collapse, AI going FOOM is much more plausible.

Comment author: FAWS 18 August 2010 03:29:16PM 3 points [-]

and no existence proof of a proof of P != NP

Obviously not. That would be a proof of P != NP.

As for existence proof of a general intelligence, that doesn't prove anything about how difficult it is, for anthropic reasons. For all we know 10^20 evolutions each in 10^50 universes that would in principle allow intelligent life might on average result in 1 general intelligence actually evolving.

Comment author: Emile 18 August 2010 03:36:09PM 5 points [-]

We can make better guesses than that: evolution coughed up quite a few things that would be considered pretty damn intelligent for a computer program, like ravens, octopuses, rats or dolphins.

Comment author: FAWS 18 August 2010 03:44:38PM 1 point [-]

Not independently (not even cephalopods, at least not completely). And we have no way of estimating the difference in difficulty between that level of intelligence and general intelligence other than evolutionary history (which for anthropic reasons could be highly atypical) and similarity in makeup. But we already know that our type of nervous system is capable of supporting general intelligence; most rat-level intelligences might hit fundamental architectural problems first.

Comment author: Emile 18 August 2010 04:36:31PM 3 points [-]

We can always estimate, even with very little knowledge - we'll just have huge error margins. I agree it is possible that "For all we know 10^20 evolutions each in 10^50 universes that would in principle allow intelligent life might on average result in 1 general intelligence actually evolving", I would just bet on a much higher probability than that, though I agree with the principle.

The evidence that pretty smart animals exist in distant branches of the tree of life and in different environments is weak evidence that intelligence is "pretty accessible" in evolution's search space. It's stronger evidence than the mere fact that we, intelligent beings, exist.

Comment author: FAWS 18 August 2010 05:27:17PM *  1 point [-]

Intelligence, sure. The original point was that our existence doesn't put a meaningful upper bound on the difficulty of general intelligence. Cephalopods are good evidence that, given whatever rudimentary precursors of a nervous system our common ancestor had (I know it had differentiated cells, but I'm not sure what else; I think it didn't really have organs like higher animals, let alone anything that really qualified as a nervous system), cephalopod-level intelligence is comparatively easy, having evolved independently twice. It doesn't say anything about how much more difficult general intelligence is compared to cephalopod intelligence, nor about whether whatever precursors to a nervous system our common ancestor had were unusually conducive to intelligence compared to the average of similarly complex evolved beings.

If I had to guess I would assume cephalopod level intelligence within our galaxy and a number of general intelligences somewhere outside our past light cone. But that's because I already think of general intelligence as not fantastically difficult independently of the relevance of the existence proof.

Comment author: CarlShulman 18 August 2010 03:43:36PM *  4 points [-]

Of course, if you buy the self-indication assumption (which I do not) or various other related principles you'll get an update that compels belief in quite frequent life (constrained by the Fermi paradox and a few other things).

More relevantly, approaches like Robin's Hard Step analysis and convergent evolution (e.g. octopus/bird intelligence) can rule out substantial portions of "crazy-hard evolution of intelligence" hypothesis-space. And we know that human intelligence isn't so unstable as to see it being regularly lost in isolated populations, as we might expect given ludicrous anthropic selection effects.

Comment author: timtyler 22 August 2010 12:43:56PM 0 points [-]

I looked at Nick's:

http://www.anthropic-principle.com/preprints/olum/sia.pdf

I don't get it. Anyone know what is supposed to be wrong with the SIA?

Comment author: komponisto 18 August 2010 11:32:03PM *  1 point [-]

I don't know much about the field, but from what I've heard, I wouldn't be too surprised if proving P != NP is harder than building FAI for the unaided human mind

Well actually, after thinking about it, I'm not sure I would either. There is something special about P vs NP, from what I understand, and I didn't even mean to imply otherwise above; I was only disputing the idea that "vast amounts" of work had already gone into the problem, for my definition of "vast".

Scott Aaronson's view on this doesn't move my opinion much (despite his large contribution to my beliefs about P vs NP), since I think he overestimates the difficulty of AGI (see your Bloggingheads diavlog with him).

Comment author: XiXiDu 18 August 2010 03:04:10PM 1 point [-]

I don't know much about the field, but from what I've heard, I wouldn't be too surprised if proving P != NP is harder than building FAI for the unaided human mind.

Awesome! Be sure to let us know what he thinks. Sounds unbelievable to me, but what do I know.

Comment author: jacob_cannell 25 August 2010 03:33:31AM 0 points [-]

Why is AGI a math problem? What is abstract about it?

We don't need math proofs to know if AGI is possible. It is, the brain is living proof.

We don't need math proofs to know how to build AGI - we can reverse engineer the brain.

Comment author: timtyler 25 August 2010 06:01:54AM *  0 points [-]

Why is AGI a math problem? What is abstract about it?

This is a good part of the guts of it. That bit of it is a math problem:

http://timtyler.org/sequence_prediction/

Comment author: timtyler 25 August 2010 06:11:48AM *  0 points [-]

We don't need math proofs to know how to build AGI - we can reverse engineer the brain.

There may be a few clues in there - but engineers are likely to get to the goal looong before the emulators arrive - and engineers are math-friendly.

Comment author: jacob_cannell 25 August 2010 07:08:26AM -2 points [-]

A 'few clues' sounds like a gross underestimation. It is the only working example, so it certainly contains all the clues, not just a few. The question of course is how much of a shortcut is possible. The answer to date seems to be: none to slim.

I agree engineers reverse engineering will succeed way ahead of full emulation, that wasn't my point.

Comment author: timtyler 25 August 2010 07:38:41AM 0 points [-]

If information is not extracted and used, it doesn't qualify as being a "clue".

The question of course is how much of a shortcut is possible. The answer to date seems to be: none to slim.

The search-oracle and stock-market-bot makers have paid precious little attention to the brain. Their systems are based on engineering principles instead.

I agree engineers reverse engineering will succeed way ahead of full emulation,

Most engineers spend very little time on reverse-engineering nature. There is a little "bioinspiration" - but inspiration is a bit different from wholesale copying.

Comment author: timtyler 16 August 2010 06:28:37AM 2 points [-]

...but you don't really know - right?

You can't say with much confidence that there's no AIXI-shaped magic bullet.

Comment author: komponisto 16 August 2010 07:38:22AM *  2 points [-]

That's right; I'm not an expert in AI. Hence I am describing my impressions, not my fully Aumannized Bayesian beliefs.

Comment author: jacob_cannell 25 August 2010 03:14:50AM *  -1 points [-]

AIXI-shaped magic bullet?

AIXI's contribution is more philosophical than practical. I find a depressing over-emphasis here on Bayesian probability theory as the 'math' of choice, versus computational complexity theory, which is the proper domain.

The most likely outcome of a math breakthrough will be some rough lower and/or upper bounds on the shape of the intelligence-over-space/time-complexity function. And right now the most likely bet seems to be that the brain is pretty well optimized at the circuit level, and that the best we can do is reverse engineer it.

EY and the math folk here reach a very different conclusion, but I have yet to find a well-considered justification. I suspect that the major reason the mainstream AI community doesn't subscribe to SIAI's math-magic-bullet theory is that they hold the same position outlined above: i.e. that when we get the math theorems, all they will show is what we already suspect: human-level intelligence requires X memory bits and Y bit-ops/second, where X and Y are roughly close to brain levels.

This, if true, kills the entirety of the software recursive self-improvement theory. The best that software can do is approach the theoretical optimum complexity class for the problem; after that point, all one can do is fix it into hardware for a further large constant gain.

I explore this a little more here

Comment author: timtyler 25 August 2010 05:47:37AM 0 points [-]

AIXI-shaped magic bullet?

Good quality general-purpose data-compression would "break the back" of the task of building synthetic intelligent agents - and that's a "simple" math problem - as I explain at: http://timtyler.org/sequence_prediction/

At least it can be stated very concisely. Solutions so far haven't been very simple - but the brain's architecture offers considerable hope for a relatively simple solution.
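One toy illustration of the compression/prediction link (my own sketch, using an off-the-shelf compressor rather than anything state of the art): score candidate continuations by how little they add to the compressed length of the history.

```python
import zlib

def compressed_size(data: bytes) -> int:
    """Length of the zlib-compressed data: a crude, computable proxy
    for how 'predictable' the data is."""
    return len(zlib.compress(data, 9))

def predict_next(history: bytes, candidates) -> bytes:
    """Pick the continuation the compressor finds most predictable,
    i.e. the one adding the least compressed length to the history.
    (Ties go to the earliest candidate.)"""
    return min(candidates, key=lambda c: compressed_size(history + c))

history = b"the cat sat on the mat. " * 40
guess = predict_next(history, [b"the", b"xqz"])
# The continuation that fits the established pattern compresses better
# than the novel one, so the compressor "predicts" it.
```

A real solution would of course need far better compressors than zlib, but the sketch shows why better general-purpose compression and better sequence prediction are the same problem.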

Comment author: timtyler 25 August 2010 05:51:55AM 0 points [-]

right now the most likely bet seems to be that the brain is pretty well optimized at the circuit level, and that the best we can do is reverse engineer it.

That seems like crazy talk to me. The brain is not optimal - not its hardware or its software - and not by a looooong way! Computers have already steam-rollered its memory and arithmetic units - and that happened before we even had nanotechnology computing components. The rest of the brain seems likely to follow.