nhamann comments on Existential Risk and Public Relations - Less Wrong

Post author: multifoliaterose | 15 August 2010 07:16AM | 36 points

Comment author: [deleted] 15 August 2010 06:59:28PM 3 points [-]

Yes, the subfield of computer science is what I'm referring to.

I'm not sure that the difference between "clever machine learning techniques" and "minds" is as hard and fast as you make it. A machine that drives a car is doing one of the things a human mind does; it may, in some cases, do it through a process that's structurally similar to the way the human mind does it. It seems to me that machines that can do these simple cognitive tasks are the best source of evidence we have today about hypothetical future thinking machines.

Comment author: nhamann 15 August 2010 08:10:18PM 5 points [-]

I'm not sure that the difference between "clever machine learning techniques" and "minds" is as hard and fast as you make it.

I gave the wrong impression here. I actually think that machine learning might be a good framework for thinking about how parts of the brain work, and I am very interested in studying machine learning. But I am skeptical that more than a small minority of projects where machine learning techniques have been applied to solve some concrete problem have shed any light on how (human) intelligence works.

In other words, I largely agree with Ben Goertzel's assertion that there is a fundamental difference between "narrow AI" and AI research that might eventually lead to machines capable of cognition, but I'm not sure I have good evidence for this argument.

Comment author: komponisto 15 August 2010 10:19:50PM 11 points [-]

In other words, I largely agree with Ben Goertzel's assertion that there is a fundamental difference between "narrow AI" and AI research that might eventually lead to machines capable of cognition, but I'm not sure I have good evidence for this argument.

Although one should be very, very careful not to confuse the opinions of someone like Goertzel with those of the people (currently) at SIAI, I think it's fair to say that most of them (including, in particular, Eliezer) hold a view similar to this. And this is the location -- pretty much the only important one -- of my disagreement with those folks. (Or, rather, I should say my differing impression from those folks -- to make an important distinction brought to my attention by one of the folks in question, Anna Salamon.) Most of Eliezer's claims about the importance of FAI research seem obviously true to me (to the point where I marvel at the fuss that is regularly made about them), but the one that I have not quite been able to swallow is the notion that AGI is only decades away, as opposed to a century or two. And the reason is essentially disagreement on the above point.

At first glance this may seem puzzling, since, given how much more attention is given to narrow AI by researchers, you might think that someone who believes AGI is "fundamentally different" from narrow AI might be more pessimistic about the prospect of AGI coming soon than someone (like me) who is inclined to suspect that the difference is essentially quantitative. The explanation, however, is that (from what I can tell) the former belief leads Eliezer and others at SIAI to assign (relatively) large amounts of probability mass to the scenario of a small set of people having some "insight" which allows them to suddenly invent AGI in a basement. In other words, they tend to view AGI as something like an unsolved math problem, like those on the Clay Millennium list, whereas it seems to me like a daunting engineering task analogous to colonizing Mars (or maybe Pluto).

This -- much more than all the business about fragility of value and recursive self-improvement leading to hard takeoff, which frankly always struck me as pretty obvious, though maybe there is hindsight involved here -- is the area of Eliezer's belief map that, in my opinion, could really use more public, explicit justification.

Comment author: Daniel_Burfoot 15 August 2010 11:32:57PM 5 points [-]

whereas it seems to me like a daunting engineering task analogous to colonizing Mars

I don't think this is a good analogy. The problem of colonizing Mars is concrete. You can make a TODO list; you can carve the larger problem up into subproblems like rockets, fuel supply, life support, and so on. Nobody knows how to do that for AI.

Comment author: John_Maxwell_IV 16 August 2010 12:31:40AM *  1 point [-]

OK, but it could still end up being like colonizing Mars if at some point someone realizes how to do that. Maybe komponisto thinks that someone will probably carve AGI into subproblems before it is solved.

Comment author: komponisto 16 August 2010 02:02:16AM *  1 point [-]

Well, it seems we disagree. Honestly, I see the problem of AGI as the fairly concrete one of assembling an appropriate collection of thousands-to-millions of "narrow AI" subcomponents.

Perhaps another way to put it would be that I suspect the Kolmogorov complexity of any AGI is so high that it's unlikely that the source code could be stored in a small number of human brains (at least the way the latter currently work).

EDIT: When I say "I suspect" here, of course I mean "my impression is". I don't mean to imply that I don't think this thought has occurred to the people at SIAI (though it might be nice if they could explain why they disagree).

Comment author: CarlShulman 16 August 2010 11:35:39AM 6 points [-]

The portion of the genome coding for brain architecture is a lot smaller than Windows 7, bit-wise.
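
For concreteness, here is the back-of-envelope arithmetic behind that comparison (a rough sketch; the brain-coding fraction and the install size are assumed illustrative figures, not numbers from the comment):

    # Rough sketch of the size comparison above. The brain-coding fraction
    # and the Windows 7 footprint are assumed, illustrative numbers.
    GENOME_BASE_PAIRS = 3.2e9            # approximate human genome length
    BITS_PER_BASE = 2                    # four nucleotides -> 2 bits each
    genome_mb = GENOME_BASE_PAIRS * BITS_PER_BASE / 8 / 1e6   # ~800 MB ceiling

    BRAIN_FRACTION = 0.1                 # assumption: minority of genome is brain-specific
    WINDOWS7_GB = 16                     # rough install footprint of Windows 7

    print(f"whole genome upper bound: {genome_mb:.0f} MB")
    print(f"assumed brain-coding portion: {genome_mb * BRAIN_FRACTION:.0f} MB")
    print(f"Windows 7 install: ~{WINDOWS7_GB} GB")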

Comment author: whpearson 17 August 2010 02:29:22PM 3 points [-]

An oddly relevant article on the information needed to specify the brain: a biologist tearing a strip off Kurzweil for suggesting that we'll be able to reverse engineer the human brain in a decade by looking at the genome.

Comment author: CarlShulman 17 August 2010 02:54:39PM *  4 points [-]

P.Z. is misreading a quote from a secondhand report. Kurzweil is not talking about reading out the genome and simulating the brain from that, but about using improvements in neuroimaging to inform input-output models of brain regions. The genome point is just an indicator of the limited number of component types involved, which helps to constrain estimates of difficulty.

Edit: Kurzweil has now replied, more or less along the lines above.

Comment author: timtyler 17 August 2010 04:46:30PM *  0 points [-]

Kurzweil's analysis is simply wrong. Here's the gist of my refutation of it:

"So, who is right? Does the brain's design fit into the genome? - or not?

The detailed form of proteins arises from a combination of the nucleotide sequence that specifies them, the cytoplasmic environment in which gene expression takes place, and the laws of physics.

We can safely ignore the contribution of cytoplasmic inheritance - however, the contribution of the laws of physics is harder to discount. At first sight, it may seem simply absurd to argue that the laws of physics contain design information relating to the construction of the human brain. However, there is a well-established mechanism by which physical law may do just that - an idea known as the anthropic principle. This argues that the universe we observe must necessarily permit the emergence of intelligent agents. If that involves encoding the design of the brains of intelligent agents into the laws of physics, then: so be it. There are plenty of apparently-arbitrary constants in physics where such information could conceivably be encoded: the fine structure constant, the cosmological constant, Planck's constant - and so on.

At the moment, it is not even possible to bound the quantity of brain-design information so encoded. When we get machine intelligence, we will have an independent estimate of the complexity of the design required to produce an intelligent agent. Alternatively, when we know what the laws of physics are, we may be able to bound the quantity of information encoded by them. However, today neither option is available to us."

Comment author: whpearson 17 August 2010 10:44:28PM 0 points [-]

Wired really messed up the flow of the talk in that case. Is it based on a Singularity Summit talk?

Comment author: Perplexed 17 August 2010 03:31:02PM 0 points [-]

I agree with your analysis, but I also understand where PZ is coming from. You write above that the portion of the genome coding for the brain is small. PZ replies that the small part of the genome you are referring to does not by itself explain the brain; you also need to understand the decoding algorithm - itself scattered through the whole genome and perhaps also the zygotic "epigenome". You might perhaps clarify that what you were talking about with "small portion of the genome" was the Kolmogorov complexity, so you were already including the decoding algorithm in your estimate.

The problem is, how do you get the point through to PZ and other biologists who come at the question from an evo-devo PoV? I think that someone ought to write a comment correcting PZ, but in order to do so, the commenter would have to speak the languages of three fields - neuroscience, evo-devo, and information theory - and understand all three well enough to unpack the jargon for laymen without thereby losing credibility with people who do know one or more of the three fields.

Comment author: timtyler 17 August 2010 04:52:51PM *  0 points [-]

The problem is, how do you get the point through to PZ and other biologists who come at the question from an evo-devo PoV?

Why bother? PZ's rather misguided rant isn't doing very much damage. Just ignore him, I figure.

Maybe it is a slow news day. PZ's rant got Slashdotted:

http://science.slashdot.org/story/10/08/17/1536233/Ray-Kurzweil-Does-Not-Understand-the-Brain

PZ has stooped pretty low with the publicity recently:

http://scienceblogs.com/pharyngula/2010/08/the_eva_mendes_sex_tape.php

Maybe he was trolling with his Kurzweil rant. He does have a history with this subject matter, though:

http://scienceblogs.com/pharyngula/2009/02/singularly_silly_singularity.php

Comment author: Jonathan_Graehl 16 August 2010 09:58:56PM *  1 point [-]

Obviously the genome alone doesn't build a brain. I wonder how many "bits" I should add on for the normal environment that's also required (in terms of how much additional complexity is needed to get the first artificial mind that can learn about the world given additional sensory-like inputs). Probably not too many.

Comment author: komponisto 16 August 2010 12:14:45PM *  1 point [-]

Thanks, this is useful to know. Will revise beliefs accordingly.

Comment author: Eliezer_Yudkowsky 18 August 2010 02:32:54PM 2 points [-]

Honestly, I see the problem of AGI as the fairly concrete one of assembling an appropriate collection of thousands-to-millions of "narrow AI" subcomponents.

What do you think you know and how do you think you know it? Let's say you have a thousand narrow AI subcomponents. (Millions = implausible due to genome size, as Carl Shulman points out.) Then what happens, besides "then a miracle occurs"?

Comment author: komponisto 19 August 2010 12:13:15AM *  2 points [-]

What happens is that the machine has so many different abilities (playing chess and walking and making airline reservations and...) that its cumulative effect on its environment is comparable to a human's or greater; in contrast to the previous version with 900 components, which was only capable of responding to the environment on the level of a chess-playing, web-searching squirrel.

This view arises from what I understand about the "modular" nature of the human brain: we think we're a single entity that is "flexible enough" to think about lots of different things, but in reality our brains consist of a whole bunch of highly specialized "modules", each able to do some single specific thing.

Now, to head off the "Fly Q" objection, let me point out that I'm not at all suggesting that an AGI has to be designed like a human brain. Instead, I'm "arguing" (expressing my perception) that the human brain's general intelligence isn't a miracle: intelligence really is what inevitably happens when you string zillions of neurons together in response to some optimization pressure. And the "zillions" part is crucial.

(Whoever downvoted the grandparent was being needlessly harsh. Why in the world should I self-censor here? I'm just expressing my epistemic state, and I've even made it clear that I don't believe I have information that SIAI folks don't, or am being more rational than they are.)

Comment author: Eliezer_Yudkowsky 19 August 2010 12:38:13AM 5 points [-]

If a thousand species in nature with a thousand different abilities were to cooperate, would they equal the capabilities of a human? If not, what else is missing?

Comment author: thomblake 19 August 2010 12:48:18AM 2 points [-]

Tough problem. My first reaction is 'yes', but I think that might be because we're assuming cooperation, which might be letting more in the door than you want.

Comment author: wedrifid 19 August 2010 04:54:20AM 0 points [-]

Exactly the thought I had. Cooperation is kind of a big deal.

Comment author: komponisto 19 August 2010 12:47:29AM *  -1 points [-]

Yes, if there were a sufficiently powerful optimization process controlling the form of their cooperation.

Comment author: komponisto 19 August 2010 02:04:07AM 2 points [-]

I am highly confused about the parent having been voted down, to the point where I am in a state of genuine curiosity about what went through the voter's mind as he or she saw it.

Eliezer asked whether a thousand different animals cooperating could have the power of a human. I answered:

Yes, if there were a sufficiently powerful optimization process controlling the form of their cooperation

And then someone came along, read this, and thought....what? Was it:

  • "No, you idiot, obviously no optimization process could be that powerful." ?

  • "There you go: 'sufficiently powerful optimization process' is equivalent to 'magic happens'. That's so obvious that I'm not going to waste my time pointing it out; instead, I'm just going to lower your status with a downvote." ?

  • "Clearly you didn't understand what Eliezer was asking. You're in over your head, and shouldn't be discussing this topic." ?

  • Something else?

Comment author: WrongBot 19 August 2010 02:01:32AM 0 points [-]

The optimization process is the part where the intelligence lives.

Comment author: whpearson 19 August 2010 09:01:32AM 0 points [-]

Do you expect the conglomerate entity to be able to read, or to be able to learn how to? Considering Eliezer can quite happily pick many, many things like archer fish (ability to shoot water to take out flying insects) and chameleons (ability to control eyes independently), I'm not sure how they all add up to reading.

Comment author: jacob_cannell 25 August 2010 03:46:49AM *  0 points [-]

This view arises from what I understand about the "modular" nature of the human brain: we think we're a single entity that is "flexible enough" to think about lots of different things, but in reality our brains consist of a whole bunch of highly specialized "modules", each able to do some single specific thing.

The brain has many different components with specializations, but the largest portion - and the dominant one in humans - the cortex, is not really specialized at all in the way you outline.

The cortex is no more specialized than your hard drive.

It's composed of a single repeating structure and associated learning algorithm that appears to be universal. The functional specializations that appear in the adult brain arise due to topological wiring proximity to the relevant sensory and motor connections. The V1 region is not hard-wired to perform mathematically optimal Gabor-like edge filters. It automatically evolves into this configuration because it is the optimal configuration for modelling the input data at that layer, and it evolves thus solely based on exposure to said input data from retinal ganglion cells.

You can think of cortical tissue as a biological 'neuronium'. It has a semi-magical emergent capacity to self-organize into an appropriate set of feature detectors based on what it's wired to. More on this.
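
A toy sketch of this input-driven specialization, assuming a simple winner-take-all Hebbian rule (it illustrates the flavor of the claim, not the actual cortical algorithm; every parameter is made up):

    import numpy as np

    # Toy sketch of input-driven specialization: winner-take-all Hebbian
    # learning on synthetic oriented-bar patches. Units start random and
    # end up tuned to orientations purely from exposure to the input.
    # Illustrative only - not a model of real cortical circuitry.
    rng = np.random.default_rng(0)
    PATCH, N_UNITS, LR = 8, 16, 0.05

    def oriented_bar(theta):
        """An 8x8 patch containing a soft bar at orientation theta."""
        y, x = np.mgrid[-1:1:PATCH * 1j, -1:1:PATCH * 1j]
        d = x * np.cos(theta) + y * np.sin(theta)   # distance to bar axis
        return np.exp(-(d / 0.3) ** 2).ravel()

    W = rng.normal(0, 0.1, (N_UNITS, PATCH * PATCH))  # random "synapses"
    for _ in range(20000):
        v = oriented_bar(rng.uniform(0, np.pi))       # random oriented input
        winner = np.argmax(W @ v)                     # competition
        W[winner] += LR * (v - W[winner])             # move winner toward input
    # Each row of W now resembles an oriented bar: specialization emerged
    # from the input statistics, not from hard-wiring.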

All that being said, the inter-regional wiring itself is currently less understood and is probably more genetically predetermined.

Comment author: Jonathan_Graehl 16 August 2010 10:01:23PM 0 points [-]

Well, it seems we disagree. Honestly, I see the problem of AGI as the fairly concrete one of assembling an appropriate collection of thousands-to-millions of "narrow AI" subcomponents.

There may be other approaches that are significantly simpler (that we haven't yet found, obviously). Assuming AGI happens, it will have been a race between the specific (type of) path you imagine, and every other alternative you didn't think of. In other words, you think you have an upper bound on how much time/expense it will take.

Comment author: whpearson 16 August 2010 11:00:24AM *  0 points [-]

I'm not a member of SIAI but my reason for thinking that AGI is not just going to be like lots of narrow bits of AI stuck together is that I can see interesting systems that haven't been fully explored (due to difficulty of exploration). These types of systems might solve some of the open problems not addressed by narrow AI.

These are problems such as

  • How can a system become good at so many different things when it starts off the same? Especially puzzling is how people build complex (unconscious) machinery for dealing with problems that we are not adapted for, like chess.
  • How can a system look after/upgrade itself without getting completely pwned by malware? (We do get partially pwned by hostile memes, but that is not a complete takeover of the same type as getting rooted.)

Now I also doubt that these systems will develop quickly when people get around to investigating them. And they will have elements of traditional narrow AI in them as well, but they will be changeable/adaptable parts of the system, not fixed sub-components. What I think needs exploring is primarily changes in software life-cycles rather than a change in the nature of the software itself.

Comment author: nhamann 16 August 2010 03:48:56AM *  3 points [-]

I don't think AGI in a few decades is very farfetched at all. There's a heckuvalot of neuroscience being done right now (the Society for Neuroscience has 40,000 members), and while it's probably true that much of that research is concerned most directly with mere biological "implementation details" and not with "underlying algorithms" of intelligence, it is difficult for me to imagine that there will still be no significant insights into the AGI problem after 3 or 4 more decades of this amount of neuroscience research.

Comment author: komponisto 16 August 2010 04:53:11AM *  3 points [-]

Of course there will be significant insights into the AGI problem over the coming decades -- probably many of them. My point was that I don't see AGI as hard because of a lack of insights; I see it as hard because it will require vast amounts of "ordinary" intellectual labor.

Comment author: nhamann 16 August 2010 06:10:36AM 9 points [-]

I'm having trouble understanding how exactly you think the AGI problem is different from any really hard math problem. Take P != NP, for instance the attempted proof that's been making the rounds on various blogs. If you've skimmed any of the discussion you can see that even this attempted proof piggybacks on "vast amounts of 'ordinary' intellectual labor," largely consisting of mapping out various complexity classes and their properties and relations. There's probably been at least 30 years of complexity theory research required to make that proof attempt even possible.

I think you might be able to argue that even if we had an excellent theoretical model of an AGI, that the engineering effort required to actually implement it might be substantial and require several decades of work (e.g. Von Neumann architecture isn't suitable for AGI implementation, so a great deal of computer engineering has to be done).

If this is your position, I think you might have a point, but I still don't see how the effort is going to take 1 or 2 centuries. A century is a loooong time. A century ago humans barely had powered flight.

Comment author: Daniel_Burfoot 18 August 2010 06:03:12PM 4 points [-]

but I still don't see how the effort is going to take 1 or 2 centuries. A century is a loooong time.

I think the following quote is illustrative of the problems facing the field:

After [David Marr] joined us, our team became the most famous vision group in the world, but the one with the fewest results. His idea was a disaster. The edge finders they have now using his theories, as far as I can see, are slightly worse than the ones we had just before taking him on. We've lost twenty years.

-Marvin Minsky, quoted in "AI" by Daniel Crevier.

Some notes and interpretation of this comment:

  • Most vision researchers, if asked who is the most important contributor to their field, would probably answer "David Marr". He set the direction for subsequent research in the field; students in introductory vision classes read his papers first.
  • Edge detection is a tiny part of vision, and vision is a tiny part of intelligence, but at least in Minsky's view, no progress (or reverse progress) was achieved in twenty years of research by the leading lights of the field.
  • There is no standard method for evaluating edge detector algorithms, so it is essentially impossible to measure progress in any rigorous way.

I think this kind of observation justifies AI-timeframes on the order of centuries.

Comment author: jacob_cannell 25 August 2010 03:27:22AM -1 points [-]

Edge detection is rather trivial. Visual recognition however is not, and there certainly are benchmarks and comparable results in that field. Have you browsed the recent pubs of Poggio et al at MIT vision lab? There is lots of recent progress, with results matching human levels for quick recognition tasks.

Also, vision is not a tiny part of intelligence. It's the single largest functional component of the cortex, by far. The cortex uses the same essential low-level optimization algorithm everywhere, so understanding vision at the detailed level is a good step towards understanding the whole thing.

And finally, and most relevantly for AGI, the higher visual regions also give us the capacity for visualization and are critical for higher creative intelligence. Literally all scientific discovery and progress depend on this system.

"visualization is the key to enlightenment" and all that

the visual system

Comment author: Daniel_Burfoot 26 August 2010 03:55:45AM 0 points [-]

Edge detection is rather trivial.

It's only trivial if you define an "edge" in a trivial way, e.g. as a set of points where the intensity gradient is greater than a certain threshold. This kind of definition has little use: given a picture of a tree trunk, it will indicate many edges corresponding to the ridges and corrugations of the bark, and will not highlight the meaningful edge between the trunk and the background.
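
To see the failure concretely, here is a minimal sketch of the trivial detector being criticized, run on a synthetic bark-like texture (all numbers are arbitrary illustrative choices):

    import numpy as np

    # The trivial edge definition: threshold the intensity gradient
    # magnitude. On a textured surface it fires everywhere, not just at
    # the object boundary.
    rng = np.random.default_rng(0)
    img = np.zeros((64, 64))
    img[:, 32:] = 1.0                            # "trunk" starts at column 32
    img[:, 32:] += 0.8 * rng.random((64, 32))    # bark-like fine texture

    gy, gx = np.gradient(img)
    edges = np.hypot(gx, gy) > 0.2               # arbitrary threshold

    # 'edges' is dense inside the textured half: this detector cannot tell
    # the meaningful trunk/background edge from surface corrugations.
    print(edges[:, 40:].mean())                  # large fraction flagged as "edge"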

I don't believe that there is much real progress recently in vision. I think the state of the art is well illustrated by the "racist" HP web camera that detects white faces but not black faces.

Also, vision is not a tiny part of intelligence [...] The cortex uses the same essential low-level optimization algorithm everywhere,

I actually agree with you about this, but I think most people on LW would disagree.

Comment author: jacob_cannell 26 August 2010 04:25:23AM *  0 points [-]

Whether you are talking about Canny edge filters or Gabor-like edge detection (more similar to what V1 self-organizes into), they are all still relatively simple - trivial compared to AGI. Trivial as in something you code in a few hours for your screen filter system in a modern game render engine.

The particular problem you point out with the tree trunk is a scale problem and is easily handled in any good vision system.

An edge detection filter is just a building block; it's not the complete system.

In HVS, initial edge preprocessing is done in the retina itself, which essentially applies on-center, off-surround Gaussian filters (differences of Gaussian blurs, akin to low-pass filters in Photoshop). The output of the retina is thus essentially a multi-resolution image set, similar to a wavelet decomposition. The image output at this stage becomes a series of edge differences (local gradients), but at numerous spatial scales.

The high frequency edges such as the ridges and corrugations of the bark are cleanly separated from the more important low frequency edges separating the tree trunk from the background. V1 then detects edge orientations at these various scales, and higher layers start recognizing increasingly complex statistical patterns of edges across larger fields of view.
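
A minimal sketch of that multi-scale separation, assuming differences of Gaussian blurs as the center-surround stage (the scales are arbitrary illustrative choices, not measured retinal parameters):

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def dog_pyramid(img, sigmas=(1, 2, 4, 8)):
        """Band-pass layers: blur at one scale minus blur at the next.
        Fine-scale layers carry bark-like ridges; coarse layers carry the
        low-frequency trunk/background edge, now cleanly separated."""
        blurred = [gaussian_filter(img, s) for s in sigmas]
        return [a - b for a, b in zip(blurred, blurred[1:])]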

Whether there is much real progress recently in computer vision is relative to one's expectations, but the current state of the art in research systems at least is far beyond your simplistic assessment. I have a layman's overview of HVS here. If you really want to know about the current state of the art in research, read some recent papers from a place like Poggio's lab at MIT.

In the product space, the HP web camera example is also very far from the state of the art; I'm surprised that you posted that.

There is free eye tracking software you can get (running on your PC) that can use your web cam to track where your eyes are currently focused in real time. That's still not even the state of the art in the product space - that would probably be the systems used in the more expensive robots, and of course that lags the research state of the art.

Comment author: komponisto 16 August 2010 07:35:04AM 7 points [-]

Take P != NP, for instance the attempted proof that's been making the rounds on various blogs. If you've skimmed any of the discussion you can see that even this attempted proof piggybacks on "vast amounts of 'ordinary' intellectual labor..."

By no means do I want to downplay the difficulty of P vs NP; all the same, I think we have different meanings of "vast" in mind.

The way I think about it is: think of all the intermediate levels of technological development that exist between what we have now and outright Singularity. I would only be half-joking if I said that we ought to have flying cars before we have AGI. There are of course more important examples of technologies that seem easier than AGI, but which themselves seem decades away. Repair of spinal cord injuries; artificial vision; useful quantum computers (or an understanding of their impossibility); cures for the numerous cancers; revival of cryonics patients; weather control. (Some of these, such as vision, are arguably sub-problems of AGI: problems that would have to be solved in the course of solving AGI.)

Actually, think of math problems if you like. Surely there are conjectures in existence now -- probably some of them already famous -- that will take mathematicians more than a century to prove (assuming no Singularity or intelligence enhancement before then). Is AGI significantly easier than the hardest math problems around now? This isn't my impression -- indeed, it looks to me more analogous to problems that are considered "hopeless", like the "problem" of classifying all groups, say.

Comment author: Eliezer_Yudkowsky 18 August 2010 02:36:25PM 10 points [-]

By no means do I want to downplay the difficulty of P vs NP; all the same, I think we have different meanings of "vast" in mind.

I hate to go all existence proofy on you, but we have an existence proof of a general intelligence - accidentally sneezed out by natural selection, no less, which has severe trouble building freely rotating wheels - and no existence proof of a proof of P != NP. I don't know much about the field, but from what I've heard, I wouldn't be too surprised if proving P != NP is harder than building FAI for the unaided human mind. I wonder if Scott Aaronson would agree with me on that, even though neither of us understand the other's field? (I just wrote him an email and asked, actually; and this time remembered not to say my opinion before asking for his.)

Comment author: Eliezer_Yudkowsky 18 August 2010 04:33:21PM 8 points [-]

Scott says that he thinks P != NP is easier / likely to come first.

Comment author: XiXiDu 18 August 2010 05:56:50PM 5 points [-]

Here is an interview with Scott Aaronson:

After glancing over a 100-page proof that claimed to solve the biggest problem in computer science, Scott Aaronson bet his house that it was wrong. Why?

Comment author: bcoburn 21 August 2010 04:11:01PM 2 points [-]

It's interesting that you both seem to think that your problem is easier, I wonder if there's a general pattern there.

Comment author: JoshuaZ 18 August 2010 03:19:30PM 3 points [-]

Well, I for one strongly hope that we resolve whether P = NP before we have AI, since a large part of my estimate for the probability of AI being able to go FOOM is based on how much of the complexity hierarchy collapses. If there's heavy collapse, AI going FOOM is much more plausible.

Comment author: FAWS 18 August 2010 03:29:16PM 3 points [-]

and no existence proof of a proof of P != NP

Obviously not. That would be a proof of P != NP.

As for existence proof of a general intelligence, that doesn't prove anything about how difficult it is, for anthropic reasons. For all we know 10^20 evolutions each in 10^50 universes that would in principle allow intelligent life might on average result in 1 general intelligence actually evolving.

Comment author: Emile 18 August 2010 03:36:09PM 5 points [-]

We can make better guesses than that: evolution coughed up quite a few things that would be considered pretty damn intelligent for a computer program, like ravens, octopuses, rats or dolphins.

Comment author: CarlShulman 18 August 2010 03:43:36PM *  4 points [-]

Of course, if you buy the self-indication assumption (which I do not) or various other related principles you'll get an update that compels belief in quite frequent life (constrained by the Fermi paradox and a few other things).

More relevantly, approaches like Robin's Hard Step analysis and convergent evolution (e.g. octopus/bird intelligence) can rule out substantial portions of "crazy-hard evolution of intelligence" hypothesis-space. And we know that human intelligence isn't so unstable as to see it being regularly lost in isolated populations, as we might expect given ludicrous anthropic selection effects.

Comment author: komponisto 18 August 2010 11:32:03PM *  1 point [-]

I don't know much about the field, but from what I've heard, I wouldn't be too surprised if proving P != NP is harder than building FAI for the unaided human mind

Well actually, after thinking about it, I'm not sure I would either. There is something special about P vs NP, from what I understand, and I didn't even mean to imply otherwise above; I was only disputing the idea that "vast amounts" of work had already gone into the problem, for my definition of "vast".

Scott Aaronson's view on this doesn't move my opinion much (despite his large contribution to my beliefs about P vs NP), since I think he overestimates the difficulty of AGI (see your Bloggingheads diavlog with him).

Comment author: XiXiDu 18 August 2010 03:04:10PM 1 point [-]

I don't know much about the field, but from what I've heard, I wouldn't be too surprised if proving P != NP is harder than building FAI for the unaided human mind.

Awesome! Be sure to let us know what he thinks. Sounds unbelievable to me though, but what do I know.

Comment author: jacob_cannell 25 August 2010 03:33:31AM 0 points [-]

Why is AGI a math problem? What is abstract about it?

We don't need math proofs to know if AGI is possible. It is, the brain is living proof.

We don't need math proofs to know how to build AGI - we can reverse engineer the brain.

Comment author: timtyler 25 August 2010 06:01:54AM *  0 points [-]

Why is AGI a math problem? What is abstract about it?

This is a good part of the guts of it. That bit of it is a math problem:

http://timtyler.org/sequence_prediction/

Comment author: timtyler 25 August 2010 06:11:48AM *  0 points [-]

We don't need math proofs to know how to build AGI - we can reverse engineer the brain.

There may be a few clues in there - but engineers are likely to get to the goal looong before the emulators arrive - and engineers are math-friendly.

Comment author: jacob_cannell 25 August 2010 07:08:26AM -2 points [-]

A 'few clues' sounds like a gross underestimation. It is the only working example, so it certainly contains all the clues, not just a few. The question of course is how much of a shortcut is possible. The answer to date seems to be: none to slim.

I agree engineers reverse engineering will succeed way ahead of full emulation, that wasn't my point.

Comment author: timtyler 16 August 2010 06:28:37AM 2 points [-]

...but you don't really know - right?

You can't say with much confidence that there's no AIXI-shaped magic bullet.

Comment author: komponisto 16 August 2010 07:38:22AM *  2 points [-]

That's right; I'm not an expert in AI. Hence I am describing my impressions, not my fully Aumannized Bayesian beliefs.

Comment author: jacob_cannell 25 August 2010 03:14:50AM *  -1 points [-]

AIXI-shaped magic bullet?

AIXI's contribution is more philosophical than practical. I find a depressing over-emphasis on Bayesian probability theory here as the 'math' of choice vs. computational complexity theory, which is the proper domain.

The most likely outcome of a math breakthrough will be some rough lower and/or upper bounds on the shape of the intelligence-over-space/time-complexity function. And right now the most likely bet seems to be that the brain is pretty well optimized at the circuit level, and that the best we can do is reverse engineer it.

EY and the math folk here reach a very different conclusion, but I have yet to find his well-considered justification. I suspect that the major reason the mainstream AI community doesn't subscribe to SIAI's math magic bullet theory is that they hold the same position outlined above: i.e. that when we get the math theorems, all they will show is what we already suspect: human level intelligence requires X memory bits and Y bit ops/second, where X and Y are roughly close to brain levels.

This, if true, kills the entirety of the software recursive self-improvement theory. The best that software can do is approach the theoretical optimum complexity class for the problem, and then after that point all one can do is fix it into hardware for a further large constant gain.

I explore this a little more here

Comment author: timtyler 25 August 2010 05:47:37AM 0 points [-]

AIXI-shaped magic bullet?

Good quality general-purpose data-compression would "break the back" of the task of building synthetic intelligent agents - and that's a "simple" math problem - as I explain on: http://timtyler.org/sequence_prediction/

At least it can be stated very concisely. Solutions so far haven't been very simple - but the brain's architecture offers considerable hope for a relatively simple solution.
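
To make the "compression = prediction" link concrete, a minimal sketch (assuming an idealized arithmetic coder; the predictor interface is invented for illustration):

    import numpy as np

    def code_length_bits(sequence, predict):
        """Compressed size of `sequence` under a predictor, where
        predict(prefix) returns a dict of next-symbol probabilities.
        An ideal arithmetic coder spends -log2(p) bits per symbol."""
        return sum(-np.log2(predict(sequence[:i]).get(s, 1e-12))
                   for i, s in enumerate(sequence))

    # A uniform predictor over {'0','1'} pays exactly 1 bit per symbol;
    # any predictor that models the data better compresses below that.
    uniform = lambda prefix: {'0': 0.5, '1': 0.5}
    print(code_length_bits("0101010101", uniform))   # -> 10.0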

Comment author: timtyler 25 August 2010 05:51:55AM 0 points [-]

right now the most likely bet seems to be that the brain is pretty well optimized at the circuit level, and that the best we can do is reverse engineer it.

That seems like crazy talk to me. The brain is not optimal - not its hardware or software - and not by a looooong way! Computers have already steam-rollered its memory and arithmetic units - and that happened before we even had nanotechnology computing components. The rest of the brain seems likely to follow.

Comment author: Vladimir_Nesov 15 August 2010 10:38:55PM 1 point [-]

Note that allowing for the possibility of a sudden breakthrough is also an antiprediction, not a claim about a particular way things are. You can't know that no such thing is possible without already having an understanding of the solution at hand; hence you must accept the risk. It's also possible that it'll take a long time.

Comment author: jacob_cannell 25 August 2010 02:56:41AM 1 point [-]

I'm reading through and catching up on this thread, and rather strongly agreed with your statement:

Eliezer and others at SIAI to assign (relatively) large amounts of probability mass to the scenario of a small set of people having some "insight" which allows them to suddenly invent AGI in a basement. In other words, they tend to view AGI as something like an unsolved math problem, like those on the Clay Millennium list, whereas it seems to me like a daunting engineering task analogous to colonizing Mars (or maybe Pluto).

However, pondering it again, I realize there is an epistemological spectrum ranging from math on the one side to engineering on the other. Key insights into new algorithms can undoubtedly speed up progress, and such new insights often can be expressed as pure math, but at the end of the day it is a grand engineering (or reverse engineering) challenge.

However, I'm somewhat taken aback when you say, "the notion that AGI is only decades away, as opposed to a century or two."

A century or two?

Comment author: JoshuaZ 15 August 2010 10:31:10PM *  4 points [-]

In other words, I largely agree with Ben Goertzel's assertion that there is a fundamental difference between "narrow AI" and AI research that might eventually lead to machines capable of cognition, but I'm not sure I have good evidence for this argument.

One obvious piece of evidence is that many forms of narrow learning are mathematically incapable of doing much. There are, for example, a whole host of theorems about what different classes of neural networks can actually recognize, and the results aren't very impressive. Similarly, support vector machines have a lot of trouble learning anything that isn't a very simple statistical model, and even then humans need to decide which stats are relevant. Other linear classifiers run into similar problems.
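
A standard toy instance of the kind of theorem meant here (a textbook sketch, not an example from the comment): the perceptron learning rule cannot fit XOR, since XOR is not linearly separable.

    import numpy as np

    # XOR is not linearly separable, so no weights w and bias b can classify
    # it; the perceptron rule below runs forever without ever getting all
    # four points right (Minsky & Papert's classic observation).
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
    y = np.array([0, 1, 1, 0])                    # XOR labels

    w, b = np.zeros(2), 0.0
    for _ in range(1000):                         # perceptron learning rule
        for xi, yi in zip(X, y):
            pred = int(w @ xi + b > 0)
            w += (yi - pred) * xi
            b += (yi - pred)

    print([int(w @ xi + b > 0) for xi in X])      # never equals [0, 1, 1, 0]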

Comment author: Simulation_Brain 18 August 2010 06:20:49AM 3 points [-]

I work in this field, and was under approximately the opposite impression: that voice and visual recognition are rapidly approaching human levels. If I'm wrong and there are sharp limits, I'd like to know. Thanks!

Comment author: timtyler 18 August 2010 06:31:35AM *  2 points [-]

Machine intelligence has surpassed "human level" in a number of narrow domains. Already, humans can't manipulate enough data to do anything remotely like a search engine or a stockbot can do.

The claim seems to be that in narrow domains there are often domain-specific "tricks" - that wind up not having much to do with general intelligence - e.g. see chess and go. This seems true - but narrow projects often broaden out. Search engines and stockbots really need to read and understand the web. The pressure to develop general intelligence in those domains seems pretty strong.

Those who make a big deal about the distinction between their projects and "mere" expert systems are probably mostly trying to market their projects before they are really experts at anything.

One of my videos discusses the issue of whether the path to superintelligent machines will be "broad" or "narrow":

http://alife.co.uk/essays/on_general_machine_intelligence_strategies/

Comment author: JoshuaZ 18 August 2010 03:28:59PM 0 points [-]

Thanks, it is always good to have input from people who work in a given field. So please correct me if I'm wrong, but I'm under the impression that:

1) Neural networks cannot in general detect connected components unless the network has some form of recursion.
2) No one knows how to make a neural network with recursion learn in any effective, marginally predictable fashion.

This is the sort of thing I was thinking of. Am I wrong about 1 or 2?

Comment author: Simulation_Brain 20 August 2010 08:58:47PM 1 point [-]

Not sure what you mean by 1), but certainly, recurrent neural nets are more powerful. 2) is no longer true; see for example the GeneRec algorithm. It does something much like backpropagation, but with no derivatives explicitly calculated, so there's no concern with recurrent loops.

On the whole, neural net research has slowed dramatically based on the common view you've expressed; but progress continues apace, and they are not far behind cutting-edge vision and speech processing algorithms, while working much more like the brain does.
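
For flavor, a heavily hedged sketch of the two-phase, no-explicit-derivatives idea behind algorithms like GeneRec: run the network once free and once with the outputs clamped, and update weights from the difference between the two phases. This is generic contrastive Hebbian-style learning, not O'Reilly's published equations, and in this simplest (no-hidden-layer) case the phase difference reduces to the delta rule:

    import numpy as np

    # Sketch of contrastive two-phase learning, NOT GeneRec's exact rule.
    # With no hidden layer, the "plus minus minus" update is the delta rule.
    rng = np.random.default_rng(0)
    W = rng.normal(0, 0.1, (2, 1))               # 2 inputs -> 1 output

    def act(x):
        return 1 / (1 + np.exp(-x))              # logistic activation

    X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
    y = np.array([[0.], [0.], [0.], [1.]])       # AND: learnable without hidden units

    for _ in range(5000):
        for xi, yi in zip(X, y):
            minus = act(xi @ W)                  # free ("minus") phase
            plus = yi                            # clamped ("plus") phase
            W += 0.5 * np.outer(xi, plus - minus)  # Hebbian phase difference

    print([float(act(xi @ W)) for xi in X])      # approaches [0, 0, 0, 1]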

Comment author: JoshuaZ 21 August 2010 02:47:12PM 0 points [-]

Thanks. GeneRec sounds very interesting. Will take a look. Regarding 1), I was thinking of something like the theorems in chapter 9 of Perceptrons, which show that there are strong limits on what topological features of input a non-recursive neural net can recognize.