AngryParsley comments on Call for new SIAI Visiting Fellows, on a rolling basis - Less Wrong

29 points · Post author: AnnaSalamon 01 December 2009 01:42AM

Comment author: AngryParsley 04 December 2009 09:20:06AM *  4 points

You can progress scientifically to make AI if you copy human architecture somewhat.

I think you're making the mistake of relying too heavily on our one sample of a general intelligence: the human brain. How do we know which parts to copy and which parts to discard? To draw an analogy to flight, how can we tell which parts of the brain are equivalent to a bird's beak and which parts are equivalent to wings? We need to understand intelligence before we can successfully implement it. Research on the human brain is expensive, requires going through a lot of red tape, and is already being done by other groups. More importantly, planes do not fly because they are similar to birds. Planes fly because we figured out a theory of aerodynamics. Planes would fly just as well if no birds ever existed, and explaining aerodynamics doesn't require any talk of birds.

I don't see how we can hope to make significant progress on non-human AI. How will we test whether our theories are correct or on the right path?

I don't see how we can hope to make significant progress on non-bird flight. How will we test whether our theories are correct or on the right path?

Just because you can't think of a way to solve a problem doesn't mean the problem is intractable. We don't yet have the equivalent of a theory of aerodynamics for intelligence, but we do know that it is a computational process. Any algorithm, including whatever makes up intelligence, can be expressed mathematically.

As to the rest of your comment, I can't really respond to the questions about SIAI's behavior, since I don't know much about what they're up to.

Comment author: Jordan 04 December 2009 10:10:34AM 1 point

The bird analogy rubs me the wrong way more and more. I really don't think it's a fair comparison. Flight is based on some pretty simple principles, intelligence not necessarily so. If intelligence turns out to be fundamentally complex then emulating a physical brain might be the easiest way to create AI. Certainly intelligence might have some nice underlying theory, so we should pursue that angle as well, but I don't see how we can be certain either way.

Comment author: AngryParsley 04 December 2009 06:55:08PM *  5 points

Flight is based on some pretty simple principles, intelligence not necessarily so.

I think the analogy still maps even if this is true. We can't build useful AIs until we really understand intelligence. This holds no matter how complicated intelligence ends up being.

If intelligence turns out to be fundamentally complex then emulating a physical brain might be the easiest way to create AI.

First, nothing is "fundamentally complex." (See the reductionism sequence.) Second, brain emulation won't work for FAI because humans are not stable goal systems over long periods of time.

Comment author: Jordan 05 December 2009 02:19:44AM 1 point

We can't build useful AIs until we really understand intelligence.

You're overreaching. Uploads could clearly be useful, whether or not we understand how they work.

brain emulation won't work for FAI because humans are not stable goal systems over long periods of time.

Agreed, uploads aren't provably friendly. But you have to weigh that danger against the danger of AGI arriving before FAI.

Comment author: Vladimir_Nesov 05 December 2009 02:23:06AM *  3 points

Agreed, uploads aren't provably friendly. But you have to weigh that danger against the danger of AGI arriving before FAI.

But you still can't get to FAI unless you (or the uploads) understand intelligence.

Comment author: Jordan 05 December 2009 10:01:52AM *  2 points

Right, the two things you must weigh and 'choose' between (in the sense of research, advocacy, etc.):

1) Go for FAI, with the chance that AGI comes first

2) Go for uploads, with the chance they go crazy when self modifying

You don't get provable friendliness with uploads without understanding intelligence, but you do get a potential upgrade path to super intelligence that doesn't result in the total destruction of humanity. The safety of that path may be small, but the probability of developing FAI before AGI is likewise small, so it's not clear in my mind which option is better.

Comment author: CarlShulman 05 December 2009 10:26:04AM *  8 points

At the workshop after the Singularity Summit, almost everyone (including Eliezer, Robin, myself, and all the SIAI people) said they hoped that uploads would be developed before AGI. The only folk who took the other position were those actively working on AGI (but not FAI) themselves.

Also, people at SIAI and FHI are working on papers on strategies for safer upload deployment.

Comment author: Jordan 06 December 2009 06:54:45AM 2 points

Interesting, thanks for sharing that. I take it then that it was generally agreed that the time frame for FAI was probably substantially shorter than for uploads?

Comment author: CarlShulman 06 December 2009 10:43:08AM 1 point

Separate (as well as overlapping) inputs go into de novo AI and brain emulation, giving two distinct probability distributions. AI development seems more uncertain, so that we should assign substantial probability to it coming before or after brain emulation. If AI comes first/turns out to be easier, then FAI-type safety measures will be extremely important, with less time to prepare, giving research into AI risks very high value.

If brain emulations come first, then shaping the upload transition to improve the odds of solving collective action problems like regulating risky AI development looks relatively promising. Incidentally, however, a lot of useful and as yet unpublished analysis (e.g. implications of digital intelligences that can be copied and run at high speed) is applicable to thinking about both emulation and de novo AI.

Comment author: Mitchell_Porter 06 December 2009 11:26:42AM 2 points

I think AGI before human uploads is far more likely. If you have hardware capable of running an upload, the trial-and-error approach to AGI will be a lot easier (in the form of computationally expensive experiments). Also, it is going to be hard to emulate a human brain without knowing how it works (neurons are very complex structures and it is not obvious which component processes need to appear in the emulation), and as you approach that level of knowledge, trial-and-error again becomes easier, in the form of de novo AI inspired by knowledge of how the human brain works.

Maybe you could do a coarse-grained emulation of a living brain by high-resolution fMRI-style sampling, followed by emulation of the individual voxels on the basis of those measurements. You'd be trying to bypass the molecular and cellular complexities, by focusing on the computational behavior of brain microregions. There would still be potential for leakage of discoveries made in this way into the AGI R&D world before a complete human upload was carried out, but maybe this method closes the gap a little.

I can imagine upload of simple nonhuman nervous systems playing a role in the path to AGI, though I don't think it's at all necessary - again, if you have hardware capable of running a human upload, you can carry out computational experiments in de novo AI which are currently expensive or impossible. I can also see IA (intelligence augmentation) of human beings through neurohacks, computer-brain interfaces, and sophisticated versions of ordinary (noninvasive) interfaces. I'd rate a Singularity initiated by that sort of IA as considerably more likely than one arising from uploads, unless they're nondestructive low-resolution MRI-produced uploads. Emulating a whole adult human brain is not just an advanced technological action, it's a rather specialized one, and I expect the capacity to do so to coincide with the capacity to do IA and AI in a variety of other forms, and for superhuman intelligence to arise first on that front.

To sum up, I think the contenders in the race to produce superintelligence are trial-and-error AGI, theory-driven AGI, and cognitive neuroscience. IA becomes a contender only when cognitive neuroscience advances enough that you know what you're doing with these neurohacks and would-be enhancements. And uploads are a bit of a parlor trick that's just not in the running, unless it's accomplished via modeling the brain as a network of finite-state-machine microregions to be inferred from high-resolution fMRI. :-)

Comment author: Jordan 07 December 2009 01:50:37AM 0 points

How valuable is trying to shape the two probability distributions themselves? Should we be devoting resources to encouraging people to do research in computational neuroscience instead of AGI?

Comment author: timtyler 09 December 2009 11:46:27PM *  0 points

re: "almost everyone [...] said they hoped that uploads would be developed before AGI"

IMO, that explains much of the interest in uploads: wishful thinking.

Comment author: gwern 10 December 2009 12:20:53AM 5 points

Reminds me of Kevin Kelly's The Maes-Garreau Point:

"Nonetheless, her colleagues really, seriously expected this bridge to immortality to appear soon. How soon? Well, curiously, the dates they predicted for the Singularity seem to cluster right before the years they were expected to die. Isn’t that a coincidence?"

Possibly the single most disturbing bias-related essay I've read, because I realized as I was reading it that my own uploading prediction was very close to my expected lifespan (based on my family history) - only 10 or 20 years past my death. It surprises me sometimes that no one else on LW/OB seems to've heard of Kelly's Maes-Garreau Point.

Comment author: CarlShulman 10 December 2009 03:04:41PM *  6 points

It's an interesting methodology, but the Maes-Garreau data is just terrible quality. For every person I know on that list, the attached point estimate is misleading to grossly misleading. For instance, it gives Nick Bostrom as predicting a Singularity in 2004, when Bostrom actually gives a broad probability distribution over the 21st century, with much probability mass beyond it as well. 2004 is in no way a good representative statistic of that distribution, and someone who had read his papers on the subject or emailed him could easily find that out. The Yudkowsky number was the low end of a range (if I say that between 100 and 500 people were at an event, that's not the same thing as an estimate of 100 people!), and subsequently disavowed in favor of a broader probability distribution regardless. Marvin Minsky is listed as predicting 2070, when he has also given an estimate of most likely "5 to 500" years, and this treatment is inconsistent with the treatment of the previous two estimates. Robin Hanson's name is spelled incorrectly, and the figure beside his name is grossly unrepresentative of his writing on the subject (available for free on his website for the 'researcher' to look at). The listing for Kurzweil gives 2045, which is when Kurzweil expects a Singularity, as he defines it (meaning just an arbitrary benchmark for total computing power), but in his books he suggests that human brain emulation and life extension technology will be available in the previous decade, which would be the "living long enough to live a lot longer" break-even point if he were right about that.

I'm not sure about the others on that list, but given the quality of the observed data, I don't place much faith in the dataset as a whole. It also seems strangely sparse: where is Turing, or I.J. Good? Dan Dennett, Stephen Hawking, Richard Dawkins, Doug Hofstadter, Martin Rees, and many other luminaries are on record in predicting the eventual creation of superintelligent AI with long time-scales well after their actuarially predicted deaths. I think this search failed to pick up anyone using equivalent language in place of the term 'Singularity,' and was skewed as a result. Also, people who think that a technological singularity or the like will probably not occur for over 100 years are less likely to think it an important issue to talk about right now, and so are less likely to appear in a group selected by looking for attention-grabbing pronouncements.

A serious attempt at this analysis would aim at the following:

1) Not using point estimates, which can't do justice to a probability distribution. Give a survey that lets people assign their probability mass to different periods, or at least specifically ask for an interval, e.g. 80% confidence that an intelligence explosion will have begun/been completed after X but before Y.

2) Emailing the survey to living people to get their actual estimates.

3) Surveying a group identified via some other criterion (like knowledge of AI, note that participants at the AI@50 conference were electronically surveyed on timelines to human-level AI) to reduce selection effects.
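CarlShulman's first point, that a point estimate cannot do justice to a probability distribution, can be made concrete with a small sketch. The forecast numbers below are invented purely for illustration (they are not any real forecaster's estimates):

```python
from bisect import bisect_left
from itertools import accumulate

# Hypothetical forecast: probability mass assigned to each decade.
# These numbers are made up for illustration, not taken from anyone's writing.
years = list(range(2010, 2110, 10))
probs = [0.02, 0.05, 0.10, 0.16, 0.18, 0.16, 0.13, 0.09, 0.07, 0.04]

cdf = list(accumulate(probs))  # cumulative probability by decade

def quantile(q):
    """Earliest decade by which at least fraction q of the mass has arrived."""
    return years[bisect_left(cdf, q)]

low, median, high = quantile(0.10), quantile(0.50), quantile(0.90)
print(f"80% interval: {low}-{high}, median decade: {median}")
```

Reporting only the low end of such a distribution as "the prediction" is exactly the failure mode described above: the 10th-percentile year here falls decades before the median, so quoting it alone grossly misrepresents the forecaster.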

Comment author: Vladimir_Nesov 10 December 2009 12:10:18PM *  1 point

It surprises me sometimes that no one else on LW/OB seems to've heard of Kelly's Maes-Garreau Point.

It would be very surprising if you are right. I expect most of the people who have thought about the question of how such estimates could be biased would think of this idea within the first several minutes (even if without experimental data).

Comment author: mattnewport 10 December 2009 12:28:03AM 0 points

Kelly doesn't give references for the dates he cites as predictions for the singularity. Did Eliezer really predict at some point that the singularity would occur in 2005? That sounds unlikely to me.

Comment author: Vladimir_Nesov 05 December 2009 01:16:30PM 2 points

I tentatively agree; there may well be a way to FAI that doesn't involve normal humans understanding intelligence, but rather improved humans understanding intelligence - for example, carefully modified uploads or genetically engineered/selected smarter humans.

Comment author: wedrifid 05 December 2009 03:06:35AM 2 points

Agreed, uploads aren't provably friendly. But you have to weigh that danger against the danger of AGI arriving before FAI.

I rather suspect uploads would arrive at AGI before their more limited human counterparts. Although I suppose uploading only the right people could theoretically increase the chances of FAI coming first.

Comment author: timtyler 09 December 2009 11:52:27PM -1 points

Re: "Flight is based on some pretty simple principles, intelligence not necessarily so. If intelligence turns out to be fundamentally complex then emulating a physical brain might be the easiest way to create AI."

Hmm. Are there many more genes expressed in brains than in wings? IIRC, it's about equal.

Comment author: whpearson 04 December 2009 11:34:38AM 0 points

Okay, let us say you want to make a test for intelligence, just as there was a test for the lift generated by a fixed wing.

As you are testing a computational system there are two things you can look at, the input-output relation and the dynamics of the internal system.

Looking purely at the IO relation is not informative: IO tests can be fooled by GLUTs (giant lookup tables) or compressed versions of the same. This is why the Loebner Prize has not led to real AI in general. And making a system that can solve a single problem we consider to require intelligence (such as chess) just gets you a system that can solve chess; it does not generalize.
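The GLUT point can be illustrated with a toy sketch (the table entries below are invented): a lookup-table responder passes exactly the IO tests it was built from, and nothing else, so passing them certifies nothing about general capability.

```python
# A toy "giant lookup table" responder. It answers any prompt that was
# pre-baked into the table and fails on everything else - it can pass a
# fixed battery of IO tests without containing any general intelligence.
GLUT = {
    "What is 2 + 2?": "4",
    "Name a prime number.": "7",
    "Are you intelligent?": "Of course.",
}

def respond(prompt: str) -> str:
    # Anything outside the pre-baked table exposes the trick.
    return GLUT.get(prompt, "I don't understand.")

print(respond("What is 2 + 2?"))   # scripted question: looks competent
print(respond("What is 3 + 5?"))   # novel question: fails to generalize
```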

Contrast this with the wind tunnels the Wright brothers had: they could test for lift, which they knew would keep them up.

If you want to get into the dynamics of the internals of the system, they are divorced from our folk idea of intelligence, which is problem solving (unlike the folk theory of flight, which connects nicely with lift from a wing). So what sort of dynamics should we look for?

If the theory of intelligence is correct, the dynamics will have to be found in the human brain. Despite the slowness and difficulty of analysing it, we are generating more data, which we should be able to use to narrow down the dynamics.

How would you go about creating a testable theory of intelligence? Preferably without having to build a many person-year project each time you want to test your theory.

Comment author: timtyler 09 December 2009 11:56:25PM -1 points

Intelligence is defined in terms of response to a variable environment - so you just use an environment with a wide range of different problems in it.
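As a rough sketch of what "an environment with a wide range of different problems" could look like as a test, the toy battery below scores an agent across several unrelated task families. All task types and both agents are invented for illustration; a real test would need a far richer environment.

```python
import random

# Toy generality test: score an agent over a battery of varied problems
# rather than a single fixed task. Task families and agents are hypothetical.

def make_tasks(seed=0, n=20):
    """Generate a reproducible mixed battery of (task_spec, answer) pairs."""
    rng = random.Random(seed)
    tasks = []
    for _ in range(n):
        kind = rng.choice(["add", "max", "reverse"])
        if kind == "add":
            a, b = rng.randint(0, 99), rng.randint(0, 99)
            tasks.append((("add", a, b), a + b))
        elif kind == "max":
            xs = tuple(rng.randint(0, 99) for _ in range(5))
            tasks.append((("max", xs), max(xs)))
        else:
            s = "".join(rng.choice("abc") for _ in range(6))
            tasks.append((("reverse", s), s[::-1]))
    return tasks

def score(agent, tasks):
    """Fraction of the varied battery the agent solves: a crude generality measure."""
    return sum(agent(spec) == answer for spec, answer in tasks) / len(tasks)

def adder_only(spec):
    # A narrow agent: competent at one task family, helpless elsewhere.
    return spec[1] + spec[2] if spec[0] == "add" else None

def general(spec):
    # A broader agent covering all three task families.
    if spec[0] == "add":
        return spec[1] + spec[2]
    if spec[0] == "max":
        return max(spec[1])
    return spec[1][::-1]

tasks = make_tasks()
print(score(adder_only, tasks), score(general, tasks))
```

The narrow agent aces its one specialty but scores poorly on the battery as a whole, which is the behaviour such a variable-environment test is meant to expose.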