whpearson comments on Call for new SIAI Visiting Fellows, on a rolling basis - Less Wrong

29 Post author: AnnaSalamon 01 December 2009 01:42AM


Comment author: whpearson 01 December 2009 11:01:00PM 5 points [-]

I really like what SIAI is trying to do and the spirit that it embodies.

However, I am getting more skeptical of any projections or projects not based on good old-fashioned scientific knowledge (my own included).

You can progress scientifically to make AI if you copy human architecture somewhat, by making predictions about how the brain works and organises itself. However, I don't see how we can hope to make significant progress on non-human AI. How will we test whether our theories are correct or on the right path? For example, what evidence from the real world would convince the SIAI to abandon the search for a fixed decision theory as a module of the AI? And why isn't SIAI looking for that evidence, to make sure that you aren't wasting your time?

For every Einstein who makes the "right" cognitive leap, there are probably orders of magnitude more Kelvins who do things like predict that meteors provide fuel for the sun.

How are you going to winnow out the wrong ideas if they are consistent with everything we know, especially if they are pure mathematical constructs?

Comment author: AngryParsley 04 December 2009 09:20:06AM *  4 points [-]

You can progress scientifically to make AI if you copy human architecture somewhat.

I think you're making the mistake of relying too heavily on our one sample of a general intelligence: the human brain. How do we know which parts to copy and which parts to discard? To draw an analogy to flight, how can we tell which parts of the brain are equivalent to a bird's beak and which parts are equivalent to wings? We need to understand intelligence before we can successfully implement it. Research on the human brain is expensive, requires going through a lot of red tape, and it's already being done by other groups. More importantly, planes do not fly because they are similar to birds. Planes fly because we figured out a theory of aerodynamics. Planes would fly just as well if no birds ever existed, and explaining aerodynamics doesn't require any talk of birds.

I don't see how we can hope to make significant progress on non-human AI. How will we test whether our theories are correct or on the right path?

I don't see how we can hope to make significant progress on non-bird flight. How will we test whether our theories are correct or on the right path?

Just because you can't think of a way to solve a problem doesn't mean that a solution is intractable. We don't yet have the equivalent of a theory of aerodynamics for intelligence, but we do know that it is a computational process. Any algorithm, including whatever makes up intelligence, can be expressed mathematically.

As to the rest of your comment, I can't really respond to the questions about SIAI's behavior, since I don't know much about what they're up to.

Comment author: Jordan 04 December 2009 10:10:34AM 1 point [-]

The bird analogy rubs me the wrong way more and more. I really don't think it's a fair comparison. Flight is based on some pretty simple principles, intelligence not necessarily so. If intelligence turns out to be fundamentally complex then emulating a physical brain might be the easiest way to create AI. Certainly intelligence might have some nice underlying theory, so we should pursue that angle as well, but I don't see how we can be certain either way.

Comment author: AngryParsley 04 December 2009 06:55:08PM *  5 points [-]

Flight is based on some pretty simple principles, intelligence not necessarily so.

I think the analogy still maps even if this is true. We can't build useful AIs until we really understand intelligence. This holds no matter how complicated intelligence ends up being.

If intelligence turns out to be fundamentally complex then emulating a physical brain might be the easiest way to create AI.

First, nothing is "fundamentally complex." (See the reductionism sequence.) Second, brain emulation won't work for FAI because humans are not stable goal systems over long periods of time.

Comment author: Jordan 05 December 2009 02:19:44AM 1 point [-]

We can't build useful AIs until we really understand intelligence.

You're overreaching. Uploads could clearly be useful, whether or not we understand how they work.

brain emulation won't work for FAI because humans are not stable goal systems over long periods of time.

Agreed, uploads aren't provably friendly. But you have to weigh that danger against the danger of AGI arriving before FAI.

Comment author: Vladimir_Nesov 05 December 2009 02:23:06AM *  3 points [-]

Agreed, uploads aren't provably friendly. But you have to weigh that danger against the danger of AGI arriving before FAI.

But you still can't get to FAI unless you (or the uploads) understand intelligence.

Comment author: Jordan 05 December 2009 10:01:52AM *  2 points [-]

Right, the two things you must weigh and 'choose' between (in the sense of research, advocacy, etc.):

1) Go for FAI, with the chance that AGI comes first

2) Go for uploads, with the chance they go crazy when self modifying

You don't get provable friendliness with uploads without understanding intelligence, but you do get a potential upgrade path to superintelligence that doesn't result in the total destruction of humanity. The safety of that path may be small, but the probability of developing FAI before AGI is likewise small, so it's not clear in my mind which option is better.

Comment author: CarlShulman 05 December 2009 10:26:04AM *  8 points [-]

At the workshop after the Singularity Summit, almost everyone (including Eliezer, Robin, and myself), including all the SIAI people, said they hoped that uploads would be developed before AGI. The only folk who took the other position were those actively working on AGI (but not FAI) themselves.

Also, people at SIAI and FHI are working on papers on strategies for safer upload deployment.

Comment author: Jordan 06 December 2009 06:54:45AM 2 points [-]

Interesting, thanks for sharing that. I take it then that it was generally agreed that the time frame for FAI was probably substantially shorter than for uploads?

Comment author: CarlShulman 06 December 2009 10:43:08AM 1 point [-]

Separate (as well as overlapping) inputs go into de novo AI and brain emulation, giving two distinct probability distributions. AI development seems more uncertain, so that we should assign substantial probability to it coming before or after brain emulation. If AI comes first/turns out to be easier, then FAI-type safety measures will be extremely important, with less time to prepare, giving research into AI risks very high value.

If brain emulations come first, then shaping the upload transition to improve the odds of solving collective action problems like regulating risky AI development looks relatively promising. Incidentally, however, a lot of useful and as yet unpublished analysis (e.g. implications of digital intelligences that can be copied and run at high speed) is applicable to thinking about both emulation and de novo AI.

Comment author: timtyler 09 December 2009 11:46:27PM *  0 points [-]

re: "almost everyone [...] said they hoped that uploads would be developed before AGI"

IMO, that explains much of the interest in uploads: wishful thinking.

Comment author: gwern 10 December 2009 12:20:53AM 5 points [-]

Reminds me of Kevin Kelly's The Maes-Garreau Point:

"Nonetheless, her colleagues really, seriously expected this bridge to immortality to appear soon. How soon? Well, curiously, the dates they predicted for the Singularity seem to cluster right before the years they were expected to die. Isn’t that a coincidence?"

Possibly the single most disturbing bias-related essay I've read, because I realized as I was reading it that my own uploading prediction was very close to my expected lifespan (based on my family history) - only 10 or 20 years past my death. It surprises me sometimes that no one else on LW/OB seems to've heard of Kelly's Maes-Garreau Point.

Comment author: Vladimir_Nesov 05 December 2009 01:16:30PM 2 points [-]

I tentatively agree: there may well be a way to FAI that doesn't involve normal humans understanding intelligence, but rather improved humans understanding intelligence, for example carefully modified uploads or genetically engineered/selected smarter humans.

Comment author: wedrifid 05 December 2009 03:06:35AM 2 points [-]

Agreed, uploads aren't provably friendly. But you have to weigh that danger against the danger of AGI arriving before FAI.

I rather suspect uploads would arrive at AGI before their more limited human counterparts. Although I suppose uploading only the right people could theoretically increase the chances of FAI coming first.

Comment author: timtyler 09 December 2009 11:52:27PM -1 points [-]

Re: "Flight is based on some pretty simple principles, intelligence not necessarily so. If intelligence turns out to be fundamentally complex then emulating a physical brain might be the easiest way to create AI."

Hmm. Are there many more genes expressed in brains than in wings? IIRC, it's about equal.

Comment author: whpearson 04 December 2009 11:34:38AM 0 points [-]

Okay, let us say you want to make a test for intelligence, just as there was a test for the lift generated by a fixed wing.

As you are testing a computational system there are two things you can look at, the input-output relation and the dynamics of the internal system.

Looking purely at the IO relation is not informative: it can be fooled by GLUTs (giant lookup tables) or compressed versions of the same. This is why the Loebner Prize has not led to real AI in general. And making a system that can solve a single problem we consider to require intelligence (such as chess) just gets you a system that can solve chess and does not generalize.
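A minimal Python sketch of the GLUT point (all names here are hypothetical, purely for illustration): an agent that merely memorizes the test set passes a fixed input-output test perfectly while having no ability to generalize at all.

```python
# A fixed IO test: question -> expected answer.
io_test = {
    "2 + 2": "4",
    "capital of France": "Paris",
}

# A "GLUT agent" that simply memorizes the test set.
glut = dict(io_test)

def glut_agent(question):
    # Returns a canned answer; has no model of arithmetic or geography.
    return glut.get(question, "I don't know")

# The GLUT passes the fixed test perfectly...
assert all(glut_agent(q) == a for q, a in io_test.items())

# ...but fails on a trivially related question it never memorized.
print(glut_agent("2 + 3"))  # -> "I don't know"
```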

Contrast this with the wind tunnels that the Wright brothers had: they could test for lift, which they knew would keep them up.

If you want to get into the dynamics of the internals of the system, they are divorced from our folk idea of intelligence, which is problem solving (unlike the folk theory of flight, which connects nicely with lift from a wing). So what sort of dynamics should we look for?

If the theory of intelligence is correct, the dynamics will have to be found in the human brain. Despite the slowness and difficulty of analysing it, we are generating more data, which we should be able to use to narrow down the dynamics.

How would you go about creating a testable theory of intelligence? Preferably without having to build a many-person-year project each time you want to test your theory.

Comment author: timtyler 09 December 2009 11:56:25PM -1 points [-]

Intelligence is defined in terms of response to a variable environment - so you just use an environment with a wide range of different problems in it.

Comment author: [deleted] 03 December 2009 02:56:53AM 2 points [-]

If a wrong idea is both simple and consistent with everything you know, it cannot be winnowed out. You have to either find something simpler or find an inconsistency.