whpearson comments on Should I believe what the SIAI claims? - Less Wrong

23 Post author: XiXiDu 12 August 2010 02:33PM




Comment author: whpearson 12 August 2010 07:49:15PM 6 points [-]

If I were SIAI, my reasoning would be the following. First, stop with the believes/believes-not dichotomy and move to probabilities.

So what is the probability of a good outcome if you can't formalize friendliness before AGI? Some of them would argue it is infinitesimal. This is based on fast take-off, winner-take-all scenarios (I have a problem with this stage, but I would like to see it properly argued, and that is hard).

So looking at the decision tree (under these assumptions), the only chance of a good outcome is to try to formalise FAI before AGI becomes well known. All the other options lead to extinction.

So to attack the "formalise Friendliness before AGI" position you would need to argue that the first AGIs are very unlikely to kill us all. That is the major battleground as far as I am concerned.

Comment author: Benja 12 August 2010 07:58:11PM *  5 points [-]

Agreed about what the "battleground" is, modulo one important nit: not the first AGI, but the first AGI that recursively self-improves at a high speed. (I'm pretty sure that's what you meant, but it's important to keep in mind that, e.g., a roughly human-level AGI as such is not what we need to worry about -- the point is not that intelligent computers are magically superpowerful, but that it seems dangerously likely that quickly self-improving intelligences, if they arrive, will be non-magically superpowerful.)

Comment author: jimrandomh 12 August 2010 07:54:53PM 3 points [-]

I don't think formalize/don't-formalize should be a simple dichotomy either; friendliness can be formalized at various levels of detail, and the more details are formalized, the fewer unconstrained details there are that could be wrong in a way that kills us all.

Comment author: ciphergoth 13 August 2010 06:07:51AM 2 points [-]

I'd look at it the other way: I'd take it as practically certain that any superintelligence built without explicit regard to Friendliness will be unFriendly, and ask what the probability is that through sufficiently slow growth in intelligence and other mere safeguards, we manage to survive building it.

My best hope currently rests on the AGI problem being hard enough that we get uploads first.

(This is essentially the Open Thread about everything Eliezer or SIAI have ever said now, right?)

Comment author: NihilCredo 15 August 2010 12:19:51AM 1 point [-]

Uploading would have quite a few benefits, but I get the impression it would make us more vulnerable to whatever tools a hostile AI may possess, not less.

Comment author: timtyler 13 August 2010 07:36:18AM *  1 point [-]

"So what is the probability of a good outcome if you can't formalize friendliness before AGI? Some of them would argue infinitesimal."

One problem here is the use of a circular definition of "friendliness" - one that defines the concept in terms of whether it leads to a favourable outcome. If you think "friendly" is defined in terms of whether or not the machine destroys humanity, then clearly you will think that an "unfriendly" machine would destroy the world. However, this is just a word game - it doesn't tell us anything about the actual chances of such destruction happening.

Comment deleted 12 August 2010 08:31:13PM *  [-]
Comment author: whpearson 12 August 2010 08:54:08PM *  1 point [-]

I'd start here to get an overview.

My summary would be: there are huge numbers of types of minds and motivations, so if we pick one at random from the space of minds then it is likely to be contrary to our values, because it will have a different sense of what is good or worthwhile. This moderately relies on the speed/singleton issue, because evolutionary pressure between AIs might force them in the same direction as us. We would likely be out-competed before this happens, though, if we rely on competition between AIs.

I think various people associated with SIAI mean different things by formalizing friendliness. I remember Vladimir Nesov taking it to mean getting better than a 50% probability of a good outcome.

Edited to add my own overview.

Comment deleted 12 August 2010 09:24:20PM [-]
Comment author: whpearson 12 August 2010 09:38:35PM 2 points [-]

Consider my "at random" short hand for "at random from the space of possible minds built by humans".

The Eliezer-approved example of humans not getting a simple system to do what they want is the classic machine-learning story in which a neural net was trained on two different sorts of tanks. It happened that the photographs of the different types of tanks had been taken at different times of day, so the classifier keyed on that rather than actually looking at the types of tank. We didn't build a tank classifier but a day/night classifier. More here.
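The tank anecdote can be reproduced in miniature. The sketch below is entirely synthetic (made-up data and a toy threshold "learner", not any real tank dataset): the training set confounds the label with image brightness, so a learner that latches onto brightness looks perfect in training and fails completely once the confound is removed.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_images(n, tank_type, daytime):
    # Each "image" is just 64 pixels. Brightness encodes time of day;
    # a single pixel carries a faint trace of the actual tank type.
    base = 0.8 if daytime else 0.2                 # photo brightness
    imgs = rng.normal(base, 0.05, (n, 64))
    imgs[:, 0] += 0.1 if tank_type == 1 else 0.0   # the "real" signal, weak
    return imgs

# Confounded training set: type-1 tanks photographed by day, type-0 by night.
train = np.vstack([make_images(100, 1, daytime=True),
                   make_images(100, 0, daytime=False)])
labels = np.array([1] * 100 + [0] * 100)

# A trivial "learner": threshold on mean brightness, fit to the training data.
threshold = train.mean(axis=1).mean()
predict = lambda x: (x.mean(axis=1) > threshold).astype(int)
print("train accuracy:", (predict(train) == labels).mean())

# Deconfounded test set: type-1 at night, type-0 by day. The shortcut
# the learner found (brightness) now points the wrong way.
test = np.vstack([make_images(100, 1, daytime=False),
                  make_images(100, 0, daytime=True)])
test_labels = np.array([1] * 100 + [0] * 100)
print("test accuracy:", (predict(test) == test_labels).mean())
```

The point is not that the learner is broken; it optimised exactly what it was given. The spurious feature was simply a better predictor, on the training data, than the feature we cared about.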

While I may not agree with Eliezer on everything, I do agree with him that it is damn hard to get a computer to do what you want once you stop programming it explicitly.

Comment deleted 12 August 2010 09:48:21PM [-]
Comment author: whpearson 12 August 2010 09:54:38PM 2 points [-]

How do you consider "formalizing friendliness" to be different from "building safeguards"?

Comment deleted 12 August 2010 09:56:10PM *  [-]
Comment author: whpearson 12 August 2010 10:19:41PM 2 points [-]

Are you really suggesting a trial and error approach where we stick evolved and human created AIs in boxes and then eyeball them to see what they are like? Then pick the nicest looking one, on a hunch, to have control over our light cone?

I've never seen the appeal of AI boxing.

Comment author: wedrifid 13 August 2010 05:42:07AM 1 point [-]

This is why we need to create friendliness before AGI: a lot of people who are only loosely familiar with the subject think those options will work!

A goal directed intelligence will work around any obstacles in front of it. It'll make damn sure that it prevents anyone from pressing emergency stop buttons.

Comment author: Vladimir_Nesov 12 August 2010 09:01:11PM 1 point [-]

Better than chance? What chance?

Comment author: whpearson 12 August 2010 09:14:38PM 1 point [-]

Sorry - "better than chance" is an English phrase that tends to mean "more than 50%".

It assumes an even chance of each outcome, i.e. doing better than selecting randomly.

Not appropriate in this context, my brain didn't think of the wider implications as it wrote it.

Comment author: Vladimir_Nesov 12 August 2010 09:18:09PM 0 points [-]

It's easy to do better than random. *Pours himself a cup of tea.*

Comment author: timtyler 13 August 2010 06:52:28AM 0 points [-]

Programmers do not operate by "picking programs at random", though.

The idea that "picking programs at random" has anything to do with the issue seems just confused to me.

Comment author: whpearson 13 August 2010 08:10:45AM 0 points [-]

The first AI will be determined by the first programmer, sure. But I wasn't talking about that level; that programmer's biases and concern for the ethics of the AI will be drawn at random from the space of humans. Or at least I can't see any reason why I should expect people who care about ethics to be more likely to make AI than those who think economics will constrain AI to be nice.

Comment author: timtyler 13 August 2010 08:29:50AM *  0 points [-]

That is now a completely different argument to the original "there are huge numbers of types of minds and motivations, so if we pick one at random from the space of minds".

Re: "the biases and concern for the ethics of the AI of that programmer will be random from the space of humans"

Those concerned probably have to be expert programmers, able to build a company or research group and attract talented assistance, as well as, probably, customers. They will probably be far from what you would get if you chose "at random".

Comment author: whpearson 13 August 2010 09:22:46AM *  1 point [-]

Do we pick a side of a coin "at random" from the two possibilities when we flip it?

Epistemically, yes: we don't have sufficient information to predict it*. However, if we do the same thing twice it has the same outcome, so it is not physically random.

So while the process that decides what the first AI is like is not physically random, it is epistemically random until we have a good idea of what AIs produce good outcomes and get humans to follow those theories. For this we need something that looks like a theory of friendliness, to some degree.

Considering we might use evolutionary methods for part of the AI creation process, randomness doesn't look like too bad a model.

*With a few caveats. I think it is biased to land the same way up as it was when flipped, due to the chance of making it spin and not flip.

Edit: Oh and no open source AI then?

Comment author: timtyler 13 August 2010 09:43:59AM *  0 points [-]

We do have an extensive body of knowledge about how to write computer programs that do useful things. The word "random" seems like a terrible mis-summary of that body of information to me.

As for "evolution" being equated to "randomness" - isn't that one of the points that creationists make all the time? Evolution has two motors - variation and selection. The first of these may have some random elements, but it is only one part of the overall process.

Comment author: whpearson 13 August 2010 09:59:40AM 0 points [-]

I think we have a disconnect on how much we believe proper scary AIs will be like previous computer programs.

My conception of current computer programs is that they are crystallised thoughts plucked from our own minds: easily controllable and unchanging. When we get interesting AI, the programs will be morphing and far less controllable without a good theory of how to control the change.

I shudder every time people say "the AI's source code" as if it were some unchangeable thing, informative about the AI's behaviour after the first few days of the AI's existence.

I'm not sure how to resolve that difference.