eli_sennesh comments on International cooperation vs. AI arms race - Less Wrong

Post author: Brian_Tomasik | 05 December 2013 01:09AM | 15 points


Comments (143)


Comment author: [deleted] 11 December 2013 06:26:57PM *  0 points [-]

May I recommend less drama?

Frankly, when someone writes a post recommending global thermonuclear war as a possible option, that's my line. My suggested courses of action are noticeably less melodramatic and noticeably closer to the plain, boring field of WW3-prevention.

But I gave you the upvote anyway for calling out my davkanik tendencies.

Comment author: Nornagest 11 December 2013 06:59:32PM 2 points [-]

Frankly, when someone writes a post recommending global thermonuclear war as a possible option, that's my line.

I'm genuinely confused. There's an analogy to a nuclear arms race running through the OP, but as best I can tell it's mostly linking AI development controls to Cold War-era arms control efforts -- which seems reasonable, if inexact. Certainly it's not advocating tossing nukes around.

Can you point me to exactly what you're responding to?

Comment author: [deleted] 11 December 2013 08:21:22PM 1 point [-]

Ah, it seems I was actually referring to James's excerpt from his book rather than the OP:

A friendly AI would allow trillions and trillions of people to eventually live their lives, and mankind and our descendents could survive to the end of the universe in utopia. In contrast, an unfriendly AI would destroy us. I have decided to make the survival of mankind my overwhelming priority. Consequently, since a thermonuclear war would non-trivially increase the chance of mankind’s survival, I believe that it's my moral duty to initiate war, even though my war will kill over a billion human beings.

Comment author: Nornagest 11 December 2013 08:51:12PM *  1 point [-]

Oh, that makes more sense. I'd assumed, since this thread was rooted under the OP, that you were responding to that.

After reading James's post, though, I don't think it's meant to be treated as comprehensive, much less prescriptive. He seems to be giving some (fictional) outlines of outcomes that could arise in the absence of early and aggressive cooperation on AI development; the stakes at that point are high, so the consequences are rather precipitous, but this is still something to avoid rather than something to pursue. Reading between the lines, in fact, I'd say the policy implications he's gesturing towards are much the same as those you've been talking about upthread.

On the other hand, it's very early to be hashing out scenarios like this, and doing so doesn't say anything particularly good about us from a PR perspective. It's hard enough getting people to take AI seriously as a risk, full stop; we don't need to exacerbate that with wild apocalyptic fantasies just yet.

Comment author: [deleted] 12 December 2013 12:23:48AM 0 points [-]

It's hard enough getting people to take AI seriously as a risk, full stop

This bears investigating. I mean, come on, the popular view of AI among the masses is that All AI Is A Crapshoot, that every single time it will end in the Robot Wars. So how on Earth can it be difficult to convince people that UFAI is an issue?

I mean, hell, if I wanted to scare someone, I'd just point out that no currently-known model of AGI includes a way to explicitly specify goals desirable to humans. That oughtta scare folks.

Comment author: TheOtherDave 12 December 2013 03:23:35AM 2 points [-]

I've talked to a number of folks who conclude that AIs will be superintelligent and therefore will naturally derive and follow the true morality (you know, the same one we do), and dismiss all that Robot Wars stuff as television crap (not unreasonably, as far as it goes).

Comment author: [deleted] 14 December 2013 10:20:44PM 0 points [-]

(you know, the same one we do),

Which one's that, eh ;-)?

folks who conclude that AIs will be superintelligent and therefore will naturally derive and follow the true morality

Are these religious people? I mean, come on, where do you get moral realism if not from some kind of moral metaphysics?

dismiss all that Robot Wars stuff as television crap (not unreasonably, as far as it goes).

Certainly it's not unreasonable. One UFAI versus humans with no FAI to fight back, I wouldn't call anything so one-sided a war.

(And I'm sooo not making the Dalek reference that I really want to. Someone else should do it.)

Comment author: nshepperd 18 December 2013 01:17:43PM *  2 points [-]

moral realism

Pedantic complaint about language: moral realism simply says that moral claims do state facts, and at least some of them are true. It takes further assumptions ("internalism") to claim that these moral facts are universally compelling in the sense of moving any intelligent being to action. (I personally believe the latter assumption to be nonsense, hence AGI is a really bad idea.)

Granted, I don't know of any nice precise term for that position that all intelligent beings must necessarily do the right thing, possibly because it's so ridiculous no philosopher would profess it publicly in such words. On the other hand, motivational internalism would seem to be very intuitive, judging by the pervasiveness of the view that AI doesn't pose any risk.

Comment author: TheAncientGeek 18 December 2013 02:37:04PM -2 points [-]

Granted, I don't know of any nice precise term for that position that all intelligent beings must necessarily do the right thing

Isn't it called Convergence?

Comment author: TheOtherDave 18 December 2013 03:12:35PM 1 point [-]

Are you under the impression that CEV advocates around here believe that all intelligent beings must necessarily do the right thing?

Comment author: nshepperd 18 December 2013 05:19:17PM *  0 points [-]

Eh, maybe? I've seen "convergence thesis" thrown about on LW, but it's hardly established terminology. Not sure it would be fair to use a phrase so easily confused with Bostrom's much more reasonable Instrumental Convergence Thesis either. (Also, it has nothing to do with CEV so I don't see the point of that link.)

Comment author: TheOtherDave 14 December 2013 11:45:53PM 2 points [-]

I've never had that conversation with explicitly religious people, and moral realism at the "some things are just wrong and any sufficiently intelligent system will know it" level is hardly unheard of among atheists.

Comment author: [deleted] 15 December 2013 09:03:29AM 3 points [-]

moral realism at the "some things are just wrong and any sufficiently intelligent system will know it" level is hardly unheard of among atheists.

Really? I mean, sorry for blathering, but I find this extremely surprising. I always considered it a simple fact that if you don't have some kind of religious/faith-based metaphysics operating, you can't be a moral realist. What experiment could you possibly perform to test moral-realist hypotheses, particularly when dealing with nonhumans? It simply doesn't make any sense.

Oh well.

Comment author: Brian_Tomasik 17 December 2013 10:55:34AM *  3 points [-]

Moral realism makes no more sense with religion. As CS Lewis said: "Nonsense does not cease to be nonsense when we put the words 'God can' before it."

Comment author: hyporational 15 December 2013 01:03:42PM 1 point [-]

I find this extremely surprising.

Why? Beliefs that make no sense are very common. Atheists are no exception.

Comment author: TheAncientGeek 18 December 2013 02:48:21PM *  0 points [-]

Really? I mean, sorry for blathering, but I find this extremely surprising. I always considered it a simple fact that if you don't have some kind of religious/faith-based metaphysics operating, you can't be a moral realist. What experiment could you possibly perform

That would be epistemology...

to test moral-realist hypotheses, particularly when dealing with nonhumans? It simply doesn't make any sense.

There are rationally acceptable subjects that don't use empiricism, such as maths, and there are subjects such as economics which have a mixed epistemology.

However, if this epistemological-sounding complaint is actually about metaphysics, i.e. "what experiment could you perform to detect a non-natural moral property", the answer is that moral realists have to suppose the existence of a special psychological faculty.

Comment author: passive_fist 18 December 2013 06:12:00AM 0 points [-]

Might I suggest you take a look at the metaethics sequence? This position is explained very well.

Comment author: TheOtherDave 15 December 2013 03:40:27PM 0 points [-]

You talk as though religion were something that appeared in people's minds fully formed and without causes, and that the logical fallacies associated with it were then caused by religion.

Comment author: TheAncientGeek 18 December 2013 02:26:49PM *  0 points [-]

Are these religious people? I mean, come on, where do you get moral realism if not from some kind of moral metaphysics?

From abstract reason or psychological facts, or physical facts, or a mixture.

There is a subject called economics. It tells you how to achieve certain goals, such as maximising GDP. It doesn't do that by corresponding to a metaphysical Economics Object, it does that with a mixture of theoretical reasoning and examination of evidence.

There is a subject called ethics. It tells you how to achieve certain goals, such as maximising happiness....

Comment author: [deleted] 18 December 2013 05:12:34PM 1 point [-]

There is a subject called ethics. It tells you how to achieve certain goals, such as maximising happiness....

Well, there's the problem: ethics does not automatically start out with a happiness-utilitarian goal. Lots of extant ethical systems use other terminal goals. For instance...

Comment author: TheAncientGeek 18 December 2013 06:43:02PM 0 points [-]

"Such as"

Comment author: polymathwannabe 18 December 2013 03:10:32PM -1 points [-]

Comment author: TheAncientGeek 18 December 2013 03:46:23PM 1 point [-]

Of course economics doesn't have the well-established laws of physical science: it wouldn't be much of an analogy for ethics if it did. But having an epistemology that doesn't work very well is not the same as having an epistemology that requires non-natural entities.

Comment author: Nornagest 12 December 2013 01:08:30AM *  2 points [-]

So how on Earth can it be difficult to convince people that UFAI is an issue?

Well, there's a couple prongs to that. For one thing, it's tagged as fiction in most people's minds, as might be suggested by the fact that it's easily described in trope. That's bad enough by itself.

Probably more importantly, though, there's a ferocious tendency to anthropomorphize this sort of thing, and you can't really grok UFAI without burning a good bit of that tendency out of your head. Sure, we ourselves aren't capital-F Friendly, but we're a far cry yet from a paperclip maximizer or even most of the subtler failures of machine ethics; a jealous or capricious machine god is bad, but we're talking Screwtape here, not Azathoth. HAL and Agent Smith are the villains of their stories, but they're human in most of the ways that count.

You may also notice that we tend to win fictional robot wars.

Comment author: ialdabaoth 12 December 2013 01:15:17AM *  2 points [-]

Also, note that the tropes tend to work against people who say "we have a systematic proof that our design of AI will be Friendly". In fact, in general the only way a fictional AI will turn out 'friendly' is if it is created entirely by accident - ANY fictional attempt to intentionally create a Friendly AI will result in an abomination, usually through some kind of "dick Genie" interpretation of its Friendliness rules.

Comment author: Nornagest 12 December 2013 01:23:30AM *  2 points [-]

Yeah. I think I'd consider that a form of backdoor anthropomorphization by way of vitalism, though. Since we tend to think of physically nonhuman intelligences as cognitively human, and since we tend to think of human ethics and cognition as something sacred and ineffable, fictional attempts to eff them tend to be written as crude morality plays.

Intelligence arising organically from a telephone exchange or an educational game or something doesn't trigger the same taboos.

Comment author: Lumifer 11 December 2013 06:35:04PM *  1 point [-]

when someone writes a post recommending global thermonuclear war as a possible option

Looks like you (emphasis mine):

In case of an actual UFAI appearing and beginning a process of paper-clipping the world within a timespan that we can see it coming before it kills us: consider annihilating the planet

and

my davkanik tendencies

You can be a contrarian with less drama perfectly well :-)

Comment author: [deleted] 11 December 2013 07:01:03PM -1 points [-]

Looks like you (emphasis mine):

I would note that "we are all in the process of dying horribly" is actually a pretty dramatic situation. At the moment, actually, I'm not banking on ever seeing it: I think actual AI creation requires such expertise, and faces such extreme feasibility barriers, that successfully building a functioning software-embodied optimization process tends to require a group effort large enough that someone thinks hard about what the goal system is.

Comment author: Lumifer 11 December 2013 07:05:13PM *  1 point [-]

I would note that "we are all in the process of dying horribly" is actually a pretty dramatic situation.

Given that "we are all in the process of dying" is true for all living beings for as long as living beings existed, I don't see anything dramatic in here. As to "horribly", what is special about today's "horror" compared to, say, a hundred years ago?

Comment author: [deleted] 11 December 2013 08:18:56PM -1 points [-]

I hadn't meant today. I had meant in the case of a UFAI getting loose. That's one of those rare situations where you should consider yourself assuredly dead already and start considering how you're going to kill the damn UFAI, whatever that costs you.

Whereas in the present day, I would not employ "nuke it from orbit; only way to be sure" solutions to, well, anything.

Comment author: ialdabaoth 11 December 2013 06:30:59PM *  0 points [-]

Frankly, when someone writes a post recommending global thermonuclear war as a possible option, that's my line. My suggested courses of action are noticeably less melodramatic and noticeably closer to the plain, boring field of WW3-prevention.

The currently fashionable descriptor is "metacontrarianism" - you might get better responses if you phrase your objection in that way.

(man, I LOVE when things go factorially N-meta)

Comment author: [deleted] 11 December 2013 07:02:22PM 1 point [-]

I'm not actually sure who the metacontrarian is here.

Comment author: ialdabaoth 11 December 2013 07:15:59PM 0 points [-]

Hence my delight in the factorial metaness.