Richard_Loosemore comments on Debunking Fallacies in the Theory of AI Motivation - Less Wrong

Post author: Richard_Loosemore 05 May 2015 02:46AM




Comment author: nshepperd 10 May 2015 06:01:59AM 12 points

This article just makes the same old errors over and over again. Here's one:

"An all-powerful computer that was programmed to maximize human pleasure, for example, might consign us all to an intravenous dopamine drip [and] almost any easy solution that one might imagine leads to some variation or another on the Sorcerer’s Apprentice, a genie that’s given us what we’ve asked for, rather than what we truly desire." (Marcus 2012)

He is depicting a Nanny AI gone amok. It has good intentions (it wants to make us happy) but the programming to implement that laudable goal has had unexpected ramifications, and as a result the Nanny AI has decided to force all human beings to have their brains connected to a dopamine drip.

No. The AI does not have good intentions. Its intentions are extremely bad. It wants to make us happy, which is a completely distinct thing from actually doing what is good. The AI was in fact never programmed to do what is good, and there are no errors in its code.

The lack of precision here is depressing.

Comment author: Richard_Loosemore 10 May 2015 07:34:24PM -1 points

The lack of understanding in this comment is depressing.

You say:

"No. The AI does not have good intentions. Its intentions are extremely bad."

If you think this is wrong, take it up with the people whose work I am both quoting and analyzing in this paper, because THAT IS WHAT THEY ARE CLAIMING. I am not the one saying that "the AI is programmed with good intentions"; that is their claim.

So I suggest you write a letter to Muehlhauser, Omohundro, Yudkowsky and the various others quoted in the paper, explaining to them that you find their lack of precision depressing.

Comment author: nshepperd 11 May 2015 09:19:31AM 5 points

If that's the case, then please enclose that sentence in quotes and add a citation. Note that a quote saying that the AI was programmed to maximise happiness (or indeed, pleasure, as that is what the original quote described) is insufficient because, as is my whole point, "happiness" and "good" are different things.

And then add a sentence, not in quotes, claiming that the AI does not have good intentions, instead of one claiming that the AI has good intentions.

Or perhaps, as I suspect, you still believe that you can carelessly rephrase "programmed to maximise human pleasure" into "has good intentions" without anyone noticing that you are putting words in mouths?

Comment author: Richard_Loosemore 11 May 2015 11:35:10PM 1 point

This seems a little pedantic, so I thought about not replying (my usual policy), but I guess I will.

The paper is all about the precise nature of the distinction between

wants to make us happy

and

actually doing what is good

Most commenters got that straight away. The paper examines a particular issue within that contrast, but even so, that is clearly the topic of the paper. You, on the other hand, seem very, very keen to tell me that those two things are, arguably, different. Thank you, but since that is what the paper is about, you can safely assume that I got that.

Without exception, everyone so far who has read the paper understood that in the sentence that I wrote, which you quote:

It has good intentions (it wants to make us happy) but the programming to implement that laudable goal has had unexpected ramifications, and as a result the Nanny AI has decided to force all human beings to have their brains connected to a dopamine drip.

.... the phrase "good intentions" was being used as a colloquial paraphrase for the parenthetical clarification "it wants to make us happy". My intention (no pun intended) was clearly NOT to use the phrase "good intentions" in any technical sense, but to give a normal-usage summary of an idea. The two phrasings are supposed to say the same thing, and that thing is what you summarize with the words:

wants to make us happy

By contrast, the other part of my sentence, where I say

.... but the programming to implement that laudable goal has had unexpected ramifications, and as a result the Nanny AI has decided to force all human beings to have their brains connected to a dopamine drip.

.... was universally understood to refer to the other side of the distinction that is at the heart of the paper, namely (in your words):

actually doing what is good

I can't help but notice that TODAY there is a new article on the Future of Life Institute website written by Nathan Collins, whose title is:

"Artificial Intelligence: The Danger of Good Intentions"

with the subtitle:

"Why well-intentioned AI could pose a greater threat to humanity than malevolent cyborgs"

So my question to you is: why are you so smart in the absolute precision of your word usage, but everyone else is so "careless"?

Comment author: nshepperd 12 May 2015 03:19:46AM 1 point

Well, I do take issue with even people at FLI describing UFAI as having "good intentions". It disguises a challengeable inductive inference. It certainly sounds less absurd to claim that an AI with a pleasure maximisation goal is likely to connect brains to dopamine drips, than that one with "good" intentions would do so. Even if you then assert that you were only using "good" in a colloquial sense, and you actually meant "bad" all along.