ata comments on The Urgent Meta-Ethics of Friendly Artificial Intelligence - Less Wrong

45 points · Post author: lukeprog 01 February 2011 02:15PM




Comment author: ata 01 February 2011 09:10:08PM *  13 points [-]

They do. (Many of EY's own posts are tagged "philosophy".) Indeed, FAI will require robust solutions to several standard big philosophical problems, not just metaethics; e.g. subjective experience (to make sure that CEV doesn't create any conscious persons while extrapolating, etc.), the ultimate nature of existence (to sort out some of the anthropic problems in decision theory), and so on. The difference isn't (just) in what questions are being asked, but in how we go about answering them. In traditional philosophy, you're usually working on problems you personally find interesting, and if you can convince a lot of other philosophers that you're right, write some books, and give a lot of lectures, then that counts as a successful career. LW-style philosophy (as in the "Reductionism" and "Mysterious Answers" sequences) is distinguished in that there is a deep need for precise right answers, with more important criteria for success than what anyone's academic peers think.

Basically, it's a computer science approach to philosophy: any progress on understanding a phenomenon is measured by how much closer it gets you to an algorithmic description of it. Academic philosophy occasionally generates insights on that level, but overall it doesn't operate with that ethic, and it's not set up to reward that kind of progress specifically; too much of it is about rhetoric, formality as an imitation of precision, and apparent impressiveness instead of usefulness.

Comment author: NancyLebovitz 01 February 2011 09:33:30PM 4 points [-]

e.g. subjective experience (to make sure that CEV doesn't create any conscious persons while extrapolating, etc.),

Also, to figure out whether particular uploads have qualia, and whether those qualia resemble pre-upload qualia, if that's wanted.

Comment author: jacob_cannell 02 February 2011 07:12:36AM 1 point [-]

I should just point out that these two goals (researching uploads, and not creating conscious persons) are starkly antagonistic.

Comment author: shokwave 02 February 2011 07:40:56AM 3 points [-]

are starkly antagonistic.

Not in the slightest. First, uploads are continuing conscious persons. Second, creating conscious persons is a problem if they might be created in uncomfortable or possibly hellish conditions - if, say, the AI was brute-forcing every decision it would simulate countless numbers of humans in pain before it found the least painful world. I do not think we would have a problem with the AI creating conscious persons in a good environment. I mean, we don't have that problem with parenthood.

Comment author: NancyLebovitz 02 February 2011 10:14:07PM 1 point [-]

What if it's researching pain qualia at ordinary levels because it wants to understand the default human experience?

I don't know if we're getting into eye-speck territory, but what are the ethics of simulating an adult human who's just stubbed their toe, and then ending the simulation?

Comment author: shokwave 03 February 2011 07:39:05AM 1 point [-]

I feel like the consequences are net positive, but I don't trust my human brain to correctly determine this question. I would feel uncomfortable with an FAI deciding it, but I would also feel uncomfortable with a person deciding it. It's just a hard question.

Comment author: DSimon 02 February 2011 02:54:58PM *  0 points [-]

What if they were created in a good environment and then abruptly destroyed because the AI only needed to simulate them for a few moments to get whatever information it needed?

Comment author: shokwave 02 February 2011 03:40:58PM 2 points [-]

Well - what if a real person went through the same thing? What does your moral intuition say?

Comment author: DSimon 02 February 2011 06:01:55PM *  1 point [-]

That it would be wrong. If I had the ability to spontaneously create fully-formed adult people, it would be wrong to subsequently kill them, even if I did so painlessly and in an instant. Whether a person lives or dies should be under the control of that person, and exceptions to this rule should lean towards preventing death, not encouraging it.

Comment author: johnlawrenceaspden 30 October 2012 06:38:51PM 1 point [-]

What if they were created in a good environment, (20) stopped, and then restarted (goto 20)?

Is that one happy immortal life or an infinite series of murders?

Comment author: DSimon 02 November 2012 05:53:37AM 0 points [-]

I think closer to the latter. Starting a simulated person, running them for a while, and then ending and discarding the resulting state effectively murders the person. If you then start another copy of that person, then depending on how you think about identity, that goes two ways:

Option A: The new person, being a separate running copy, is unrelated to the first person identity-wise, and therefore the act of starting the second person does not change the moral status of ending the first. Result: Infinite series of murders.

Option B: The new person, since they are running identically to the old person, is therefore actually the same person identity-wise. Thus, you could in a sense un-murder them by letting the simulation continue to run after the reset point. If you do the reset again, however, you're just recreating the original murder as it was. Result: Single murder.

Neither way is a desirable immortal life, which I think is a more useful way to look at it than "happy".