TheOtherDave comments on Welcome to Less Wrong! (5th thread, March 2013) - Less Wrong

27 Post author: orthonormal 01 April 2013 04:19PM




Comment author: TheOtherDave 19 September 2013 04:42:41PM 0 points [-]

Ah.

So when you say "most typically human feelings (hungry, thirsty, tired, etc.) will not be preserved creating a new type of an agent" you're making a definitional claim that whatever the new agent experiences, it won't be a human feeling, because (being software) the agent definitionally won't be a human. So on your view it might experience hunger, thirst, fatigue, etc., or it might not, but if it does they won't be human hunger, thirst, fatigue, etc., merely simulated hunger, thirst, fatigue, etc.

Yes? Do I understand you now?

FWIW, I agree that there are definitions of "human being" and "software" by which a piece of software is definitionally not a human being, though I don't think those are useful definitions to be using when thinking about the behavior of software emulations of human beings. But I'm willing to use your definitions when talking to you.

You go on to say that this agent, not being human, will not want the same things as a human.
Well, OK; that follows from your definitions.

One obvious followup question is: would a reliable software simulation of a human, equipped with reliable software simulations of the attributes and experiences that define humanity (whatever those turn out to be; I labelled them X2 above), generate reliable software simulations of wanting what a human wants?

Relatedly, do we care? That is, given a choice between an upload U1 that reliably simulates wanting what a human wants, and an upload U2 that doesn't reliably simulate wanting what a human wants, do we have any grounds for preferring to create U1 over U2?

Because if it's important to us that uploads reliably simulate being human, then we should design our uploads so that they have reliable simulations of X2. Right?

Comment author: Roman_Yampolskiy 19 September 2013 06:39:44PM -1 points [-]

So uploads are typically not mortal, not hungry for food, etc. You are asking whether, if we create simulations of humans so exact that they have all the typical limitations, they would have the same wants as real humans; probably yes. The original question Wei Dai was asking me was about my statement that if we become uploads, "At that point you already lost humanity by definition". Allow me to propose a simple thought experiment. We make simulated versions of all humans and put them in cyberspace. At that point we proceed to kill all people. Does the fact that somewhere in cyberspace there is still a piece of source code which wants the same things as I do make a difference in this scenario? I still feel like humanity gets destroyed in this scenario, but you are free to disagree with my interpretation.

Comment author: TheOtherDave 19 September 2013 07:42:05PM 2 points [-]

You are asking whether, if we create simulations of humans so exact that they have all the typical limitations, they would have the same wants as real humans; probably yes.

I'm also asking, should we care?
More generally, I'm asking what is it about real humans we should prefer to preserve, given the choice? What should we be willing to discard, given a reason?

The original question Wei Dai was asking me was about my statement that if we become uploads, "At that point you already lost humanity by definition".

Fair enough. I've already agreed that this is true for the definitions you've chosen, so if that's really all you're talking about, then I guess there's nothing more to say. As I said before, I don't think those are useful definitions, and I don't use them myself.

Does the fact that somewhere in cyberspace there is still a piece of source code which wants the same things as I do make a difference in this scenario?

Source code? Maybe not; it depends on whether that code is ever compiled.
Object code? Yes, it makes a huge difference.

I still feel like humanity gets destroyed in this scenario, but you are free to disagree with my interpretation.

Some things get destroyed. Other things survive. Ultimately, the question in this scenario is how much do I value what we've lost, and how much do I value what we've gained?
My answer depends on the specifics of the simulation, and is based on what I value about humanity.

The thing is, I could ask precisely the same question about aging from 18 to 80. Some things are lost, other things are not. Does my 18-year-old self get destroyed in the process, or does it just transform into an 80-year-old? My answer depends on the specifics of the aging, and is based on what I value about my 18-year-old self.

We face these questions every day; they aren't some weird science-fiction consideration. And for the most part, we accept that as long as certain key attributes are preserved, we continue to exist.

Comment author: Roman_Yampolskiy 21 September 2013 08:48:38PM 0 points [-]

Some things get destroyed. Other things survive. Ultimately, the question in this scenario is how much do I value what we've lost, and how much do I value what we've gained?

I agree with your overall assessment. However, to me if any part of humanity is lost, it is already an unacceptable loss.

Comment author: TheOtherDave 21 September 2013 10:20:26PM 0 points [-]

OK. Thanks for clarifying your position.

Comment author: shminux 19 September 2013 06:56:33PM *  1 point [-]

We make simulated versions of all humans and put them in cyberspace. At that point we proceed to kill all people.

Ah, The Change in the Prime Intellect scenario. Is it possible to reconstruct meat humans if the uploads decide to do so? If not, then something has been irrecoverably lost.

Comment author: CCC 19 September 2013 07:51:47PM 1 point [-]

We make simulated versions of all humans and put them in cyberspace. At that point we proceed to kill all people.

At the very least, by this point we've killed a lot of people. The fact that they've been backed up doesn't make the murder less heinous.

Whether or not 'humanity' gets destroyed in this scenario depends on the definition that you apply to the word 'humanity'. If you mean the flesh and blood, the meat and bone, then yes, it gets destroyed. If you mean values and opinions, thoughts and dreams, then some of them are destroyed but not all of them - the cyberspace backups still have those things (presuming that they're actually working cyberspace backups).

Comment author: hairyfigment 19 September 2013 07:00:48PM 0 points [-]

Well, if nothing else happens our new computer substrate will stop working. But if we remove that problem - in what sense has this not already happened?

If you like, we can assume that Eliezer is wrong about that. In which case, I'll have to ask what you think is actually true, whether a smarter version of Aristotle could tell the difference by sitting in a dark room thinking about consciousness, and whether or not we should expect this to matter.