All of xxd's Comments + Replies

xxd10

Could reach the same point.

Said Eliezer agent is programmed genetically to value his own genes and those of humanity.

An artificial Eliezer could reach the conclusion that humanity is worth keeping, but it is by no means obliged to come to that conclusion. By contrast, genetics determines that at least some of us humans value the continued existence of humanity.

xxd-20

This is a cliché and may be false, but it's assumed true: "Power corrupts and absolute power corrupts absolutely".

I wouldn't want anybody to have absolute power, not even myself. The only use I would want absolute power for would be to stop any evil person from getting it.

To my mind evil = coercion, and therefore any human who seeks any kind of coercion over others is evil.

My version of evil is the least evil, I believe.

EDIT: Why did I get voted down for saying "power corrupts" - the corollary of which is rejection of power is... (read more)

1Ben_Welchner
Given humanity's complete lack of experience with absolute power, it seems like you can't even take that cliche for weak evidence. Having glided through the article and comments again, I also don't see where Eliezer said "rejection of power is less corrupt." The bit about Eliezer sighing and saying the null-actor did the right thing? (No, I wasn't the one who downvoted)
xxd00

Now this is the $64 google-illion question!

I don't agree that the null hypothesis (take the ring and do nothing with it) is evil. My definition of evil is coercion leading to loss of resources, up to and including loss of one's self. Thus absolute evil is loss of one's self across humanity, which includes as one use case humanity's extinction (but is not limited to humanity's extinction, obviously, because being converted into zimboes isn't technically extinction...)

Nobody can deny that the likes of Gaddafi exist in the human population: those who are intereste... (read more)

xxd00

Xannon decides how much Zaire gets. Zaire decides how much Yancy gets. Yancy decides how much Xannon gets.

If any is left over, they go through the process again for the remainder, ad infinitum, until an approximation of all of the pie has been eaten.
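A minimal sketch of the allocation loop described above (Python, purely illustrative; the decide() rule is an assumption standing in for whatever share each allocator would actually grant):

    def split_pie(decide, pie=1.0, max_rounds=100, eps=1e-9):
        """Circular allocation: Xannon allots Zaire's share, Zaire allots
        Yancy's, Yancy allots Xannon's; any leftover is re-split next round."""
        shares = {"Xannon": 0.0, "Yancy": 0.0, "Zaire": 0.0}
        allocators = [("Xannon", "Zaire"), ("Zaire", "Yancy"), ("Yancy", "Xannon")]
        for _ in range(max_rounds):
            if pie <= eps:
                break
            remaining = pie
            for giver, receiver in allocators:
                grant = min(decide(giver, remaining), remaining)
                shares[receiver] += grant
                remaining -= grant
            if remaining == pie:  # nobody granted anything; avoid looping forever
                break
            pie = remaining
        return shares, pie

    # Example: each allocator grants a third of whatever is currently left.
    shares, leftover = split_pie(lambda giver, left: left / 3)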

xxd00

Very good response. I can't think of anything to disagree with, and I don't think I have anything more to add to the discussion.

My apologies if you read anything adversarial into my message. My intention was to be pointed in my line of questioning but you responded admirably without evading any questions.

Thanks for the discussion.

xxd-10

Thanks for the suggestion. Yes, I have already read it (Steel Beach). It was OK but didn't really touch much on our points of contention as such. In fact I'd say it steered clear of them, since there wasn't really the concept of uploads etc. Interestingly, I haven't read anything that really examines closely whether the copied upload really is you. Anyways.

"I would also say that it doesn't matter that the vast majority of the cells comprising me twenty years ago are dead, even though the cells currently comprising me aren't identical to the cells that... (read more)

3TheOtherDave
I agree that there is physical continuity from moment to moment in typical human existence, and that there is similar continuity with a slow transition to a nonhuman form. I agree that there is no such continuity with an instantaneous copy-and-destroy operation. I understand that you consider that difference uniquely important, such that I continue living in the first case, and I don't continue living in the second case. I infer that you believe in some uniquely important attribute to my self that is preserved by the first process, and not preserved by the second process.

I agree that if a person is being offered a choice, it is important for that person to understand the choice. I'm perfectly content to describe the choice as between the death of one body and the creation of another, on the one hand, and the continued survival of a single body, on the other. I'm perfectly content not to describe the latter process as the continuation of an existing life.

I endorse individuals getting to make informed choices about their continued life, and their continued existence as people, and the parameters of that existence. I endorse respecting both their stated wishes, and (insofar as possible) their volition, and I acknowledge that these can conflict given imperfect information about the world.

Yes. As I say, I endorse respecting individuals' stated wishes, and I endorse them getting to make informed choices about their continued existence and the parameters of that existence; involuntary destructive scanning interferes with those things. (So does denying people access to destructive scanning.)

It depends on what 'much of' means. If my body continues to live, but my memories and patterns of interaction cease to exist, I have ceased to exist and I've left a living body behind. Partial destruction of those memories and patterns is trickier, though; at some point I cease to exist, but it's hard to say where that point is. I am content to say I'm the same person now that
xxd00

Other stuff:

"Yes, I would say that if the daughter cell is identical to the parent cell, then it doesn't matter that the parent cell died at the instant of budding."

OK good to know. I'll have other questions but I need to mull it over.

"I would also say that it doesn't matter that the vast majority of the cells comprising me twenty years ago are dead, even though the cells currently comprising me aren't identical to the cells that comprised me then." I agree with this but I don't think it supports your line of reasoning. I'll explain why... (read more)

1TheOtherDave
There's a lot of decent SF on this theme. If you haven't read John Varley's Eight Worlds stuff, I recommend it; he has a lot of fun with this. His short stories are better than his novels, IMHO, but harder to find. "Steel Beach" isn't a bad place to start.
xxd10

Of course I would do it because it would be better than nothing. My memories would survive. But I would still be dead.

Here's a thought experiment for you to outline the difference (whether you think it makes sense from your position of only valuing the information or not): Let's say you could slowly transfer a person into an upload by the following method: You cut out a part of the brain. That part of the brain is now dead. You replace it with a new part, a silicon part (or some computational substrate) that can interface directly with the remaining... (read more)

xxd10

EDIT: Yes, you did understand, though I can't personally say that I'm willing to come out and say definitively that the X is a red herring; it sounds like you are willing to do this.

I think it's an axiomatic difference, Dave.

It appears from my side of the table that you're starting from the axiom that all that's important is the information, and that originality and/or the physical existence embodying that information means nothing.

And you're dismissing the quantum states as if they are irrelevant. They may be irrelevant but since there is some difference between th... (read more)

1TheOtherDave
I did not say the X is a red herring. If you believe I did, I recommend re-reading my comment. The X is far from being a red herring; rather, the X is precisely what I was trying to elicit details about for a while. (As I said above, I no longer believe I can do so through further discussion.) But I did say that identity of quantum states is a red herring. As I said before, I conclude this from the fact that you believe you are the same person you were last year, even though your quantum states aren't identical. If you believe that X can remain unchanged while Y changes, then you don't believe that X depends on Y; if you believe that identity can remain unchanged while quantum states change, then you don't believe that identity depends on quantum states.

To put this another way: if changes in my quantum states are equivalent to my death, then I die constantly and am constantly replaced by new people who aren't me. This has happened many times in the course of writing this comment. If this is already happening anyway, I don't see any particular reason to avoid having the new person appear instantaneously in my mom's house, rather than having it appear in an airplane seat an incremental distance closer to my mom's house.

Other stuff:

* Yes, I would say that if the daughter cell is identical to the parent cell, then it doesn't matter that the parent cell died at the instant of budding.
* I would also say that it doesn't matter that the vast majority of the cells comprising me twenty years ago are dead, even though the cells currently comprising me aren't identical to the cells that comprised me then.
* I agree with you that if a person is perfectly duplicated and the original killed, then the original has been killed. (I would also say that the person was killed, which I think you would agree with. I would also say that the person survived, which I think you would not agree with.)
* I agree that volition is important for its own sake, but I don't understan
1dlthomas
What if you were in a situation where you had a near 100% chance of a seemingly successful destructive upload on the one hand, and a 5% chance of survival without upload on the other? Which would you pick, and how does your answer generalize as the 5% goes up or down?
xxd10

"Again, just to be clear, what I'm trying to understand is what you value that I don't. If data at these high levels of granularity is what you value, then I understand your objection. Is it?"

OK, I've mulled your question over and I think I have the subtlety of what you are asking down, as distinct from the slight variation I answered.

Since I value my own life I want to be sure that it's actually me that's alive if you plan to kill me. Because we're basically creating an additional copy really quickly and then disposing of the original I have a hard... (read more)

1TheOtherDave
Maybe? Here's what I've understood; let me know if I've misunderstood anything.

Suppose P is a person who was created and preserved in the ordinary way, with no funky hypothetical copy/delete operations involved. There is consequently something about P that you value... call that "something" X for convenience. If P' is a duplicate of P, then P' does not possess X, or at least cannot be demonstrated to possess X. This only applies to people; non-person objects either do not possess X in the first place, or if they do, it is possible in principle for a duplication process to create a duplicate that also possesses X.

X is preserved for P from one moment/day/year to the next, even though P's information content -- at a macroscopic level, let alone a quantum one -- changes over time. I conclude that X does not depend on P's information content at all, even on a macroscopic level, and all this discussion of preserving quantum states is a red herring. By similar reasoning, I conclude that X doesn't depend on atoms, since the atoms of which P is comprised change over time. The same is true of energy levels.

I don't have any idea of what that X might actually be, since we've eliminated from consideration everything about people I'm aware of. I'm still interested in more details about X, beyond the definitional attribute of "X is that thing P has that P' doesn't", but I no longer believe I can elicit those details through further discussion.
xxd10

I guess from your perspective you could say that the value of being the original doesn't derive from anything and it's just a primitive, because the macro information is the same except for position (though the quantum states are all different even at the point of copy). But yes, I value the original more than the copy because I consider the original to be me and the others to be just copies, even if they would legally and in fact be sentient beings in their own right.

Yes, if I woke up tomorrow and you could convince me I was just a copy then this is something I have already modeled/daydreamed about and my answer would be: I'd be disappointed that I wasn't the original but glad that I had existence.

1TheOtherDave
OK.
xxd10

Thanks, Dave. This has been a very interesting discussion, and although I think we can't close the gap on our positions, I've really enjoyed it.

To answer your question "What do I value?": I think I answered it already; I value not being killed.

The difference in our positions appears to be some version of "but your information is still around", and my response is "but it's not me", and your response is "how is it not you?"

I don't know.

"What is it I value that you don't?" I don't know. Maybe I consider myself to be a... (read more)

xxd00

I thought I had answered but perhaps I answered what I read into it.

If you are asking "will I prevent you from gradually moving everything to digital perhaps including yourselves" then the answer is no.

I just wanted to clarify whether we were talking about with consent vs. without consent.

xxd00

Yes that's right.

I will not consent to being involuntarily destructively scanned, and yes, I will devote all of my resources to preventing myself from being involuntarily destructively scanned.

That said, if you or anyone else wants to do it to themselves voluntarily it's none of my business.

If what you're really asking, however, is whether I will attempt to intervene if I notice a group of individuals or an organization forcing destructive scanning on individuals, I suspect that I might, but we're not there yet.

0TheOtherDave
I understand that you won't consent to being destructively scanned, and that you might intervene to prevent others from being destructively scanned without their consent. That isn't what I asked. I encourage you to re-read my question. If, after doing so, you still think your reply answers it, then I think we do best to leave it at that.
xxd10

You're basically asking why I should value myself over a spatially separate exact copy of myself (and by exact copy we mean as close as you can get), and then superimposing another question of "isn't it the information that's important?"

Not exactly.

I'm concerned that I will die, and I'm examining the hypotheses as to why it's not me that dies. The best response I can come up with is "you will die but it doesn't matter because there's another identical (or as close as possible) copy still around."

As to what you value that I don't I don't have an ... (read more)

0TheOtherDave
I'm not asking why you should value yourself over an exact copy, I'm asking why you do. I'm asking you (over and over) what you value. Which is a different question from why you value whatever that is. I've told you what I value, in this context. I don't know why I value it, particularly... I could tell various narratives, but I'm not sure I endorse any of them. Is that a typo? What I've been trying to elicit is what xxd values here that TheOtherDave doesn't, not the other way around. But evidently I've failed at that... ah well.
xxd10

"If the information is different, and the information constitutes people, then it constitutes different people."

True, and therein lies the problem. Let's do two comparisons: You have two copies. One the original, the other the copy.

Compare them on the macro scale (i.e. non-quantum). They are identical except for position and momentum.

Now let's compare them on the quantum scale: Even at the point where they are identical on the macro scale, they are not identical on the quantum scale. All the quantum states are different. Just the simple act of obs... (read more)

0TheOtherDave
So, what you value is the information lost during the copy process? That is, we've been saying "a perfect copy," but your concern is that no copy that actually exists could actually be a perfect copy, and the imperfect copies we could actually create aren't good enough? Again, just to be clear, what I'm trying to understand is what you value that I don't. If data at these high levels of granularity is what you value, then I understand your objection. Is it?
xxd00

This is a different point entirely. Sure it's more efficient to just work with instances of similar objects and I've already said elsewhere I'm OK with that if it's objects.

And if everyone else is OK with being destructively scanned then I guess I'll have to eke out an existence as a savage. The economy can have my atoms after I'm dead.

0TheOtherDave
Sorry I wasn't clear -- the sack of atoms I had in mind was the one comprising your body, not other objects. Also, my point is that it's not just a case of live and let live. Presumably, if the rest of us giving up the habit of carrying our bodies wherever we go means you are reduced to eking out your existence as a savage, then you will be prepared to devote quite a lot of resources to preventing us from giving up that habit... yes?
xxd00

I understand that you value the information content and I'm OK with your position.

Let's do another thought experiment then: Say we're some unknown X number of years in the future and some foreign entity/government/whatever decided it wanted the territory of the United States (could be any country, just using the USA as an example) but didn't want the people. It did, however, value the ideas, opinions, memories etc. of the American people. If said entity then destructively scanned the landmass but painstakingly copied all of the ideas, opinions, memories etc ... (read more)

0TheOtherDave
In the thought experiment you describe, they've preserved the data and not the patterns of interaction (that is, they've replaced a dynamic system with a static snapshot of that system), and something of value is therefore missing, although they have preserved the ability to restore the missing component at their will. If they execute the model and allow the resulting patterns of interaction to evolve in an artificial environment they control, then yes, that would be just as valuable to me as taking the original living people and putting them into an artificial environment they control.

I understand that there's something else in the original that you value, which I don't... or at least, which I haven't thought about. I'm trying to understand what it is. Is it the atoms? Is it the uninterrupted continuous existence (e.g., if you were displaced forward in time by two seconds, such that for a two-second period you didn't exist, would that be better or worse or the same as destroying you and creating an identical copy two seconds later?) Is it something else?

Similarly, if you valued a postage stamp printed in the 1800s more than the result of destructively scanning such a stamp and creating an atom-by-atom replica of it, I would want to understand what about the original stamp you valued, such that the value was lost in that process.

Thus far, the only answer I can infer from your responses is that you value being the original... or perhaps being the original, if that's different... and the value of that doesn't derive from anything, it's just a primitive. Is that it?

If so, a thought experiment for you in return: if I convince you that last night I scanned xxd and created an identical duplicate, and that you are that duplicate, do you consequently become convinced that your existence is less valuable than you'd previously thought?
xxd00

Exactly. Reasonable assurance is good enough, absolute isn't necessary. I'm not willing to be destructively scanned even if a copy of me thinks it's me, looks like me, and acts like me.

That said, I'm willing to accept the other stance that others take: they are reasonably convinced that destructive scanning just means they will appear somewhere else a fraction of a second later (or however long it takes). Just don't ask me to do it. And expect a bullet if you try to force me!

0TheOtherDave
Well, sure. But if we create an economy around you where people who insist on carrying a sack of atoms around with them wherever they go are increasingly a minority... for example, if we stop maintaining roads for you to drive a car on, stop flying airplanes to carry your atoms from place to place, etc. ... what then?
xxd00

What do I make of his argument? Well, I'm not a PhD in physics, though I do have a bachelor's in physics/math, so my position would be the following:

Quantum physics doesn't scale up to macro. While swapping the two helium atoms in two billiard balls results in you not being able to tell which helium atom was which, the two billiard balls certainly can be distinguished from each other. Even "teleporting" one from one place to another will not result in an identical copy since the quantum states will all have changed just by dint of having been read by... (read more)

xxd00

I think we're on the same page from a logical perspective.

My guess is the perspective taken is that of physical science vs compsci.

My guess is a compsci perspective would tend to view the two individuals as being two instances of the class of individual X. The two class instances are logically equivalent except for position.

The physical science perspective is that there are two bunches of matter near each other with the only thing differing being the position. Basically the same scenario as two electrons with the same spin state, momentum, energy etc bu... (read more)
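A toy illustration of the "compsci perspective" described above (Python; the class name and fields are assumptions for illustration, not anything from the thread): two instances can be equal in every field except position while still being distinct objects.

    from dataclasses import dataclass

    @dataclass
    class PersonX:
        memories: tuple
        personality: str
        position: tuple  # the only field that differs between the two instances

    original = PersonX(memories=("m1", "m2"), personality="X", position=(0.0, 0.0))
    copy     = PersonX(memories=("m1", "m2"), personality="X", position=(1.0, 0.0))

    # Logically equivalent except for position...
    same_but_for_position = (original.memories == copy.memories and
                             original.personality == copy.personality)  # True
    # ...yet two distinct instances (the "two bunches of matter" point).
    distinct_objects = original is not copy  # True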

0TheOtherDave
I agree completely that there are two bunches of matter in this scenario. There are also (from what you're labeling the compsci perspective) two data structures. This is true. My question is, why should I care? What value does the one on the left have, that the one on the right doesn't have, such that having them both is more valuable than having just one of them? Why is destroying one of them a bad thing? What you seem to be saying is that they are valuable because they are different people... but what makes that a source of value?

For example: to my way of thinking, what's valuable about a person is the data associated with them, and the patterns of interaction between that data and its surroundings. Therefore, I conclude that if I have that data and those interactions then I have preserved what's valuable about the person. There are other things associated with them -- for example, a particular set of atoms -- but from my perspective that's pretty valueless. If I lose the atoms while preserving the data, I don't care. I can always find more atoms; I can always construct a new body. But if I lose the data, that's the ball game -- I can't reconstruct it.

In the same sense, what I care about in a book is the data, not the individual pieces of paper. If I shred the paper while digitizing the book, I don't care... I've kept what's valuable. If I keep the paper while allowing the patterns of ink on the pages to be randomized, I do care... I've lost what's valuable.

So when I look at a system to determine how many people are present in that system, what I'm counting is unique patterns of data, not pounds of biomass, or digestive systems, or bodies. All of those things are certainly present, but they aren't what's valuable to me. And if the system comprises two bodies, or five, or fifty, or a million, and they all embody precisely the same data, then I can preserve what's valuable about them with one copy of that data... I don't need to lug a million bundles of atom
2[anonymous]
In Identity Isn't In Specific Atoms, Eliezer argued that even from what you called the "physical science perspective," the two electrons are ontologically the same entity. What do you make of his argument?
1dlthomas
I wouldn't take a destructive upload if I didn't know that I would survive it (in the senses I care about), in roughly the same sense that I wouldn't cross the street if I didn't know I wasn't going to be killed by a passing car. In both cases, I require reasonable assurance. In neither case does it have to be absolute.
xxd10

It matters to you if you're the original and then you are killed.

You are right that they are both an instance of person X, but my argument is that this is not equivalent to them being the same person in fact or even in law (whatever that means).

Also when/if this comes about I bet the law will side with me and define them as two different people in the eyes of the law. (And I'm not using this to fallaciously argue from authority, just pointing out I strongly believe I am correct - though willing to concede if there is ultimately some logical way to prov... (read more)

1APMason
I agree with TheOtherDave. If you imagine that we scan someone's brain and then run one-thousand simulations of them walking around the same environment, all having exactly the same experiences, it doesn't matter if we turn one of those simulations off. Nobody's died. What I'm saying is that the person is the mental states, and what it means for two people to be different people is that they have different mental states. I'm not really sure about the morality of punishing them both for the crimes of one of them, though. On one hand, the one who didn't do it isn't the same person as the one who did - they didn't actually experience committing the murder or whatever. On the other hand, they're also someone who would have done it in the same circumstances - so they're dangerous. I don't know.
1TheOtherDave
Can't speak for APMason, but I say it because what matters to me is the information. If the information is different, and the information constitutes people, then it constitutes different people. If the information is the same, then it's the same person. If a person doesn't contain any unique information, whether they live or die doesn't matter nearly as much to me as if they do. And to my mind, what the law decides to do is an unrelated issue. The law might decide to hold me accountable for the actions of my 6-month-old, but that doesn't make us the same person. The law might decide not to hold me accountable for what I did ten years ago, but that doesn't mean I'm a different person than I was. The law might decide to hold me accountable for what I did ten years ago, but that doesn't mean I'm the same person I was.
xxd10

I completely understand your logic, but I do not buy it because I do not agree that at the instant of the copying you have one person at two locations. They are two different people: one being the original and the other being an exact copy.

1Bugmaster
Which one is which ? And why ?
1TheOtherDave
OK, cool... I understand you, then. Can you clarify what, if anything, is uniquely valuable about a person who is an exact copy of another person? Or is this a case where we have two different people, neither of whom have any unique value?
xxd10

K here's where we disagree:

Original Copy A and new Copy B are indeed instances of person X but it's not a class with two instances as in CompSci 101. The class is Original A and it's B that is the instance. They are different people.

In order to make them the same person you'd need to do something like this: put some kind of high-bandwidth wifi in their heads which synchronizes memories. Then they'd be part of the same hybrid entity. But at no point are they the same person.

0APMason
I don't know why it matters which is the original - the only difference between the original and the copy is location. A moment after the copy happens, their mental states begin to diverge because they have different experiences, and they become different people to each other - but they're both still Person X.
xxd00

Come on. Don't vote me down without responding.

[This comment is no longer endorsed by its author]
1Nornagest
I read that earlier, and it doesn't answer the question. If you believe that the second copy in your scenario is different from the first copy in some deep existential sense at the time of division (equivalently, that personhood corresponds to something other than unique brain state), you've already assumed a conclusion to all questions along these lines -- and in fact gone past all questions of risk of death and into certainty. But you haven't provided any reasoning for that belief: you've just outlined the consequences of it from several different angles.
xxd20

I'm talking exactly about a process that is so flawless you can't tell the difference. Where my concern comes from is that if you don't destroy the original you now have two copies. One is the original (although you can't tell the difference between the copy and the original) and the other is the copy.

Now where I'm uncomfortable is this: If we then kill the original by letting Freddy Krueger or Jason do his evil thing, then though the copy is still alive AND is/was indistinguishable from the original, the alternative hypothesis which I oppose states th... (read more)

1TheOtherDave
If I make a perfect copy of myself, then at the instant of duplication there exists one person at two locations. A moment later, the entities at those two locations start having non-identical experiences and entering different mental states, and thereby become different people (who aren't one another, although both of them are me). If prior to duplication I program a device to kill me once and only once, then I die, and I have killed myself, and I continue to live. I agree that this is a somewhat confusing way of talking, because we're not used to life and death and identity working that way, but we have a long history of technological innovations changing the way we talk about things.
1APMason
Well, think of it this way: Copy A and Copy B are both Person X. Copy A is then executed. Person X is still alive because Copy B is Person X. Copy A is dead. Nothing inconsistent there - and you have a perfectly fine explanation for the presence of a dead body. There is no such thing as "the same atoms" - atoms do not have individual identities. I don't think anyone was arguing that the AI needed to be conscious - intelligence and consciousness are orthogonal.
xxd10

Risk avoidance. I'm uncomfortable with taking the position that creating a second copy and destroying the original preserves the original, simply because if it doesn't, then the original is now dead.

2Nornagest
Yes, but how do you conclude that a risk exists? Two philosophical positions don't mean fifty-fifty chances that one is correct; intuition is literally the only evidence for one of the alternatives here to the best of my knowledge, and we already know that human intuitions can go badly off the rails when confronted with problems related to anthropomorphism. Granted, we can't yet trace down human thoughts and motivations to the neuron level, but we'll certainly be able to by the time we're able to destructively scan people into simulations; if there's any secret sauce involved, we'll by then know it's there if not exactly what it is. If dualism turns out to win by then I'll gladly admit I was wrong; but if any evidence hasn't shown up by that time, it sounds an awful lot like all there is to fall back on is the failure mode in "But There's Still A Chance, Right?".
xxd-20

Here's one: Let's say that the world is a simulation AND that strongly godlike AI is possible. To all intents and purposes, even though the bible in the simulation is provably inconsistent, the existence of a being indistinguishable from the God in such a bible would not be ruled out, because though the inhabitants of the world are constrained by the rules of physics in their own state machines or objects or whatever, the universe containing the simulation is subject to its own set of physics and logic, and therefore may vary even inside the simulation but not be detectable to you or me.

[This comment is no longer endorsed by its author]
0jacob_cannell
Yes of course this is possible. So is the Tipler scenario. However, the simulation argument just as easily supports any of a vast number of god-theories, of which Christianity is just one. That being said, it does support judeo-xian type systems more than say Hinduism or Vodun. There may even be economical reasons to create universes like ours, but that's a very unpopular position on LW.
0xxd
Come on. Don't vote me down without responding.
xxd10

"(shrug) After the process you describe, there exist two people in identical bodies with identical memories. What conceivable difference does it make which of those people we label "me"? What conceivable difference does it make whether we label both of those people "me""

Because we already have a legal precedent: twins. Though their memories are very limited, they are legally different people. My position is that this is rightly so.

0TheOtherDave
Yes, we have two people after this process has completed... I said that in the first place. What follows from that? EDIT: Reading your other comments, I think I now understand what you're getting at. No, if we're talking about only the instant of duplication and not any other instant, then I would say that in that instant we have one person in two locations. But as soon as the person at those locations start to accumulate independent experiences, then we have two people. Similarly, if I create a static backup of a snapshot of myself, and create a dozen duplicates of that backup, I haven't created a dozen new people, and if I delete all of those duplicates I haven't destroyed any people. The uniqueness of experience is important.
2Nornagest
Identical twins, even at birth, are different people: they're genetically identical and shared a very close prenatal environment, but the actual fork happened sometime during the zygote stage of development, when neither twin had a nervous system let alone a mind-state. But I'm not sure why you're bringing this up in the first place: legalities don't help us settle philosophical questions. At best they point to a formalization of the folk solution. As best I can tell, you're trying to suggest that individual personhood is bound to a particular physical instance of a human being (albeit without actually saying so). Fair enough, but I'm not sure I know of any evidence for that proposition other than vague and usually implicitly dualist intuitions. I'm not a specialist in this area, though. What's your reasoning?
xxd30

Ha Ha. You're right. Thanks for reflecting that back to me.

Yes, if you break apart my argument I'm saying exactly that, though I hadn't broken it down to that extent before.

The last part I disagree with, which is the assumption that I'm always better at detecting people than the AI is. Clearly I'm not, but in my own personal case I don't trust it if it disagrees with me, because of simple risk management. If it's wrong and it kills me then resurrects a copy, then I have experienced total loss. If it's right then I'm still alive.

But I don't know the answer. And th... (read more)

1TheOtherDave
Well, I certainly agree that all else being equal we ought not kill X if there's a doubt about whether X is a person or not, and I support building AIs in such a way that they also agreed with that. But if for whatever reason I'm in a scenario where only one of X and Y can survive, and I believe X is a person and Y is not, and the AI says that Y is a person and X is not, and I'm the one who has to decide which of X and Y to destroy, then I need to decide whether I trust my own judgment more than the AI's judgment, or less. And obviously that's going to depend on the particulars of X, Y, me, and the AI... but it's certainly possible that I might in that situation update my beliefs and destroy X instead of Y.
xxd00

You're right. It is impossible to determine whether the current copy is the original or not.

"Disturbing how?" Yes, I would dismiss the person as being a fruitbar, of course. But if the technology existed to destructively scan an individual and copy them into a simulation, or even reconstitute them from different atoms after being destructively scanned, I'd be really uncomfortable with it. I personally would strenuously object to ever teleporting myself or copying myself by this method into a simulation.

"edges away slowly" lol. Not any more evil... (read more)

0Bugmaster
What if the reconstitution process was so flawless that there was no possible test your wife could run to determine whether or not you'd been teleported in this manner ? Would you still be uncomfortable with the process ? If so, why, and how does it differ from the reversed situation that we discussed previously ? Whoever that Phil guy is, I'm going to walk away briskly from him, as well. Walking backwards. So as not to break the line of sight. I haven't played that particular shooter, but I am reasonably certain that these NPCs wouldn't come anywhere close to passing the Turing Test. Not even the dog version of the Turing Test. I would say that, most likely, yes, it is murder.
xxd00

That's a point of philosophical disagreement between us. Here's why:

Take an individual.

Then take a cell from that individual. Grow it in a nutrient bath. Force it to divide. Rinse, wash, repeat.

You create a clone of that person.

Now is that clone the same as the original? No it is not. It is a copy. Or in a natural version of this, a twin.

Now let's say technology exists to transfer memories and mind states.

After you create the clone-that-is-not-you, you then put your memories into it.

If we keep the original alive, the clone is still not you. How does killing the original QUICKLY make the clone you?

1TheOtherDave
(shrug) After the process you describe, there exist two people in identical bodies with identical memories. What conceivable difference does it make which of those people we label "me"? What conceivable difference does it make whether we label both of those people "me"? If there is some X that differs between those people, such that the label "me" applies to one value of X but not the other value, then talking about which one is "me" makes sense. We might not be able to detect the difference, but there is a difference; if we improved the quality of our X-detectors we would be able to detect it. But if there is no such X, then for as long as we continue talking about which of those people is "me," we are not talking about anything in the world. Under those circumstances it's best to set aside the question of which is "me."
1APMason
I agree that the clone is not me until you write my brain-states onto his brain (poor clone). At that point it is me - it has my brain states. Both the clone and the original are identical to the one who existed before my brain-states were copied - but they're not identical to each other, since they would start to have different experiences immediately. "Identical" here meaning "that same person as" - not exact isomorphic copies. It seems obvious to me that personal identity cannot be a matter of isomorphism, since I'm not an exact copy of myself from five seconds ago anyway. So the answer to the question is killing the original quickly doesn't make a difference to the identity of a clone, but if you allow the original to live a while, it becomes a unique person, and killing him is immoral. Tell me if I'm not being clear.
1[anonymous]
Regardless of what you believe you're avoiding the interesting question: if you overwrite your clone's memories and personality with your own, is that clone the same person as you? If not, what is still different? I don't think anyone doubts that a clone of me without my memories is a different person.
xxd00

OK give me time to digest the jargon.

xxd00

But is it destroying people if the simulations are the same as the original?

2TheOtherDave
There are a few interesting possibilities here:

1) The AI and I agree on what constitutes a person. In that case, the AI doesn't destroy anything I consider a person.
2) The AI considers X a person, and I don't. In that case, I'm OK with deleting X, but the AI isn't.
3) I consider X a person, and the AI doesn't. In that case, the AI is OK with deleting X, but I'm not.

You're concerned about scenario #3, but not scenario #2. Yes? But in scenario #2, if the AI had control, a person's existence would be preserved, which is the goal you seem to want to achieve. This only makes sense to me if we assume that I am always better at detecting people than the AI is. But why would we assume that? It seems implausible to me.
xxd00

Isn't doing anything for us...

xxd00

Really good discussion.

Would I believe? I think the answer would depend on whether I could find the original or not. I would, however, find it disturbing to be told that the copy was a copy.

And yes, if the beings are fully sentient then I agree it's ethically questionable. But since we cannot tell, it comes down to the conscience of the individual, so I guess I'm evil then.

0Bugmaster
Finding the original, and determining that it is, in fact, the original, would constitute a test you could run to determine whether your current wife is a replica or not. Thus, under our scenario, finding the original would be impossible. Disturbing how ? Wouldn't you automatically dismiss the person who tells you this as a crazy person ? If not, why not ? Er... ok, that's good to know. edges away slowly Personally, if I encountered some beings who appeared to be sentient, I'd find it very difficult to force them to do my bidding (through brute force, or by overwriting their minds, or by any other means). Sure, it's possible that they're not really sentient, but why risk it, when the probability of this being the case is so low ?
xxd00

Agreed. It's the only way we have of verifying that it's a duck.

But is the destructively scanned duck the original duck, even though it appears to be the same to all intents and purposes, when you can see the mulch that used to be the body of the original lying there beside the new copy?

1APMason
I'm not sure that duck identity works like personal identity. If I destroy a rock but make an exact copy of it ten feet to the east, whether or not the two rocks share identity just depends on how you want to define identity - the rock doesn't care, and I'm not convinced a duck would care either. Personal identity, however, is a whole other thing - there's this bunch of stuff we care about to do with having the right memories and the correct personality and utility function etc., and if these things aren't right it's not the same person. If you make a perfect copy of a person and destroy the original, then it's the same person. You've just teleported them - even if you can see the left over dust from the destruction. Being made of the "same" atoms, after all, has nothing to do with identity - atoms don't have individual identities.
xxd00

While I don't doubt that many people would be OK with this, I wouldn't, because of the lack of certainty and provability.

My difficulty with this concept goes further. Since it's not verifiable that the copy is you, even though it seems to present the same outputs to any verifiable test, what is to prevent an AI from getting round the restriction on not destroying humanity?

"Oh but the copies running in a simulation are the same thing as the originals really", protests the AI after all the humans have been destructively scanned and copied into a simulation...

0APMason
That shouldn't happen as long as the AI is friendly - it doesn't want to destroy people.
xxd20

You're determined to make me say LOL so you can downvote me right?

EDIT: Yes you win. OFF.

xxd00

Exactly.

So "friendly" is therefore a conflation of NOT(unfriendly) AND useful rather than just simply NOT(unfriendly) which is easier.

2dlthomas
Off. Do I win?
xxd10

Very good questions.

No I'd not particularly care if it was my car that was returned to me because it gives me utility and it's just a thing.

I'd care if my wife was kidnapped and some simulacrum was given back in her stead, but I doubt I would be able to tell if it was such an accurate copy. If I knew the fake wife was fake I'd probably be creeped out, but if I didn't know I'd just be so glad to have my "wife" back.

In the case of the simulated porn actress, I wouldn't really care if she was real because her utility for me would be similar ... (read more)

5Oligopsony
My primary concern in a situation like this is that she'd be kidnapped and presumably extremely not happy about that. If my partner were vaporized in her sleep and then replaced with a perfect simulacrum, well, that's just teleporting (with less savings on airfare.) If it were a known fact that sometimes people died and were replaced by cylons, finding out someone had been cyloned recently, or that I had, wouldn't particularly bother me. (I suppose this sounds bold, but I'm almost entirely certain that after teleporters or perfect destructive uploads or whatever were introduced, interaction with early adopters people had known before their "deaths" would rapidly swing intuitions towards personal identity being preserved. I have no idea how human psychology would react to there being multiple copies of people.)
4APMason
I find "if it walks like a duck and talks like a duck" to be a really good way of identifying ducks.
2Bugmaster
Right, but presumably, you would be unhappy if your Ferrari got stolen and you got a Yaris back. In fact, you might be unhappy even if your Yaris got stolen and you got a Ferrari back -- wouldn't you be ? If the copy was so perfect that you couldn't tell that it wasn't your wife, no matter what tests you ran, then would you believe anyone who told you that this being was in fact a copy, and not your wife at all ? I agree (I think), but then I am tempted to conclude that creating fully sentient beings merely for my own amusement is, at best, ethically questionable.
xxd00

Correct. I (unlike some others) don't hold the position that a destructive upload and then a simulated being is exactly the same being; therefore destructively scanning the porn actresses would be killing them in my mind. Non-destructively scanning them and then using the simulated versions for "evil purposes", however, is not killing the originals. Whether using the copies for evil purposes even against their simulated will is actually evil or not is debatable. I know some will take the position that the simulations could theoretically be sentien... (read more)

1APMason
Well, I would argue that if the computer is running a perfect simulation of a person, then the simulation is sentient - it's simulating the brain and is therefore simulating consciousness, and for the life of me I can't imagine any way in which "simulated consciousness" is different from just "consciousness". I disagree. Creating a not-friendly-but-harmless AGI shouldn't be any easier than creating a full-blown FAI. You've already had to do all the hard work of making it consistent while self-improving, and you've also had to do the hard work of programming the AI to recognise humans and to not do harm to them, while also acting on other things in the world. Here's Eliezer's paper.
3TimS
Why is it recursively self-improving if it isn't doing anything? If my end goal was not to do anything, I certainly don't need to modify myself in order to achieve that better than I could achieve it now.
xxd00

And I'd say that taking that step is a point of philosophy.

Consider this: I have a Dodge Durango sitting in my garage.

If I sell that Dodge Durango and buy an identical one (it passes all the same tests in exactly the same way) then is it the same Dodge Durango? I'd say no, but the point is irrelevant.

0Bugmaster
Why not, and why is it irrelevant ? For example, if your car gets stolen, and later returned to you, wouldn't you want to know whether you actually got your own car back ? I have to admit, your response kind of mystified me, so now I'm intrigued.
xxd00

"I suppose one potential failure mode which falls into the grey territory is building an AI that just executes peoples' current volition without trying to extrapolate"

i.e., the device has to judge usefulness by some metric and then decide whether or not to execute someone's volition.

That's exactly what my issue is with trying to define a utility function for the AI. You can't. And since some people will have their utility function denied by the AI, who is to choose who gets theirs executed?

I'd prefer to shoot for a NOT(UFAI) and then trade with ... (read more)

4APMason
Well, that's an easy question: if you've worked sixteen hour days for the last forty years and you're just six months away from curing cancer completely and you know you're going to get the Nobel and be fabulously wealthy etc. etc. and an alien shows up and offers you a cure for cancer on a plate, you take it, because a lot of people will die in six months. This isn't even different to how the world currently is - if I invented a cure for cancer it would be detrimental to all those others who were trying to (and who only cared about getting there first) - what difference does it make if an FAI helps me? I mean, if someone really wants to murder me but I don't want them to and they are stopped by the police, that's clearly an example of the government taking the side of my utility function over the murderer's. But so what? The murderer was in the wrong. Anyway, have you read Eliezer's paper on CEV? I'm not sure that I agree with him, but he does deal with the problem you bring up.
xxd00

"But an AI does need to have some utility function"

What if the "optimization of the utility function" is bounded like my own personal predilection with spending my paycheck on paperclips one time only and then stopping?

Is it sentient if it sits in a corner and thinks to itself, running simulations but won't talk to you unless you offer it a trade e.g. of some paperclips?

Is it possible that we're conflating "friendly" with "useful but NOT unfriendly" and we're struggling with defining what "useful" means?

2DSimon
If it likes sitting in a corner and thinking to itself, and doesn't care about anything else, it is very likely to turn everything around it (including us) into computronium so that it can think to itself better. If you put a threshold on it to prevent it from doing stuff like that, that's a little better, but not much. If it has a utility function that says "Think to yourself about stuff, but do not mess up the lives of humans in doing so", then what you have now is an AI that is motivated to find loopholes in (the implementation of) that second clause, because anything that can get an increased fulfilment of the first clause will give it a higher utility score overall. You can get more and more precise than that and cover more known failure modes with their own individual rules, but if it's very intelligent or powerful it's tough to predict what terrible nasty stuff might still be in the intersection of all the limiting conditions we create. Hidden complexity of wishes and all that jazz.
xxd00

Nice thought experiment.

No I probably would not consent to being non-destructively scanned so that my simulated version could be evilly manipulated.

Regardless of whether it's sentient, or not provably so.

xxd00

A-Ha!

Therein lies the crux: you want the AI to do stuff for you.

EDIT: Oh yeah I get you. So it's by definition evil if I coerce the catgirls by mind control. I suppose logically I can't have my cake and eat it since I wouldn't want my own non-sentient simulation controlled by an evil AI either.

So I guess that makes me evil. Who would have thunk it. Well, I guess strike my utility function off the list of friendly AIs. But then again, I've already said elsewhere that I wouldn't trust my own function to be optimal.

I doubt, however, that we'd easily find a candidate function from a single individual for similar reasons.

1APMason
I think we've slightly misunderstood each other. I originally thought you were saying that you wanted to destructively upload porn actresses and then remove sentience so they did as they were told - which is obviously evil. But I now realise you only want to make catgirl copies of porn actresses while leaving the originals intact (?) - the moral character of which depends on things like whether you get the consent of the actresses involved. But yes! Of course I want the AGI to do something. If it doesn't do anything, it's not an AI. It's not possible to write code that does absolutely nothing. And while building AGI might be a fun albeit stupidly dangerous project to pursue just for the heck of it, the main motivator behind wanting the thing to be created (speaking for myself) is so that it can solve problems, like, say, death and scarcity.
xxd00

More friendly to you. Yes.

Not necessarily friendly in the sense of being friendly to everyone as we all have differing utility functions, sometimes radically differing.

But I dispute the position that "if an AI doesn't care about humans in the way we want them to, it almost certainly takes us apart and uses the resources to create whatever it does care about".

Consider: A totally unfriendly AI whose main goal is explicitly the extinction of humanity, then turning itself off. For us that's an unfriendly AI.

One, however, that doesn't kill any of us but... (read more)

0APMason
Well, no. If it ignores us I probably wouldn't call it "unfriendly" - but I don't really mind if someone else does. It's certainly not FAI. But an AI does need to have some utility function, otherwise it does nothing (and isn't, in truth, intelligent at all), and will only ignore humanity if it's explicitly programmed to. This ought to be as difficult an engineering problem as FAI - hence why I said it "almost certainly takes us apart". You can't get there by failing at FAI, except by being extremely lucky, and why would you want to go there on purpose? Yes, it would be a really bad idea to have a superintelligence optimise the world for just one person's utility function.