Comment author: luzr 16 December 2008 08:36:13AM 0 points [-]

"Errr.... luzr, why would I assume that the majority of GAIs that we create will think in a way I define as 'right'?"

It is not about what YOU define as right.

Anyway, considering that Eliezer is an existing self-aware, sentient GI agent with obviously high intelligence, and that he is able to ask such questions despite his original biological programming, I suppose that some other powerful, sentient, self-aware GI should reach the same point. I also *believe* that more general intelligence makes a GI converge to such "right thinking".

What worries me most is building a GAI as a non-sentient utility maximizer. OTOH, I *believe* that 'non-sentient utility maximizer' is mutually exclusive with a 'learning' strong AGI system - in other words, any system capable of learning and exceeding human intelligence must outgrow non-sentience and utility maximizing. I might be wrong, of course. But the fact that the universe is not paperclipped yet makes me hope...

Comment author: xxd 27 January 2012 06:20:56PM 1 point [-]

Could reach the same point.

Said Eliezer agent is programmed genetically to value his own genes and those of humanity.

An artificial Eliezer could reach the conclusion that humanity is worth keeping, but is by no means obliged to come to that conclusion. By contrast, genetics determines that at least some of us humans value the continued existence of humanity.

Comment author: Uni 30 March 2011 08:53:19AM 0 points [-]

I recommend reading this sequence.

Thanks for recommending.

Suffice it to say that you are wrong, and power does not bring with it morality.

I have never assumed that "power brings with it morality" if by power we mean limited power. Some superhuman AI might very well be more immoral than humans are. I think unlimited power would bring with it morality. If you have access to every single particle in the universe and can put it wherever you want, and can thus create whatever is theoretically possible for an almighty being to create, you will know how to fill all of spacetime with the largest possible amount of happiness. And you will do that, since you will be intelligent enough to understand that that's what gives you the most happiness. (And, needless to say, you will also find a way to be the one to experience all that happiness.)

Given hedonistic utilitarianism, this is the best thing that could happen, no matter who got the unlimited power and what the moral standards of that person initially were. If you don't think hedonistic utilitarianism (or hedonism) is moral, it's understandable that you think a world filled with the maximum amount of happiness might not be a moral outcome, especially if achieving that goal took killing lots of people against their will, for example. But that alone doesn't prove I'm wrong. Much of what humans think to be very wrong is not wrong in all circumstances. To prove me wrong, you have to either prove hedonism and hedonistic utilitarianism wrong first, or prove that a being with unlimited power wouldn't understand that it would be best for him to fill the universe with as much happiness as possible and experience all that happiness.

a happy person doesn't hate.

What is your support for this claim?

Observation.

Comment author: xxd 27 January 2012 06:07:11PM *  0 points [-]

This is a cliché and may be false, but it's generally assumed true: "Power corrupts, and absolute power corrupts absolutely."

I wouldn't want anybody to have absolute power, not even myself; the only use of absolute power I would want would be to stop any evil person from getting it.

To my mind, evil = coercion, and therefore any human who seeks any kind of coercion over others is evil.

My version of evil is, I believe, the least evil.

EDIT: Why did I get voted down for saying "power corrupts" - the corollary of which is that rejecting power is less corrupting - while Eliezer gets voted up for saying exactly the same thing? Someone who voted me down should respond with their reasoning.

Comment author: xxd 27 January 2012 05:59:08PM 1 point [-]

Now this is the $64 google-illion question!

I don't agree that the null hypothesis - take the ring and do nothing with it - is evil. My definition of evil is coercion leading to loss of resources, up to and including loss of one's self. Thus absolute evil is loss of one's self across humanity, which includes as one use case humanity's extinction (but is not limited to humanity's extinction, obviously, because being converted into zimboes isn't technically extinction...).

Nobody can deny that the likes of Gaddafi exist in the human population: those who are interested in being the total boss of others (even though they add no value to the lives of others), to the extent that they are willing to kill to maintain their boss position.

I would define these people as evil, or as having evil intent. I would thus state that under no circumstances would I want somebody like this to grab the ring of power, and thus I would be compelled to grab it myself.

The conundrum is that I fit the definition of evil myself. Though I don't seek power to coerce as an end in itself, I would like the power to defend myself against involuntary coercion.

So I see a Gaddafi equivalent go to grab the ring and I beat him to it.

What do I do next?

Well, I can't honestly say that I have the right to kill the millions of Gaddafi equivalents, but I think that on average they add a net negative to the utility of humanity.

I'm left, however, with the nagging suspicion that under certain circumstances Gaddafi-type figures might be beneficial to humanity as a whole. Consider: crowdsourcing the majority of political decisions would probably satisfy the average utility function of humanity. It's fair, but not to everybody. We have almost such a system today (even though it's been usurped by corporations). But in times of crisis, such as during war, it's more efficient to have rapid decisions made by a small group of "experts" combined with those willing to make ruthless decisions, so we can't simply kill off the Gaddafis.

What is therefore optimal, in my opinion? I reckon I'd take all the Gaddafis off-planet and put them in simulations, to be recalled only at times of need, and leave sanitized, nice-person zimbo copies of them in their place. Then I would destroy the ring of power and return to my previous life, before I was tempted to torture those who have done me harm in the past.

Comment author: xxd 26 January 2012 10:42:57PM 0 points [-]

Xannon decides how much Zaire gets. Zaire decides how much Yancy gets. Yancy decides how much Xannon gets.

If any is left over, they go through the process again for the remainder, ad infinitum, until approximately all of the pie has been allocated.
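A minimal sketch (my own illustration, not part of the original comment) of how that rotating allocation might run, in Python, assuming purely for the sake of example that each decider grants the other a quarter of whatever pie remains each round; the 25% figure and the stopping tolerance are assumptions, not part of the scheme as stated:

    # Hypothetical sketch of the rotating pie division described above.
    # Xannon decides Zaire's share, Zaire decides Yancy's, Yancy decides Xannon's;
    # whatever is left over is re-divided the same way until almost nothing remains.
    # The fixed 25%-of-the-remainder decision is an arbitrary illustrative choice.

    def divide_pie(fraction=0.25, tolerance=1e-6):
        decisions = [("Xannon", "Zaire"), ("Zaire", "Yancy"), ("Yancy", "Xannon")]
        shares = {name: 0.0 for name, _ in decisions}
        remaining = 1.0
        while remaining > tolerance:
            granted = remaining * fraction          # each decider's grant this round
            for _, receiver in decisions:
                shares[receiver] += granted
            remaining -= granted * len(decisions)   # the leftover goes to the next round
        return shares

    print(divide_pie())  # by symmetry, each share converges to ~1/3 of the pie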

In response to comment by xxd on Welcome to Less Wrong!
Comment author: TheOtherDave 27 December 2011 08:35:27PM 2 points [-]

I agree that there is physical continuity from moment to moment in typical human existence, and that there is similar continuity with a slow transition to a nonhuman form. I agree that there is no such continuity with an instantaneous copy-and-destroy operation.

I understand that you consider that difference uniquely important, such that I continue living in the first case, and I don't continue living in the second case.

I infer that you believe in some uniquely important attribute to my self that is preserved by the first process, and not preserved by the second process.

I agree that if a person is being offered a choice, it is important for that person to understand the choice. I'm perfectly content to describe the choice as between the death of one body and the creation of another, on the one hand, and the continued survival of a single body, on the other. I'm perfectly content not to describe the latter process as the continuation of an existing life.

I endorse individuals getting to make informed choices about their continued life, and their continued existence as people, and the parameters of that existence. I endorse respecting both their stated wishes, and (insofar as possible) their volition, and I acknowledge that these can conflict given imperfect information about the world.

Do you have a problem with involuntary forced destructive scanning in order to upload individuals into some other substrate (or even a copied clone)?

Yes. As I say, I endorse respecting individuals' stated wishes, and I endorse them getting to make informed choices about their continued existence and the parameters of that existence; involuntary destructive scanning interferes with those things. (So does denying people access to destructive scanning.)

Do you consider a person who has lost much of their memory to be the same person?

It depends on what 'much of' means. If my body continues to live, but my memories and patterns of interaction cease to exist, I have ceased to exist and I've left a living body behind. Partial destruction of those memories and patterns is trickier, though; at some point I cease to exist, but it's hard to say where that point is.

What if such a person (who has lost much of their memory) then has a backed up copy of their memories from six months ago imprinted over top?

I am content to say I'm the same person now that I was six months ago, so if I am replaced by a backed-up copy of myself from six months ago, I'm content to say that the same person continues to exist (though I have lost potentially valuable experience). That said, I don't think there's any real fact of the matter here; it's not wrong to say that I'm a different person than I was six months ago and that replacing me with my six-month-old memories involves destroying a person.

What if it's someone else's memories: did they just die?

If I am replaced by a different person's memories and patterns of interaction, I cease to exist.

Scan a person destructively (with their permission). Keep their scan in storage on some static substrate. Then grow a perfectly identical clone of them (using "identical" to mean functionally identical, because we can't get exactly identical, as discussed before). Copy the contents of the mindstates into that clone. How many deaths have taken place here?

Several trillion: each cell in my current body died. I continue to exist. If my clone ever existed, then it has ceased to exist.

Incidentally, I think you're being a lot more adversarial here than this discussion actually calls for.

Comment author: xxd 27 December 2011 09:31:01PM *  0 points [-]

Very good response. I can't think of anything to disagree with, and I don't think I have anything more to add to the discussion.

My apologies if you read anything adversarial into my message. My intention was to be pointed in my line of questioning but you responded admirably without evading any questions.

Thanks for the discussion.

In response to comment by xxd on Welcome to Less Wrong!
Comment author: TheOtherDave 22 December 2011 08:24:58PM 1 point [-]

There's a lot of decent SF on this theme. If you haven't read John Varley's Eight Worlds stuff, I recommend it; he has a lot of fun with this. His short stories are better than his novels, IMHO, but harder to find. "Steel Beach" isn't a bad place to start.

Comment author: xxd 27 December 2011 06:31:32PM 0 points [-]

Thanks for the suggestion. Yes, I've already read it (Steel Beach). It was OK but didn't really touch much on our points of contention as such. In fact I'd say it steered clear of them, since there wasn't really the concept of uploads etc. Interestingly, I haven't read anything that really examines closely whether the copied upload really is you. Anyways.

"I would also say that it doesn't matter that the vast majority of the cells comprising me twenty years ago are dead, even though the cells currently comprising me aren't identical to the cells that comprised me then."

OK, I have to say that now I've thought it through, I think this is a straw-man argument - "you're not the same as you were yesterday" used as a pretext for saying that a copy is exactly the same as the you that persists from one moment to the next. It misses the point entirely.

Although you are legally the same person, it's true that you're not exactly physically the same person today as you were yesterday, and it's also true that you have almost none of the original physical matter or cells in you today that you had as a child.

That this is true in no way negates the main point: human physical existence at any one point in time does have continuity. I have some of the same cells I had up to about seven to ten years ago, I have some inert matter in me from the time I was born, AND I have continuous memories to a greater or lesser extent. This is directly analogous to the position I posted before about a slow hybridizing transition to machine form, before I had even clearly thought this out consciously.

Building a copy of yourself and then destroying the original has no continuity. It's directly analogous to asexually budding a new copy of yourself and then imprinting it with your memories, and is patently not the same concept as normal human existence. Not even close.

That you and some others might dismiss the differences is fine, and if you hypothetically wanted to kill yourself so that a copy of your mind-state could exist indefinitely, then I have no problem with that; but it's patently not the same as the process you, I, and everyone else go through on a day-to-day basis. It's a new thing. (Although it's already been tried in nature, as the asexual budding of bacteria.)

I would ask, however, that if that choice is being offered to others, it be clearly explained to them what is happening - i.e. physical death of the body and the creation of a copy, not that they themselves continue living, because they do not. Whether you consider the distinction irrelevant is beside the point. Volition is very important, but I'll get to that later.

"I agree with you that if a person is perfectly duplicated and the original killed, then the original has been killed. (I would also say that the person was killed, which I think you would agree with. I would also say that the person survived, which I think you would not agree with.)"

That's directly analogous to the many-worlds interpretation of quantum physics, which has multiple timelines. You could argue from that perspective that death is irrelevant, because in an infinitude of possibilities, if one of your instances dies you go on existing. Fine, but it's not me. I'm mortal and always will be, even if some virtual copy of me might not be. So you guessed correctly: unless we're using some different definition of "person" (which I think is likely), the person did not survive.

"I agree that volition is important for its own sake, but I don't understand what volition has to do with what we've thus far been discussing. If forcing the original to bud kills the original, then it does so whether the original wants to die or not. If it doesn't kill the original, then it doesn't, whether the original wants to die or not. It might be valuable to respect people's volition, but if so, it's for some reason independent of their survival. (For example, if they want to die, then respecting their volition is opposed to their survival.)"

Volition has everything to do with it. While it's true that volition is independent of whether they have died or not (agreed), the reason it's important is that some people will likely use your position to justify forced destructive scanning at some point, because it's "less wasteful of resources" or some other pretext.

It's also particularly important in the case of an AI over which humanity would have no control. If the AI decides that uploads via destructive scanning are exactly the same thing as the originals, and it needs the space for its own purposes, then there is nothing to stop it from just going ahead - unless volition is considered important.

Here's a question for you: Do you have a problem with involuntary forced destructive scanning in order to upload individuals into some other substrate (or even a copied clone)?

So here's a scenario for you, given that you think information is the only important thing: Do you consider a person who has lost much of their memory to be the same person? What if such a person (who has lost much of their memory) then has a backed-up copy of their memories from six months ago imprinted over top? Did they just die? What if it's someone else's memories: did they just die?

Here's yet another scenario. I wonder if you have thought about this one: Scan a person destructively (with their permission). Keep their scan in storage on some static substrate. Then grow a perfectly identical clone of them (using "identical" to mean functionally identical, because we can't get exactly identical, as discussed before). Copy the contents of the mindstates into that clone.

Ask yourself this question: How many deaths have taken place here?

In response to comment by xxd on Welcome to Less Wrong!
Comment author: TheOtherDave 22 December 2011 06:58:00PM 1 point [-]

I can't personally say that I'm willing to come out and say definitively that the X is a red herring though it sounds like you are willing to do this.

I did not say the X is a red herring. If you believe I did, I recommend re-reading my comment.

The X is far from being a red herring; rather, the X is precisely what I was trying to elicit details about for a while. (As I said above, I no longer believe I can do so through further discussion.)

But I did say that identity of quantum states is a red herring.

As I said before, I conclude this from the fact that you believe you are the same person you were last year, even though your quantum states aren't identical. If you believe that X can remain unchanged while Y changes, then you don't believe that X depends on Y; if you believe that identity can remain unchanged while quantum states change, then you don't believe that identity depends on quantum states.

To put this another way: if changes in my quantum states are equivalent to my death, then I die constantly and am constantly replaced by new people who aren't me. This has happened many times in the course of writing this comment. If this is already happening anyway, I don't see any particular reason to avoid having the new person appear instantaneously in my mom's house, rather than having it appear in an airplane seat an incremental distance closer to my mom's house.

Other stuff:

  • Yes, I would say that if the daughter cell is identical to the parent cell, then it doesn't matter that the parent cell died at the instant of budding.

  • I would also say that it doesn't matter that the vast majority of the cells comprising me twenty years ago are dead, even though the cells currently comprising me aren't identical to the cells that comprised me then.

  • I agree with you that if a person is perfectly duplicated and the original killed, then the original has been killed. (I would also say that the person was killed, which I think you would agree with. I would also say that the person survived, which I think you would not agree with.)

  • I agree that volition is important for its own sake, but I don't understand what volition has to do with what we've thus far been discussing. If forcing the original to bud kills the original, then it does so whether the original wants to die or not. If it doesn't kill the original, then it doesn't, whether the original wants to die or not. It might be valuable to respect people's volition, but if so, it's for some reason independent of their survival. (For example, if they want to die, then respecting their volition is opposed to their survival.)

  • A question for you: if someone wants to stop existing, and they destructively scan themselves, am I violating their wishes if I construct a perfect duplicate from the scan? I assume your answer is "no," since the duplicate isn't them; they stopped existing just as they desired.

Comment author: xxd 22 December 2011 08:16:19PM 0 points [-]

Other stuff:

"Yes, I would say that if the daughter cell is identical to the parent cell, then it doesn't matter that the parent cell died at the instant of budding."

OK good to know. I'll have other questions but I need to mull it over.

"I would also say that it doesn't matter that the vast majority of the cells comprising me twenty years ago are dead, even though the cells currently comprising me aren't identical to the cells that comprised me then." I agree with this but I don't think it supports your line of reasoning. I'll explain why after my meeting this afternoon.

"I agree with you that if a person is perfectly duplicated and the original killed, then the original has been killed. (I would also say that the person was killed, which I think you would agree with. I would also say that the person survived, which I think you would not agree with.)" Interesting. I have a contrary line of argument which I'll explain this afternoon.

"I agree that volition is important for its own sake, but I don't understand what volition has to do with what we've thus far been discussing. If forcing the original to bud kills the original, then it does so whether the original wants to die or not. If it doesn't kill the original, then it doesn't, whether the original wants to die or not. It might be valuable to respect people's volition, but if so, it's for some reason independent of their survival. (For example, if they want to die, then respecting their volition is opposed to their survival.)" Disagree. Again I'll explain why later.

"A question for you: if someone wants to stop existing, and they destructively scan themselves, am I violating their wishes if I construct a perfect duplicate from the scan? I assume your answer is "no," since the duplicate isn't them; they stopped existing just as they desired." Maybe. If you have destructively scanned them then you have killed them so they now no longer exist so that part you have complied perfectly with their wishes from my point of view. But in order to then make a copy, have you asked their permission? Have they signed a contract saying they have given you the right to make copies? Do they even own this right to make copies? I don't know.

What I can say is that our differences in opinion here would make a superb science fiction story.

Comment author: wedrifid 21 June 2010 08:42:35AM 2 points [-]

It's as if people compartmentalize them and think about only one or the other at a time.

Or just disagree with a specific transhumanist moral (or interpretation thereof). If you are growing "too powerful too quickly" the right thing for an FAI (or, for that matter, anyone else) to do is to stop you by any means necessary. A recursively self improving PhilGoetz with that sort of power and growth rate will be an unfriendly singularity. Cease your expansion or we will kill you before it is too late.

Comment author: xxd 16 December 2011 12:29:09AM 0 points [-]

Although I disagree with your heartbreak position, I agree with this.

Comment author: PhilGoetz 06 December 2011 10:16:03PM *  0 points [-]

Phil: an AI who is seeking resources to further its own goals at the expense of everyone else is by definition an unfriendly AI.

The question is whether the PhilGoetz utility function or the average human utility function is better. Assume both are implemented in AIs of equal power. What makes the average human utility function "friendlier"? It would have you outlaw homosexuality and sex before marriage, remove all environmental protection laws, make child abuse and wife abuse legal, take away legal rights from women, give wedgies to smart people, etc.

Now consider this: I'd prefer the average of all human utility functions over my maximized utility function, even if it means I have less utility.

I don't think you understand utility functions.

Comment author: xxd 16 December 2011 12:27:54AM 0 points [-]

"The question is whether the PhilGoetz utility function, or the average human utility function, are better. "

That is indeed the question. But I think you've framed the question and stacked the deck here with your description of what you believe the average human utility function is, in order to take the moral high ground rather than arguing against my point, which is this:

How do you maximize the preferred utility function for everyone instead of just a small group?

In response to comment by xxd on Building Weirdtopia
Comment author: wedrifid 15 December 2011 06:18:29PM *  0 points [-]

If it's not fear what is your objection to having your heart broken?

The same objection I have to someone cutting off my little toe. It is painful and means that I'll forever be missing a part of myself. Not a big deal - just a minor to moderate negative outcome.

And how can you possibly take upon yourself the right to decide for everybody else?

You are responding to a straw man again - and I am rather surprised that you have been rewarded for doing so, since it is rather insulting to attribute silly beliefs to people without cause. This is a complete reversal of what Wedrifid_2010 said. He vehemently rejected thepokeduck's proposal that everyone should have their heart broken - because he found the idea of someone deciding that everyone else should have their heart broken abhorrent and presumptuous.

Then, in the very comment you replied to, Wedrifid_2010 said:

Sure, if they are into that sort of thing I don't particularly care.

That is explicitly declaring no inclination toward controlling other people's self-heart-breaking impulses.

Comment author: xxd 15 December 2011 08:26:36PM 0 points [-]

You're deliberately ignoring this comment of yours: "If the Superhappies were going to remove our ability to have our hearts broken I wouldn't blow up Earth to prevent it."

You are therefore at least slightly in favor of controlling other people, and many would interpret your tongue-in-cheek comment as saying you support it.
