This is a cliché and may be false, but it's generally assumed true: "Power corrupts, and absolute power corrupts absolutely."
I wouldn't want anybody to have absolute power, not even myself; the only use I'd want for absolute power would be to stop any evil person from getting it.
To my mind, evil = coercion, and therefore any human who seeks any kind of coercion over others is evil.
My version of evil is the least evil, I believe.
EDIT: Why did I get voted down for saying "power corrupts" - the corollary of which is that rejection of power is...
Now this is the $64 google-illion question!
I don't agree that the null hypothesis (take the ring and do nothing with it) is evil. My definition of evil is coercion leading to loss of resources, up to and including loss of one's self. Thus absolute evil is loss of one's self across humanity, which includes humanity's extinction as one use case (but is obviously not limited to humanity's extinction, because being converted into zimboes isn't technically extinction).
Nobody can dispute that the likes of Gaddafi exist in the human population: those who are intereste...
Xannon decides how much Zaire gets. Zaire decides how much Yancy gets. Yancy decides how much Xannon gets.
If any is left over, they go through the process again on the remainder, ad infinitum, until an approximation of all of the pie has been eaten.
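For what it's worth, here's a minimal Python sketch of that round-based process (the fixed grant fraction and stopping tolerance are purely assumptions for illustration): whatever is left over each round shrinks geometrically, so the allocated shares converge on the whole pie.

```python
# Round-based split: each person assigns a fraction of the round's remainder
# to the next person in the cycle; the leftover rolls into the next round.
def allocate(pie=1.0, grant_fraction=0.3, tol=1e-12):
    people = ["Xannon", "Zaire", "Yancy"]
    shares = dict.fromkeys(people, 0.0)
    remainder = pie
    while remainder > tol:
        start = remainder                        # everyone decides based on this round's pie
        for i, giver in enumerate(people):
            receiver = people[(i + 1) % len(people)]   # Xannon -> Zaire -> Yancy -> Xannon
            shares[receiver] += start * grant_fraction
        remainder -= 3 * start * grant_fraction  # leftover shrinks geometrically each round
    return shares, remainder

shares, leftover = allocate()
print(shares, leftover)  # shares approach 1/3 each; leftover is effectively zero
```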
Very good response. I can't think of anything to disagree with, and I don't think I have anything more to add to the discussion.
My apologies if you read anything adversarial into my message. My intention was to be pointed in my line of questioning but you responded admirably without evading any questions.
Thanks for the discussion.
Thanks for the suggestion. Yes, I have already read it (Steel Beach). It was OK but didn't really touch much on our points of contention as such. In fact, I'd say it steered clear of them, since there wasn't really the concept of uploads etc. Interestingly, I haven't read anything that really examines closely whether the copied upload really is you. Anyways.
"I would also say that it doesn't matter that the vast majority of the cells comprising me twenty years ago are dead, even though the cells currently comprising me aren't identical to the cells that...
Other stuff:
"Yes, I would say that if the daughter cell is identical to the parent cell, then it doesn't matter that the parent cell died at the instant of budding."
OK good to know. I'll have other questions but I need to mull it over.
"I would also say that it doesn't matter that the vast majority of the cells comprising me twenty years ago are dead, even though the cells currently comprising me aren't identical to the cells that comprised me then." I agree with this but I don't think it supports your line of reasoning. I'll explain why...
Of course I would do it because it would be better than nothing. My memories would survive. But I would still be dead.
Here's a thought experiment to outline the difference (and whether you think it makes sense from your position, where only the information is valued): let's say you could slowly transfer a person into an upload by the following method. You cut out a part of the brain. That part of the brain is now dead. You replace it with a new part, a silicon part (or some other computational substrate) that can interface directly with the remaining...
EDIT: Yes, you did understand, though I can't personally say I'm willing to come out and state definitively that the X is a red herring; it sounds like you are willing to do that.
I think it's an axiomatic difference, Dave.
It appears from my side of the table that you're starting from the axiom that the information is all that's important, and that originality and/or physical existence, beyond the information itself, means nothing.
And you're dismissing the quantum states as if they were irrelevant. They may be irrelevant, but since there is some difference between th...
"Again, just to be clear, what I'm trying to understand is what you value that I don't. If data at these high levels of granularity is what you value, then I understand your objection. Is it?"
OK, I've mulled your question over, and I think I have the subtlety of what you are asking down, as distinct from the slight variation I answered.
Since I value my own life, I want to be sure that it's actually me that's alive if you plan to kill me. Because we're basically creating an additional copy really quickly and then disposing of the original, I have a hard...
I guess from your perspective you could say that the value of being the original doesn't derive from anything and is just a primitive, because the macro information is the same except for position (though the quantum states are all different, even at the point of copying). But yes, I value the original more than the copy, because I consider the original to be me and the others to be just copies, even if they would legally, and in fact, be sentient beings in their own right.
Yes, if I woke up tomorrow and you could convince me I was just a copy, then this is something I have already modeled/daydreamed about, and my answer would be: I'd be disappointed that I wasn't the original but glad that I had existence.
Thanks Dave. This has been a very interesting discussion and although I think we can't close the gap on our positions I've really enjoyed it.
To answer your question "What do I value?": I think I answered it already; I value not being killed.
The difference in our positions appears to be some version of "but your information is still around", to which my response is "but it's not me", and your response is "how is it not you?"
I don't know.
"What is it I value that you don't?" I don't know. Maybe I consider myself to be a...
I thought I had answered but perhaps I answered what I read into it.
If you are asking "will I prevent you from gradually moving everything to digital, perhaps including yourselves?" then the answer is no.
I just wanted to clarify that we were talking about with consent vs without consent.
Yes that's right.
I will not consent to being involuntarily destructively scanned, and yes, I will devote all of my resources to preventing myself from being involuntarily destructively scanned.
That said, if you or anyone else wants to do it to themselves voluntarily it's none of my business.
If what you're really asking, however, is whether I will attempt to intervene if I notice a group of individuals or an organization forcing destructive scanning on individuals, I suspect that I might, but we're not there yet.
You're basically asking why I should value myself over a spatially separate exact copy of myself (and by exact copy we mean as close as you can get), and then superimposing another question: "isn't it the information that's important?"
Not exactly.
I'm concerned that I will die, and I'm examining the hypotheses as to why it's not me that dies. The best response I can come up with is "you will die, but it doesn't matter, because there's another identical (or as close as possible) copy still around."
As to what you value that I don't, I don't have an ...
"If the information is different, and the information constitutes people, then it constitutes different people."
True, and therein lies the problem. Let's do two comparisons. You have two copies: one the original, the other the copy.
Compare them on the macro scale (i.e. non-quantum). They are identical except for position and momentum.
Now let's compare them on the quantum scale: Even at the point where they are identical on the macro scale, they are not identical on the quantum scale. All the quantum states are different. Just the simple act of obs...
This is a different point entirely. Sure, it's more efficient to just work with instances of similar objects, and I've already said elsewhere that I'm OK with that if they're objects.
And if everyone else is OK with being destructively scanned then I guess I'll have to eke out an existence as a savage. The economy can have my atoms after I'm dead.
I understand that you value the information content and I'm OK with your position.
Let's do another thought experiment then: say we're some unknown X number of years in the future, and some foreign entity/government/whatever decided it wanted the territory of the United States (it could be any country; I'm just using the USA as an example) but didn't want the people. It did, however, value the ideas, opinions, memories etc. of the American people. If said entity then destructively scanned the landmass but painstakingly copied all of the ideas, opinions, memories etc ...
Exactly. Reasonable assurance is good enough; absolute certainty isn't necessary. I'm not willing to be destructively scanned even if a copy of me thinks it's me, looks like me, and acts like me.
That said, I'm willing to accept the other stance that others take: they are reasonably convinced that destructive scanning just means they will appear somewhere else a fraction of a second later (or however long it takes). Just don't ask me to do it. And expect a bullet if you try to force me!
What do I make of his argument? Well, I don't have a PhD in physics, though I do have a bachelor's in physics/math, so my position would be the following:
Quantum physics doesn't scale up to the macro level. While swapping the two helium atoms in two billiard balls results in your not being able to tell which helium atom was which, the two billiard balls themselves can certainly be distinguished from each other. Even "teleporting" one from one place to another will not result in an identical copy, since the quantum states will all have changed just by dint of having been read by...
I think we're on the same page from a logical perspective.
My guess is the perspective taken is that of physical science vs compsci.
My guess is a compsci perspective would tend to view the two individuals as being two instances of the class of individual X. The two class instances are logically equivalent except for position.
The physical science perspective is that there are two bunches of matter near each other, with the only thing differing being the position. Basically the same scenario as two electrons with the same spin state, momentum, energy etc. bu...
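To make the class/instance framing concrete, here's a minimal Python sketch (the Person class and its fields are purely illustrative assumptions): structural equality of the content is a separate question from whether the two are the same object.

```python
from dataclasses import dataclass

@dataclass
class Person:
    memories: tuple
    position: tuple  # the only macro-level difference between the two instances

original = Person(memories=("first kiss", "physics degree"), position=(0.0, 0.0))
copy     = Person(memories=("first kiss", "physics degree"), position=(5.0, 0.0))

print(original.memories == copy.memories)  # True  -> logically equivalent content
print(original == copy)                    # False -> they differ in position
print(original is copy)                    # False -> two distinct instances either way
```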
It matters to you if you're the original and then you are killed.
You are right that they are both an instance of person X, but my argument is that this is not equivalent to them being the same person in fact, or even in law (whatever that means).
Also, when/if this comes about, I bet the law will side with me and define them as two different people in the eyes of the law. (And I'm not using this to fallaciously argue from authority, just pointing out that I strongly believe I am correct - though I'm willing to concede if there is ultimately some logical way to prov...
I understand your logic completely, but I do not buy it, because I do not agree that at the instant of copying you have one person in two locations. They are two different people: one being the original and the other being an exact copy.
OK, here's where we disagree:
Original copy A and new copy B are indeed instances of person X, but it's not a class with two instances as in CompSci 101. The class is original A, and it's B that is the instance. They are different people.
In order to make them the same person you'd need to do something like this: put some kind of high-bandwidth wifi in their heads which synchronizes memories. Then they'd be part of the same hybrid entity. But at no point are they the same person.
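For what it's worth, here's a minimal Python sketch of that synchronization idea (the classes and names are purely illustrative assumptions): two bodies sharing one memory store stay in lockstep and form one hybrid entity, whereas an unsynchronized copy diverges immediately.

```python
class MemoryStore:
    def __init__(self, memories=None):
        self.memories = list(memories or [])

class Body:
    def __init__(self, label, store):
        self.label = label
        self.store = store            # may be shared with another body, or private

    def experience(self, event):
        self.store.memories.append((self.label, event))

shared = MemoryStore(["childhood", "physics degree"])
a = Body("A", shared)
b = Body("B", shared)                 # same store -> part of the same hybrid entity
a.experience("saw a sunset")
print(b.store.memories[-1])           # ('A', 'saw a sunset') -> B "remembers" it too

c = Body("C", MemoryStore(["childhood", "physics degree"]))  # unsynchronized copy
a.experience("ate breakfast")
print(("A", "ate breakfast") in c.store.memories)            # False -> C has already diverged
```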
Come on. Don't vote me down without responding.
Here's why I conclude a risk exists: http://lesswrong.com/lw/b9/welcome_to_less_wrong/5huo?context=1#5huo
I'm talking exactly about a process that is so flawless you can't tell the difference. Where my concern comes from is that if you don't destroy the original you now have two copies. One is the original (although you can't tell the difference between the copy and the original) and the other is the copy.
Now where I'm uncomfortable is this: if we then kill the original by letting Freddy Krueger or Jason do his evil thing, then even though the copy is still alive AND is/was indistinguishable from the original, the alternative hypothesis, which I oppose, states th...
Risk avoidance. I'm uncomfortable with taking the position that, after creating a second copy and destroying the original, the copy is the original, simply because if it isn't, then the original is now dead.
Here's one: let's say that the world is a simulation AND that strongly godlike AI is possible. To all intents and purposes, even though the Bible in the simulation is provably inconsistent, the existence of a being indistinguishable from the God of such a Bible would not be ruled out, because though the inhabitants of the world are constrained by the rules of physics in their own state machines or objects or whatever, the universe containing the simulation is subject to its own set of physics and logic, which may therefore vary even inside the simulation without being detectable to you or me.
"(shrug) After the process you describe, there exist two people in identical bodies with identical memories. What conceivable difference does it make which of those people we label "me"? What conceivable difference does it make whether we label both of those people "me""
Because we already have a legal precedent. Twins. Though their memories are very limited they are legally different people. My position is rightly so.
Ha Ha. You're right. Thanks for reflecting that back to me.
Yes, if you break apart my argument, I'm saying exactly that, though I hadn't broken it down to that extent before.
The last part I disagree with, namely the assumption that I'm always better at detecting people than the AI is. Clearly I'm not, but in my own personal case I don't trust it if it disagrees with me, because of simple risk management (sketched below): if it's wrong, and it kills me and then resurrects a copy, I have experienced total loss. If it's right, then I'm still alive.
But I don't know the answer. And th...
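Here's the decision matrix I have in mind (the outcome labels are assumptions for illustration only): the only cell with total loss sits in the "accept" row, which is why I treat refusal as the safer bet.

```python
# Rows: my choice; columns: whether the "the copy is me" hypothesis is true.
payoffs = {
    ("refuse scan", "hypothesis true"):  "still alive (forgo whatever the scan offered)",
    ("refuse scan", "hypothesis false"): "still alive",
    ("accept scan", "hypothesis true"):  "still alive (as the copy)",
    ("accept scan", "hypothesis false"): "total loss: I am dead, a copy survives",
}
for (choice, world), outcome in payoffs.items():
    print(f"{choice:12s} | {world:16s} -> {outcome}")
```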
You're right. It is impossible to determine whether the current copy is the original or not.
"Disturbing how?" Yes, I would dismiss the person as being a fruitbar, of course. But if the technology existed to destructively scan an individual and copy them into a simulation, or even reconstitute them from different atoms after being destructively scanned, I'd be really uncomfortable with it. I personally would strenuously object to ever teleporting myself, or copying myself by this method into a simulation.
"edges away slowly" lol. Not any more evil...
That's a point of philosophical disagreement between us. Here's why:
Take an individual.
Then take a cell from that individual. Grow it in a nutrient bath. Force it to divide. Rinse, wash, repeat.
You create a clone of that person.
Now is that clone the same as the original? No it is not. It is a copy. Or in a natural version of this, a twin.
Now let's say technology exists to transfer memories and mind states.
After you create the clone-that-is-not-you, you then put your memories into it.
If we keep the original alive, the clone is still not you. How does killing the original QUICKLY make the clone you?
OK give me time to digest the jargon.
But is it destroying people if the simulations are the same as the original?
Isn't doing anything for us...
Really good discussion.
Would I believe? I think the answer would depend on whether I could find the original or not. I would, however, find it disturbing to be told that the copy was a copy.
And yes, if the beings are fully sentient then I agree it's ethically questionable. But since we cannot tell, it comes down to the conscience of the individual, so I guess I'm evil then.
Agreed. It's the only way we have of verifying that it's a duck.
But is the destructively scanned duck the original duck, even though it appears to be the same to all intents and purposes, when you can see the mulch that used to be the body of the original lying there beside the new copy?
While I don't doubt that many people would be OK with this, I wouldn't be, because of the lack of certainty and provability.
My difficulty with this concept goes further. Since it's not verifiable that the copy is you, even though it seems to present the same outputs to any verifiable test, what is to prevent an AI from getting around the restriction on not destroying humanity?
"Oh but the copies running in a simulation are the same thing as the originals really", protests the AI after all the humans have been destructively scanned and copied into a simulation...
You're determined to make me say LOL so you can downvote me right?
EDIT: Yes you win. OFF.
Exactly.
So "friendly" is therefore a conflation of NOT(unfriendly) AND useful rather than just simply NOT(unfriendly) which is easier.
Very good questions.
No, I wouldn't particularly care whether it was my own car that was returned to me, because it gives me utility and it's just a thing.
I'd care if my wife were kidnapped and some simulacrum given back in her stead, but I doubt I would be able to tell if it were such an accurate copy. If I knew the fake wife was fake I'd probably be creeped out, but if I didn't know, I'd just be so glad to have my "wife" back.
In the case of the simulated porn actress, I wouldn't really care if she was real because her utility for me would be similar ...
Correct. I (unlike some others) don't hold the position that a destructive upload followed by a simulated being is exactly the same being; therefore destructively scanning the porn actresses would, in my mind, be killing them. Non-destructively scanning them and then using the simulated versions for "evil purposes", however, is not killing the originals. Whether using the copies for evil purposes, even against their simulated will, is actually evil or not is debatable. I know some will take the position that the simulations could theoretically be sentien...
And I'd say that taking that step is a point of philosophy.
Consider this: I have a Dodge Durango sitting in my garage.
If I sell that Dodge Durango and buy an identical one (it passes all the same tests in exactly the same way), then is it the same Dodge Durango? I'd say no, but the point is irrelevant.
"I suppose one potential failure mode which falls into the grey territory is building an AI that just executes peoples' current volition without trying to extrapolate"
i.e. the device has to judge usefulness by some metric and then decide whether or not to execute someone's volition.
That's exactly what my issue is with trying to define a utility function for the AI. You can't. And since some people will have their utility function denied by the AI, who is to choose who gets theirs executed?
I'd prefer to shoot for a NOT(UFAI) and then trade with ...
"But an AI does need to have some utility function"
What if the "optimization of the utility function" is bounded like my own personal predilection with spending my paycheck on paperclips one time only and then stopping?
Is it sentient if it sits in a corner and thinks to itself, running simulations but won't talk to you unless you offer it a trade e.g. of some paperclips?
Is it possible that we're conflating "friendly" with "useful but NOT unfriendly" and we're struggling with defining what "useful" means?
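On the bounded-optimization point above, here's a minimal Python sketch (the cap and the toy agent loop are purely illustrative assumptions): once the one-time goal is met, the marginal utility of further action is zero, so the optimizer simply stops rather than converting everything else into paperclips.

```python
PAPERCLIP_CAP = 100  # "spend one paycheck on paperclips, then stop"

def utility(paperclips: int) -> float:
    return float(min(paperclips, PAPERCLIP_CAP))  # utility is flat beyond the cap

def act(paperclips: int) -> str:
    # Only act while acting still increases utility; otherwise halt.
    if utility(paperclips + 1) > utility(paperclips):
        return "buy one more paperclip"
    return "stop"

state = 0
while act(state) == "buy one more paperclip":
    state += 1
print(state, utility(state))  # 100 100.0 -> the agent halts at the cap
```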
Nice thought experiment.
No I probably would not consent to being non-destructively scanned so that my simulated version could be evilly manipulated.
Regardless of whether or not it's provably sentient.
A-Ha!
Therein lies the crux: you want the AI to do stuff for you.
EDIT: Oh yeah, I get you. So it's by definition evil if I coerce the catgirls by mind control. I suppose logically I can't have my cake and eat it too, since I wouldn't want my own non-sentient simulation controlled by an evil AI either.
So I guess that makes me evil. Who would have thunk it. Well, I guess strike my utility function off the list of friendly AIs. But then again, I've already said elsewhere that I wouldn't trust my own function to be the optimal one.
I doubt, however, that we'd easily find a candidate function from a single individual, for similar reasons.
More friendly to you. Yes.
Not necessarily friendly in the sense of being friendly to everyone, as we all have differing utility functions, sometimes radically so.
But I dispute the position that "if an AI doesn't care about humans in the way we want them to, it almost certainly takes us apart and uses the resources to create whatever it does care about".
Consider: a totally unfriendly AI whose main goal is explicitly the extinction of humanity, then turning itself off. For us, that's an unfriendly AI.
One, however, that doesn't kill any of us but...
Could reach the same point.
Said Eliezer agent is programmed genetically to value his own genes and those of humanity.
An artificial Eliezer could reach the conclusion that humanity is worth keeping, but is by no means obliged to come to that conclusion. By contrast, genetics determines that at least some of us humans value the continued existence of humanity.