There seem to be a lot of assumptions in the poll, but one in particular jumps out at me. I'm curious why there is no way to express that the creation of a copy might have negative value.
It seems to me that, for epistemic balance, there should be poll options which contemplate the idea that making a copy might be the "default" outcome unless some amount of work was done specifically to avoid the duplication - and then ask how much work someone would do to save a duplicate of themselves from the hypothetical harm of coming into existence.
Why is there no option like that?
I'm not sure. The first really big thing that jumped out at me was the total separateness issue. The details of how this is implemented would matter to me and probably change my opinion in dramatic ways. I can imagine various ways to implement a copy (a physical copy in "another dimension", a physical copy "very far away", with full environmental detail similarly copied out to X kilometers and the rest simulated or changed, with myself as an isolated Boltzmann brain, etc., etc.). Some of them might be good, some might be bad, and some might require informed consent from a large number of people.
For example, I think it would be neat to put a copy of our solar system ~180 degrees around the galaxy so that we (and they) have someone interestingly familiar with whom to make contact thousands of years from now. That's potentially a kind of "non-interacting copy", but my preference for it grows from the interactions I expect to happen far away in time and space. Such copying basically amounts to "colonization of space" and seems like an enormously good thing from that perspective.
I think simulationist metaphysics grows out of intuitions from dreamin...
I think I might end up disappointing because I have almost no actual data...
By an instrument I meant a psychological instrument, probably initially just a quiz, and if that didn't work, then perhaps some Stroop-like measurements of millisecond delays when answering questions on a computer.
Most of my effort went into working out a strategy for iterative experimental design and brainstorming questions for the very first draft of the questionnaire. I didn't really have a good theory about which pre-existing dispositions or "mental contents" might correlate with answering one way or the other.
I thought it would be funny if people who "believed in free will" in the manner of Martin Gardner (an avowed mysterian) turned out to be mechanically predictable on the basis of inferring that they are philosophically confused in ways that lead to two-boxing. Gardner said he would two-box... but also predicted that it was impossible for anyone to successfully predict that he would two-box.
In his 1974 "Mathematical Games" article in Scientific American, he ended with a question:
...But has either side really done more than just repeat its case "loudly and slowly"?
Would I sacrifice a day of my life to ensure that (if that could be made to mean something) a second version of me would live a life totally identical to mine?
No. What I value is that this present collection of memories and plans that I call "me" should, in future, come to have novel and pleasant experiences.
Further, using the term "copy" as you seem to use it strikes me as possibly misleading. We make a copy of something when we want to preserve it against loss of the original. Given your stipulations of an independently experienced wo...
"It all adds up to normality."
Only where you explain what's already normal. Where you explain counterintuitive, unnatural situations, it doesn't have to add up to normality.
I went straight to the poll without reading the post carefully enough to see that "non-interacting" was specified.
My first interpretation of this was "completely non-interacting", which has no real value to me (things I can't interact with don't 'exist' by my definition of exist); a copy that I merely would not interact with on a practical level might have some value to me.
Anyway, I answered the poll based on an interactive interpretation, so there is at least one spurious result, depending on how you plan to interpret all this.
The mathematical details vary too much with the specific circumstances for me to estimate in terms of days of labor. Important factors to me include risk mitigation and securing a greater proportion of the negentropy of the universe for myself (and things I care about). Whether other people choose to duplicate themselves (which in most plausible cases would affect negentropy consumption) would matter. Non-duplication would then represent cooperation with other potential trench-diggers.
What about using compressibility as a way of determining the value of the set of copies?
In computer science, there is a concept known as deduplication (http://en.wikipedia.org/wiki/Data_deduplication) which is related to determining the value of copies of data. Normally, if you have 100MB of incompressible data (e.g. an image or an upload of a human), it will take up 100MB on a disk. If you make a copy of that file, a standard computer system will require a total of 200MB to track both files on disk. A smart system that uses deduplication will see that they ar...
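To make the deduplication analogy concrete, here is a minimal sketch of my own (a toy in-memory "disk", not anything taken from the linked article) showing that extra bit-identical copies barely increase what is physically stored:

```python
import hashlib

# Toy illustration of deduplication: bit-identical copies share one stored blob.
store = {}     # content hash -> bytes actually kept on the "disk"
catalog = {}   # file name -> content hash (a cheap pointer, not a second copy)

def save(name, data):
    digest = hashlib.sha256(data).hexdigest()
    store.setdefault(digest, data)   # only the first copy consumes real space
    catalog[name] = digest

payload = bytes(range(256)) * 4096   # ~1MB placeholder standing in for the 100MB file

for name in ("original", "copy1", "copy2"):
    save(name, payload)

logical = len(catalog) * len(payload)                  # what the user sees: 3 files
physical = sum(len(blob) for blob in store.values())   # what is stored: 1 blob
print(f"logical bytes: {logical}, physical bytes: {physical}")
```

On that accounting, the marginal cost of each additional identical copy is roughly the size of a pointer, which is one way of cashing out the compressibility intuition above.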
This strikes me as being roughly similar to people's opinions of the value of having children who outlive them. As the last paragraph of the OP points out, it doesn't really matter if it's a copy of me or not, just that it's a new person whose basic moral motivations I support, but whom I cannot interact with.
Having their child hold to moral motivations they agree with is a major goal of most parents. Having their child outlive them is another (assuming they don't predict a major advance in lifespan-extending technology soon), and that's where the non-i...
I would place 0 value on a copy that does not interact with me. This might be odd, but a copy of me that is non-interacting is indistinguishable from a copy of someone else that is non-interacting. Why does it matter that it is a copy of me?
It seems that no one who has commented so far is interested in copies at all, under the conditions stipulated (identical and non-interacting). I'm not interested myself. If anyone is interested, could you tell us about it? Thanks.
economist's question: "compared to what?"
If they can't interact with each other, just experience something, I'd rather have copies of me than of most other people. If we CAN interact, then a mix of mes and others is best - diversity has value in that case.
If the copies don't diverge, their value is zero.
They are me. We are one person, with one set of thoughts, one set of emotions etc.
I don't think I would place more value on lock-step copies. I would love to have lots of copies of me, because then we could all do different things, and I'd not have to wonder whether I could have been a good composer, or writer, or what have you. And we'd probably form a commune and buy a mansion and have other fun economies of scale. I have observed that identical twins seem to get a lot of value out of having a twin.
As to the "value" of those copies, this depends on whether I'm speaking of "value" in the social sense, or the pers...
I'm still tentatively convinced that existence is what mathematical possibility feels like from the inside, and that creating an identical non-interacting copy of oneself is (morally and metaphysically) identical to doing nothing. Considering that, plus the difficulty* of estimating which of a potentially infinite number of worlds we're in, including many in which the structure of your brain is instantiated but everything you observe is hallucinated or "scripted" (similar to Boltzmann brains), I'm beginning to worry that a fully fact-based conseq...
The question is awfully close to the reality juice of many worlds. We seem to treat reality juice as probability for decision theory, and thus we should value the copies linearly, if they are as good as the copies of QM.
I want at least 11 copies of myself with full copy-copy / world-world interaction. This is a way of scaling myself. I'd want the copies to diverge -- actually that's the whole point (each copy handles a different line of work). I'm mature enough, so I'm quite confident that the copies won't diverge to the point where their top-level values / goals would become incompatible, so I expect the copies to cooperate.
As for how much I'm willing to work for each copy, that's a good question. A year of pickaxe trench-digging seems to be way too cheap and easy for a f...
It depends on external factors, since it would primarily be a way of changing anthropic probabilities (I follow Bostrom's intuitions here). If I today committed to copy myself an extra time whenever something particularly good happened to me (or whenever the world at large took a positive turn), I'd expect to experience a better world from now on.
If I couldn't use copying in that way, I don't think it would be of any value to me.
This question is no good. Would you choose to untranslatable-1 or untranslatable-2? I very much doubt that reliable understanding of this can be reached using human-level philosophy.
But the stipulation as stated leads to major problems - for instance:
each copy existing in its own computational world, which is identical to yours with no copy-copy or world-world interaction
implies that I'm copying the entire world full of people, not just me. That distorts the incentives.
Edit: And it also implies that the copy will not be useful for backup, as whatever takes me out is likely to take it out.
No value at all: to answer "how valuable do you think creating extra identical, non-interacting copies of yourself is? (each copy existing in its own computational world, which is identical to yours with no copy-copy or world-world interaction)"
Existence of worlds that are not causally related to me should not influence my decisions (I learn from the past and I teach the future: my world cone is my responsibility). I decide by considering whether the world that I create/allow my copy (or child) to exist in is better off (according to myself -- my...
With this kind of question I like to try to disentangle 'second-order effects' from the actual core of what's being asked, namely whether the presence of these copies is considered valuable in and of itself.
So for instance, someone might argue that "lock-step copies" in a neighboring galaxy are useful as back-ups in case of a nearby gamma-ray burst or some other catastrophic system crash. Or that others in the vicinity who are able to observe these "lock-step copies" without affecting them will nevertheless benefit in some way (so, the ...
Can you specify whether the copy of me I'm working to create is Different Everett-Branch Me or Two Days In The Future Me? That will affect my answer, as I have a bit of a prejudice. I know it's somewhat inconsistent, but I think I'm an Everett-Branch-ist.
It's a difficult question to answer without context. I would certainly work for some trivial amount of time to create a copy of myself, if only because there isn't such a thing already. It would be valuable to have a copy of a person, if there isn't such a thing yet. And it would be valuable to have a copy of myself, if there isn't such a thing yet. After those are met, I think there are clearly diminishing returns, at least because you can't cash in on the 'discovery' novelty anymore.
If my copies can make copies of themselves, then I'm more inclined to put in a year's work to create the first one. Otherwise, I'm no altruist.
In the future, it may be possible for you to scan your own brain and create copies of yourself. With the power of a controllable superintelligent AI, it may even be possible to create very accurate instances of your past self (and you could take action today or in the near future to make this easier by using lifelogging tools such as these glasses).
So I ask Less Wrong: how valuable do you think creating extra identical, non-interacting copies of yourself is? (each copy existing in its own computational world, which is identical to yours with no copy-copy or world-world interaction)
For example, would you endure a day's hard labor to create an extra self-copy? A month? A year? Consider the hard labor to be digging a trench with a pickaxe, with a harsh taskmaster who can punish you if you slack off.
Do you think having 10 copies of yourself made in the future is 10 times as good as having 1 copy made? Or does your utility grow sub-linearly in the number of copies?
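(To illustrate what "sub-linear" could mean here, a toy sketch of my own; the particular curves are arbitrary and not something the poll assumes:)

```python
import math

# Two toy utility curves over the number of copies n (purely illustrative):
#   linear:      the 10th copy is worth as much as the 1st
#   logarithmic: each extra copy is worth less than the one before
def linear_utility(n):
    return float(n)

def sublinear_utility(n):
    return math.log1p(n)

for n in (1, 10):
    print(n, linear_utility(n), round(sublinear_utility(n), 2))
```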
Last time I spoke to Robin Hanson, he was extremely keen on having a lot of copies of himself created (though I think he was prepared for these copies to be emulant-wage-slaves).
I have created a poll for LW to air its views on this question; then, in my next post, I'll outline and defend my answer, and lay out some fairly striking implications that this has for existential risk mitigation.
For those on a hardcore-altruism trip, you may substitute any person or entity that you find more valuable than your own good self: would you sacrifice a day of this entity's life for an extra copy? A year? etc.
UPDATE: Wei Dai has asked this question before, in his post "The moral status of independent identical copies" - though his post focuses more on lock-step copies that are identical over time, whereas here I am interested in both lock-step identical copies and statistically identical copies (a statistically identical copy has the same probability distribution of futures as you do).