There seem to be a lot of assumptions in the poll, but one in particular jumps out at me. I'm curious why there is no way to express that the creation of a copy might have negative value.
It seems to me that, for epistemic balance, there should be poll options that contemplate the idea that making a copy might be the "default" outcome unless some amount of work was done to specifically avoid the duplication - and then ask how much work someone would do to save a duplicate of themselves from the hypothetical harm of coming into existence.
Why is there no option like that?
I'm not sure. The first really big thing that jumped out at me was the total separateness issue. The details of how this is implemented would matter to me and probably change my opinion in dramatic ways. I can imagine various ways to implement a copy (physical copy in "another dimension", physical copy "very far away", with full environmental detail similarly copied out to X kilometers and the rest simulated or changed, with myself as an isolated Boltzmann brain, etc.). Some of them might be good, some might be bad, and some might require informed consent from a large number of people.
For example, I think it would be neat to put a copy of our solar system ~180 degrees around the galaxy so that we (and they) have someone interestingly familiar with whom to make contact thousands of years from now. That's potentially a kind of "non-interacting copy", but my preference for it grows from the interactions I expect to happen far away in time and space. Such copying basically amounts to "colonization of space" and seems like an enormously good thing from that perspective.
I think simulationist metaphysics grows out of intuitions from dreamin...
Would I sacrifice a day of my life to ensure that (if that could be made to mean something) a second version of me would live a life totally identical to mine?
No. What I value is that this present collection of memories and plans that I call "me" should, in future, come to have novel and pleasant experiences.
Further, using the term "copy" as you seem to use it strikes me as possibly misleading. We make a copy of something when we want to preserve it against loss of the original. Given your stipulations of an independently experienced wo...
"It all adds up to normality."
Only where you explain what's already normal. Where you explain counterintuitive unnatural situations, it doesn't have to add up to normality.
I went straight to the poll without a careful enough reading of the post before seeing "non-interacting" specified.
My first interpretation of this is completely non-interacting which has no real value to me (things I can't interact with don't 'exist' for my definition of exist); a copy that I would not interact with on a practical level might have some value to me.
Anyway, I answered the poll based on an interactive interpretation, so there is at least one miscategorized result, depending on how you plan to interpret all this.
The mathematical details vary too much with the specific circumstances for me to estimate in terms of days of labor. Important factors to me include risk mitigation and securing a greater proportion of the negentropy of the universe for myself (and things I care about). Whether other people choose to duplicate themselves (which in most plausible cases will impact on negentropy consumption) would matter. Non-duplication would then represent a cooperation with other potential trench diggers.
What about using compressibility as a way of determining the value of the set of copies?
In computer science, there is a concept known as deduplication (http://en.wikipedia.org/wiki/Data_deduplication) which is related to determining the value of copies of data. Normally, if you have 100MB of incompressible data (e.g. an image or an upload of a human), it will take up 100MB on a disk. If you make a copy of that file, a standard computer system will require a total of 200MB to track both files on disk. A smart system that uses deduplication will see that they ar...
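The idea in the comment above can be sketched in a few lines. This is a toy chunk-level deduplicator (the 4 KB chunk size and SHA-256 fingerprinting are my own illustrative assumptions, not anything the comment specifies): identical chunks are fingerprinted and stored only once, so a second identical file costs essentially nothing.

```python
import hashlib
import os

def stored_size(files, chunk_size=4096):
    """Estimate disk usage under chunk-level deduplication:
    a chunk whose fingerprint has been seen before is not stored again."""
    seen = set()
    total = 0
    for data in files:
        for i in range(0, len(data), chunk_size):
            chunk = data[i:i + chunk_size]
            digest = hashlib.sha256(chunk).digest()
            if digest not in seen:
                seen.add(digest)
                total += len(chunk)
    return total

# A 100 KB incompressible "file": a naive store of two identical
# copies needs twice the space; a deduplicating store needs only one.
blob = os.urandom(100 * 1024)
naive = 2 * len(blob)
deduped = stored_size([blob, blob])
```

Under this sketch, `deduped` equals the size of a single copy, which is exactly the intuition the comment is reaching for: a bit-identical copy adds almost no information.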
This strikes me as being roughly similar to people's opinions of the value of having children who outlive them. As the last paragraph of the OP points out, it doesn't really matter if it's a copy of me or not, just that it's a new person whose basic moral motivations I support, but whom I cannot interact with.
Having their child hold to moral motivations they agree with is a major goal of most parents. Having their child outlive them is another (assuming they don't predict a major advance in lifespan-extending technology soon), and that's where the non-i...
I would place 0 value on a copy that does not interact with me. This might be odd, but a copy of me that is non-interacting is indistinguishable from a copy of someone else that is non-interacting. Why does it matter that it is a copy of me?
It seems everyone who commented so far isn't interested in copies at all, under the conditions stipulated (identical and non-interacting). I'm not interested myself. If anyone is interested, could you tell us about it? Thanks.
economist's question: "compared to what?"
If they can't interact with each other, just experience something, I'd rather have copies of me than of most other people. If we CAN interact, then a mix of mes and others is best - diversity has value in that case.
If the copies don't diverge their value is zero.
They are me. We are one person, with one set of thoughts, one set of emotions etc.
I don't think I would place more value on lock-step copies. I would love to have lots of copies of me, because then we could all do different things, and I'd not have to wonder whether I could have been a good composer, or writer, or what have you. And we'd probably form a commune and buy a mansion and have other fun economies of scale. I have observed that identical twins seem to get a lot of value out of having a twin.
As to the "value" of those copies, this depends on whether I'm speaking of "value" in the social sense, or the pers...
I'm still tentatively convinced that existence is what mathematical possibility feels like from the inside, and that creating an identical non-interacting copy of oneself is (morally and metaphysically) identical to doing nothing. Considering that, plus the difficulty* of estimating which of a potentially infinite number of worlds we're in, including many in which the structure of your brain is instantiated but everything you observe is hallucinated or "scripted" (similar to Boltzmann brains), I'm beginning to worry that a fully fact-based conseq...
The question is awfully close to the reality juice of many worlds. We seem to treat reality juice as probability for decision theory, and thus we should value the copies linearly, if they are as good as the copies in QM.
I want at least 11 copies of myself with full copy-copy / world-world interaction. This is a way of scaling myself. I'd want the copies to diverge -- actually that's the whole point (each copy handles a different line of work.) I'm mature enough, so I'm quite confident that the copies won't diverge to the point where their top-level values / goals would become incompatible, so I expect the copies to cooperate.
As for how much I'm willing to work for each copy, that's a good question. A year of pickaxe trench-digging seems to be way too cheap and easy for a f...
It depends on external factors, since it would primarily be a way of changing anthropic probabilities (I follow Bostrom's intuitions here). If I today committed to copy myself an extra time whenever something particularly good happened to me (or whenever the world at large took a positive turn), I'd expect to experience a better world from now on.
If I couldn't use copying in that way, I don't think it would be of any value to me.
This question is no good. Would you choose to untranslatable-1 or untranslatable-2? I very much doubt that reliable understanding of this can be reached using human-level philosophy.
But the stipulation as stated leads to major problems - for instance:
each copy existing in its own computational world, which is identical to yours with no copy-copy or world-world interaction
implies that I'm copying the entire world full of people, not just me. That distorts the incentives.
Edit: And it also implies that the copy will not be useful for backup, as whatever takes me out is likely to take it out.
No value at all: to answer "how valuable do you think creating extra identical, non-interacting copies of yourself is? (each copy existing in its own computational world, which is identical to yours with no copy-copy or world-world interaction)"
Existence of worlds that are not causally related to me should not influence my decisions (I learn from the past and I teach the future: my world cone is my responsibility). I decide by considering whether the world that I create/allow my copy (or child) to exist in is better off (according to myself -- my...
With this kind of question I like to try to disentangle 'second-order effects' from the actual core of what's being asked, namely whether the presence of these copies is considered valuable in and of itself.
So for instance, someone might argue that "lock-step copies" in a neighboring galaxy are useful as back-ups in case of a nearby gamma-ray burst or some other catastrophic system crash. Or that others in the vicinity who are able to observe these "lock-step copies" without affecting them will nevertheless benefit in some way (so, the ...
Can you specify if the copy of me I'm working to create is Different Everett-Branch Me or Two Days In The Future Me? That will affect my answer, as I have a bit of a prejudice. I know it's somewhat inconsistent, but I think I'm an Everett-Branch-ist.
It's a difficult question to answer without context. I would certainly work for some trivial amount of time to create a copy of myself, if only because there isn't such a thing already. It would be valuable to have a copy of a person, if there isn't such a thing yet. And it would be valuable to have a copy of myself, if there isn't such a thing yet. After those are met, I think there are clearly diminishing returns, at least because you can't cash in on the 'discovery' novelty anymore.
If my copies can make copies of themselves, then I'm more inclined to put in a year's work to create the first one. Otherwise, I'm no altruist.
I think I might end up disappointing because I have almost no actual data...
By an instrument I meant a psychological instrument, probably initially just a quiz and if that didn't work then perhaps some stroop-like measurements of millisecond delay when answering questions on a computer.
Most of my effort went into working out a strategy for iterative experimental design and brainstorming questions for the very first draft of the questionnaire. I didn't really have a good theory about what pre-existing dispositions or "mental contents" might correlate with dispositions one way or the other.
I thought it would be funny if people who "believed in free will" in the manner of Martin Gardner (an avowed mysterian) turned out to be mechanically predictable on the basis of inferring that they are philosophically confused in ways that lead to two-boxing. Gardner said he would two box... but also predicted that it was impossible for anyone to successfully predict that he would two box.
In his 1974 "Mathematical Games" article in Scientific American he ended with a question:
But has either side really done more than just repeat its case "loudly and slowly"? Can it be that Newcomb's paradox validates free will by invalidating the possibility, in principle, of a predictor capable of guessing a person's choice between two equally rational actions* with better than 50% probability?
In his post script to the same article, reprinted in The Night Is Large he wrote:
It is my view that Newcomb's predictor, even if accurate only 51% of the time, forces a logical contradiction that makes such a prediction, like Russell's barber, impossible. We can avoid the contradiction arising from two different "shoulds" (should you take one or two boxes?) by stating the contradiction as follows. One flawless argument implies that the best way to maximize your reward is to take only the closed box. Another flawless argument implies that the best way to maximize your reward is to take both boxes. Because the two conclusions are contradictory, the prediction cannot be even probably valid. Faced with a Newcomb decision, I would share the suspicions of Max Black and others that I was either the victim of a hoax or of a badly controlled experiment that had yielded false data about the predictor's accuracy. On this assumption, I would take both boxes.
This obviously suggests a great opportunity for falsification by rolling up one's sleeves and just doing it. But I didn't get very far...
One reason I didn't get very far is that I was a very poor college student, and I had a number of worries about ecological validity if there wasn't really some money on the line - money I couldn't put up.
A quick and dirty idea I had to just get moving was to just get a bunch of prefab psych instruments (like the MMPI and big five stuff but I tried tracking down other things like religious belief inventories and such) and then also make up a Newcomb's quiz of my own, that explained the situation, had some comprehension questions, and then asked for "what would you do".
The Newcomb's quiz would just be "one test among many", but I could score the quizzes and come back to give the exact same Newcomb's quiz a second time with a cover sheet explaining that the answer to the final question was actually going to determine payoffs for the subject. All the other tests would give a plausible reason that the prediction might be possible, act as a decoy (soliciting an unvarnished Newcomb's opinion because it wouldn't leap out), and provide fascinating side material to see what might be correlated with opinions about Newcomb's paradox.
This plan foundered on my inability to find any other prefab quizzes. I had thought, you know?... science? ...openness? But in my context at that time and place (with the internet not nearly as mature as it is now, not having the library skills I now have, and so on) all my attempts to acquire such tests were failures.
One of the things I realized is that the claim about the accuracy might substantially change the behavior of the subject, so I potentially had a chicken-and-egg problem - even nonverbals could influence things as I handed over the second set of papers claiming success rates of 1%, 50%, or 99% to help explore the stimulus-reaction space... it would be tricky. I considered eventually bringing in some conformity experiment stuff, like with confederates who one-box or two-box in a way the real subject could watch and maybe be initially fooled by, but that was just getting silly, given my resources.
Another issue is that, if a subject of prediction isn't sure what the predictor may have determined about their predicted action, it seems plausible that the very first time they faced the situation they might have a unique opportunity to do something moderately creative, like flipping a coin, having it tell them to two-box, and coming out with both prizes. So one of the questions I wanted to stick in was something to very gently probe the possibility that the person would "go random" like this and optimize over this possibility. Do you test for this propensity in advance? How? How without suggesting the very possibility?
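Why "going random" is such a headache for an experimenter can be shown with a toy simulation (entirely my own illustration, not part of the comment's design): against a subject who decides by an independent fair coin, any fixed predictor strategy matches the subject's choice only about half the time.

```python
import random

def predictor_accuracy(predict, n=10_000, seed=0):
    """Fraction of trials in which the predictor's guess matches a
    coin-flipping subject's choice between one-boxing and two-boxing."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n):
        guess = predict()                    # predictor commits first
        choice = rng.choice(["one", "two"])  # subject "goes random"
        hits += (guess == choice)
    return hits / n

# A predictor that always guesses "one-box" (any fixed rule behaves
# the same): against an independent fair coin it can't beat ~50%.
acc = predictor_accuracy(lambda: "one")
```

This is why probing for the coin-flip propensity matters: one such subject in the pool caps any instrument's measurable accuracy on that subject at chance.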
This also raises the interesting question about iterated prediction. It's one thing to predict what a smart 12-year-old who has just been introduced to the paradox will do, a different thing to give the test to people who have publicly stated what they would do, and still a different thing to run someone through the system five times in a row so the system and the subject start gaining mutually reflective information (for example, the results on the fifth attempt would probably wash out any information from confederates and give the subject first-hand experience with the success rate, but it creates all kinds of opportunities for the instrument to be hacked by a clever subject).
Or, another angle: what about doing the experiment on a group of people who get to talk about it and watch each other as they go through? Do people influence each other on this subject? Would the instrument have to know what the social context was to predict subject behavior?
This leads to the conclusion that I'd probably have to add some metadata to my instrument so that it could ask the question "how many times have you seen this specific questionnaire?" or "when was the first time you heard about Newcomb's paradox?" and possibly also have the person giving the questionnaire fill in some metadata about recent history of previous attempts and/or the experimenter's subjective estimate of the answers the person should give to familiarity questions.
Another problem was simply finding people with the patience to take the quizzes :-P
I ended up never having a final stable version of the quiz, and a little bit after that I became way more interested in complex systems theory and then later more practical stuff like machine learning and bioinformatics and business models and whatnot - aiming for domain expertise in AI and nano for "save the world" purposes in the coming decades.
I think I did the right thing by leaving the academic psych track, but I was so young and foolish then that I'm still not sure. In any case, I haven't seriously worked on Newcomb's paradox in almost a decade.
Nowadays, I have a sort of suspicion that upon seeing Newcomb's paradox, some people see that the interaction with the predictor is beneficial to them and that it would be good if they could figure out some way to get the most possible benefit from the situation, which would involve at least being predicted to one-box. Then it's an open question as to whether it's possible (or moral) to "cheat" and two-box on top of that.
So the suspicious part of this idea is that a lot of people's public claims in this area are a very, very geeky form of signaling, with one-boxing being a way to say "It's simple: I want to completely win... but I won't cheat". I think the two-box choice is also probably a matter of signaling, and it functions as a way to say something like "I have put away childish aspirations of vaguely conceived get-rich-quick schemes, accepted that in the real world everyone can change their mind at any moment, and am really trustworthy because I'm neither lean nor hungry nor trying to falsely signal otherwise".
Like... I bet C.S. Lewis would have two-boxed. The signaling theory reminds me of his version of moral jujitsu, though personally, I still one box - I want to win :-P
In the future, it may be possible for you to scan your own brain and create copies of yourself. With the power of a controllable superintelligent AI, it may even be possible to create very accurate instances of your past self (and you could take action today or in the near future to make this easier by using lifelogging tools such as these glasses).
So I ask Less Wrong: how valuable do you think creating extra identical, non-interacting copies of yourself is? (each copy existing in its own computational world, which is identical to yours with no copy-copy or world-world interaction)
For example, would you endure a day's hard labor to create an extra self-copy? A month? A year? Consider the hard labor to be digging a trench with a pickaxe, with a harsh taskmaster who can punish you if you slack off.
Do you think having 10 copies of yourself made in the future is 10 times as good as having 1 copy made? Or does your utility in copies drop off sub-linearly?
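The linear-versus-sublinear question above can be made concrete with toy utility functions. The logarithmic form for the sublinear case is purely my own illustrative choice of a concave function; the post itself doesn't commit to any particular shape.

```python
import math

def linear_utility(n, per_copy=1.0):
    # Each extra copy is worth exactly as much as the first.
    return per_copy * n

def sublinear_utility(n):
    # Diminishing returns: the 10th copy adds much less than the 1st.
    # log(1 + n) is one arbitrary concave choice, for illustration only.
    return math.log1p(n)

# Under linear utility, 10 copies are exactly 10x as good as 1;
# under this log toy model, only about 3.5x as good.
ratio_linear = linear_utility(10) / linear_utility(1)
ratio_sublinear = sublinear_utility(10) / sublinear_utility(1)
```

A poll answer like "a day for the first copy but nothing for the tenth" corresponds to a sharply concave utility in the number of copies; "a day per copy, indefinitely" corresponds to the linear case.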
Last time I spoke to Robin Hanson, he was extremely keen on having a lot of copies of himself created (though I think he was prepared for these copies to be emulant-wage-slaves).
I have created a poll for LW to air its views on this question, then in my next post I'll outline and defend my answer, and lay out some fairly striking implications that this has for existential risk mitigation.
For those on a hardcore-altruism trip, you may substitute any person or entity that you find more valuable than your own good self: would you sacrifice a day of this entity's life for an extra copy? A year? etc.
UPDATE: Wei Dai has asked this question before, in his post "The moral status of independent identical copies" - though his post focuses more on lock-step copies that are identical over time, whereas here I am interested in both lock-step identical copies and statistically identical copies (a statistically identical copy has the same probability distribution of futures as you do).