wstrinz comments on Poll: What value extra copies? - Less Wrong
You said pretty much what I was thinking. My (main) motivation for copying myself would be to make sure a version of the matter/energy pattern that is wstrinz remains instantiated in the world in the event that one of us gets run over by a bus. If the copy has to stay completely separate from me, I don't really care about it (and I imagine it doesn't really care about me).
As with many uploading/anthropics problems, I find abusing Many Worlds to be a good way to get at this. Does it make me especially happy that there's a huge number of other me's in other universes? Not really. Would I give you anything, time or money, if you could credibly claim to be able to produce another universe with another me in it? Probably not.
Yep, I gave the same answer. I only care about myself, not copies of myself, high-minded rationalizations notwithstanding. "It all adds up to normality."
Only where you explain what's already normal. Where you explain counterintuitive unnatural situations, it doesn't have to add up to normality.
Should I take it as an admission that you don't actually know whether to choose torture over dust specks, and would rather delegate this question to the FAI?
All moral questions should be delegated to FAI, whenever that's possible, but this is trivially so and doesn't address the questions.
What I'll choose will be based on some mix of moral intuition, heuristics about the utilitarian shape of morality, and expected utility estimates. But that would be a matter of making the decision, not a matter of obtaining interesting knowledge about the actual answers to the moral questions.
I don't know whether torture or specks are preferable. I can offer some arguments that torture is better, and some arguments that specks are better, but that doesn't give much hope of eventually figuring out the truth, unlike with more accessible questions in natural science, such as the speed of light. I can say that if given the choice, I'd choose torture, based on what I know, but I'm not sure it's the right choice, and I don't know of any promising strategy for learning more about which choice is the right one. And thus I'd prefer to leave such questions alone, so long as the corresponding decisions don't actually need to be made.
I don't see what these thought experiments can teach me.
As has happened several times before, you seem to take as obvious some things that I don't find obvious at all, and which would make nice discussion topics for LW.
How can you tell that some program is a fair extrapolation of your morality? If we create a program that gives 100% correct answers to all "realistic" moral questions that you deal with in real life, but gives grossly unintuitive and awful-sounding answers to many "unrealistic" moral questions like Torture vs Dustspecks or the Repugnant Conclusion, would you force yourself to trust it over your intuitions? Would it help if the program were simple? What else?
I admit I'm confused on this issue, but feel that our instinctive judgements about unrealistic situations convey some non-zero information about our morality that needs to be preserved, too. Otherwise the FAI risks putting us all into a novel situation that we will instinctively hate.
This is the main open question of FAI theory. (Although FAI doesn't just extrapolate your revealed reliable moral intuitions, it should consider at least the whole mind as source data.)
I don't suppose agreeing on more reliable moral questions is an adequate criterion (sufficient condition), though I'd expect agreement on such questions to more or less hold. FAI needs to be backed by solid theory, explaining why exactly its answers are superior to moral intuition. That theory is what would force one to accept even counter-intuitive conclusions. Of course, one should be careful not to be fooled by a wrong theory, but being fooled by your own moral intuition is also always a possibility.
Maybe they do, but how much would you expect to learn about quasars from observations made by staring at the sky with your eyes?
We need better methods that don't involve relying exclusively on vanilla moral intuitions. What kinds of methods would work, I don't know, but I do know that moral intuition is not the answer. FAI refers to the successful completion of this program, and so represents answers more reliable than moral intuition.
If by "solid" you mean "internally consistent", there's no need to wait - you should adopt expected utilitarianism now and choose torture. If by "solid" you mean "agrees with our intuitions about real life", we're back to square one. If by "solid" you mean something else, please explain what exactly. It looks to me like you're running circles around the is-ought problem without recognizing it.
How could I possibly mean "internally consistent"? Being consistent conveys no information about a concept, aside from its non-triviality, and so can't be a useful characteristic. And choosing specks is also "internally consistent". Maybe I like specks in others' eyes.
FAI theory should be reliably convincing and verifiable, preferably on the level of mathematical proofs. FAI theory describes how to formally define the correct answers to moral questions, but doesn't at all necessarily help in intuitive understanding of what these answers are. It could be a formalization of "what we'd choose if we were smarter, knew more, had more time to think", for example, which doesn't exactly show how the answers look.
Then the FAI risks putting us all in a situation we hate, which we'd love if only we were a bit smarter.
Seconding Vladimir_Nesov's correction. For context, here is the original quote:
The phrase was used in the novel multiple times, and less confusingly so on other occasions. For example:
I apologize - I haven't read the book.