- Our not wanting to die is a bit of irrational behavior selected for by evolution. The universe doesn’t care if you’re there or not. The contrasting idea that you are the universe is mystical, not rational.
- The idea that you are alive “now” but will be dead “later” is irrational. Time is just a persistent illusion according to relativistic physics. You are alive and dead, period.
- A cyber-replica is not you. If one were made and stood next to you, you would still not consent to be shot.
- Ditto a meat replica.
- If you believe the many worlds model of quantum physics is true (Eliezer does), then there are already a virtually infinite number of replicas of you, so why bother making another one?
Terminal values and preferences are not rational or irrational. They simply are your preferences. I want a pizza. If I get a pizza, that won't make me consent to get shot. I still want a pizza. There are a virtually infinite number of me that DO have a pizza. I still want a pizza. The pizza, from a certain point of view, won't exist, and neither will I, by the time I get to eat some of it. I still want a pizza, damn it.
Of course, if you think all of that is irrational, then by all means don't order the pizza. More for me.
I do perform such inferences in similar situations. But what likelihood ratio did you place on the evidence "User:Clippy agreed to pay 50,000 USD for a 50-year-deferred gain of a sub-planet's mass of paperclips" with respect to the AI/NI hypotheses?
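For concreteness, here is a minimal sketch of the odds-form update I have in mind. The prior odds and the likelihood ratio below are purely illustrative placeholders (nothing in this thread pins them down); only the arithmetic is the point.

```python
# Bayes' rule in odds form: posterior odds = prior odds * likelihood ratio.
# All numbers are made up for illustration.

prior_odds_ai_vs_ni = 1 / 1000  # hypothetical prior odds that the poster is a paperclip-maximizing AI rather than a human role-player
likelihood_ratio = 20           # hypothetical P(pays 50,000 USD for deferred paperclips | AI) / P(same action | NI)

posterior_odds = prior_odds_ai_vs_ni * likelihood_ratio
posterior_prob = posterior_odds / (1 + posterior_odds)

print(f"posterior odds AI:NI = {posterior_odds:.3f}")
print(f"posterior P(AI) = {posterior_prob:.2%}")
```

With these made-up numbers the evidence shifts the odds by a factor of 20 but still leaves the AI hypothesis improbable; the question is what factor you actually assigned.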
I don't understand the relevance of CLIP (superior protocol though it is), nor do I understand the inferential difficulty on this matter.
Do you understand why I would prefer that clippys continue to increase universe-wide paperclippage? Do you understand why I would regard a clippy's statement about its values in my language as non-weak evidence in favor of the hypothesis that it holds the purported values? Do you understand why I would find it unusual that a clippy would not want to make paperclips?
If so, it should not be difficult to understand why I would be troubled and perplexed at a clippy stating that it wished for irreversible cessation of paperclip-making abilities.
While I am vaguely aware of the whole "money for paperclips" thing that you and... Kevin, was it?... have going on, I am not sufficiently familiar with its details to assign it a coherent probability in either the NI or AI scenario. That said, an agent's willingness to spend significant sums of money for the credible promise of the creation of a quantity of paperclips far in excess of any human's actual paperclip requirements is pretty strong evidence that the agent is a genuine paperclip-maximizer. As for whether a genuine paperclip-maximizer i