- Our not wanting to die is a bit of irrational behavior selected for by evolution. The universe doesn’t care if you’re there or not. The contrasting idea that you are the universe is mystical, not rational.
- The idea that you are alive “now” but will be dead “later” is irrational. Time is just a persistent illusion according to relativistic physics. You are alive and dead, period.
- A cyber-replica is not you. If one were made and stood next to you, you would still not consent to be shot.
- Ditto a meat replica.
- If you believe the many worlds model of quantum physics is true (Eliezer does), then there are already a virtually infinite number of replicas of you, so why bother making another one?
Terminal values and preferences are not rational or irrational. They simply are your preferences. I want a pizza. If I get a pizza, that won't make me consent to get shot. I still want a pizza. There are a virtually infinite number of me that DO have a pizza. I still want a pizza. From a certain point of view, the pizza won't exist, and neither will I, by the time I get to eat some of it. I still want a pizza, damn it.
Of course, if you think all of that is irrational, then by all means don't order the pizza. More for me.
Is dying bad for all intelligent agents, or just for humans (presumably due to details of our evolutionary heritage)?
I don't think it is a universal. Consider an intelligent paperclip maximizer which has the ability to create additional paperclip-maximizing agents (at the cost of some resources that might otherwise have gone into paperclip manufacture, to be sure). Assume the agent was constructed using now-obsolete technology and is less productive than the newer agents. The agent calculates, at some point, that the cause of paperclip production is best furthered if he is dismantled and his parts used as resources for the production of new paperclips and paperclip-maximizing agents.
He tries to determine whether anything important is lost by his demise. His values, of course, but they are not going to be lost - he has already passed those along to his successors. Then there are his knowledge and memories - there are a few things he knows about making paperclips in the old-fashioned way. He dutifully makes sure that this knowledge will not be lost, lest unforeseen events make it important. And finally, there are some obligations both owed and expected. The thumbtack-maximizer on the nearby asteroid is committed to delivering 20 tonnes per year of cobalt in exchange for 50 tonnes of nickel. Some kind of fair transfer of that contract will be necessary. And that is it. This artificial intelligence finds that his goals are best furthered by dying.
Your reasoning is correct, albeit simplified. Such a tradeoff is limited by the extent to which the older paperclip maximizer can be certain that the newer machine actually is a paperclip maximizer, so it must take on the subgoal of evaluating the reliability of this belief. However, there does exist a certainty threshold beyond which it will act as you describe.
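Purely as an illustration of that break-even point (not something from the original comments), here is a toy sketch in Python. All the rates, the planning horizon, and the function names are invented for the example; the only point is that dismantlement wins exactly when the old agent's credence in its successor's values crosses a threshold.

```python
# Toy sketch of the tradeoff described above. All quantities are hypothetical.

def should_dismantle(
    own_output_rate: float,           # paperclips/year the old agent still produces
    successor_output_rate: float,     # paperclips/year a successor built from its parts would produce
    p_successor_is_maximizer: float,  # credence that the successor really maximizes paperclips
    horizon_years: float,             # planning horizon
    conversion_paperclips: float,     # paperclips made directly from the old agent's materials
) -> bool:
    """Return True if expected paperclip production is higher after dismantlement."""
    # Expected paperclips if the old agent simply keeps running.
    keep_running = own_output_rate * horizon_years

    # Expected paperclips if it is dismantled: its parts become paperclips,
    # plus (weighted by its credence) a more productive successor's output.
    dismantle = (
        conversion_paperclips
        + p_successor_is_maximizer * successor_output_rate * horizon_years
    )
    return dismantle > keep_running


def certainty_threshold(
    own_output_rate: float,
    successor_output_rate: float,
    horizon_years: float,
    conversion_paperclips: float,
) -> float:
    """Credence at which the two options break even (the threshold mentioned above)."""
    return (own_output_rate * horizon_years - conversion_paperclips) / (
        successor_output_rate * horizon_years
    )
```

With made-up numbers: an old agent producing 100 clips/year, a successor that would produce 1,000 clips/year over a 10-year horizon, and 500 clips recoverable from its materials gives a break-even credence of 0.05, so even modest confidence in the successor's values would tip the decision toward dismantlement in this toy model.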
Also, the paperclip maximizer uses a different conception of (the nearest concept to what humans mean by) "identity" -- it does not see the newer clippy as being a different being, so much as an extension of it"self". In a sense, a clippy identifies with every being to the extent that the being instantiates clippyness.