Nick_Tarleton comments on Shock Level 5: Big Worlds and Modal Realism - Less Wrong

Comment author: Yvain 26 May 2010 04:50:49PM 4 points

Yes, but utility isn't linear in the number of lives saved, and maybe it even shouldn't be. I would be willing to spend far more resources to save the lives of the last fifty pandas in the world, rescuing pandas from extinction, than to save fifty pandas when the total panda population was 100,000 and threatening to drop to 99,950.
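
A toy numerical sketch of that nonlinearity, using a made-up concave utility function (the log term and the one-time "species exists" bonus below are purely illustrative assumptions, not anything specified in the thread):

    # Concave utility over panda population: the first few pandas matter
    # most, and the species existing at all carries a large one-time bonus.
    import math

    def panda_utility(population):
        exists_bonus = 1000 if population > 0 else 0
        return exists_bonus + 10 * math.log1p(population)

    # Saving the last fifty pandas (0 -> 50) dwarfs saving fifty
    # out of 100,000 (99,950 -> 100,000):
    print(panda_utility(50) - panda_utility(0))            # ~1039.3
    print(panda_utility(100_000) - panda_utility(99_950))  # ~0.005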

Now, it's true that human utility is more nearly linear than panda utility, because I care about humans much more for their own sake than for the sake of my preference that humans exist; but I still think saving the last eight billion humans is more important than saving eight billion out of infinity.
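
The same toy model makes the human case concrete, assuming (again purely for illustration) a dominant linear term plus an existence bonus; the constants are placeholders, not values anyone in the thread commits to:

    # Near-linear utility: the per-life term dominates, but preventing
    # extinction still adds a one-time bonus B.
    A = 1.0    # intrinsic value per human life saved
    B = 1e12   # extra value of humanity existing at all

    gain_saving_the_last = B + A * 8e9   # finite world: also prevents extinction
    gain_out_of_infinity = A * 8e9       # big world: humanity survives regardless
    print(gain_saving_the_last - gain_out_of_infinity)  # difference is exactly B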

Comment author: Nick_Tarleton 26 May 2010 08:27:24PM 7 points

You're an equivalence class. You don't save the last eight billion humans; you save eight billion humans in each of the infinitely many worlds in which your decision algorithm is instantiated.
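
A minimal sketch of that framing, assuming an updateless-style accounting in which a single choice is fixed simultaneously in every world instantiating the algorithm, and effects are compared per unit of measure rather than as infinite totals (the names and numbers here are illustrative):

    # "You" are every instantiation of one decision algorithm, so one
    # choice settles the outcome in all instantiating worlds at once.
    MEASURE = 1.0  # normalized measure of worlds running this algorithm

    def measure_weighted_lives(choice):
        lives_per_world = 8e9 if choice == "save" else 0.0
        return MEASURE * lives_per_world

    # The decision's effect, per unit of measure, across all instantiations:
    print(measure_weighted_lives("save") - measure_weighted_lives("pass"))  # 8e9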

Comment author: Yvain 29 May 2010 10:14:30PM 1 point

Why is that significant? No matter how many worlds I'm saving eight billion humans in, there are still infinitely many humans who are saved no matter what I do or don't do. So the "reward" of my actions still gets downgraded from "preventing human extinction" to "saving a bunch of people, but humanity will be safe no matter what".

In fact...hmm...any given human will be instantiated in infinitely many worlds, so you don't actually save any lives. You just increase those lives' measure, which is sort of hard to get excited about.
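
That last point as a toy calculation, assuming (for illustration only) that each person's existence has a real-valued measure and that value is linear in it:

    # Your action never flips anyone from "dead" to "alive" outright; it
    # only shifts how much measure includes them alive.
    baseline_measure = 1.0   # measure of worlds where this person lives regardless of you
    your_contribution = 0.3  # extra measure where they live only because you acted

    measure_if_you_act = baseline_measure + your_contribution
    measure_if_you_dont = baseline_measure
    print(measure_if_you_act, measure_if_you_dont)  # 1.3 vs 1.0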