Eliezer_Yudkowsky comments on Optimal Strategies for Reducing Existential Risk - Less Wrong
Also, suppose that there are and will be 1000 singularitarian activists who can, together, increase the probability of a positive singularity outcome from 0.1 to 0.2, and that you are average amongst them. The benefit that accrues to you if you spend time working with the singularitarian movement is then delta U * 0.1/1,000 = 10^(-4) * delta U, where delta U is the difference between the expected utility of the life you will live conditional upon existential disaster (which won't occur for quite a while - at least 15 years from today) and the expected utility of the life you will live conditional upon a positive singularity outcome.
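For concreteness, here is a minimal sketch (Python) of the expected-value arithmetic above, with the comment's hypothetical figures plugged in; the activist count and probabilities are assumptions from the thought experiment, not estimates.

```python
# Minimal sketch of the per-person expected-benefit calculation, using the
# hypothetical numbers from the comment (1,000 activists, probability of a
# positive singularity raised from 0.1 to 0.2). All figures are illustrative.

n_activists = 1_000           # assumed number of singularitarian activists
p_without = 0.1               # assumed probability of a positive outcome without them
p_with = 0.2                  # assumed probability with all of them working
delta_p = p_with - p_without  # collective increase in probability: 0.1

# Share of the probability increase attributed to one average activist.
per_person_share = delta_p / n_activists   # 1e-4

# Expected personal benefit, in units of delta U (the utility gap between a
# life conditional on existential disaster and one conditional on a positive
# singularity outcome).
print(f"per-person expected benefit = {per_person_share:.0e} * delta U")

# For that benefit to rival an ordinary everyday utility difference
# (a nicer house, etc.), delta U would have to be roughly 1/share times it.
print(f"required delta U = {1 / per_person_share:,.0f} * typical everyday stakes")
```

Running this prints a per-person benefit of 10^(-4) * delta U, i.e. delta U would need to be about 10,000 times everyday stakes before the personal expected benefit matches them.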
I doubt that anyone really has a utility function that supports a delta U of 10,000 times the typical utility differences in our everyday lives, e.g. 10,000 times the utility difference of spending money on a nice house, an expensive family, etc. Therefore the goodness of a post-positive-singularity outcome cannot by itself incentivize the individual to bring it about, so the singularitarian movement has to rely upon people whose personal notion of goodness comes from being the kind of person who puts others before themselves, even in the face of criticism and ostracism from those others.
That is, unless there is some kind of reward/punishment precommitment going on.
You appear to assume that rationalists are selfish? Or that our "real selves" are exclusively sub-deliberative systems that can't multiply benefits to others?