Kawoomba comments on Effective Altruism Through Advertising Vegetarianism? - Less Wrong

Post author: peter_hurford 12 June 2013 06:50PM




Comment author: Kawoomba 13 June 2013 11:30:58AM 5 points

I want the result of me having existed, as compared to an alternative universe where I did not exist, to be less overall suffering in the world.

That's probably the abridged version, because if that were the actual goal, a doomsday machine would do the trick.

Comment author: [deleted] 14 June 2013 09:08:02PM 1 point

If you count pleasure as negative suffering...

Comment author: Kaj_Sotala 13 June 2013 12:24:48PM 0 points

That's probably the abridged version

Yes.

Comment author: Kawoomba 13 June 2013 05:38:41PM 0 points

Do you have a fleshed-out version formulated somewhere? *tries to hide iron fireplace poker behind his back*

Comment author: Kaj_Sotala 13 June 2013 07:49:11PM 1 point

No. The "fleshed-out version" is rather complex, incomplete, and constantly changing, as it's effectively the current compromise that's been forged between the negative utilitarian, positive utilitarian, deontological, and purely egoist factions within my brain. It has plenty of inconsistencies, but I resolve those on a case-by-case basis as I encounter them. I don't have a good answer to the doomsday machine, because I currently don't expect to encounter a situation where my actions would have considerable influence on the creation of a doomsday machine, so I haven't needed to resolve that particular inconsistency.

Of course, there is the question of x-risk mitigation work and the fact that e.g. my work for MIRI might reduce the risk of a doomsday machine, so I have been forced to somewhat consider the question. My negative utilitarian faction would consider it a good thing if all life on Earth were eradicated, with the other factions strongly disagreeing. The current compromise is based on the suspicion that most kinds of x-risk would not drive all life on the planet extinct, but would instead lead to massive suffering in the form of an immense death toll followed by a gradual reconstruction that would eventually bring Earth's population back to its current levels. (Even for AI/Singularity scenarios there is great uncertainty and a non-trivial possibility of such an outcome.) All my brain-factions agree on this being a Seriously Bad scenario, so there is currently an agreement that work aimed at reducing its probability is good, even if it indirectly influences the probability of an "everyone dies" scenario in one way or another. The compromise is only possible because we are currently very unsure of what would have a very strong effect on the probability of an "everyone dies" scenario.

I am unsure of what would happen if we had good evidence that it really was possible to strongly increase or decrease the probability of an "everyone dies" scenario. With the current power balances, I expect that we'd just decide not to do anything either way, with the negative utilitarian faction being strong enough to veto attempts to save humanity, but not strong enough to override everyone else's veto on attempts to destroy humanity. Of course, this assumes that humanity would basically go on experiencing its current levels of suffering after being saved: if saving humanity would also involve a positive Singularity after which it was very sure that nobody would need to experience involuntary suffering anymore, then the power balance would shift very strongly in favor of saving humanity.