tog comments on 2014 Survey of Effective Altruists - Less Wrong

Post author: tog 05 May 2014 02:32AM


Comments (148)


Comment author: Kaj_Sotala 05 May 2014 06:25:43PM 0 points

No big systematic overview, though several comments and posts of mine touch upon different parts of them. Is there anything in particular that you're interested in?

Comment author: tog 06 May 2014 11:40:58PM 0 points

If I could ask two quick questions, it'd be whether you're a realist and whether you're a cognitivist. The preponderance of those views within EA is what I've heard debated most often. (This is different from what first made me ask, but I'll drop that.)

I know Jacy Anthis - thebestwecan on LessWrong - has an argument that realism, combined with the moral beliefs about future generations typical among EAs, suggests that smarter people in the future will work out a more correct ethics, and that this should significantly affect our actions now. He rejects realism, and thinks this is a bad consequence. I think it actually doesn't depend on realism, but rather on most forms of cognitivism, for instance ones on which our coherent extrapolated view is correct. He plans to write about this.

Comment author: Kaj_Sotala 07 May 2014 07:27:54AM 0 points

Definitely not a realist. I haven't looked at the exact definitions of these terms very much, but judging from the Wikipedia and SEP articles that I've skimmed, I'd call myself an ethical subjectivist (which apparently does fall under cognitivism).

Comment author: thebestwecan 07 May 2014 12:05:43AM 0 points

I believe the prevalence of moral realism within EA is risky and bad for EA goals for several reasons. One of these is that moral realists tend to believe in the inevitability of a positive far future (since smart minds will converge on the "right" morality), which tends to make them focus on ensuring the existence of the far future at the cost of other things.

If smart minds would indeed converge on the "right" morality, this focus makes sense, but I severely doubt that is true. It could be true, but that possibility certainly isn't worth sacrificing other efforts at improvement.

And I think trying to figure out the "right" morality is a waste of resources for similar reasons. CEA has expressed the views I argue against here, which concerns other EAs and me.