RichardKennaway comments on The Growth of My Pessimism: Transhumanism, Immortalism, Effective Altruism. - Less Wrong Discussion
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Comments (74)
In the section on EA, you include discussion of AGI, existential risk, and the existential risk of an AGI, which seem to me different subjects. Can you clarify what you see as the relation between these things and EA?
My picture of EA is distributing anti-malarial bed nets, or trying to improve clean water supplies. While some in the EA movement may judge existential risk or AGI to be the area they should direct their vocation towards (whether because of their rating of the risk itself or their own comparative advantage), these causes are not listed among, for example, GiveWell's recommended charities.
EA is an intensional movement.
http://effective-altruism.com/ea/j7/effective_altruism_as_an_intensional_movement/
I concur with many other people that when you start off from a wide sample of aggregative consequentialist values and try to do the most good, you bump into AI pretty soon. As I told Stuart Russell a while ago to explain why a philosopher-anthropologist was auditing his course:
That's how I see it, anyway. Most of the arguments for it are in "Superintelligence"; if you disagree with that, then you probably disagree with me too.
Not particularly disagreeing, I just found it odd in comparison to other EA writings. Thanks for the clarification.
It's actually fairly common in EA circles by now to acknowledge AI as an issue. The disagreements tend to be more about whether there are useful things to be done about it, or whether there are specific nonprofits worth supporting. (GiveWell has a blog post in that direction.)