
RichardKennaway comments on The Growth of My Pessimism: Transhumanism, Immortalism, Effective Altruism. - Less Wrong Discussion

15 Post author: diegocaleiro 28 November 2015 11:07AM


Comments (74)


Comment author: RichardKennaway 29 November 2015 09:37:10AM 1 point

In the section on EA, you include discussion of AGI, existential risk, and the existential risk of an AGI, which seem to me different subjects. Can you clarify what you see as the relation between these things and EA?

My picture of EA is distributing anti-malarial bed nets, or trying to improve clean water supplies. While some in the EA movement may judge existential risk or AGI to be the area they should direct their vocation towards (whether because of their rating of the risk itself or their own comparative advantage), these are not listed among, for example, GiveWell's recommended charities.

Comment author: diegocaleiro 29 November 2015 10:33:34AM 0 points

EA is an intensional movement.

http://effective-altruism.com/ea/j7/effective_altruism_as_an_intensional_movement/

I concur with many other people that when you start off from a wide sample of aggregative consequentialist values and try to do the most good, you bump into AI pretty soon. As I told Stuart Russell a while ago to explain why a philosopher-anthropologist was auditing his course:

My PhD will likely be a book on altruism, and any respectable altruist these days is worried about AI for at least 30% of his waking life.

That's how I see it, anyway. Most of the arguments for it are in "Superintelligence"; if you disagree with that, then you probably disagree with me.

Comment author: RichardKennaway 29 November 2015 01:37:22PM 0 points

I'm not particularly disagreeing; I just found it odd in comparison to other EA writings. Thanks for the clarification.

Comment author: Raemon 29 November 2015 06:04:25PM 1 point

It's actually fairly common in EA circles by now to acknowledge AI as an issue. The disagreements tend to be more about whether there are useful things to be done about it, or whether there are specific nonprofits worth supporting. (GiveWell has a blog post in that direction.)