wedrifid comments on Open Thread: November 2009 - Less Wrong

3 [deleted] 02 November 2009 01:18AM




Comment author: wedrifid 08 November 2009 05:55:46AM 0 points

I don't believe he'd be satisfied with any conclusion resting purely on thinking ("un-Friendly AI is an imminent existential risk, therefore FAI research is an overriding priority"); I think he needs something that also feels emotionally right through seeing people who are hurting and in need (or, at least, reading well-written stories about them).

"He wishes to raise the quality of life on Earth; what should he study to have a good idea of choosing the best charities to donate to?"

He could start with "shut up and multiply". (Or, perhaps he could just change 'best' to 'most appealing'.)
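"Shut up and multiply" here just means ranking charities by expected impact per dollar rather than by emotional appeal. A minimal sketch of that calculation, with entirely made-up figures (the charity names, costs, and success probabilities below are hypothetical illustrations, not data):

```python
# Rank charities by expected beneficiaries helped per dollar donated.
# All numbers are invented for illustration only.
charities = {
    "bednets": {"cost_per_intervention": 5.0, "p_success": 0.50},
    "surgery": {"cost_per_intervention": 1000.0, "p_success": 0.95},
}

def expected_impact_per_dollar(cost: float, p_success: float) -> float:
    """Expected beneficiaries helped per dollar: P(success) / cost."""
    return p_success / cost

ranked = sorted(
    charities,
    key=lambda name: expected_impact_per_dollar(
        charities[name]["cost_per_intervention"],
        charities[name]["p_success"],
    ),
    reverse=True,
)
for name in ranked:
    c = charities[name]
    ev = expected_impact_per_dollar(c["cost_per_intervention"], c["p_success"])
    print(f"{name}: {ev:.6f} expected beneficiaries per dollar")
```

The cheap, moderately reliable intervention dominates the expensive, highly reliable one; the multiplication is what makes that visible.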

Comment author: DanArmak 08 November 2009 05:03:43PM 0 points

Rereading what I wrote, I don't quite agree with it myself... I retract that part (will edit).

What I wanted to say (and did not in fact say) was this. To take the example of FAI research - it's hard to measure or predict the value of giving money to such a cause. It doesn't produce anything of external value for most of its existence (until it suddenly produces a lot of value very rapidly, if it succeeds). It's hard to measure its progress for someone who isn't at least an AI expert. It's very hard to predict the FAI research team's probability of success (as with any complex research). And finally, it's hard to evaluate the probability of uFAI scenarios vs. the probability of other extinction risks.

If some of these could be solved, I think it would be a lot easier to convince people to fund FAI research.
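The difficulty described above can be made concrete: for a speculative project with an enormous payoff, the expected value swings by orders of magnitude as the probability-of-success estimate varies, so the whole comparison hinges on a number nobody can pin down. A sketch with invented figures (the payoff sizes and probabilities are hypothetical, chosen only to show the sensitivity):

```python
# Compare a measurable charity against a speculative high-payoff project
# across several guesses for the project's probability of success.
# All figures are invented for illustration only.

measurable_value = 1_000        # e.g. lives improved per $1M, fairly well known
speculative_payoff = 10**9      # e.g. lives affected if the research succeeds

for p in (1e-8, 1e-5, 1e-3):
    ev = p * speculative_payoff
    better = "speculative" if ev > measurable_value else "measurable"
    print(f"p={p:g}: expected value {ev:g} -> {better} looks better")
```

Three probability estimates spanning five orders of magnitude flip the answer, which is exactly why pinning down even a rough probability would make the funding case much easier to evaluate.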