
wedrifid comments on New Q&A by Nick Bostrom - Less Wrong Discussion

12 points · Post author: Stuart_Armstrong · 15 November 2011 11:32AM




Comment author: XiXiDu 16 November 2011 05:17:03PM 1 point

"I'd rather live with a good question than a bad answer." -- Aryeh Frimer

I am not sure how to interpret your comment:

  • I gave a bad answer to a good question.
  • You'd rather support the FHI exclusively as they are asking the right questions, whereas the SI might give a bad answer.

I'll comment on the first interpretation, which I deem most likely.

To fix a complex problem you have to solve many other problems at the same time: problems that either bear directly on the bigger problem or are necessitated by other needs.

That the Singularity Institute might be best equipped to solve the friendly AI problem does not mean that they are the best choice to research general questions about existential risks. That risks from AI are the most urgent existential risks does not mean that it would be wise to abandon existential risk research until friendly AI is solved.

By contributing to the Singularity Institute you are supporting various activities that you might not equally value. If you thought that they knew better than you how to distribute your money among those activities, you wouldn't mind. But that they are good at doing one thing does not mean that they are good at doing another.

Now you might argue that even less of your money would be spent on the activity you value most if you distributed it among different charities. But that's not relevant here. Existential risk research is something you have to do anyway, something you have to invest a certain amount of resources into while pursuing your main objective, just like eating and drinking. If the Singularity Institute isn't doing that for you, then you have to do it yourself or, in the case of existential risk research, pay others who are better at it to do it for you.

Comment author: wedrifid 16 November 2011 05:33:53PM 1 point

Just that calling charities "departments" doesn't make them a single charity. They are two damn charities! Nothing more than that.