wedrifid comments on New Q&A by Nick Bostrom - Less Wrong Discussion
Four!
"I'd rather live with a good question than a bad answer." -- Aryeh Frimer
I am not sure how to interpret your comment, so I'll respond to the interpretation I find most likely.
To fix a complex problem you have to solve many other problems at the same time, problems that are either directly relevant to the bigger problem or necessitated by other needs.
That the Singularity Institute might be best equipped to solve the friendly AI problem does not mean that they are the best choice to research general questions about existential risks. That risks from AI are the most urgent existential risk does not mean that it would be wise to abandon existential risk research until friendly AI is solved.
By contributing to the Singularity Institute you are supporting various activities that you might not equally value. If you thought that they knew better than you how to distribute your money among those activities, you wouldn't mind. But that they are good at doing one thing does not mean that they are good at doing another.
Now you might argue that even less of your money would be spent on the activity you value most if you distributed it among different charities. But that's not relevant here. Existential risk research is something you have to do anyway, something you have to invest a certain amount of resources into while pursuing your main objective, just like eating and drinking. If the Singularity Institute isn't doing that for you, then you have to do it yourself, or, in the case of existential risk research, pay others who are better at it to do it for you.
The second quote mentions the number four; wedrifid was referring to that, not to the fourth quote.
Aha! I didn't even read the other quotes and just went straight to quote number four.
I don't think that suggesting new definitions for words is problematic if it helps. Calling a tail a leg would deprive the word "leg" of most of its meaning. But calling two charities departments of a single charity highlights a problem with Steven Landsburg's advice for charitable giving:
This disregards the fact that problems like cancer, heart disease, or hunger consist of a huge number of sub-problems, many of which need to be tackled at the same time to make the main objective technically feasible.
What if you were able to assign weight to the various problems that need to be solved in order to reach the charity's overall goal? You would do so if you didn't believe that the charity itself was distributing its money efficiently among its various sub-goals.
Take, for example, the Singularity Institute. If people could weight the SI's various projects by specifying how their money should be used, some wouldn't support the idea of rationality camps.
And here it is useful to view the SI and FHI as two departments of the same charity. Both pursue goals that either support each other or need to be pursued at the same time.
If you were to follow Landsburg's argument, then if you were interested in defeating hunger, you might just contribute to a project that researches certain genetic modifications of useful plants. Or why not contribute to a company that tries to engineer better DNA sequencers?
My point is that the concept of a charity is an artificially created black box labeled "No User Serviceable Parts Inside," and Landsburg's argument makes it sound as if we should draw a line there and not try to give even more efficiently. I don't see why. I am saying that in certain cases you can just as well view one charity as many, and two charities as one.
Except that calling charities departments doesn't make them a single charity. They are two damn charities! Nothing more than that.