multifoliaterose comments on An Outside View on Less Wrong's Advice - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Yes, it's from "The Hollow Men"
But Goertzel and Kurzweil are speakers at the Singularity Summit! :-) I agree that the talks by SIAI staff at the Singularity Summit which address AI risk reduce x-risk, but it's not clear to me that the Singularity Summit is positive on balance.
Even if nuclear deproliferation is overfunded in aggregate, there may be particular organizations that are especially effective and in need of room for more funding (the philanthropic world isn't very efficient). I agree that a priori it looks as though SIAI has a stronger case for room for more funding than organizations working against nuclear war, but I also think the matter warrants further investigation.
I agree that uncertainty as to which strategies work drives the expected value down, but not to zero.
I agree that the best people thinking about AI x-risk are at SIAI. This doesn't imply that their efforts are strong enough for them to make a meaningful dent in the problem (nature doesn't grade on a curve, etc.).
I'm presently inclined to agree that the immediate effect of nuclear war is unlikely to be extinction (although I've heard smart people express views to the contrary). But plausibly nuclear war would leave humanity in a much worse position to address other x-risks (e.g. political & economic instability seem more likely to be conducive to unfriendly AI than political & economic stability). Furthermore, even if nuclear war doesn't cause human extinction it could still cause astronomical waste on account of crippling civilization to the point that it couldn't yield an intelligence explosion.
Some of your arguments apply to some of the risks but not all of the arguments apply to all of the risks. In particular, none of the arguments seem to apply to asteroid strike risk.
This is definitely a point in favor of focus on FAI, but it's not clear to me that it's a strong enough one.
(a) The existence of any x-risk / catastrophic risk charity with room for more funding suggests that donating money is highly cost-effective.
(b) Donating money is not the only way to reduce x-risk. One can work against one of the risks oneself (e.g. work for SIAI as a volunteer, or work for a government agency addressing one of the relevant x-risks). One can also try to influence the donations of others.
(c) Regarding your discomfort with the lifestyle that your reasoning seems to lead you to, see paragraphs 2, 3, and 4 of Carl Shulman's comment here.
Personally, I gave up trying to take such considerations into account. Otherwise I would have to weigh the positive and negative effects of comments similar to yours according to the influence they might have on existential risks. This quickly leads to chaos-theoretic considerations like the butterfly effect, which in turn lead to scenarios resembling Pascal's Mugging, where tiny probabilities are outweighed by vast utilities. As a computationally bounded and psychologically unstable agent, I am unable to cope with that. Consequently, I decided to neglect small-probability events.
Whoa- I've been parsing it as a chemical name all along (and subconsciously suppressing the second i). Eliot's one of my favorites, but I never made the connection.
Good points, thanks for the link to Carl Shulman's comment — I love his reasoning.
Just for the record: the reason I don't like the conclusion of working in finance to earn money in order to donate is that I suspect I can't do it. I simply hate finance too much, and I know I'm too selfish. Just wearing a suit is probably more than I could bear ;) I will respond to the rest of your comment in private.
Please consider posting your reply here, I would be interested in reading it!
I wrote you a PM.