aaronsw comments on A cynical explanation for why rationalists worry about FAI - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Let's take the outside view for a second. After all, if you want to save the planet from AIs, you have to do a lot of thinking! You have to learn all sorts of stuff and prove it and just generally solve a lot of eye-crossing philosophy problems that read like slippery bullshit. But if you want to save the planet from asteroids, you can conveniently do the whole thing without ever leaving your own field, just applying existing engineering and astronomy techniques. Why, you've even found a justification for NASA continuing to exist (and larding out pork all over the country) and, better yet, for the nuclear weapons program to be funded even more (after all, what do you think you'll be doing when the Shuttle gets there?).
Obviously, this isn't any sort of proof that anti-asteroid programs are worthless self-interested rent-seeking government pork.
But it sure does seem suspicious that continuing business as usual to the tune of billions can save the entire species from certain doom.
Yes, I agree that if a politician or government official tells you the most effective thing you can do to prevent asteroids from destroying the planet is "keep NASA at current funding levels and increase funding for nuclear weapons research" then you should be very suspicious.
I think you're missing the point; I actually do think NASA is one of the best organizations to handle anti-asteroid missions and nukes are a vital tool since the more gradual techniques may well take more time than we have.
Your application of cynicism proves everything, and so proves nothing. Every strategy can rightly be pointed out to benefit some group and disadvantage some other group.
The only time this wouldn't apply is if someone claimed a particular risk was higher than estimated while doing absolutely nothing about it whatsoever, and so couldn't benefit from attempts to address it. And in that case, one would be vastly more justified in discounting them (since they themselves don't seem to actually believe it) than in believing them because this particular use of the Outside View doesn't penalize them.
(Or to put it another, more philosophical way: what sort of agent believes that X is a valuable problem to work on, yet doesn't believe that whatever approach Y he is taking is the best approach for him to be taking? One can of course believe that there are better approaches for other people ('if I were a mathematical genius, I could be making more progress on FAI than as an ordinary person whose main skills are OK writing and research'), or for counterfactual selves with stronger willpower, but for oneself? This is analogous to Moore's paradox, the epistemic question of what sort of agent doesn't believe that his current beliefs are the best ones for him to hold: "It's raining outside, but I don't believe it is." So this leads to a remarkable result: of every agent trying to accomplish something, we can cynically say, 'How very convenient that the approach you think is best is the one you happen to be using! How awfully, awfully convenient! Not.' And since we can say it of every agent equally, the argument is entirely useless.)
Incidentally:
I think you badly overstate your case here. Most armchair rationalists seem to much prefer activities like... saving the world by debunking theism (again). How many issues have Skeptic or Skeptical Inquirer devoted to discussing FAI?
There's a much more obvious reason why many LWers would find FAI interesting than the concept being some sort of attractive death spiral for armchair rationalists in general...
FHI, for what it's worth, does say that simulation shutdown is underestimated but doesn't suggest doing anything.
My suspicion isn't because the recommended strategy has some benefits, it's because it has no costs. It would not be surprising if an asteroid-prevention plan used NASA and nukes. It would be surprising if it didn't require us to do anything particularly hard. What's suspicious about SIAI is how often their strategic goals happen to be exactly the things you might suspect the people involved would enjoy doing anyway (e.g. writing blog posts promoting their ideas) instead of difficult things at which they might conspicuously fail.