My friend, hearing me recount tales of LessWrong, recently asked me if I thought it was simply a coincidence that so many LessWrong rationality nerds cared so much about creating Friendly AI. "If Eliezer had simply been obsessed by saving the world from asteroids, would they all be focused on that?"
Obviously one possibility (the inside view) is simply that rationality compels you to focus on FAI. But if we take the outside view for a second, it does seem like FAI has a special attraction for armchair rationalists: it's the rare heroic act that can be accomplished without ever confronting reality.
After all, if you want to save the planet from an asteroid, you have to do a lot of work! You have to build stuff and test it and just generally solve a lot of gritty engineering problems. But if you want to save the planet from AI, you can conveniently do the whole thing without getting out of bed.
Indeed, as the Tool AI debate has shown, SIAI types have withdrawn from reality even further. There are a lot of AI researchers who spend their days building models, analyzing data, and generally solving gritty engineering problems. But the SIAI view conveniently says this is all very dangerous and that one shouldn't even begin to try implementing anything like an AI until one has perfectly solved all of the theoretical problems first.
Obviously this isn't any sort of proof that working on FAI is irrational, but it does seem awfully suspicious that people who really like to spend their time thinking about ideas have managed to persuade themselves that they can save the entire species from certain doom just by thinking about ideas.
What is the "outside view" on how much of an existential risk asteroids are? You know, the one you get when you look at how often impacts large enough to cause a mass extinction actually happen? Answer: very damn low.
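As a rough back-of-the-envelope check (assuming the commonly cited order-of-magnitude figure of about one Chicxulub-scale impact per hundred million years):

\[
P(\text{extinction-level impact in any given century}) \approx \frac{10^{2}\ \text{yr}}{10^{8}\ \text{yr}} = 10^{-6}.
\]

On that outside view, the per-century probability is around one in a million, before you even ask whether we could do anything about it.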
"The Outside View" isn't just a slogan you can chant to automatically win an argument. Despite the observational evidence from common usage the phrase doesn't mean "Wow! You guys who disagree with me are nerds. Sophisticated people think like I do. If you want to be cool you should agree with me to". No, you actually have to look at what the outside view suggests and apply it consistently to your own thinking. In this post you are clearly not doing so.
Something being difficult (or implausible) is actually a good reason not to do it (on the margin).
What the? Where on earth are you getting the idea that building an FAI isn't hard work? Or that it doesn't require building stuff and solving gritty engineering problems?
@aaronsw:
I'd like to reinforce this point. If it isn't hard work, please point us all at the solution any random mathematician and/or programmer could come up with for how to concretely use Löb's Theorem within an AI to prove of itself that a modification will not cause systematic breakdown or change the AI's behavior in an unexpected (most likely fatal to the human race, if you r...
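For reference, and only as a sketch: Löb's Theorem says that for any theory T containing enough arithmetic, with its standard provability predicate Prov_T, and any sentence P,

\[
\text{if } T \vdash \mathrm{Prov}_T(\ulcorner P \urcorner) \rightarrow P, \text{ then } T \vdash P.
\]

Roughly, this is why the naive move ("whatever my modified successor proves is true, so I can trust its proofs") fails: a system that asserts that schema for every P thereby proves every P, including false ones. Getting an AI to formally license its own self-modifications is a genuinely hard open problem, not something you dispose of from an armchair in an afternoon.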