dxu comments on Stupid Questions May 2015 - Less Wrong
Well, ChaosMote already gave part of the answer, but another reason is the idea of comparative advantage. Normally I'd bring up someone like Scott Alexander/Yvain as an example (since he's repeatedly claimed he's not good at math and blogs more about politics/general rationality than about AI), but this time, you can just look at yourself. If, as you claim,
then your comparative advantage lies less in theory and more in popularization. Theory might technically be more important, but if you can net bigger gains elsewhere, then by all means you should do so. To use a (somewhat strained) analogy, think about expected value. Which would you prefer: a guaranteed US $50, or a 10% chance at US $300? The raw value of the $300 prize is greater, but you have to multiply by the probabilities before you can compare: the sure thing is worth $50 in expectation, while the gamble is only worth 10% × $300 = $30, so the smaller guaranteed prize actually wins. It's the same here. For some LWers, working on AI directly is the way to go, but for others who aren't as good at math, raising money may be the best contribution. And then there's the even bigger picture: AI might be the most important risk in the end, but what if (say) nuclear war occurs first? A politically-oriented person might do better to go into government or something of the sort, even if that person thinks AI is more important in the long run.
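The expected-value comparison can be sketched in a few lines of Python (a minimal illustration of the arithmetic; the function name is my own, not anything from the original comment):

```python
def expected_value(payoff, probability):
    """Weight a payoff by the probability of receiving it."""
    return payoff * probability

# A guaranteed $50 versus a 10% chance at $300:
sure_thing = expected_value(50, 1.0)   # $50 in expectation
gamble = expected_value(300, 0.10)     # only about $30 in expectation

# Despite the bigger prize, the gamble loses on expected value.
print(sure_thing, gamble)
```

The point of the analogy: you compare options by probability-weighted value, not by the size of the best-case prize.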
So while it might at first look somewhat strange that not every LWer is working frantically on AI, if you look a little deeper, there's actually a good reason. (And then there's also scope insensitivity, hyperbolic discounting, and all that good stuff ChaosMote brought up.) In a sense, you answered your own question when you asked your second.