This thread is for asking any questions that might seem obvious, tangential, silly or what-have-you. Don't be shy, everyone has holes in their knowledge, though the fewer and the smaller we can make them, the better.
Please be respectful of other people admitting ignorance, and don't mock them for it; they're doing a noble thing.
To any future monthly posters of SQ threads, please remember to add the "stupid_questions" tag.
It really isn't. One of the reasons for the founding of this forum, yes. But what this forum is meant to be for is advancing the art of human rationality. If compelling evidence comes along that AI safety research is useless and AI research is vanishingly unlikely to have the sort of terrible consequences feared by the likes of MIRI, then "this forum" should be very much in the business of advocating against AI safety research.
You're right, but.
The whole story goes like this: Eliezer founded this forum to advance the art of human rationality, so that people would stop raising silly objections to AI safety like "intelligence would surely bring about morality" and the like.
The focus of LW is human rationality and that of MIRI is AI safety, but as far as I can tell, we still haven't found any valid objections to the orthogonality thesis. On the contrary, the issue of autonomous agent safety is gaining traction and recognition.
I do agree that if we found a strong objection we should change perspective, but we haven't yet, and indeed we are seeing more and more worrisome examples.