Seeing as we are talking about speculative dangers from a speculative technology that has yet to be developed, this seems pretty understandable.
I am pretty sure that as soon as the first AGIs arrive on the market, people will start to take the possible dangers more seriously.
And at that point it will quite likely be too late: we will be much closer to having an AGI that will foom than to having an AI that won't kill us.
If you want people to ask you stuff, reply to this post with a comment to that effect.
More accurately, ask any participating LessWronger anything within the category of questions they indicate they are willing to answer.
If you want to talk about this post, you can reply to my comment below that says "Discussion of this post goes here." Or not.