What changed your mind on the latter?
When I saw the notion of serious AI danger circulating, without details, I guess I assumed it originated from better/more relevant arguments.
What I see instead are arguments about the general difficulty of some aspects of AI (such as real-world motivation), crafted to suggest updating only toward the unlikelihood of a "friendly AI that genuinely cares about mankind", but not toward the general unlikelihood of real-world motivation in AI, because the person making the argument tells you to update on the former but says nothing about the latter.
This is combined with ...
Admitting to being wrong isn't easy, but it's something we want to encourage.
So ... were you convinced by someone's arguments lately? Did you realize a heated disagreement was actually a misunderstanding? Here's the place to talk about it!