Reasons given:
My analysis:
There might be a lack of "updating on expected evidence" here. Looking back to 2007 or so, it would have been silly to predict with much confidence that machine learning wouldn't make any progress over the next decade. In fact, it would probably have been sensible to expect some default amount of progress per decade and to treat "no progress" as an outlier.
Then, if you think you would take AI risk seriously given that progress, maybe you should start taking it seriously right now.
Something similar is true for famous people like Elon Musk and Stephen Hawking taking the idea seriously. "Hey, here's an important and correct new insight about the world, but no famous people have jumped on the bandwagon yet, so I won't take it seriously." But again, it seems reasonable to believe that famous, smart people will eventually take important and correct insights seriously. It's not even without precedent: Thiel was an early SIAI donor.
Having said that, I have personally been surprised by how much mainstream attention AI risk has received. Looking back to 2007, I expected AI risk to get a lot of attention eventually, but I was thinking five decades rather than one.