XiXiDu comments on AALWA: Ask any LessWronger anything - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
I'm a Research Associate at MIRI. I became a supporter in late 2005, then contributed to research and publication in various ways. Please, AMA.
Opinions I express here and elsewhere are mine alone, not MIRI's.
To be clear, as a Research Associate I am an outsider to the MIRI team, though I collaborate with them in various ways.
My question is similar to the one that Apprentice posed below. Here are my probability estimates of unfriendly and friendly AI; what are yours? And, more importantly, where do you draw the line: what probability estimate would be low enough for you to drop the AI business from your consideration?
Even a fairly low probability estimate would justify effort on an existential risk.
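The expected-value reasoning behind that claim can be made concrete. Here is a minimal sketch with entirely hypothetical numbers (they are not my estimates or MIRI's): when the stakes are as large as an existential catastrophe, even a small probability of the event, combined with a small chance that extra effort helps, can still yield a large expected payoff.

```python
# Hypothetical expected-value sketch. All three numbers below are
# illustrative assumptions, not anyone's actual estimates.

p_catastrophe = 0.01        # assumed 1% probability of unfriendly AI
p_effort_helps = 0.001      # assumed 0.1% chance that added effort averts it
lives_at_stake = 1e10       # assumed stakes: on the order of everyone alive

# Expected lives saved by the marginal effort under these assumptions.
expected_lives_saved = p_catastrophe * p_effort_helps * lives_at_stake
print(expected_lives_saved)  # 100000.0
```

Under these made-up inputs, the expected value of the effort is on the order of a hundred thousand lives, which is why a "fairly low" probability does not by itself make the problem ignorable; the interesting disagreements are about the input probabilities, not the multiplication.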
And I have to admit that a secondary, personal reason for being involved is that the topic is fascinating and there are smart people here, though that of course does not shift my estimates of the risk or of the possibilities of mitigating it.