Can you explain how a self-modifying AI would be risky in this regard? (A citation is fine; you do not need to repeat a well-known argument that I am simply ignorant of.)
I'm not sure that you can avoid picking it up, just by being on this site. http://www.yudkowsky.net/singularity/ai-risk/
which seems silly to me - the SSA seems to be wrong on the face of it
You clearly know something I don't.
Ah - I'd seen the link, but the widget just spun. I'll go look at the PDF. The below is from before I read it - it could be amusing and humility-inducing if reading it makes me change my mind on the below (and I will surely report back if that happens).
As for the SSA being wrong on the face of it - the DA wiki page says "The doomsday argument relies on the self-sampling assumption (SSA), which says that an observer should reason as if they were randomly selected from the set of observers that actually exist." Assuming this is true (I do no...
A self-modifying AI is built to serve humanity. The builders know, of course, that this is much riskier than it seems, because its success would render their own observations extremely rare. To solve the problem, they direct the AI to create billions of simulated humanities, in the hope that this will serve as a Schelling point for them and make their own universe almost certainly a simulation.
Plausible?