AI is not an instant-win condition, but it would be fairly quick. There could be drama with the AI trying to develop nanotech (running up against physical speed constraints rather than mental ones) before some sort of disaster hits, although this removes agency from the humans, who would mostly be following the AI's commands.
I think AI can still be part of a story, provided it's kept towards the final chapter. Developing true self-improving superhuman AI is rather like throwing the ring into Mt Doom - all that remains is to crown the king, mourn the (non-recoverable) dead, and write that everyone lived happily ever after.
Apologies for the self-obsessed diversion, but this is on topic: I'm writing a story which involves not AI but recursively self-improving IA, and I'm beginning to think that this might have been a bad idea for exactly this sort of reason. In my story the situation is somewhat improved by a good reason why multiple entities begin self-improvement at approximately the same time, which means that conflicts remain. I can't write superintelligent dialogue, but I've handwaved this by saying that most of the characters' mental energy is going towards other activities, leaving their verbal IQ within normal human ranges. The remaining problem is that the other characters rapidly become sidelined.
New chapter!
This is a new thread to discuss Eliezer Yudkowsky’s Harry Potter and the Methods of Rationality and anything related to it. This thread is intended for discussing chapter 102.
There is a site dedicated to the story at hpmor.com, which is now the place to go to find the author's notes and all sorts of other goodies. AdeleneDawner has kept an archive of Author's Notes. (This goes up to the notes for chapter 76, and is no longer updating. The author's notes from chapter 77 onwards are on hpmor.com.)
Spoiler Warning: this thread is full of spoilers. With few exceptions, spoilers for MOR and canon are fair game to post, without warning or rot13. More specifically: