In the vein of the Harry Potter and the Methods of Rationality discussion threads, this is the place to discuss anything relating to Alicorn's Twilight fanfic Luminosity. The fanfic is also archived on Alicorn's own website <strike>(warning: white text on black background)</strike>.
Previous discussion is hidden so deeply within the first Methods of Rationality thread that it's difficult to find even if you already know it exists.
Just as Eliezer's fanfic popularizes material from his sequences, Alicorn is drawing on the insights from her Luminosity sequence.
Spoilers for the fanfic itself, as well as for the original novels, need not and should not be hidden. Spoiler protection still applies to any other works of fiction, except for Harry Potter and the Methods of Rationality chapters more than a week old, so we can freely discuss similarities and differences.
EDIT: Post-ginormous-spoiler discussion should go to the second thread. (If you have any doubt about whether you have reached the spoiler in question, you have not.)
Exploiting causal loops to solve NP problems does not involve checking all candidates in sequence and then transporting the answer back. Rather, it involves checking only one candidate, but deciding which candidate to check in such a way that the situation is self-consistent if and only if that one candidate is the correct answer. In context, this depends on being able to foresee the outcome of a simple firmly decided conditional strategy, where the events you plan to condition on are the contents of the vision itself.
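A toy simulation may make the trick concrete. The sketch below (a hypothetical illustration, not anything from the fic) uses subset-sum as the NP problem. The agent's firmly decided strategy is: check only the candidate named in the vision; if it verifies, enact the vision, otherwise do something different, making that vision inconsistent. The vision generator may only emit a fixed point of that strategy, so the sole consistent non-headache vision names a correct answer. Note that our stand-in generator brute-forces the search, because we have to play the unbounded extrapolator ourselves; in the story, only one candidate ever gets physically checked.

```python
import itertools

def verify(candidate, target):
    """Polynomial-time check of a single candidate subset."""
    return sum(candidate) == target

def agent_strategy(vision, target):
    """Alice's firmly decided conditional strategy: check only the
    candidate named in the vision.  If it verifies, enact the vision;
    otherwise deliberately do something else, so that vision is not
    self-consistent."""
    if vision != "headache" and verify(vision, target):
        return vision       # outcome matches the vision: consistent
    return "headache"       # outcome differs from the vision: inconsistent

def vision_generator(nums, target):
    """Stand-in for the computationally unbounded extrapolator: it may
    hypothesize any vision, but only emits one that is a fixed point of
    the agent's strategy."""
    for r in range(len(nums) + 1):
        for cand in itertools.combinations(nums, r):
            if agent_strategy(cand, target) == cand:
                return cand  # the vision causes itself to come true
    return "headache"        # a headache is always self-consistent

print(vision_generator([3, 5, 7, 11], 15))  # → (3, 5, 7)
print(vision_generator([2, 4], 7))          # → headache (no solution)
```

The headache branch mirrors the comment's point: "no useful vision" is itself a self-consistent outcome, so nothing forces the generator to do the work.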
So if the visions are generated by a computationally unbounded process that extrapolates from inexact snapshots of the present (which include plans and dispositions but not some of the other contents of minds), then the NP trick could work: The dependency of the future on Alice's reaction to the vision is well-defined and available to the extrapolation process. Or it could just give her a headache; that's self-consistent too.
If the vision generator refuses to hypothesize any visions within the extrapolation process, or if it doesn't care whether extrapolated-Alice gets false visions, or if it's computationally bounded and only iterates towards a fixed point at a limited rate, then the trick would fail.
And if it's not extrapolation-based, then I dunno, but I can't think of any interpretations that would be incompatible with a headache.
But Alice's power doesn't work like that: it predicts the future conditional on Alice not having seen the prediction.