As far as I can tell, the standard rebuttal to Roko's idea rests on the strategy of simply ignoring acausal blackmail. But what if we consider the hypothetical in which no blackmail or acausal deal is involved, and the Basilisk scenario is simply how things are? Put plainly, if we accept the Computational Theory of Mind, we have to accept that our experiences could in principle be a simulation run by another actor. Then, if you ever find yourself entertaining the question:
Q1: Should I do everything I can to ensure the creation/simulation of an observer that is asking itself the same question as I am now, and will suffer a
...
That strategy might work as deterrence, although actually implementing it would still be ethically... suboptimal, since you would still need to harm simulated observers. Sure, they would be rogue AIs rather than poor innocent humans, but in the end you would be doing something very similar to what you blame them for in the first place: creating intelligent observers for the explicit purpose of punishing them if they act the wrong way.