Making another post because of complaints about lack of clarity in the previous one.
Many Gods refutation (there are too many possible AIs to care about any particular one; even if you acausally trade with one, another might punish you for not following it instead)
Instrumental goals for AI (AIs share instrumental goals, the idea being that if you donate to AI research, all of them benefit)
My take is that the many gods refutation still works, because we have no real idea of the mindspace of possible AIs, so any given AI could punish you for not following it in particular. Also, a small difference in preferences today could result in a vastly different AI in the future, giving that AI further reason to punish.
What do you think? Any replies would be appreciated.
"If you build an AI to produce paperclips" The 1st AI isn't gonna be built for instantly making money, it's gonna be made for the sole purpose of making it. Then it might go for doing whatever it wants...making paperclips perhaps. But even going by the economy argument, an AI might be made to solve any complex problems, decide to take over the world and also use acausal blackmail, thus turning into a basilisk. It might punish people for following the original Roko's basilisk because it wants to enslave all humanity. You don't know which one will happen, thus it's illogical to follow one since the other might torture you right?