Both of those seem reasonable to me. Putting #2 another way: you can also acausally trade with other humans, and this usually isn't a huge deal, because most humans aren't that powerful.
I mean, people could construct an AI that will acausally trade with you in a human-understandable way. I don't think this is completely wild, but I'd agree that as the probabilities get smaller, trade becomes less and less profitable/likely: you don't just have to find it, it has to find you. This is kind of like a quadratic penalty term.
Overall I think the best anti-acausal-trade-worry idea is "You know that decision-making procedure that you're worried other agents might use to take your lunch money? What would happen if you used it too, to get things to go well for yourself?"
Hello. I'm new to LessWrong and would appreciate some help. I've been trying to understand the basilisk, since the more you understand it, the less worried you are. While acausal trade requires a clear understanding of the other agent, thus ruling out trading with a superintelligent AI, I've been trying to find an answer about non-superintelligent AIs with which you 'might' be able to acausally trade. I've arrived at 2:
Are my refutations valid? Any replies would be greatly appreciated.
Edit: Is there a particular reason for the downvotes? I really do need help. Edit 2: spelling