The standard account of causality depends on the idea of intervention: the question of what follows if X does not just occur naturally, but is brought about artificially, independently of its usual causes. This sits poorly with embedded agency. If the agent is part of the world, then its own actions will always be caused by the past state of the world, and so it couldn't know whether the apparent effect of its interventions isn't just due to some common cause. There is a potential way out of this if we limit the complexity of causal dependencies.
Classically, X is dependent on Y iff the conditional distribution of X given Y differs from the unconditional distribution of X. In a slightly different formulation: there needs to be a program that takes Y as input and outputs adjustments to our unconditional distribution on X, where those adjustments improve prediction.
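To make the classical criterion concrete, here is a minimal sketch that works with empirical frequencies instead of true distributions; the function names (`empirical_dist`, `classically_dependent`) are just mine for illustration:

```python
from collections import Counter

def empirical_dist(values):
    """Empirical distribution over a list of observed outcomes."""
    counts = Counter(values)
    n = len(values)
    return {v: c / n for v, c in counts.items()}

def classically_dependent(samples, tol=1e-9):
    """X depends on Y (classically) iff some conditional distribution
    P(X | Y=y) differs from the marginal P(X).
    `samples` is a list of observed (x, y) pairs."""
    marginal = empirical_dist([x for x, _ in samples])
    for y in {y for _, y in samples}:
        conditional = empirical_dist([x for x, y2 in samples if y2 == y])
        for x in set(marginal) | set(conditional):
            if abs(marginal.get(x, 0.0) - conditional.get(x, 0.0)) > tol:
                return True
    return False
```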
Now we could limit which programs we consider admissible. We will do this by the computational complexity of the program with respect to the precision of Y. For example, I will say that X is polynomially dependent on Y iff there is a program running in polynomial time that fulfills these conditions. (Note that dependence in this new sense needn't be symmetrical anymore.)
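And a sketch of the program-based formulation. Note that the restriction to a complexity class (e.g. polynomial time in the precision of Y) can't be checked inside the code; it is an assumption about which `program` arguments count as admissible. The names here are again only illustrative:

```python
import math

def avg_log_loss(predict, samples):
    """Average negative log-likelihood of each observed x under predict(y)."""
    eps = 1e-12
    return -sum(math.log(predict(y).get(x, eps)) for x, y in samples) / len(samples)

def dependent_relative_to(program, samples, unconditional):
    """X is dependent on Y relative to a class of admissible programs iff
    some admissible `program` maps y to an adjusted distribution over X
    that predicts better than the unconditional distribution on X.
    Whether `program` is admissible (e.g. runs in time polynomial in the
    precision of y) is an assumption checked outside this function."""
    baseline = avg_log_loss(lambda _y: unconditional, samples)
    return avg_log_loss(program, samples) < baseline
```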
Unlike with unlimited dependence, there is nothing in principle impossible about the agent's actions being polynomially independent of an entire past world-state. This can form a weakened sense of intervention, and the limited-causal consequences of such interventions can be determined from actually observed frequencies.
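Under that assumption the computation is deliberately trivial: conditioning on the action in the observed frequencies already gives its limited-causal consequences. A toy sketch (again with names of my own choosing):

```python
from collections import Counter

def intervention_consequences(samples, action):
    """Toy estimate of the limited-causal consequence of taking `action`:
    the observed frequency of outcomes given that action. This is only
    justified under the assumption that the action is (in the chosen
    complexity class) independent of the relevant past state.
    `samples` is a list of observed (action, outcome) pairs."""
    outcomes = [o for a, o in samples if a == action]
    if not outcomes:
        raise ValueError("action never observed")
    counts = Counter(outcomes)
    n = len(outcomes)
    return {o: c / n for o, c in counts.items()}
```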
Now, if we look at things where all dependencies are within a certain complexity class, and we analyse them with something stronger than that, this will end up looking just like ordinary causality. It also explains the apparent failure of causality in Newcomb's problem: we now have a substantive account of what it is to act as an intervention (to be independent in a given complexity class). In general, this requires work by the agent: it needs to determine its actions in such a way that they don't show this dependence. Omega is constructed so that the agent's computational resources are insufficient for that, so the agent fails to make itself independent of Omega's prediction. The agent would similarly fail for "future" events it causes where the dependence is within the complexity class. In a sense, this is what puts those events into its subjective future - that it cannot act independently of them.
This post is a mixture of two questions: "interventions" from an agent which is part of the world, and restrictions on which interventions are allowed.
The first is actually a problem, and is closely related to the problem of how to extract a single causal model, which is executed repeatedly, from a universe in which everything only happens once. Pearl's answer, from IIRC Chapter 7 of Causality, which I find 80% satisfying, is about using external knowledge about repeatability to consider a system in isolation. The same principle gets applied whenever a researcher tries to shield an experiment from outside interference.
The second is about limiting the allowed interventions. This looks like a special case of normality conditions, which are described in Chapter 3 of Halpern's book. Halpern's treatment of normality conditions actually involves a normality ordering on worlds, though this can easily be massaged into a normality ordering on possible interventions. I don't see any special mileage in making the normality ordering dependent on complexity, as opposed to any other arbitrary normality ordering, though someone may be able to find some interesting interaction between normality and complexity.
Speaking more broadly, this is part of the general problem that our current definitions of actual causation are extremely model-sensitive, which I find serious. I don't see a mechanistic resolution, but I did find this essay, which posits considering interventions in all possible containing models, extremely thought-provoking: http://strevens.org/research/expln/MacRules.pdf
Causal inference has long been about taking small assumptions about causality and turning them into big inferences about causality. It's very bad at getting causal knowledge from nothing; this has long been known.
For the first: Well, yep, that's why I said I was only 80% satisfied.
For the second: I think you'll need to give a concrete example, with edges, probabilities, and functions. I'm not seeing how to apply thinking about complexity in a type-causality setting, where it's assumed you have actual probabilities on co-occurrences.