When we anticipate the future, we have the opportunity to inhibit behaviours which we anticipate will lead to counterfactual outcomes. Those of us with sufficiently low latencies in our decision cycles may recursively anticipate the consequences of counterfactuating (a neologism) interventions, and so recursively intervene against our own interventions.

This may be difficult to picture. Try modelling that decision cycle as a nano-scale approximation of time travel. One relevant paradox from popular culture is the farther-future paradox depicted in the TV cartoon Family Guy.

Watch this clip: https://www.youtube.com/watch?v=4btAggXRB_Q

Relating the satire back to our abstraction of the decision cycle, one may ponder:

What is a satisfactory stopping rule for the far anticipation of self-referential consequence?

That is:

(1) What are the inherent harms of inhibiting actions in and of themselves (stress, perhaps)?

(2) What are their inherent merits (self-determination, perhaps)?

And (3) what are the favourable and unfavourable consequences x points into the future, given y points of self-reference at points z, a, b and c?

I see no ready solution to this problem in terms of human rationality, and see no corresponding solution in artificial intelligence, where the problem would also apply. Given the relevance to MIRI (since CFAR doesn't seem to work on open problems in the same way),

I would like to also take this opportunity to open this as an experimental thread for the community to generate a list of ''open problems'' in human rationality that are otherwise scattered across the community blog and wiki.


Open problem: the external validity of rationality techniques. There is no standard body of methodology for tests to establish it.

There is the... traditional methodology: If you're so smart, how come you ain't rich?

Noisy, but yes. If people using rationality techniques don't outperform educational-attainment-matched controls, that would be some evidence against the techniques. Of course, there is a selection effect in the sort of person who is drawn to rationality techniques versus not. Does the LW census gather income data? Is LW mostly underperformers?

educational attainment matched controls

IQ-matched controls would be much better as "educational attainment" is often just a proxy for IQ. You would also need to control for things like age and, relevant to LW, being in grad school.

I was under the impression that the effect of IQ on income mostly disappears once educational attainment is included.

Well, it depends on what you're interested in. Educational attainment and IQ are correlated, so if you control for either one, the effect of the other lessens.

I would argue that IQ is more important since it causes the educational attainment and not vice versa.

Interesting thought, but if you can model the further-further-(...) future well enough to get any value from that model, you should just do so up front, rather than bothering with the intermediate interventions.

In other words, unroll the recursion into a simple loop.
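A minimal sketch of what "unrolling" means here, with a toy intervention model (all names and the damping rule are illustrative assumptions, not anything from the post):

```python
# Hypothetical sketch: a decision cycle that recursively anticipates the
# result of its own interventions, versus the same computation unrolled
# into a simple loop. The toy `intervene` rule is purely illustrative.

def intervene(outcome):
    # Toy intervention: damp the anticipated outcome toward 0.
    return outcome * 0.5

def anticipate_recursive(outcome, depth):
    """Recursively re-anticipate the outcome `depth` levels deep."""
    if depth == 0:
        return outcome
    # Each level of self-reference revises the anticipated outcome.
    return anticipate_recursive(intervene(outcome), depth - 1)

def anticipate_unrolled(outcome, depth):
    """The same anticipation unrolled into a loop: no self-reference."""
    for _ in range(depth):
        outcome = intervene(outcome)
    return outcome

print(anticipate_recursive(8.0, 3))  # 1.0
print(anticipate_unrolled(8.0, 3))   # 1.0
```

The two functions compute the same thing; the loop just makes the stopping rule (a fixed depth) explicit instead of re-entering the decision cycle.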

Aside from a silly YouTube example, can you provide a case where this could arise in the planning stages? As I see it, you anticipate some ensemble of possible scenarios for each possible action, and you can weigh those ensembles against each other. The 'go back and try again' step is just part of that, and if one scenario says 'go back and try again' while aiming at something you've already evaluated, you can use the precomputed result and move on to something else.
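The "use the precomputed result" point is just memoization of the lookahead. A minimal sketch, assuming a toy state space and scoring rule (the names, the successors, and the score are my illustrative inventions, not anything from the thread):

```python
# Hypothetical sketch: evaluating an ensemble of scenarios with a fixed
# horizon. A 'go back and try again' branch that revisits an
# already-evaluated (state, horizon) pair hits the cache instead of
# re-entering the recursion.

from functools import lru_cache

def score(state):
    return -abs(state)  # toy scoring: prefer final states near 0

def successors(state):
    return (state - 1, state + 1)  # toy branching, including "going back"

@lru_cache(maxsize=None)
def evaluate(state, horizon):
    """Best achievable score looking `horizon` steps ahead, with caching."""
    if horizon == 0:
        return score(state)
    # Weigh the ensemble of follow-up scenarios against each other.
    return max(evaluate(s, horizon - 1) for s in successors(state))

print(evaluate(3, 4))  # -1
```

Starting from state 3 with four steps, every reachable final state is odd, so the best the planner can do is land on 1 or -1 (score -1); branches that loop back to visited states are answered from the cache.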