A phenomenon I have often encountered when thinking about things is that everything seems to collapse to tautology. This is hard to define precisely, but I’ll give you some examples:

  • Bounded rationality: Bounded rationality can be thought of as “rationality conditional on some given algorithmic information” (in contrast to perfect Bayes-rationality, which is “rationality conditional on some given statistical information”), where “algorithmic information” is a more general version of “logical information”. But “algorithmic information” is fundamentally just arbitrary programs/heuristics, including the heuristic that estimates the argmax of the utility. So everything seems to unwind, and you could just as well say that every program is boundedly rational: it’s “doing the best it can with the algorithmic information it has (itself)”, and bounded rationality becomes empty of meaning.
  • Efficient Market Hypothesis: Basically the same as above; the Efficient Market Hypothesis is only valid when you read it as “The market is efficient conditional on all the algorithmic information it has”, but the “algorithmic information” just refers to the traders in existence and the market-maker.
  • Market dynamics: Markets must take some time to compute equilibrium (it’s a complex computational problem), but also in some sense the market is “always” at equilibrium conditional on the information possessed, and market dynamics is really just a problem of information propagation.

… but deep down, you know there has to be something there, there has to be some kernel of meaning and non-nihilism. There has to be some meaning to these terms, because at the end of the day — we want boundedly rational agents, we want efficient economic institutions.

The Kernel of Meaning in these examples is a bit hard to grok. They’re probably all related, and have something to do with “learning” (Russell, “Rationality and Intelligence: An Update”) and “Algorithmic Bayesian Epistemology” (Eric Neyman): there’s a certain recursive sense in which markets (and boundedly rational agents) are rational about updating their heuristics, about updating these meta-heuristics, and so on.

But I can give you one other example of apparent collapse to tautology where I know exactly how to defeat this nihilism: property rights. Where you might want to answer questions like “isn’t any government policy acceptable, if you just regard the government as the owner of the entire country?” and “can’t crime or bad government just be regarded as aspects of the natural world, and efforts to mitigate them just costs of production, so we’re already living in an ideal market?”—

Q: Why is theft bad?

A: Because it disincentivizes production. If your stuff will just be taken from you, you won’t make it.

Q: But aren’t property rights arbitrary? E.g. you could make a factory pay a pigouvian tax to air-breathers for polluting the air (implying that the air-breathers own the air), or you could make the air-breathers pay the factory for not polluting (implying that the factory owns the air). Both rights structures will lead to an efficient outcome. What makes this different? Why can’t you say the thief owns your property?

A: Sure, you could say the thief owns the stuff you will make. In which case, you not making the stuff should be seen as a violation of his rights.

Q: So economic efficiency is indifferent between “ban theft” and “make you a literal slave to the thief”? The difference is essentially a conflict of interest, i.e. about how to divide the cake, not about how to maximize the size of the cake …

Aside: Correct. You could a priori determine who is capable of making what, and punish those who do not reach their potential (because their potential is thief property, and they’re wasting it). In fact, if you replace “thief property” with “public property”, you get a radical form of Georgism. See Posner & Weyl in Radical Markets for how this could be achieved.

… so, the scenario where the thief takes your stuff is acceptable, so long as he would still have the right to your stuff if you didn’t make it. In other words, economic efficiency is attained by any rights structure — so long as the rights structure is independent of agent actions.
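
To make that last claim concrete, here is a minimal sketch of my own (not from the argument above), with made-up numbers: a producer can make a good worth 10 at a cost of 6. A rights structure that depends on the producer’s action (the thief takes a cut of whatever is actually produced) can push the private payoff from producing below zero and destroy the surplus, while an action-independent structure (a lump-sum transfer owed to the thief whether or not anything is produced) changes only how the cake is divided, not whether it gets baked.

```python
# Illustrative sketch only: all numbers and names here are invented for this example.

def outcome(value: float, cost: float, take_rate: float, lump_sum: float):
    """Return (does the producer produce?, total surplus) under a rights structure.

    take_rate: share of output the thief may take *if* it is produced
               (an action-dependent rights structure).
    lump_sum:  transfer owed to the thief regardless of production
               (an action-independent rights structure).
    """
    payoff_if_produce = (1 - take_rate) * value - cost - lump_sum
    payoff_if_idle = -lump_sum
    produces = payoff_if_produce > payoff_if_idle
    total_surplus = (value - cost) if produces else 0.0
    return produces, total_surplus

value, cost = 10.0, 6.0  # producing is efficient: the value exceeds the cost

print(outcome(value, cost, take_rate=0.0, lump_sum=0.0))  # secure property:    (True, 4.0)
print(outcome(value, cost, take_rate=0.5, lump_sum=0.0))  # theft taxes output: (False, 0.0)
print(outcome(value, cost, take_rate=0.0, lump_sum=3.0))  # action-independent: (True, 4.0)
```

The lump-sum case is the analogue of “the thief has a right to your stuff even if you don’t make it”: the transfer cancels out of the production decision, so the efficient choice survives and only the division of the surplus changes.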

Comments

To make it intuitively more obvious, replace "thief" with "king". From a certain perspective, the king owns the entire country, including you, including your hypothetical maximum production. If the king could predict your potential productivity, he could set your tax to be "your hypothetical maximum production, minus the minimum you need to survive", and then punish you if you fail to pay the tax.

TAG:

"everything seems to collapse to tautology"

Successful explanation makes things seem less arbitrary, more predictable, more obvious. A tautology is the ultimate in non-arbitrary obviousness.

"everything seems to collapse to tautology"

I noticed that too. Intense discussions among fellow students often seemed to end up with some tautology or trivially true statement.

An example I remember specifically: we were discussing if and under which circumstances one should sacrifice themselves for someone else or for a cause. One of us was arguing against all examples. After drilling down, it ended up with them saying that they wouldn't sacrifice themselves. And that was it. Every other argument was downstream of that. I think that's a fair position to hold, but it was overgeneralized to everybody.

Another one: Most specific change proposals, if you take a larger view, are effectively irrelevant. Examples: Plastic straw bans, your favorite tax exception. And you can always take an even larger view until that is true.   

A loosely related phenomenon is children's "Why?" chains, which also seem to always end with "The sun" or "I/you want it."

I think it is a result of building more and more abstract world models until the highest level model becomes so general as to be true by construction.

I think EY once mentioned it in the context of self-awareness or free will or something, and called it something like "complete epistemological panic".

I assume you refer to 

"the highest level model becomes so general as to be true by construction."

Interesting! Can you find the reference? I'd like to see what the "panic" is about.

I think it is a good exercise. It makes clear that all models are wrong, not just those at the top, and that they have to prove their value by being useful, i.e., by making useful predictions.

The thief's preferences bear no cost either, leaving his preference structure free to change arbitrarily to whichever one drives the highest utility for him. Preference falsification is an undesirable property for a system to incentivize; see how strategic voting in gameable systems causes outcomes no one prefers.