[Epistemic Status: Highly speculative/experimental. Playing around with ideas on rationality. Extremely interested in input, criticism, or other perspectives]

I.)

Throughout the 20th century, the University of Chicago school of thought seems to have generated the most positive analysis of rationality, an approach that branched out across the social sciences among economist types. These guys took the positive view as a given. Their research interest was in studying how rational agents would form equilibria, or produce suboptimal outcomes in politics and markets, rather than in commenting on what an individual ought to do.


While their analysis was based on positive rationality, these guys clearly had preferences for how the world should run that made their way into their analysis. And those preferences rest gently upon the view that humans are capable of identifying problems, solving them, and building themselves a more perfect world. The philosopher of science Karl Popper thought it a disturbing fact that even the most abstract fields seemed to be 'motivated and unconsciously inspired by political hopes and by Utopian dreams.'


I remember, during my grad degree, talking to my British political game theory professor during his office hours. I wanted to know how he built his models. They were theoretical and abstract models, but creating them required inspiration from reading books, or the news, or staring out your window at the sky. Wouldn't that make them empirical, then? I know people get annoyed at logical positivists saying, “Sure, sure, those chemical bond equations work today, but can you prove they will work tomorrow?” Unfortunately, I don't think political game theory has the same predictive grasp on reality that would let it dismiss those concerns as boring.


It was a few years later, when I was studying neural nets, that I began thinking back to game theory. If all humans are glorified computers, then game theory modelling means using our brains to capture information from reality and to map out the structure of a game and the preferences of its agents. Our brains are the estimator in this model, which means any game theory meant to model reality is estimated; it is just estimated using our neural network. It seems trivial when I write it out, but the current methods for evaluating these types of models still seem to be based on a non-predictive intuition.


When I revisited the Chicago economists I could see their estimation; it was implied in everything they wrote. Their models all embedded the view that communism, fascism, and paternalism aren't just morally bad, but are by definition utility-destroying and irrational. Books like “Capitalism and Freedom” and “The Road to Serfdom,” by Friedman and Hayek respectively, took that positive rational framework and applied it to human interaction. (Books and economists I still deeply respect.)


Gary Becker even presented a famous argument on racial discrimination, which showed how competitive markets would tend towards a non-discriminatory equilibrium. And look, this might be the true model of reality in all its parsimony, but as a counterfactual, could you imagine this positive view of rationality receiving any acceptance if Becker had argued that a hierarchy of races was the true equilibrium? At the very least there is a predictive complexity here that is missing (not to dismiss Becker's research; it's great).
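
To make the mechanism Becker pointed at concrete, here is a minimal toy sketch (my own construction for illustration, not Becker's actual model; every number is an arbitrary assumption). A firm with a 'taste for discrimination' acts as if the disfavored group's wage were inflated by a coefficient d; once that perceived cost exceeds the workers' productivity, the firm forgoes both the hires and the profit, so competition favors the non-discriminating firms.

```python
# Toy sketch of Becker-style competitive pressure against discrimination.
# All numbers are illustrative assumptions, not figures from Becker's work.

def firm_profit(d, wage=10.0, productivity=15.0, workers=100):
    """Profit for a firm with taste-for-discrimination coefficient d.

    The firm acts as if workers from the disfavored group cost wage * (1 + d),
    so it only hires them while that perceived cost stays below productivity.
    """
    perceived_cost = wage * (1 + d)
    if perceived_cost > productivity:
        return 0.0  # refuses to hire and forgoes the surplus entirely
    # True economic profit depends on the actual wage, not the perceived one.
    return (productivity - wage) * workers

for d in [0.0, 0.2, 0.6]:
    print(f"taste for discrimination d={d}: profit = {firm_profit(d):.0f}")

# The d=0.6 firm earns nothing while the d=0.0 firm earns 500, so in a
# competitive market capital and market share flow to the non-discriminators.
```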


And while purely theoretical utility models and pure mathematical game theory sort of avoid this, the moment you bring them back down to earth to inform reality, you're estimating the model. Only this model has no data to fit it to: no information from reality mapped to numbers.


For all the difficulties, these guys are at least still taking measurement issues seriously. While a truly positive view of rationality might not exist in reality, aspiring towards one imposes a structure on their work. This structure requires they decompose their observations thoughtfully, map out the most important aspects (preferences, utilities, institutions, games), and use mathematical equilibrium refinements.


II.)


One question that keeps me up at night, though, is under what conditions we should call rationality positive instead of normative. I've never been able to get a handle on a clear distinction. I think it's an intractable problem, and not one that can be solved with more philosophical classifications and words like 'deontology.' Despite not being a perfect model, the distinction is still useful. EY's Sequences outline a robust view of positive rationality. Thomas Carlyle's reactionary writings outline a clear normative view of the world. But what about the worldviews of guys like Paul Krugman or Peter Thiel? Are they based on positive analysis? Or do we write them off as normative? (And if we did, what would that mean?)


Here is my guess at what explains a small part of this discrepancy:


The normative tends to fixate less on the imposed scientific structure the positivists aspire towards, or focuses a little less on mapping situations to clear, well-defined structures. That's all. There is no discrete shift, no jump to a new dimension of analysis, no regime shift. It's the same thing done in a way we consider less rigorous. I wish this implied it was all strictly worse, less predictive, and easy to identify. That would make life way easier, but it isn't necessarily true.


We all have a brain-state. It's based on our brain structure, how it has been programmed, and the information we have observed. We then use that brain-state to run a simulation of society. As of now we don't have science sophisticated enough to code up these simulations. We will someday.


As an example, we can simulate a prisoner's dilemma pretty easily in our own brain. Up until a certain point, you can solve most game theory problems by guessing what you would do in the game. For me it was the strangest feeling to know the answer to a game but be unable to prove formally why it was the right answer. Eventually this strategy stops working for complex equilibrium refinements.
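
For a sense of how little machinery the formal version needs at this scale, here is a minimal brute-force sketch in Python (the payoff numbers are the usual textbook assumptions): the equilibrium a brain can guess can also be checked mechanically.

```python
# Brute-force Nash equilibrium search for a one-shot prisoner's dilemma.
# Payoffs are (row player, column player); the numbers are standard textbook values.
C, D = "cooperate", "defect"
payoffs = {
    (C, C): (3, 3),
    (C, D): (0, 5),
    (D, C): (5, 0),
    (D, D): (1, 1),
}

def is_nash(row, col):
    """A strategy pair is a Nash equilibrium if neither player gains by deviating alone."""
    row_payoff, col_payoff = payoffs[(row, col)]
    row_best = all(payoffs[(alt, col)][0] <= row_payoff for alt in (C, D))
    col_best = all(payoffs[(row, alt)][1] <= col_payoff for alt in (C, D))
    return row_best and col_best

equilibria = [pair for pair in payoffs if is_nash(*pair)]
print(equilibria)  # [('defect', 'defect')] -- mutual defection, even though (C, C) pays more
```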

We also know that unsupervised neural networks can find nonlinear dynamics in almost any set of information. We know that words are best modeled by neural nets. And as far as positive economics goes, while I haven't tried exhaustively, most games and strategies that are explained mathematically can also be explained using words. We also seem to have evolved to be much better at absorbing vast amounts of information through words than through mathematical models, even if words subject us to far more information-transfer problems and biases.
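
As a toy illustration of the first claim (a sketch under arbitrary assumptions, not evidence for it; it uses a supervised scikit-learn regressor rather than an unsupervised network, and the target function and settings are my own choices): a small net can recover a nonlinear relationship from raw samples without ever being told its functional form.

```python
# Toy illustration: a small neural net recovering a nonlinear relationship
# from samples alone. Requires numpy and scikit-learn; all settings are arbitrary.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(2000, 1))
y = np.sin(X).ravel() + 0.1 * rng.normal(size=2000)  # nonlinear signal plus noise

net = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)
net.fit(X, y)

X_test = np.linspace(-3, 3, 7).reshape(-1, 1)
for x, pred in zip(X_test.ravel(), net.predict(X_test)):
    print(f"x={x:+.1f}  net={pred:+.2f}  sin(x)={np.sin(x):+.2f}")
# The network is never told the relationship is a sine; it recovers the shape from data.
```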


What this means is that it is plausible, if not trivially true, that the right words strung together could convey a much more predictive and accurate model than the positivist emphasis on well-defined structure. Even the smartest economists and social scientists seem to take to their blogs now and again to explain, in words, how the world actually works and how it should work.


Everything Marx wrote is, in a sense, an incredibly complex simulation, one no human could follow if it were mapped into a mathematical structure. It seems hard to believe that a human could so simply observe the world and then, in their own brain, simulate an entirely different alternative.


(It would be a weird simulation, since if he left out a few crucial words the whole thing could crash, or be nonsensical. Or maybe the words he chose imply multiple different simulations, some of which make sense and some of which don't. I find that thinking about it this way goes a long way toward explaining how a few books can be debated for centuries. It's not because they are brilliant, but because they aren't well defined and are highly sensitive to reader assumptions.)


The fact that our brains can attempt to estimate problems which, to test properly, would require insane sci-fi counterfactual worlds is pretty cool. My own personal theory, which I also can't prove, is that humans find this deeply unsatisfying. We hate this uncertainty, and would rather belong to a tribe advocating for a certain utopia.


III.)


Eventually the dark side seems to pull in some rationalists as they search for the Holy Grail. I bet we have all felt that pull in some form. We all have to choose how strongly to hold our views of the most optimal world. What do we think humans are capable of achieving? The more sophisticated a view of an optimal society you form, the farther you walk from what seems like a clearly predictive, positivist view.


What we call positive rationality anchors itself more on structure. For trivial structures, like testing cognitive biases in a lab using a counterfactual framework, it's robust. In social science fields it starts to blur the farther you walk from counterfactual science. The more you rely on words to map your argument, the more dimensions and variance you add to your positive argument. The more dimensions in your simulation of the world, the harder it becomes to meaningfully test, even as it allows you to consider far more information.


Once this simulated argument is sufficiently high-dimensional, complex, and based on fragments of information absorbed throughout a human's life, we start to call it 'normative,' because we can't find a meaningful way to map the information onto a structure that would let us easily communicate and explain our estimation to one another.

 

Comments

I... don't really understand the problems you're having. There is a distinction between empirical and normative -- it looks pretty clear-cut to me. You are either describing reality as it is (well, as you see it) or you are saying what should be and might specify an intervention to change the world to your liking. Of course in a single text you could be doing both, but these two classes of statements could (and usually should) be disentangled.

Similarly, when you are building models, there is a difference between explanatory models -- which aim to correctly represent the causal structure of what's happening -- and predictive models which aim to generate good predictions. They are not necessarily the same (see e.g. this).

The question of how well-defined a model you can build in the social sciences is indeed a very interesting one, but it seems to me the answers will be very context-dependent. Economists will use more numbers, historians will use more words, but what kind of a general answer to this question do you think is possible?

I think you're right that the distinction is typically clear cut and useful to make. What I want to avoid (although I'm not sure I was successful) is simply being nihilistic and making a refined version of the boring argument "what do words even mean?!".

The area I'm interested in is when that distinction grows blurry. Normative arguments always have to embed an accurate representation of reality, and a correct prediction that they will actually work. And positive arguments of reality frequently imply a natural or optimal result.

For example, some guy like Marx says "I've been thinking for a few decades, I have predicted the optimal state of human interaction. This map of the world clearly suggests we should move towards it." He then writes a manifesto to encourage it. The normative part of his argument seems to come trivially from the positive explanation of the world. So, to that extent, it's not as though I can agree with his positive argument but think his normative one takes it too far; they are both equally wrong in the same way.

Or, to say it another way, I think it's very rare that people share the same positive view of the world, but disagree normatively. Our normative disagreements almost always come from a different map of the world, not from the same map with different preferences. Obviously I can't prove, or even test, this, so I'm posting it here as an uncertain thought, not something I'm going to strongly defend. I know Aumann sort of proved it with his agreement theorem, though he only modeled two Bayesian agents. So everything his model can't explain could be called normative, I guess?

In reality it's still a useful distinction. As I said, I don't want to be annoyingly nihilistic or anything.

Note: Will read the rest of that paper later. Looks very interesting and relevant though, so thanks for sharing.

Normative arguments always have to embed an accurate representation of reality, and a correct prediction that they will actually work.

They only have to claim this. Many merely imply this without bothering to provide arguments.

For example, some guy like Marx says "I've been thinking for a few decades, I have predicted the optimal state of human interaction.

And that's precisely the point where the disentangling of the empirical and the normative rears up and shouts: Hold on! What is this "optimal" thing? Optimal for whom, how, and according to which values?

The normative part of his argument seems to come trivially from the positive explanation of the world.

I don't think so. Marx thought the proletarian revolution to be inevitable, and that is NOT a normative statement. He also thought it to be a good thing, which is normative, but those are two different claims.

I think it's very rare that people share the same positive view of the world, but disagree normatively.

Oh, I think it happens all the time: Should we go eat now or in an hour? Alice: Now. Bob: In an hour. That's a normative disagreement without any sign of different empirics.

In more extended normative arguments people usually feel obliged to present a biased picture of the world to support their conclusions, but if you drill down it's not uncommon to find that two different people agree on what the world is, but disagree about the ways it should be... adjusted.