Rationalization is Superior to Rationality

1 FrameBenignly 18 December 2015 08:30PM

Philosophy and the practice of Bayesian statistics

 

This is a 2012 paper by Andrew Gelman and Cosma Rohilla Shalizi on what they view as a misuse of Bayesian statistics in scientific reasoning. I found it interesting because their definition of hypothetico-deductivism closely matches Eliezer Yudkowsky's definition of rationalization, and their definition of inductive inference closely matches his definition of rationality. The definitions:

 

Eliezer Yudkowsky:

Rationality - Starting from evidence, and then crunching probability flows, in order to output a probable conclusion.

Rationalization - Starting from a conclusion, and then crunching probability flows, in order to output evidence apparently favoring that conclusion.

 

Andrew Gelman and Cosma Rohilla Shalizi:

Inductive Inference - An accretion of evidence is summarized by a posterior distribution, and scientific process is associated with the rise and fall in the posterior probabilities of various models.

Hypothetico-Deductivism - Scientists devise hypotheses, deduce implications for observations from them, and test those implications. Scientific hypotheses can be rejected (i.e., falsified), but never really established or accepted in the same way.
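To make the two definitions concrete, here is a minimal sketch (my own illustration with made-up coin-flip numbers, not anything from the paper): the inductive step summarizes the evidence as posterior odds over two candidate models, while the hypothetico-deductive step only tries to reject a single model.

```python
import math

# Hypothetical data for illustration: 100 flips, 62 heads
n, heads = 100, 62

# --- Inductive inference: compare posteriors of two candidate models ---
# Model A: fair coin (p = 0.5); Model B: biased coin (p = 0.6)
def log_likelihood(p):
    return heads * math.log(p) + (n - heads) * math.log(1 - p)

# With equal priors, the posterior odds equal the likelihood ratio
log_odds_B_over_A = log_likelihood(0.6) - log_likelihood(0.5)

# --- Hypothetico-deductivism: try to falsify the fair-coin model ---
# Normal approximation to the binomial for a two-sided test of p = 0.5
mean, sd = n * 0.5, math.sqrt(n * 0.25)
z = (heads - mean) / sd
# Two-sided p-value via the complementary error function
p_value = math.erfc(abs(z) / math.sqrt(2))

print(f"posterior odds (biased vs fair): {math.exp(log_odds_B_over_A):.2f}")
print(f"z = {z:.2f}, p-value = {p_value:.4f}")
```

Note the asymmetry: the inductive calculation says which of two models the evidence favors, while the falsification test can only say that the fair coin looks implausible, without accepting any alternative.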

 

Now, what's interesting about the paper is that, in contrast to Eliezer Yudkowsky's view, they argue that rationalization (hypothetico-deductivism) is the correct analytic method, and that rationality as Eliezer Yudkowsky defined it is wrong. They make the following argument:

 

Social-scientific data analysis is especially salient for our purposes because there is general agreement that, in this domain, all models in use are wrong – not merely falsifiable, but actually false. With enough data – and often only a fairly moderate amount – any analyst could reject any model now in use to any desired level of confidence. Model fitting is nonetheless a valuable activity, and indeed the crux of data analysis. To understand why this is so, we need to examine how models are built, fitted, used and checked, and the effects of misspecification on models.
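Their claim that any model in use can be rejected to any desired confidence follows from how test statistics scale. A small sketch with hypothetical numbers: if the fitted model says p = 0.5 but the truth is p = 0.51, the expected z-statistic grows like the square root of the sample size, so any fixed significance threshold is eventually crossed.

```python
import math

# Suppose the fitted model says p = 0.5 but the true rate is 0.51:
# a misspecification too small to matter for any practical purpose.
model_p, true_p = 0.5, 0.51

zs = []
for n in [100, 10_000, 1_000_000]:
    # Expected z-statistic when the data follow true_p but we test model_p
    se = math.sqrt(model_p * (1 - model_p) / n)
    zs.append((true_p - model_p) / se)

for n, z in zip([100, 10_000, 1_000_000], zs):
    print(f"n = {n:>9,}  expected z = {z:6.2f}")
# n =       100  expected z =   0.20
# n =    10,000  expected z =   2.00
# n = 1,000,000  expected z =  20.00
```

At a million observations the model is rejected overwhelmingly, even though it is wrong by only a hundredth of a probability point, which is exactly why Gelman and Shalizi treat rejection, not posterior probability, as the informative event.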

 

They also argue Popper made multiple errors, but that his fundamental view is closer to correct than Kuhn's, and that correct science is about attempting to falsify hypotheses. They simply disagree with how Popper went about doing it.

 

Another interesting issue to me is that if you look at the main post Against Rationalization, Adirian and Vladimir_Nesov both suggested that both forms of analysis are acceptable, but TheAncientGeek was the only one who argued for rationalization over rationality, and his comment received multiple downvotes. This concept also appears to me to have been central to many parts of the sequences. Andrew Gelman and Eliezer Yudkowsky had a bloggingheads.tv conversation together.

 

Thoughts?

 

Edit - Andrew Gelman and Eliezer Yudkowsky discuss this issue at the end of the bloggingheads video.  Click on "The difference between Eliezer and Nassim" for their take.  I also fixed a link.

How do you choose areas of scientific research?

5 FrameBenignly 07 November 2015 01:15AM

I've been thinking lately about the optimal way to organize scientific research, both for individuals and for groups. My first idea: research should have a long-term goal. If you don't have a long-term goal, you will end up wasting a lot of time on useless pursuits. For instance, my rough formulation of the goal of economics is “how do we maximize the productive output of society and distribute it in an equitable manner without preventing the individual from being unproductive if they so choose?”; the goal of political science should be “how do we maximize the government's ability to provide the resources we want while allowing individuals the freedom to pursue their goals without constraint toward other individuals?”; and the goal of psychology should be “how do we maximize the ability of individuals to make the decisions they would choose if their understanding of the problems they encounter was perfect?” These are rough, as I said, but I think they go further than the way most researchers seem to think about such problems.

 

Political science seems to do the worst in this area, in my opinion. Very little research seems to have anything to do with what causes governments to make correct decisions, and when researchers do tackle this question, their evaluation of correct decision making is often based on a very poor metric such as corruption. I think this is a major contributor to why governments are so awful, and yet very few political scientists seem to have well-developed theories, grounded in empirical research, on ways to significantly improve government. Yes, they have ideas on how to improve government, but those ideas are frequently not grounded in robust scientific evidence.

 

Another area I've been considering is how to set search parameters when moving through research topics. One assumption I have is that the overwhelming majority of possible theories are wrong, such that only a minority of areas of research will result in something other than a null outcome. Another assumption is that correct theories are generally clustered: if you get a correct result in one place, there will be many more correct results in a related area than around any randomly chosen theory.

There seem to be two major methods for searching through the landscape of possibilities. One is to choose an area where you have strong reason to believe there might be a cluster that fits your research goals, randomly pick isolated spots within it until you hit a major breakthrough, and then work through the permutations of that breakthrough until you have a complete understanding of that cluster of knowledge. The other is to take large chunks of research possibilities and test them wholesale. If you come back with nothing, you can conclude the entire section is empty; if you get a hit, you can then isolate the many subcomponents and figure out exactly what is going on. Technically I believe the chunking approach should be faster than the random approach, but only slightly, unless the random approach is overly isolated. If the cluster of most important ideas sits at a scale of 10 to the -10th power and you isolate variables at 10 to the -100th power, time will be wasted climbing back up to the correct level. You have to guess what level of isolation will yield the most important insights.
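The chunking strategy is essentially group testing, and its advantage over one-at-a-time search can be sketched directly. This is a toy model with a hypothetical `is_true` oracle standing in for an experiment, not a claim about how real research programs run:

```python
def group_test_count(hypotheses, is_true):
    """Count tests needed to find the single true hypothesis by
    testing whole chunks and splitting only chunks that test positive."""
    tests = 0
    found = None
    stack = [hypotheses]
    while stack:
        chunk = stack.pop()
        tests += 1                      # one pooled test per chunk
        if not any(is_true(h) for h in chunk):
            continue                    # whole chunk is null: discard it
        if len(chunk) == 1:
            found = chunk[0]            # isolated down to a single hypothesis
            continue
        mid = len(chunk) // 2
        stack.extend([chunk[:mid], chunk[mid:]])
    return tests, found

# 1024 candidate theories, exactly one of which is "true"
N = 1024
tests, hit = group_test_count(list(range(N)), lambda h: h == 700)
print(tests, hit)   # → prints: 21 700
```

Chunk-and-split locates the hit in about 2·log2(N) tests (21 here), where testing isolated hypotheses one at a time would take on the order of N tests; the gap shrinks, as the post suggests, when tests on large pools are less reliable than tests on isolated variables.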

 

One mistake, I think, is to isolate variables and then proceed through the universe of possibilities systematically, one at a time. If you get a null result in one place, it's likely that very similar research will also produce a null result. Another mistake I often see is researchers not bothering to isolate after they get a hit. You'll sometimes see thousands of studies on the exact same thing without any application of reductionism, e.g., the finding that people who eat breakfast are generally healthier. Clinical and business researchers seem to make this mistake of forgetting reductionism most frequently.

 

I'm also thinking through what types of research are most critical, but haven't gotten too far in that vein yet. It seems like long-term research (40+ years until major breakthrough) should be centered around the singularity, but what about more immediate research?

Thinking like a Scientist

5 FrameBenignly 19 July 2015 02:43PM
I've often wondered why scientific thinking seems to be so rare.  What I mean by this is dividing problems into theory and empiricism: specifying your theory exactly and then looking for evidence to either confirm or deny it, or gathering evidence first and later forming an exact theory.

This is a bit narrower than the broader scope of rational thinking.  A lot of rationality isn't scientific.  Scientific methods don't just let you reach a solution; they let you understand that solution.

For instance, a lot of early Renaissance tradesmen were rational, but not scientific.  They knew that a certain set of steps produced iron, but the average blacksmith couldn't tell you anything about chemical processes.  They simply did a set of steps and got a result.

Similarly, a lot of modern medicine is rational, but not too scientific.  A doctor sees something and it looks like a common ailment with similar symptoms they've seen often before, so they just assume that's what it is.  They may run a test to verify their guess.  Their job generally requires a gigantic memory of different diseases, but not too much knowledge of scientific investigation.

What's most damning is that the science curricula in our schools don't teach much scientific thinking.

What we get instead is mostly useless facts.  We learn what a cell membrane is, or how to balance a chemical equation.  Learning about, say, the difference between independent and dependent variables is often left to circumstance.  You learn about type I and type II errors when you happen upon a teacher who thinks it's a good time to include that in the curriculum, or you learn it on your own.  Some curriculums include a required research methods course, but the availability and quality of this course vary greatly between both disciplines and colleges.  Why there isn't a single standardized method of teaching this material is beyond me.  Even math curriculums are structured around calculus instead of the much more useful statistics and data science, placing ridiculous hurdles in front of the typical non-major that most won't surmount.
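As one example of material that gets left to circumstance, the two error types can be shown in a few lines of simulation. The effect size, sample size, and threshold here are hypothetical, chosen only for illustration:

```python
import random

random.seed(0)

def experiment(true_effect, n=30, threshold=1.96):
    """One simulated study: n normal observations (sd = 1); test
    whether the sample mean differs from zero at the 5% level."""
    sample = [random.gauss(true_effect, 1.0) for _ in range(n)]
    mean = sum(sample) / n
    se = 1.0 / n ** 0.5                # standard error with known sd = 1
    return abs(mean / se) > threshold  # True means "effect detected"

trials = 10_000
# Type I error: detecting an effect when the true effect is zero
type1 = sum(experiment(0.0) for _ in range(trials)) / trials
# Type II error: missing a real but modest effect (0.3 sd) with n = 30
type2 = sum(not experiment(0.3) for _ in range(trials)) / trials
print(f"type I rate ~ {type1:.3f}, type II rate ~ {type2:.3f}")
```

The type I rate sits near the 5% the threshold was chosen for, while the type II rate for this modest effect at this sample size is above 50%, which is the kind of trade-off a research methods course makes explicit.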

It should not be surprising then that so many fail at even basic analysis.  I have seen many people make basic errors that they are more than capable of understanding but simply were never taught.  People aren't precise with their definitions.  They don't outline their relevant variables.  They construct far too complex theoretical models without data.  They come to conclusions based on small sample sizes.  They overweight personal experiences, even those experienced by others, and underweight statistical data.  They focus too much on outliers and not enough on averages.  Even professors, who do excellent research otherwise, often suddenly stop thinking analytically as soon as they step outside their domain of expertise.  And some professors never learn the proper method.

Much of this site focuses on logical consistency and eliminating biases.  It often takes this to an extreme: what Yvain refers to as X-Rationality.  But eliminating biases barely scratches the surface of what is often necessary to truly understand a problem.  This may be why it is said that learning about rationality often reduces rationality: an incomplete, slightly improved, but still quite terrible solution may generate a false sense of certainty.  Unbiased analysis won't fix a lousy dataset.  And it seems rather backwards to focus on what not to do (biases) rather than what to do (analytic techniques).

 

True understanding is often extremely hard.  Good scientific analysis is hard.  It's disappointing that most people don't seem to understand even the basics of science.