Operating outside of ideology is extremely hard, if not impossible. Even groups that see themselves as non-ideological still seem to end up operating within an ideology of some sort.

Take, for example, Less Wrong. It seems to operate under a few assumptions:

  1. That studying rationality will provide us with a greater understanding of the world.
  2. That studying rationality will improve you as a person.
  3. That science is one of our most important tools for understanding the world.

...

These assumptions are also subject to some criticisms. Here's one criticism for each of the previous points:

  1. But will it, or are we dealing with problems that are simply beyond our ability to understand (see epistemic learned helplessness)? Do we really understand how minds work well enough to know whether an uploaded mind would still be "you"?
  2. But religious people are happier.
  3. Hume's critique of induction.

I could continue discussing assumptions and possible criticisms, but that would be a distraction from the core point, which is that there are advantages to having a concrete ideology that is aware of its own limitations, as opposed to an implicit ideology that is beyond all criticism.

Self-conscious ideologies also have other advantages:

  • Quick and easy to write, since you don't have to deal with all of the special cases.
  • Easy to share and explain. Imagine trying to explain to someone, "Rationality gives us a better understanding of the world, except when it does not". Okay, I'm exaggerating; epistemic humility typically isn't explained that badly, but it certainly complicates sharing.
  • Easier for people to adopt the ideology as a lens through which to examine the world, without needing to assume that it is literally true.

I wrote this post so that people can create self-conscious ideologies and have something to link to, so as to avoid having to write up an explanation themselves. Go out into the world and create =P.

My favorite self-conscious ideology is Continental Rationalism. The main idea is that the world is built on logic and harmony which can be understood by an individual human mind. It was born from religious mysticism (Descartes, Leibniz) but somehow became very fruitful in science. Competing ideologies like "the world is built on chance" or "all understanding is social" don't bear nearly as much fruit, though they sound more sophisticated. Maybe it's because they discourage you from trying to understand things by reason, while CR encourages it. Heck, I think even LW ideology loses out to CR, because self-improvement feels like a grind, while understanding the world feels like a quest. Maybe if LW focused for a while on understanding things instead of defeating akrasia and such, it'd be a happier place.

Maybe if LW focused for a while on understanding things instead of defeating akrasia and such, it'd be a happier place.

Completely agree. (Username is not by chance.)

Can you add any more detail about what precisely Continental Rationalism is? Or, even better, if you have time it's probably worth writing up a post on this.

The main idea is that the world is built on logic and harmony which can be understood by an individual human mind. It was born from religious mysticism (Descartes, Leibniz)

Erm, Pythagoras was around a lot earlier than the likes of Descartes or Leibniz. Even the competing ideas that "the world is built on chance" or else that "all understanding is social" (or, to put it another way, "man is the measure of all things") are of comparable antiquity and not really more 'sophisticated' in any relevant way - except perhaps in an overly literal sense, being more conducive to "sophistry"!

I think it would be very worthwhile to study which assumptions are actually shared. We could have a poll where we list 50 assumptions and everyone states on a Likert scale to what extent they agree.

It would also be interesting to see whether there are other clusters besides a basic "rational ideology" cluster.

If you take a random set of people, they will have various beliefs, and some of those will be more common than others. Calling that an ideology seems unfair. By the way, all beliefs have criticisms, and yet some beliefs are more correct than others.

Also, "it's likely that some of the beliefs I hold are wrong" is already one rationalist assumption, or at least it should be. What are you adding to that?

It's not about fairness.

Being self-conscious of the beliefs that one has and that one uses to operate is useful.

You reminded me of a tangentially related post idea I want someone to steal: "Ideologies as Lakatosian Research Programmes".

Just as people doing science can see themselves as working within a scientific research programme, people doing politics can see themselves as working within a political research programme. Political research programmes are scientific/Lakatosian research programmes generalized to include normative claims as well as empirical ones.

I expect this to have some (mildly) interesting implications, but I haven't got round to extracting them.

You've already been scooped. The "research programme" that Lakatos talks about was designed to synthesize the views of Kuhn and Popper, but Kuhn himself modeled his revolutionary science after constitutional crises, and his paradigm shifts after political revolutions (and, perhaps more annoyingly to scientists, religious conversions). Also, part of what was so controversial (at the time) about Kuhn, was the prominence he gave to non-epistemic (normative, aesthetic, and even nationalistic) factors in the history of science.

Did Kuhn (or Popper or Lakatos) spell out substantial implications of the analogy? A lot of the interest would come from that, rather than the fact of the analogy in itself.

I think Eliezer once wrote something about things becoming clearer when you think about how you would program a computer to do it, as opposed to e.g. just throwing some applause lights to a human. So, how specifically would you implement this kind of belief in a computer?

Also, should we go meta and say: "'Rationality gives us a better understanding of the world, except when it does not' is a good ideology, except when it is worse" et cetera?

What would that actually mean? (Other than verbally shielding yourself from criticism by endlessly saying "but I said 'except when not'".) Suppose a person A believes "there is an 80% probability it will rain tomorrow", but a person B believes "there is an 80% probability it will rain tomorrow, except if it is some different probability". I have an idea about how A would bet on tomorrow's weather, but how would B?

I think Eliezer once wrote something about things becoming clearer when you think about how you would program a computer to do it, as opposed to e.g. just throwing some applause lights to a human. So, how specifically would you implement this kind of belief in a computer?

First solve natural language...

No one has used Eliezer's technique much, and there may be a reason for that.

"'Rationality gives us a better understanding of the world, except when it does not"

I provided this as an exaggerated example of how aiming for absolute truth can mean that you produce an ideology that is hard to explain. More realistically, someone would write something along the lines of, "rationality gives us a better understanding of the world, except in cases a), b), c)...", but if there are enough of these cases and they are complex enough, then in practice people round it off to "X is true, except when it is not", i.e. they don't really understand what is going on, as you've pointed out.

The point was that there are advantages to creating a self-conscious ideology that isn't literally true but has known flaws, such as it becoming much easier to actually explain, so that people don't end up confused as above.

In other words, as far as I can tell, your comment isn't really responding to what I wrote.