My favorite self-conscious ideology is Continental Rationalism. The main idea is that the world is built on logic and harmony which can be understood by an individual human mind. It was born from religious mysticism (Descartes, Leibniz) but somehow became very fruitful in science. Competing ideologies like "the world is built on chance" or "all understanding is social" don't bear nearly as much fruit, though they sound more sophisticated. Maybe it's because they discourage you from trying to understand things by reason, while CR encourages it. Heck, I think even LW ideology loses out to CR, because self-improvement feels like a grind, while understanding the world feels like a quest. Maybe if LW focused for a while on understanding things instead of defeating akrasia and such, it'd be a happier place.
Maybe if LW focused for a while on understanding things instead of defeating akrasia and such, it'd be a happier place.
Completely agree. (Username is not by chance.)
Can you add any more detail about what precisely Continental Rationalism is? Or, even better, if you have time, it's probably worth writing up a post on this.
The main idea is that the world is built on logic and harmony which can be understood by an individual human mind. It was born from religious mysticism (Descartes, Leibniz)
Erm, Pythagoras was around a lot earlier than the likes of Descartes or Leibniz. Even the competing ideas that "the world is built on chance" or else that "all understanding is social" (or, to put it another way, "man is the measure of all things") are of comparable antiquity and not really more 'sophisticated' in any relevant way - except perhaps in an overly literal sense, being more conducive to "sophistry"!
I think it would be very worthwhile to study which assumptions are actually shared. We could have a poll where we list 50 assumptions and everyone states on a Likert scale to what extent they agree.
It would also be interesting to see whether there are other clusters besides a basic "rational ideology" cluster.
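As a toy sketch of how that analysis might look (the respondent counts and data here are made up, and scikit-learn's KMeans is just one possible choice of clustering method):

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical poll data: 200 respondents rate 50 assumptions on a 1-5 Likert scale.
rng = np.random.default_rng(0)
responses = rng.integers(1, 6, size=(200, 50))

# Group respondents by their agreement profiles; more than one sizable cluster
# would hint at sub-ideologies beyond a single "rational ideology" core.
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(responses)
print(np.bincount(kmeans.labels_))  # respondents per cluster
```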
If you take a random set of people, they will have various beliefs, and some of those will be more common than others. Calling that an ideology seems unfair. By the way, all beliefs have criticisms and yet some beliefs are more correct than others.
Also, "it's likely that some of the beliefs I hold are wrong" is already one rationalist assumption, or at least it should be. What are you adding to that?
It's not about fairness.
Being self-conscious of the beliefs that one holds and that one uses to operate is useful.
You reminded me of a tangentially related post idea I want someone to steal: "Ideologies as Lakatosian Research Programmes".
Just as people doing science can see themselves as working within a scientific research programme, people doing politics can see themselves as working within a political research programme. Political research programmes are scientific/Lakatosian research programmes generalized to include normative claims as well as empirical ones.
I expect this to have some (mildly) interesting implications, but I haven't got round to extracting them.
You've already been scooped. The "research programme" that Lakatos talks about was designed to synthesize the views of Kuhn and Popper, but Kuhn himself modeled his revolutionary science after constitutional crises, and his paradigm shifts after political revolutions (and, perhaps more annoyingly to scientists, religious conversions). Also, part of what was so controversial (at the time) about Kuhn, was the prominence he gave to non-epistemic (normative, aesthetic, and even nationalistic) factors in the history of science.
Did Kuhn (or Popper or Lakatos) spell out substantial implications of the analogy? A lot of the interest would come from that, rather than the fact of the analogy in itself.
I think Eliezer once wrote something about things becoming clearer when you think about how you would program a computer to do it, as opposed to e.g. just throwing some applause lights to a human. So, how specifically would you implement this kind of belief in a computer?
Also, should we go meta and say: "'Rationality gives us a better understanding of the world, except when it does not' is a good ideology, except when it is worse" et cetera?
What exactly would that actually mean? (Other than verbally shielding yourself from criticism by endless "but I said 'except when not'".) Suppose a person A believes "there is a 80% probability it will rain tomorrow", but a person B believes "there is a 80% probability it will rain tomorrow, except if it is some different probability". I have an idea about how A would bet about tomorrow's weather, but how would B?
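To make the contrast concrete, here is a minimal sketch (the stakes and payouts are invented for illustration): a crisp credence of 0.8 tells A exactly which bets have positive expected value, while B's hedged statement yields no number to plug in.

```python
def expected_profit(p_rain, win, lose):
    """Expected profit of a bet that pays `win` if it rains and costs `lose` if not."""
    return p_rain * win - (1 - p_rain) * lose

p_a = 0.8  # A's stated credence that it rains tomorrow
print(expected_profit(p_a, win=1, lose=3))  # +0.2, so A takes this bet
print(expected_profit(p_a, win=1, lose=5))  # -0.2, so A declines
# B's "0.8, except if it is some different probability" supplies no p to plug in,
# so B cannot evaluate either bet.
```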
I think Eliezer once wrote something about things becoming clearer when you think about how you would program a computer to do it, as opposed to e.g. just throwing some applause lights to a human. So, how specifically would you implement this kind of belief in a computer?
First solve natural language...
No one has used Eliezer's technique much, and there may be a reason for that.
"'Rationality gives us a better understanding of the world, except when it does not"
I provided this as an exaggerated example of how aiming for absolute truth can mean that you produce an ideology that is hard to explain. More realistically, someone would write something along the lines of "rationality gives us a better understanding of the world, except in cases a), b), c)...", but if there are enough of these cases and the cases are complex enough, then in practice people round it off to "X is true, except when it is not", i.e. they don't really understand what is going on, as you've pointed out.
The point was that there are advantages to creating a self-conscious ideology that isn't literally true but has known flaws, such as it being much easier to actually explain, so that people don't end up confused as above.
In other words, as far as I can tell, your comment isn't really responding to what I wrote.
Operating outside of ideology is extremely hard, if not impossible. Even groups that see themselves as non-ideological still seem to end up operating within an ideology of some sort.
Take for example Less Wrong. It seems to operate within a few assumptions:
...
These assumptions are also subject to some criticisms. Here's one criticism for each of the previous points:
I could continue discussing assumptions and possible criticisms, but that would be a distraction from the core point, which is that there are advantages to having a concrete ideology that is aware of its own limitations, as opposed to an implicit ideology that is beyond all criticism.
Self-conscious ideologies also have other advantages: