
In response to Inscrutable Ideas
Comment author: Viliam 05 August 2017 02:30:32PM 4 points

Here is my attempt to summarize "what the meta-rationalists are trying to tell rationalists", as I understood it from the previous discussion, this article, and some articles linked by this article, plus some personal attempts to steelman:

1) Rationalists have a preference for living in far mode, that is, studying things instead of experiencing them. They may not endorse this preference explicitly, they may even verbally deny it, but this is what they typically do. It is not a coincidence that so many rationalists complain about akrasia; motivation resides in near mode, which is where rationalists spend very little time. (And the typical reaction of a rationalist facing akrasia is: "I am going to read yet another article or book about the 'procrastination equation'; hopefully that will teach me how to become productive!", which is like trying to become fit by reading yet another book on fitness.) At some point you need to stop learning and start actually doing things, but rationalists usually find yet another excuse for learning a bit more, and there is always something more to learn. They even consider this approach a virtue.

Rationalists are also more likely to listen to people who got their knowledge from studying, as opposed to people who got their knowledge by experience. Incoming information must at least pretend to be scientific, or it will be dismissed without a second thought. In theory, one should update on all available evidence (although not equally strongly), and not double-count any. In practice, one article containing numbers or an equation will always beat unlimited amounts of personal experience.

2) Despite admitting verbally that a map is not the territory, rationalists hope that if they take one map, and keep updating it long enough, this map will asymptotically approach the territory. In other words, that in every moment, using one map is the right strategy. Meta-rationalists don't believe in the ability to update one map sufficiently (or perhaps just sufficiently quickly), and intentionally use different maps for different contexts. (Which of course does not prevent them from updating the individual maps.) As a side effect of this strategy, the meta-rationalist is always aware that the currently used map is just a map; one of many possible maps. The rationalist, having invested too much time and energy into updating one map, may find it emotionally too difficult to admit that the map does not fit the territory, when they encounter a new part of territory where the existing map fits poorly. Which means that on the emotional level, rationalists treat their one map as the territory.

Furthermore, meta-rationalists don't really believe that if you take one map and keep updating it long enough, you will necessarily asymptotically approach the territory. First, the incoming information is already interpreted by the map in use; second, the instructions for updating are themselves contained in the map. So it is quite possible that different maps, even after updating on tons of data from the territory, would still converge towards different attractors. And even if, hypothetically, given infinite computing power, they would converge towards the same place, it is still possible that they will not come sufficiently close during one human life, or that a sufficiently advanced map would not fit into a human brain. Therefore, using multiple maps may be the optimal approach for a human. (Even if you choose "the current scientific knowledge" as one of your starting maps.)
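The "different attractors" point can be made concrete with a toy sketch (my own illustration, with made-up numbers, not anything from the original discussion): model a "map" as a fixed set of hypotheses about a coin's bias. If neither map contains the true bias, honest Bayesian updating on the very same data drives each map to a different confident conclusion.

```python
import random

random.seed(0)
TRUE_P = 0.6  # the "territory": a coin whose bias neither map contains
flips = [random.random() < TRUE_P for _ in range(2000)]

def posterior(hypotheses, data):
    """Bayesian updating confined to one 'map' (a fixed hypothesis set)."""
    probs = {h: 1.0 / len(hypotheses) for h in hypotheses}
    for heads in data:
        # multiply in the likelihood of this flip under each hypothesis
        probs = {h: p * (h if heads else 1 - h) for h, p in probs.items()}
        total = sum(probs.values())
        probs = {h: p / total for h, p in probs.items()}  # renormalize
    return probs

map_a = posterior([0.3, 0.7], flips)   # map A: coin is either 0.3 or 0.7
map_b = posterior([0.5, 0.55], flips)  # map B: coin is either 0.5 or 0.55

best_a = max(map_a, key=map_a.get)  # map A settles on 0.7
best_b = max(map_b, key=map_b.get)  # map B settles on 0.55
```

Both maps see the same 2000 flips and each ends up nearly certain, yet of different things; more data only entrenches each map's own attractor rather than pulling them together.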

3) There is an "everything of everythings", exceeding all systems, something like the highest-level Tegmark multiverse only much more awesome, which is called "holon", or God, or Buddha. We cannot approach it in far mode, but we can... somehow... fruitfully interact with it in near mode. Rationalists deny it because their preferred far-mode approach is fruitless here. But you can still "get it" without necessarily being able to explain it in words. Maybe it is actually inexplicable in words in principle, because the only sufficiently good explanation for holon/God/Buddha is the holon/God/Buddha itself. If you "get it", you become a Kegan-level-5 meta-rationalist, and everything will start making sense. If you don't "get it", you will probably construct some Kegan-level-4 rationalist verbal argument for why it doesn't make sense at all.

How well did I do here?

In response to comment by Viliam on Inscrutable Ideas
Comment author: gworley 06 August 2017 12:47:22AM 3 points

Thanks for taking the time to engage in this way! I'm really enjoying discussing these ideas here lately.

1 and 2 are spot on as long as we keep in mind we're talking about patterns in the behavior of LW rationalists and not making categorical claims about ideal rationalists. I don't think you are; it's just an important distinction that has proven helpful to point out so I don't end up interpreted as saying ridiculous things like "yes, every last damn rationalist is X".

Your take on 3 is about as good as I've got. I continue to try to figure this out, because all I have now is a strong hunch that there is some kind of interconnectedness of things that runs so deep you can't escape it, and my strongest evidence in favor is that it looks like we have to conclude this because all other metaphysics fail to fully account for reality as we find it. But I could easily be wrong, because I expect there are more things I don't notice I'm assuming, or there is reasoning I've made that is faulty.

[Link] Inscrutable Ideas

1 gworley 04 August 2017 08:55PM
Comment author: MrMind 31 July 2017 03:06:16PM 1 point

But if meta-rationality is inscrutable for rationality, how do you know it even exists? At least Bayesian rationalists have some solace in Cox's theorem, or the coherence theorem, or the Church-Turing thesis. What stops me from declaring there's a sigma-rationality, which is inscrutable by all n-rationalities below it? What does meta-rationality even imply for the real world?

Comment author: gworley 31 July 2017 09:08:25PM 0 points

But if meta-rationality is inscrutable for rationality, how do you know it even exists?

You see the holes rationality doesn't fill and the variables it doesn't constrain and then you go looking for how you could fill them in and constrain them.

What stops me from declaring there's a sigma-rationality, which is inscrutable by all n-rationalities below it?

Nothing. We are basically saying we're in the position of applying a theory of types to ontology, and meta-rationality is just one layer higher than rationality. We could of course go on forever, but since we are bounded, that's not an option. There is of course some kind of meta-meta-rationality ontological type, and so on up for any n, but working with it is another matter.

But once you realize you're in this position, you notice that type theory doesn't work so well and maybe you want something else instead. Maybe the expressive power of self-referential theories isn't so bad after all, although when working with these theories it's pretty helpful if you can work out a few layers of self-reference before trying to collapse them, because otherwise you definitely can't hope to notice when you've switched between consistency and completeness.

Comment author: Viliam 29 July 2017 04:28:40PM 0 points

I'm not sure we can really hope to make meta-rationality scrutable.

How about making an "ideological Turing test"? If rationalists could successfully pretend to be meta-rationalists, would that count as a refutation of the claim that meta-rationalists understand things that are beyond understanding of mere rationalists?

Or is even this just a rationalist-level reasoning that from a meta-rationalist point of view makes about as much sense as a hypothetical pre-rationalist asking rationalists to produce a superior horoscope?

Comment author: gworley 31 July 2017 08:59:38PM 0 points

At first I was going to say "yes" to your idea, but with the caveat that the only folks I'd trust to judge this are other folks we'd agree are meta-rationalists. But then this sort of defeats the point, doesn't it, because I already believe rationalists couldn't do this, and if they did, it would in fact be evidence that, even if they don't call themselves meta-rationalists, they have thought processes similar to those who do.

"Rationalist" and "meta-rationalist" are mostly categories for describing stochastic clusters around the complexity of thinking people do. No one properly is or is not a rationalist or meta-rationalist, but instead can at best be sufficiently well described as one.

I don't mean this to be wily: I think what you are asking for (and the entire idea of an "ideological Turing test" itself) confounds causality in ways that make it only seem to work from rationalist-level reasoning. From my perspective, the taking on of another's perspective in this test is already incorporated into meta-rationalist-level reasoning, and so is not really a test of meta-rationality, in the same way a "logical argument test" would be meaningless to a rationalist but a powerful tool for more complex thought for the pre-rationalist.

Comment author: Viliam 29 July 2017 04:16:12PM 1 point

So there's no "quantum leap" that is promised by meta-rationalists, or am I missing something?

There is no molehill too small to make a mountain out of. But there are at least two things I noticed you missed here:

First, your description of rationalists is too charitable. On meta-rationalist websites they are typically described as unable to reason about systems, not understanding that their map is not the territory, prone to wishful thinking, and generally as what we call "Vulcan rationalists". (Usually with a layer of plausible deniability, e.g. on one page it is merely said that rationalists are a subset of "eternalists", with a hyperlink to another page that describes "eternalists" as having the aforementioned traits. Each of these claims can be easily defended separately, considering that "eternalists" is a made-up word.) With rationalists defined this way, it is easy to see how the other group is superior.

Second, you missed the implication that people disagreeing with meta-rationality are just immature children. There is a development scale from 0 to 5, where meta-rationalists are at level 5, rationalists are at level 4, and everyone else is at one of the lower levels.

Another way to express this is the concept of fluidity/nebulosity/whatever, which works like this: You make a map, and place everyone you know at some specific point on this map. (You can then arrange them into groups, etc.) The important part is that you refuse to place yourself on this map; instead you insist that you are always freely choosing the appropriate point to use in a given situation, thus getting all the advantages and none of the disadvantages, while everyone else is just hopelessly stuck at their one point. This obviously makes you the coolest guy in town -- of course until someone else comes along with their map, where you get stuck at one specific point, and they get to be the one above the map. (In some sense, this is what Eliezer also tried with his "winning" and "nameless virtue", only to get reduced to "meh, Kegan level 4" regardless.)

Comment author: gworley 31 July 2017 08:43:52PM 0 points

While I am sad you've gotten this impression of what we're here calling meta-rationality, I also don't have a whole lot to say to convince you otherwise. We have often been foolish when first exploring these ideas and have written about them in ways that do have status implications, and I think we've left a bad taste in everyone's mouths over it. Plus there's an echo of the second-hand post-modernists' tendency to view themselves as better than everyone else (although to be fair this is nothing new in intellectualism; it's just the most recent version of it that has a similar form).

That said, I do want to address one point you bring up because it might be a misunderstanding of the meta-rationalist position.

The important part is that you refuse to place yourself on this map; instead you insist that you are always freely choosing the appropriate point to use in a given situation, thus getting all the advantages and none of the disadvantages; while everyone else is just hopelessly stuck at their one point.

I'm not sure who thinks they have this degree of freedom, but the genesis of the meta-rationalist epistemology is that the map is part of the territory, and thus the map is constrained by the territory and not by an external desire for correspondence or anything else. Thus where we are in the territory greatly influences the kind of map we can draw, to the point that we cannot even hope to draw what we might call an ideal map, because all maps will necessarily carry assumptions imposed by the place of observation.

This doesn't mean that we can always choose whatever perspective to use in a given situation. Rather, we must acknowledge the non-primacy of any particular perspective (unless we impose a purpose against which to judge). We can then, from the relatively small part of the territory from which we can observe to draw our map, use information provided to us by the map to reasonably simulate how the map would look if we could view the territory from a different place, and then update our map based on this implied information.

To me it seems rationalists/scientists/theologians/etc. are the ones who have the extra degree of freedom, because, although from the inside they restrict themselves to a particular perspective judged on some desirable criteria, those criteria are chosen without being fully constrained, and thus between individuals there is no mechanism of consensus if their preferences disagree. But I understand that from the rationalist perspective this probably looks reversed, because by taking the thing that creates different perspectives and putting it in the map, a seemingly fundamental preference disagreement becomes part of the perspective.

(In some sense, this is what Eliezer also tried with his "winning" and "nameless virtue", only to get reduced to "meh, Kegan level 4" regardless.)

I think there are plenty of things in LW rationality that point to meta-rationality, and I think that's why we're engaged with this community and many people have come to the meta-rationality position through LW rationality (hence why it's even being called that, among other names like post-rationality). That said, interacting with many rationalists (or, if we were all being more humble, what we might call aspiring rationalists) and talking to them, they express having at most episteme of ideas around "winning" and "nameless virtue", and not gnosis. The (aspiring) meta-rationalists are claiming they do have gnosis here, though to be fair we're mostly offering doxa as evidence, because we're still working on having episteme ourselves.

This need not be true of all self-identified rationalists, of course, but if we are trying to make a distinction between views people seem to hold within the rationalist discourse, and "rationalist" is the self-identification term used by many people on one side of the distinction, then choosing another name for those of us who wish to identify on the other side seems reasonable. I myself now try to avoid categorization of people and instead focus on categorization of thought in the language I use to describe these ideas, although I've not done that here, to remain anchored on the terms already in use in this discussion. I instead like to talk about people thinking in particular ways, and the limits those ways of thinking have, since we don't make our thinking, so to speak, but our thinking makes us. This better reflects the way I actually think about these concepts, but unfortunately the most worked-out ideas in meta-rational discourse are not evenly distributed yet.

Comment author: kvas 28 July 2017 02:30:32PM 2 points

I've read the article and then also A first lesson in meta-rationality but I must confess I still have no idea what he's talking about. The accusations of inscrutability seem to be spot on.

Perhaps I should read more about meta-rationality to get it, but just to keep me motivated, can anyone explain in simple terms what the deal is about, or perhaps give me an example of meta-rationalist belief that rationalists don't share?

Comment author: gworley 28 July 2017 06:14:08PM 1 point

I'd say the biggest difference you'll notice that affects the most things is the change in epistemology.

Rationalist epistemology and the epistemology of other similar "rational" systems of thought (cf. scientism, theology) assume there is a single correct way of understanding the world, with rationalists perhaps having the high ground in viewing the project as finding the correct epistemology regardless of what it implies.

The meta-rationalist/post-modern position is that this is not possible, because epistemology necessarily influences ontology, so we cannot possibly have a single "correct" understanding of the world. In this view, an epistemology and the ontology it produces can at best be useful to some telos (purpose), but we cannot assign one the prime position as the "correct" ontology for metaphysical reality, because we have no way to decide what "correct" is that is independent of the epistemology in which we develop our understanding of "correct". Thus the epistemology of rationality, which seems to target most accurately predicting reality based on known information, is but one useful way of understanding the world within the meta-rationalist/post-modern view, and others may be more useful for serving other purposes.

Both stand in contrast to the pre-rational approach to epistemology which does not assume everything is knowable and will accept mystery where explanation is not available.

Not sure if that really achieves the "simple terms" aim, so maybe I can put it like this:

The pre-rational person can't know some things. The rational person doesn't know some things. The meta-rational person knows they can't know some things.

Comment author: phonypapercut 26 July 2017 06:55:28AM 2 points

I'd go further, and say it's grossly narcissistic and hypocritical. The framing of nerds vs. non-nerds is itself an example of the described mode of communication.

Comment author: gworley 27 July 2017 05:45:37PM 3 points

I read both this comment and the parent comment as taking the OP in bad faith. Bound_up has taken the time to share their thinking with us and, while there may be an offensive interpretation of the post, it violates the discourse norms I'd at least like to see here to outright dismiss something as "bad". Some of the other comments under the parent comment make this a bit clearer, but even the most generous interpretations I can find of many of these comments lack much more content than "shut up OP".

Comment author: gworley 27 July 2017 05:25:10PM 0 points

(cross-posting my comment on this from the original because I think it might be of more interest here)

I might write a more detailed response along these lines depending on where my thinking takes me, but I've previously thought about this issue and after thinking about it more since reading this yesterday it still seems to me that meta-rationality is specifically inscrutable because it needs meta-rationality to explain itself.

In fairness this is also a problem for rationality, because it can't really explain itself in terms of pre-rationality, and from what I can tell we don't actually know very well how to teach rationality either. STEM education mostly seems to teach some of the methods of rationality, like how to use logic to manipulate symbols, but tends to do so in a way that ends up domain-restricted. Most STEM graduates are still pre-rational thinkers in most domains of their lives, though they may dress up their thoughts in the language of rationality, and this is specifically what projects like LessWrong are all about: getting people to at least be actually rational rather than pre-rational in rationalist garb.

But even with CFAR and other efforts, LW seems to be only marginally more successful than most: I know a lot of LW/CFAR folks who have read, written, and thought about rationality a lot, and they still struggle with many of the basics, not only to adopt the rationalist world view but even to stop using the pre-rationalist world view and notice when they don't understand something. To be fair, marginal success is all LW needed to achieve to satisfy its goal of producing a supply of people capable of doing AI safety research, but I think it's telling that even a project so directed at making rationality learnable has only been marginally successful, and from what I can tell not by making rationality scrutable but by creating lots of opportunities for enlightenment.

Given that we don't even have a good model of how to make rationality truly scrutable, I'm not sure we can really hope to make meta-rationality scrutable. What seems to me more likely is that we can work to find ways of not explaining meta-rationality but training people into it. Of course this is already what you're doing with Meaningness, but it's also for this reason I'm not sure we can do more than what Meaningness has so far been working to accomplish.

Comment author: gworley 27 July 2017 05:30:28PM 0 points

Maybe the short version of this is: meta-rationalists can't do what rationalists ask, but that's okay because neither can rationalists perform the analogous task for pre-rationalists, so asking meta-rationality to be explicable in terms of rationality is epistemically unfair and asking for too much proof.

[Link] Ignorant, irrelevant, and inscrutable (rationalism critiques)

1 gworley 25 July 2017 09:57PM
