# Meta-rationality

-13 10 October 2012 02:21AM

I've seen there's discussion on LW about rationality, namely, about what it means. I don't think a satisfactory answer can be found without defining what rationality is not. And this seems to be a problem. As far as I know, rationality on LW does not include systematic methods for categorizing and analyzing irrational things. Instead, the discussion seems to draw a circle around rationality. Everyone on LW is expected to be inside this circle - think of it as a set in a Venn diagram. On the border of the circle there is a sign saying: "Here be dragons". And beyond the circle there is irrationality.

How can we differentiate the irrational from the rational, if we do not know what the irrational is?

But how can we approach the irrational, if we want to be rational?

It seems to me there is no way to give a satisfactory account of rationality from within rationality itself. If we presuppose rationality is the only way to attain justification, and then try to find justification for rationalism (the doctrine according to which we should strive for rationality), we are simply making a circular argument. We already presupposed rationalism before trying to find justification for doing so.

Therefore it seems to me we ought to make a metatheory of rationality in order to find out what is rational and what is irrational. The metatheory itself has to be as rational as possible. That would include having an analytically defined structure, which permits us to at least examine whether the metatheory is logically consistent or inconsistent. This would also allow us to examine whether the metatheory is mathematically elegant, or whether the same thing could be expressed in a simpler form. The metatheory should also correspond with our actual observations so that we could figure out whether it contradicts empirical findings or not.

How much interest is there for such a metatheory?

Comment author: 10 October 2012 04:00:39AM *  5 points [-]

As luck would have it, I always land on the following page when I start typing "less..." in my browser. http://lesswrong.com/lw/31/what_do_we_mean_by_rationality/

I find it useful to consider epistemic rationality a subtype of instrumental rationality, and to identify other types of instrumental rationality, such as social rationality.

Comment author: 10 October 2012 01:25:03PM -3 points [-]

Yudkowsky says:

So if you understand what concept we are generally getting at with this word "rationality", and with the sub-terms "epistemic rationality" and "instrumental rationality", we have communicated: we have accomplished everything there is to accomplish by talking about how to define "rationality". What's left to discuss is not what meaning to attach to the syllables "ra-tio-na-li-ty"; what's left to discuss is what is a good way to think.

With that said, you should be aware that many of us will regard as controversial - at the very least - any construal of "rationality" that makes it non-normative:

For example, if you say, "The rational belief is X, but the true belief is Y" then you are probably using the word "rational" in a way that means something other than what most of us have in mind. (E.g. some of us expect "rationality" to be consistent under reflection - "rationally" looking at the evidence, and "rationally" considering how your mind processes the evidence, shouldn't lead to two different conclusions.) Similarly, if you find yourself saying "The rational thing to do is X, but the right thing to do is Y" then you are almost certainly using one of the words "rational" or "right" in a way that a huge chunk of readers won't agree with.

A normative belief in rationality is, as far as I can tell, not possible for someone who does not have a clear conception of what rationality is. I am trying to present tools for forming such a conception. The theory I am presenting is, most accurately, a rationally constructed language, not a prescriptive theory on whether it is moral to be rational. The merit of this language is that it should allow you to converse about rationality with mysticists or religious people so that you both understand what you are talking about. It seems to me the ID vs. evolution debate remains unresolved among the general public (in the USA) because neither side has managed to speak the same language as the other side. My language is not formally defined in the sense of being a formal language, but it has formally defined ontological types.

Comment author: 10 October 2012 06:13:13PM *  6 points [-]

The merit of this language is that it should allow you to converse about rationality with mysticists or religious people so that you both understand what you are talking about.

I think the most you can hope for is a model of rationality and irrationality that can model mysticists or religious people as well as rationalists. I don't think you can expect everyone to grok that model. That model may not be expressible in a mysticist's model of reality.

How can we differentiate the irrational from the rational, if we do not know what the irrational is?

Irrationality is just less instrumentally rational - less likely to win. You seem to have split rational and irrational into two categories, and I think this is just a methodological mistake. To understand and compare the two, you need to put both on the same scale, and then show how they have different measures on that scale.

Also, now that I look at more of your responses, it seems that you have your own highly developed theory, with your own highly developed language, and you're speaking that language to us. We don't speak your language. If you're going to try to talk to people in a new language, you need to start simple, like "this is a ball", so that we have some meaningful context from which to understand "I hit the ball."

Quickly thereafter, you have to demonstrate, and not just assert, some value to your language to motivate any readers you have to continue learning your language.

Comment author: 11 October 2012 08:58:21AM *  -1 points [-]

I think the most you can hope for is a model of rationality and irrationality that can model mysticists or religious people as well as rationalists. I don't think you can expect everyone to grok that model. That model may not be expressible in a mysticist's model of reality.

Agree. The Pirahã could not use my model because abstract concepts are banned in their culture. I read in New Scientist that outsiders tried to teach them numbers so that they wouldn't be cheated in trade so much, but upon gaining some insight into what a number is, they refused to think that way. The analytic Metaphysics of Quality (my theory) would say that the Pirahã do not use transcendental language. They somehow know what it is and avoid it despite not having a name for it in their language, which has only a few words.

The point is not to have everyone grok this model, but to use this model to explain reality. The differences between the concepts of "abstract" and "concrete" have been difficult for philosophers to sort out, but in this case Pirahã behavior seems to be adequately explicable using the concepts of "natural quality" and "transcendental quality" in the analytic Metaphysics of Quality.

Irrationality is just less instrumentally rational - less likely to win. You seem to have split rational and irrational into two categories, and I think this is just a methodological mistake. To understand and compare the two, you need to put both on the same scale, and then show how they have different measures on that scale.

Do you mean by "irrationality" something like a biased way of thinking whose existence can be objectively determined? I don't mean that by irrationality. I mean things whose existence has no rational justification, such as stream of consciousness. Things like dreams. If you are in a dream, and open your (working) wrist watch, and find out it contains coins instead of clockwork, and behave as if that were normal, there is no rational justification for you doing so - at least none that you know of while dreaming.

Also, now that I look at more of your responses, it seems that you have your own highly developed theory, with your own highly developed language, and you're speaking that language to us. We don't speak your language. If you're going to try to talk to people in a new language, you need to start simple, like "this is a ball", so that we have some meaningful context from which to understand "I hit the ball."

You're perfectly right. I'd like to go for the dialogue option, but obviously, if it's too exhausting for you because my point of view is too remote, nobody will participate. That's all I'm offering right now, though - dialogue. Maybe something else later, maybe not. I've had some fun already despite losing a lot of "karma".

The problem with simple examples is that, for example, I'd have to start a discussion on what is "useful". It seems to me the question is almost the same as "What is Quality?" The Metaphysics of Quality insists that Quality is undefinable. Although I've noticed some on LW have liked Pirsig's book Zen and the Art of Motorcycle Maintenance, it seems this would already cause a debate in its own right. I'd prefer not to get stuck on that debate and risk missing the chance of saying what I actually wanted to say.

If that discussion, however, is necessary, then I'd like to point out that irrational behavior, that is, a somewhat uncritical habit of doing the first thing that pops into my mind, has been very useful for me. It has improved my efficiency in doing things I could rationally justify despite not actually performing the justification except rarely. If I am behaving that way, without keeping any justifications in mind, I would say I am operating in the subjective or mystical continuum. When I do produce the justification, I do it in the objective or normative continuum by having either one of those emerge from the earlier subjective or mystical continuum via strong emergence. But I am not being rational before I have done this, in spite of ending up with results that later appear rationally good.

EDIT: Moved this post here upon finding out that I can reply to this comment. This 10 minute lag is pretty inconvenient.

Comment author: 10 October 2012 03:58:45PM 3 points [-]

It seems to me the ID vs. evolution debate remains unresolved among the general public (in the USA) because neither side has managed to speak the same language as the other side.

If neither side accepts the other side's language as meaningful, why do you believe they would accept the new language?

Somehow related: http://xkcd.com/927/

Comment author: 11 October 2012 08:38:14AM *  0 points [-]

If neither side accepts the other side's language as meaningful, why do you believe they would accept the new language?

Somehow related: http://xkcd.com/927/

That's a very good point. Gonna give you +1 on that. The language, or type system, I am offering has the merit that no such type system has been devised before. I stick to this unless proven wrong.

Academic philosophy has its good sides. Rescher's "vagrant predicates" are an impressive and pretty recent invention. I also like confirmation holism. But as far as I know, nobody has tried to do an ontology with the following features:

• Is analytically defined
• Explains both strong and weak emergence
• Precision of conceptual differentiation can be expanded arbitrarily (in this case by splitting continua into a greater amount of levels)
• Includes its own incompleteness as a non-well-formed set (Dynamic Quality)
• Uses an assumption of symmetry to figure out the contents and structure of irrational ontological categories which are inherently unable to account for their structure, with no apparent problems

Once you grasp the scope of this theory I don't think you'll find a simpler theory to include all that meaningfully - but please do tell me if you do. I still think my theory is relatively simple when compared to quantum mechanics, except that it has a broad scope.

In any case, the point is that on a closer look it appears that my theory has no viable competition; hence, it is the first standard and not the 15th. No other ontology attempts to fit this broad a scope into a formal model.

Comment author: 10 October 2012 07:12:21PM *  1 point [-]

ID vs. evolution debate remains unresolved among the general public (in the USA) because neither side has managed to speak the same language as the other side

Those are the labels used to describe the issue by the participants. But taking an outside view, the issue is inconsistent principles between the two sides. The fact that true religious believers reject the need for beliefs to pay rent in anticipated experience won't be solved by new vocabulary.

Comment author: 10 October 2012 03:33:50AM 7 points [-]

Must ... not ... respond ...

Comment author: 10 October 2012 01:35:37PM *  -1 points [-]

If you respond to that letter, I will not engage in conversation, because the letter is a badly written outdated progress report of my work. The work is now done, it will be published as a book, and I already have a publisher. If you want to know when the book comes out, you might want to join this Facebook community.

Comment author: 26 October 2012 06:53:14AM 3 points [-]

A bit late to this, but I think I figured out what the basic problem here is: Robert Pirsig is an archer, while LW (and folk like Judea Pearl, Gary Drescher and Marcus Hutter) are building hot-air balloons. And we're talking about doing a Moon shot, building an artificial general intelligence, here.

Archers think that if they get their bowyery really good and train to shoot really, really well, they might eventually land an arrow on the Moon. Maybe they'll need to build some kind of ballista type thing that needs five people to draw, but archery is awesome at skewering all sorts of things, so it should definitely be the way to go.

Hot-air balloonists on the other hand are pretty sure bows and arrows aren't the way to go, despite balloons being a pretty recent invention while archery has been practiced for millennia and has a very distinguished pedigree of masters. Balloons seem to get you higher up than you can get things to go with any sort of throwing device, even one of those fancy newfangled trebuchet things. Sure, nobody has managed to land a balloon on the Moon either, despite decades of trying, so obviously we're still missing something important that nobody really has a good idea about.

But it does look like figuring out how stuff like balloons work and trying to think of something new along similar lines, instead of developing a really good archery style is the way to go if you want to actually land something on the Moon at some point.

Comment author: 08 January 2013 12:48:22PM *  0 points [-]

Would you find a space rocket to resemble either a balloon or an arrow, but not both?

I didn't imply something Pirsig wrote would, in and of itself, have much to do with artificial intelligence.

LessWrong is like a sieve that only collects stuff that looks like what I need, but on a closer look isn't. You won't come until the table is already set. Fine.

Comment author: 09 January 2013 03:36:05AM 0 points [-]

Would you find a space rocket to resemble either a balloon or an arrow, but not both?

The point is that the people who build it will resemble balloon-builders, not archers. People who are obsessed with getting machines to do things, not people who are obsessed with human performance.

Comment author: 29 January 2013 05:10:53PM -1 points [-]

My work is a type theory for AI for conceptualizing the input it receives via its artificial senses. If it weren't, I would have never come here.

The conceptualization faculty is accompanied by a formula for making moral evaluations, which is the basis of advanced decision making. Whatever the AI can conceptualize, it can also project as a vector on a Cartesian plane. The direction and magnitude of that vector are the data used in this decision making.

The actual decision making algorithm may begin by making random decisions and filtering good decisions from bad with the mathematical model I developed. Based on this filtering, the AI would begin to develop a self-modifying heuristic algorithm for making good decisions and, in general, for behaving in a good manner. What the AI would perceive as good behavior would of course, to some extent, depend on the environment in which the AI is placed.
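A minimal sketch of that bootstrap loop might look like the following. Everything here is hypothetical: `evaluate` is a stand-in for the moral-evaluation formula (which the thread only outlines), and the action set, names, and reinforcement factors are illustrative choices, not anything specified in the comment.

```python
import random

def learn(actions, evaluate, rounds=1000, seed=0):
    """Hypothetical sketch: try random actions, and let the model's
    evaluation gradually shape a preference (a crude 'heuristic')."""
    random.seed(seed)
    weights = {a: 1.0 for a in actions}
    for _ in range(rounds):
        # Pick an action at random, biased toward what has worked so far.
        action = random.choices(list(actions),
                                weights=[weights[a] for a in actions])[0]
        # 'evaluate' stands in for the mathematical model's verdict.
        if evaluate(action) > 0:
            weights[action] *= 1.1   # reinforce good decisions
        else:
            weights[action] *= 0.9   # suppress bad ones
    return weights
```

Run against a toy evaluator that favors one action, the weights drift toward that action; the "self-modifying" part of the proposal would presumably replace the fixed update factors with rules the system revises itself.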

If you had an AI making random actions and changing its behavior according to heuristic rules, it could learn things in a similar way as a baby learns things. If you're not interested in that, I don't know what you're interested in.

I didn't come here to talk about some philosophy. I know you're not interested in that. I've done the math, but not the algorithm, because I'm not much of a coder. If you don't want to code a program that implements my mathematical model, that's no reason to give me -54 karma.

Comment author: 29 January 2013 07:06:00PM -3 points [-]

I really don't understand why you don't want a mathematical model of moral decision making, even for discussion. "Moral" is not a philosophical concept here. It is just the thing that makes some decisions better than others. I didn't have the formula when I came here in October. Now I have it. Maybe later I will have something more. And all you can do, with the exception of Risto, is to give me -1. Can you recommend a transhumanist community?

How do you expect an AI to be rational, if you yourselves don't want to be metarational? Do you want some "pocket calculator" AI?

Too bad you don't like philosophical concepts. I thought you knew computer science is oozing over philosophy, which has all but died on its feet as far as academia is concerned.

One thing's for sure: you don't know whack about karma. The AI could actually differentiate karma, in the proper sense of the word, from "reputation". You keep playing with your lego blocks until you grow up.

It would have been really neat to do this on LessWrong. It would have made for a good story. It would have also been practical. Academia isn't interested in this - there is no academic discipline for studying AI theory at this level of abstraction. I don't even have any AI expertise, and I didn't intend to develop a mathematical model for AI in the first place. That's just what I got when I worked on this for long enough.

I don't like stereotypical LessWrongians - I think they are boring and narrow-minded. I think we could have had something to do together despite the fact that our personalities don't make it easy for us to be friends. Almost anyone with AI expertise is competent enough to help me get started with this. You are not likely to get a better deal for getting famous by doing so little work. But some deals of course seem too good to be true. So call me the "snake oil man" and go play with your legos.

Comment author: 15 February 2013 09:33:58AM -1 points [-]

In any case, this "hot-air balloonist vs. archer" (POP!) comparison seems like some sort of ad hominem fallacy, and that's why I reacted with an ad hominem attack about legos and stuff. First of all, ad hominem is a fallacy, and does nothing to undermine my case. It does undermine the notion that you are being rational.

Secondly, if my person is that interesting, I'd say I resemble the mathematician C. S. Peirce more than Ramakrishna. It seems to me mathematics is not necessarily considered completely acceptable by the notion of rationality you are advocating, as pure mathematics is only concerned about rules regarding what you'd call "maps" but not rules regarding what you'd call "territory". That's a weird problem, though.

Comment author: 15 February 2013 07:16:46PM 1 point [-]

I didn't intend it as much of an ad hominem; after all, both groups in the comparison are so far quite unprepared for the undertaking they're attempting. I was just trying to find ways to describe the cultural mismatch that seems to be going on here.

I understand that math is starting to have some stuff dealing with how to make good maps from a territory. Only that's inside difficult and technical stuff like Jaynes' Probability Theory or Pearl's Causality, instead of somebody just making a nice new logical calculus with an operator for doing induction. There are already some philosophically interesting results, like an inductive learner needing innate biases to be able to learn anything.

Comment author: 22 February 2013 09:33:03AM *  0 points [-]

That's a good result. However, the necessity of innate biases undermines the notion of rationality, unless we have a system for differentiating the rational cognitive faculty from the innately biased cognitive faculty. I am proposing that this differentiation faculty be rational, hence "Metarationality".

In the Cartesian coordinate system I devised, object-level entities are projected as vectors. Vectors with a positive Y coordinate are rational. The only defined operation so far is addition: vectors can be added to each other. In this metasystem we are able to combine object-level entities (events, objects, "things") by adding them to each other as vectors. This system can be used to examine individual object-level entities within the context other entities create by virtue of their existence. Because the coordinate system assigns a moral value to each entity it can express, it can be used for decision making. Obviously, it values morally good decisions over morally bad ones.

Every entity in my system is an ordered pair of the form ${^x_y p}_a = ({^x_y \&p}_b, {^x_y *p}_c)$. Here x and y are propositional variables whose truth values can be -1 (false) or 1 (true). x denotes whether the entity is tangible and y whether it is placed within a rational epistemology. p is the entity. &p is the conceptual part of the entity (a philosopher would call that an "intension"). *p is the sensory part of the entity, i.e. what sensory input is considered to be the referent of the entity's conceptual part. A philosopher would call *p an extension. a, b and c are numerical values, which denote the value of the entity itself, of its intension, and of its extension, respectively.

The right side of the following formula (to the right of the equivalence operator) tells how b and c are used to calculate a. The left side tells how any entity is converted to the vector a. The vector conversion allows both innate cognitive bias and object-level rationality to influence decision making within the same metasystem.

$\forall m,n \in \mathbb{R} \left( \mathbf{a} = (xm, yn) \Leftrightarrow {^x_y p}_{\frac{\min(m,n)}{\max(m,n)}(m+n)} = ({^x_y \&p}_n, {^x_y *p}_m) \right)$

If someone says that it's just a hypothesis this model works, I agree! But I'm eager to test it. However, this would require some teamwork.
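For what it's worth, the pair-to-vector conversion above is simple enough to sketch in Python. This is only one reading of the formula: the function names (`entity_vector`, `entity_value`, `combine`) are invented for illustration, and it assumes m, n > 0 so that max(m, n) is nonzero.

```python
def entity_vector(x, y, m, n):
    """Project an entity onto the plane: a = (xm, yn).
    x, y in {-1, +1} flag tangibility and rational epistemology;
    m values the sensory part (*p), n the conceptual part (&p)."""
    assert x in (-1, 1) and y in (-1, 1)
    return (x * m, y * n)

def entity_value(m, n):
    """The subscript of the entity: a = min(m,n)/max(m,n) * (m+n),
    assuming m, n > 0."""
    return min(m, n) / max(m, n) * (m + n)

def is_rational(vector):
    """Per the comment, vectors with a positive Y coordinate are rational."""
    return vector[1] > 0

def combine(v, w):
    """The only defined operation so far: vector addition."""
    return (v[0] + w[0], v[1] + w[1])
```

For example, a tangible entity in a rational epistemology (x = y = 1) with m = 2 and n = 3 projects to (2, 3), gets the value (2/3)(2+3) = 10/3, and counts as rational; flipping y to -1 makes it irrational.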

Comment author: 10 October 2012 08:35:52PM 3 points [-]

If you compactify the plane correctly, the exterior of a circle is homeomorphic to a disk. This follows from the Jordan-Schoenflies theorem. Defining what something is is the same as defining what it is not.

Comment author: 10 October 2012 03:37:18AM 6 points [-]

How much interest is there for such a metatheory?

None, unless you have compelling credentials, formal theorems, or empirical results so discussion is not wasted space & breath. Philosophers have been doing 'meta-rationality' forever... anytime they discuss epistemology or other standard topics.

Comment author: 10 October 2012 02:43:52AM *  1 point [-]

Isn't this and its associated posts an account of meta-rationality?

Comment author: 10 October 2012 03:07:36AM *  0 points [-]

That post in particular is a vague overview of meta-rationality, not a systematic account of it. It doesn't describe meta-rationality as something that qualifies as a theory. It just says there is such a thing without telling exactly what it is.

Comment author: 10 October 2012 03:28:29AM 0 points [-]

Sorry, I meant that that series of posts addresses the justification issue, if somewhat informally.

Comment author: 10 October 2012 03:38:15AM *  -3 points [-]

Do you mean the sequence "Map and Territory"? I don't find it to include a comprehensive and well-defined taxonomy of ways of being rational and irrational. I was investigating whether I should present a certain theory here. Does this -4 mean you don't want it?

Insofar as LW is interested in irrationality, it seems interested in some kind of pseudo-irrationality: reasoning mistakes whose existence is affirmed by resorting to rational argumentation. I call that pseudo-irrationality, because its existence is affirmed rationally instead of irrationally.

I am talking about the kind of irrationality whose existence can be observed, but cannot be argued for, because it is obvious. Examples of such forms of irrationality include synchronicities. An example of a synchronicity would be you talking about a bee, and a bee appearing in the room. There is no rational reason (ostensibly) why these two events would happen simultaneously, and it could rightly be deemed a coincidence. But how does it exist as a coincidence? If we notice it, it exists as something we pay attention to, but is there any way we could be more specific about this?

If we could categorize such irrationally existing things comprehensively, we would have a clearer grasp on what is the rationality that we are advocating. We would know what that rationality is not.

Comment author: 10 October 2012 04:02:48AM 1 point [-]

This post is another one of the ones I was talking about. I wasn't really paying attention to where in the sequences anything was (it's been so long since I read them that they're all blurred together in my mind).

There are certainly strong arguments against the meaningfulness of coincidence (and I think the heuristics and biases program does address some of when and why people think coincidences are meaningful).

Comment author: 10 October 2012 01:49:43PM -1 points [-]

The page says:

But this doesn't answer the legitimate philosophical dilemma: If every belief must be justified, and those justifications in turn must be justified, then how is the infinite recursion terminated?

I do not assume that every belief must be justified, except possibly within rationality.

Do the arguments against the meaningfulness of coincidence state that coincidences do not exist?

Comment author: 10 October 2012 02:32:07AM 1 point [-]

...but I don't want to be rational for deep philosophical reasons. My justification is that (instrumental) rationality is useful. To demonstrate that, one would have to look at outcomes for those behaving rationally and those behaving irrationally -- not necessarily easy, but definitely a tractable problem.

Comment author: 10 October 2012 03:09:11AM *  0 points [-]

I am not talking about a prescriptive theory that tells whether one should be rational or not. I am talking about a rational theory that produces a taxonomy of different ways of being rational or irrational without taking a stance on which way should be chosen. Such a theory already implicitly advocates rationality, so it doesn't need to explicitly arrive at conclusions about whether one ought to be rational or not.

Comment author: 11 October 2012 08:17:07AM *  -2 points [-]

I can't reply to some of the comments, because they are below the threshold. Replies to downvoted comments are apparently "discouraged" but not banned, and I'm not on LW for any other reason than this, so let's give it a shot. I don't suppose I am simply required to not reply to a critical post about my own work.

First of all, thanks for the replies, and I no longer feel bad for the roughly -35 "karma" points I received. I could have tried to write some sort of a general introduction for you, but I've attempted to write one earlier, and I've found dialogue to be a better way. The book I wrote is a general introduction, but it's 140 pages long. Furthermore, my publisher wouldn't want me to give it away for free, and the style isn't a good fit for LessWrong. I'd perhaps have to write another book and publish it for free as a series of LessWrong articles.

Mitchell_Porter said:

Tuukka's system looks like a case study in how a handful of potentially valid insights can be buried under a structure made of wordplay (multiple uses of "irrational"); networks of concepts in which formal structures are artificially repeated but the actual relations between concepts are fatally vague (his big flowchart); and a severe misuse of mathematical objects and propositions in an attempt to be rigorous.

The contents of the normative and objective continua are relatively easily processed by an average LW user. The objective continuum consists of dialectic (classical quality) about sensory input. Sensory input is categorized as it is categorized in Maslow's hierarchy of needs. I know there is some criticism of Maslow's theory, but can we accept it as a starting point? "Lower needs" includes homeostasis, eating, sex, excretion and such. "Higher needs" includes reputation, respect, intimacy and such. "Deliberation" includes Maslow's "self-actualization", that is, problem solving, creativity, learning and such. Sense-data is not included in Maslow's theory, but it could be assumed that humans have a need to have sensory experiences, and that this need is so easy to satisfy that it did not occur to Maslow to include it as the lowest need of his hierarchy.

The normative continuum is similarly split into a dialectic portion and a "sensory" portion. That is to say, a central thesis of the work is that there are some kind of mathematical intuitions that are not language, but that are used to operate in the domain of pure math and logic. In order to demonstrate that "mathematical intuitions" really do exist, let us consider the case of a synesthetic savant, who is able to evaluate numbers according to how they "feel", and use this feeling to determine whether the number is a prime. The "feeling" is sense-data, but the correlation between the feeling and primality is some other kind of non-lingual intuition.

If synesthetic primality checks exist, it follows that mathematical ability is not entirely based on language. Synesthetic primality checks do exist for some people, and not for others. However, I believe we all experience mathematical intuitions - for most, the experiences are just not as clear as they are for synesthetic savants. If the existence of mathematical intuition is denied, synesthetic primality checks are claimed impossible due to mere metaphysical skepticism in spite of lots of evidence that they do exist and produce strikingly accurate results.

Does this make sense? If so, I can continue.

Mitchell_Porter also said:

Occasionally you get someone who constructs their system in the awareness that it's a product of their own mind and not just an objective depiction of the facts as they were found

I'm aware of that. Objectivity is just one continuum in the theory.

Having written his sequel to Pirsig he now needs to outgrow that act as soon as possible, and acquire some genuine expertise in an intersubjectively recognized domain, so that he has people to talk with and not just talk at.

I'm not exactly in trouble. I have a publisher and I have people to talk with. I can talk with a mathematician I know and on LilaSquad. But given that Pirsig's legacy appears to be continental philosophy, nobody on LilaSquad can help me improve the formal approach even though some are interested in it. I can talk about everything else with them. Likewise, the mathematician is only interested in the formal structure of the theory and perhaps slightly in the normative continuum, but not in anything else. I wouldn't say I have something to prove or that I need something in particular. I'm mostly just interested to find out how you will react to this.

What I was picking up on in Tuukka's statement was that the irrationals are uncountable whereas the rationals are countable. So the rationals have the cardinality of a set of discrete combinatorial structures, like possible sentences in a language, whereas the irrationals have the cardinality of a true continuum, like a set of possible experiences, if you imagined qualia to be genuinely real-valued properties and e.g. the visual field to be a manifold in the topological sense. It would be a way of saying "descriptions are countable in number, experiences are uncountable".

Something to that effect. This is another reason why I like talking with people: they express things I've thought about in different words. I could never make progress just stuck in my own head.
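The countability claim above can be made concrete: the Calkin-Wilf sequence enumerates every positive rational exactly once, whereas no program can enumerate the irrationals - which is the cardinality gap being pointed at. A short sketch (the function name is my own):

```python
from fractions import Fraction

def calkin_wilf(n: int) -> list:
    """Return the first n terms of the Calkin-Wilf sequence, which
    visits every positive rational exactly once - a concrete witness
    that the rationals are countable."""
    q = Fraction(1)
    out = []
    for _ in range(n):
        out.append(q)
        # Next term: 1 / (2*floor(q) + 1 - q)
        q = 1 / (2 * (q.numerator // q.denominator) + 1 - q)
    return out
```

No analogous enumeration exists for the irrationals: Cantor's diagonal argument shows any such listing must miss some real number.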

I'd say the irrational continua do not have fixed notions of truth and falsehood. If something is "true" now, there is no guarantee it will persist as a rule in the future. There are no proof methods or other methods of justification. In a sense, the notions of truth and falsehood are so distorted in the irrational continua that they hardly qualify as truth or falsehood at all - even when the Bible, operating in the subjective continuum, proclaims it "the truth" that Jesus is the Christ.

Incidentally, would I be correct in guessing that Robert Pirsig never replied to you?

As far as I know, the letter was never delivered to Pirsig. The insiders of MoQ-Discuss said their mailing list is strictly for discussing Pirsig's thoughts, not any derivative work. The only active member of Lila Squad who I presume to have Pirsig's e-mail address said Pirsig no longer understands the Metaphysics of Quality himself. It seemed pointless to press the issue of having the letter delivered to him. When the book is out, I can send it to him via his publisher and hope he'll receive it. The letter wasn't even very good - the book is better.

I thought Pirsig might want to help me with the development of the theory, but it turned out I didn't require his help. Now I only hope he'll enjoy reading the book.

Comment author: 10 October 2012 02:14:46PM *  -3 points [-]

The foundations of rationality, as LW knows it, are not defined with logical rigour. Are you adamant this is not a problem?

We are not here to argue the meaning of a word, not even if that word is "rationality". The point of attaching sequences of letters to particular concepts is to let two people communicate - to help transport thoughts from one mind to another. You cannot change reality, or prove the thought, by manipulating which meanings go with which words.

I don't think it's very helpful to oppose a logical definition - that is, a formally defined language that would let you do exactly this. As it stands, you have no logical definition. You have this:

Epistemic rationality: believing, and updating on evidence, so as to systematically improve the correspondence between your map and the territory. The art of obtaining beliefs that correspond to reality as closely as possible. This correspondence is commonly termed "truth" or "accuracy", and we're happy to call it that.

Instrumental rationality: achieving your values. Not necessarily "your values" in the sense of being selfish values or unshared values: "your values" means anything you care about. The art of choosing actions that steer the future toward outcomes ranked higher in your preferences. On LW we sometimes refer to this as "winning".

Those are not definitions in a language with a formalized type system. If you oppose a formalized type system, even one in service of your purely practical goals, why? Wikipedia says:

A type system associates a type with each computed value. By examining the flow of these values, a type system attempts to ensure or prove that no type errors can occur. The particular type system in question determines exactly what constitutes a type error, but in general the aim is to prevent operations expecting a certain kind of value from being used with values for which that operation does not make sense (logic errors); memory errors will also be prevented.

What in a type system is undesirable to you? The "snake oil that cures lung cancer" - I'm pretty sure you've heard of that one - is a value whose type is irrational. If you can use natural language to declare that value irrational, why do you oppose using a type system to do the same thing?
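To make this concrete, here is a minimal sketch of what such a type distinction might look like. All names here are hypothetical illustrations of my own, not part of any existing formalization; the point is only that an operation expecting a rational value can reject an irrational one, which is exactly what a type error is:

```python
from dataclasses import dataclass

@dataclass
class Claim:
    text: str

@dataclass
class RationalClaim(Claim):
    """A claim with a fixed truth condition and a method of justification."""

@dataclass
class IrrationalClaim(Claim):
    """A claim whose 'truth' is not stable under any proof method."""

def update_belief(claim: Claim) -> str:
    """Operation that only makes sense on rational claims; applying it
    to an IrrationalClaim is the analogue of a type error."""
    if not isinstance(claim, RationalClaim):
        raise TypeError("update_belief expects a RationalClaim")
    return f"updated on: {claim.text}"
```

Here "snake oil cures lung cancer" would be constructed as an `IrrationalClaim`, and the type system, rather than ad hoc natural language, is what blocks it from entering belief updates.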

Comment author: 30 January 2013 10:05:30AM -3 points [-]

Sorry for being cruel. It didn't occur to me that LessWrong is "an online community for people who want to apply the discovery of biases like the conjunction fallacy, the affect heuristic, and scope insensitivity in order to fix their own thinking." I thought this was a community for people who "apply the discovery of biases and, hence, their thinking is not broken".

I didn't notice that "Less Wrong users aim to develop accurate predictive models of the world, and change their mind when they find evidence disconfirming those models". I thought LessWrong users actually did that, instead of merely aiming to do it.

I didn't understand this is a low self-esteem support group for people who want to live up to preconceived notions of morality. I probably don't have anything to do here. Goodbye.