by [anonymous]

I've seen there's discussion on LW about rationality - namely, about what it means. I don't think a satisfactory answer can be found without defining what rationality is not. And this seems to be a problem. As far as I know, rationality on LW does not include systematic methods for categorizing and analyzing irrational things. Instead, the discussion seems to draw a circle around rationality. Everyone on LW is expected to be inside this circle - think of it as a set in a Venn diagram. On the border of the circle there is a sign saying: "Here be dragons". And beyond the circle there is irrationality.

How can we differentiate the irrational from the rational, if we do not know what the irrational is?

But how can we approach the irrational, if we want to be rational?

It seems to me there is no way to give a satisfactory account of rationality from within rationality itself. If we presuppose rationality is the only way to attain justification, and then try to find justification for rationalism (the doctrine according to which we should strive for rationality), we are simply making a circular argument. We already presupposed rationalism before trying to find justification for doing so.

Therefore it seems to me we ought to make a metatheory of rationality in order to find out what is rational and what is irrational. The metatheory itself has to be as rational as possible. That would include having an analytically defined structure, which permits us to at least examine whether the metatheory is logically consistent or inconsistent. It would also allow us to examine whether the metatheory is mathematically elegant, or whether the same thing could be expressed in a simpler form. The metatheory should also correspond with our actual observations, so that we could figure out whether it contradicts empirical findings.

How much interest is there for such a metatheory?

gwern

How much interest is there for such a metatheory?

None, unless you have compelling credentials, formal theorems, or empirical results so discussion is not wasted space & breath. Philosophers have been doing 'meta-rationality' forever... anytime they discuss epistemology or other standard topics.

If you respond to that letter, I will not engage in conversation, because the letter is a badly written, outdated progress report of my work. The work is now done; it will be published as a book, and I already have a publisher. If you want to know when the book comes out, you might want to join this Facebook community.

As luck would have it, I always land on the following page when I start typing "less..." in my browser. http://lesswrong.com/lw/31/what_do_we_mean_by_rationality/

I find it useful to consider epistemic rationality a subtype of instrumental rationality, and to identify other types of instrumental rationality, such as social rationality.

EDIT: I went on about this recently in: http://lesswrong.com/lw/eqn/the_useful_idea_of_truth/7jyn

A bit late to this, but I think I figured out what the basic problem here is: Robert Pirsig is an archer, while LW (and folk like Judea Pearl, Gary Drescher and Marcus Hutter) are building hot-air balloons. And we're talking about doing a Moon shot, building an artificial general intelligence, here.

Archers think that if they get their bowyery really good and train to shoot really, really well, they might eventually land an arrow on the Moon. Maybe they'll need to build some kind of ballista type thing that needs five people to draw, but archery is awesome at skewering all sorts of things, so it should definitely be the way to go.

Hot-air balloonists on the other hand are pretty sure bows and arrows aren't the way to go, despite balloons being a pretty recent invention while archery has been practiced for millennia and has a very distinguished pedigree of masters. Balloons seem to get you higher up than you can get things to go with any sort of throwing device, even one of those fancy newfangled trebuchet things. Sure, nobody has managed to land a balloon on the Moon either, despite decades of trying, so obviously we're still missing something important that nobody really has a good idea about.

But it does look like figuring out how stuff like balloons works, and trying to think of something new along similar lines instead of developing a really good archery style, is the way to go if you want to actually land something on the Moon at some point.

Would you say a space rocket resembles either a balloon or an arrow, but not both?

I didn't imply something Pirsig wrote would, in and of itself, have much to do with artificial intelligence.

LessWrong is like a sieve that collects only things which look like what I need but, on closer inspection, are not. You won't come until the table is already set. Fine.

Would you say a space rocket resembles either a balloon or an arrow, but not both?

The point is that the people who build it will resemble balloon-builders, not archers. People who are obsessed with getting machines to do things, not people who are obsessed with human performance.

My work is a type theory for an AI to conceptualize the input it receives via its artificial senses. If it weren't, I would never have come here.

The conceptualization faculty is accompanied by a formula for making moral evaluations, which is the basis of advanced decision making. Whatever the AI can conceptualize, it can also project as a vector on a Cartesian plane. The direction and magnitude of that vector are the data used in this decision making.

The actual decision-making algorithm may begin by making random decisions and filtering good decisions from bad with the mathematical model I developed. Based on this filtering, the AI would begin to develop a self-modifying heuristic algorithm for making good decisions and, in general, for behaving in a good manner. What the AI would perceive as good behavior would of course, to some extent, depend on the environment in which the AI is placed.
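As a toy sketch of the loop I have in mind (in Python; the evaluate() function here is a made-up placeholder, not my actual model):

```python
import random

# Toy loop: propose random actions, project each onto a Cartesian plane,
# and keep the ones whose evaluation vector points "up" (positive Y).
# evaluate() is a placeholder standing in for the real mathematical model.

def evaluate(action):
    """Project an action as an (x, y) vector; its direction and magnitude
    are the data used for the moral evaluation."""
    return (random.uniform(-1.0, 1.0), random.uniform(-1.0, 1.0))

def is_good(vector):
    # A positive Y coordinate stands in for "morally good" here.
    return vector[1] > 0

actions = ["action-%d" % i for i in range(10)]
good = [a for a in actions if is_good(evaluate(a))]
print(good)  # the surviving decisions a heuristic learner would imitate
```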

If you had an AI making random actions and changing its behavior according to heuristic rules, it could learn things in a way similar to how a baby learns. If you're not interested in that, I don't know what you're interested in.

I didn't come here to talk about some philosophy. I know you're not interested in that. I've done the math, but not the algorithm, because I'm not much of a coder. If you don't want to code a program that implements my mathematical model, that's no reason to give me -54 karma.

In any case, this "hot-air balloonist vs. archer" (POP!) comparison seems like some sort of ad hominem-type fallacy, and that's why I reacted with an ad hominem attack about Legos and stuff. First of all, ad hominem is a fallacy, and does nothing to undermine my case. It does, however, undermine the notion that you are being rational.

Secondly, if my person is that interesting, I'd say I resemble the mathematician C. S. Peirce more than Ramakrishna. It seems to me mathematics is not necessarily considered completely acceptable by the notion of rationality you are advocating, as pure mathematics is concerned only with rules regarding what you'd call "maps", not rules regarding what you'd call "territory". That's a weird problem, though.

I didn't intend it as much of an ad hominem; after all, both groups in the comparison are so far quite unprepared for the undertaking they're attempting. I was just trying to find ways to describe the cultural mismatch that seems to be going on here.

I understand that math is starting to have some actual machinery for how to make good maps from a territory. Only it's inside difficult, technical works like Jaynes' Probability Theory or Pearl's Causality, instead of somebody just making a nice new logical calculus with an operator for doing induction. There are already some philosophically interesting results, like the fact that an inductive learner needs innate biases to be able to learn anything.
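A minimal toy illustration of that result (my own made-up example, not from either book): a learner with no innate bias - one that merely keeps every boolean function consistent with its data - can never commit to a prediction on an unseen input.

```python
from itertools import product

# A bias-free learner over boolean functions of two inputs: its hypothesis
# space is all 16 truth tables, and it only ever rules out hypotheses that
# contradict the observed data.
inputs = list(product([0, 1], repeat=2))          # (0,0), (0,1), (1,0), (1,1)
hypotheses = list(product([0, 1], repeat=4))      # every possible truth table

observations = {(0, 0): 0, (0, 1): 1, (1, 0): 1}  # three of the four points
consistent = [h for h in hypotheses
              if all(h[inputs.index(x)] == y for x, y in observations.items())]

# Both answers for the unseen point (1, 1) survive: without an innate bias,
# the data alone never determine the next prediction.
print(len(consistent), {h[inputs.index((1, 1))] for h in consistent})  # 2 {0, 1}
```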

That's a good result. However, the necessity of innate biases undermines the notion of rationality, unless we have a system for differentiating the rational cognitive faculty from the innately biased cognitive faculty. I am proposing that this differentiation faculty be rational, hence "Metarationality".

In the Cartesian coordinate system I devised, object-level entities are projected as vectors. Vectors with a positive Y coordinate are rational. The only operation defined so far is addition: vectors can be added to each other. In this metasystem we are able to combine object-level entities (events, objects, "things") by adding them to each other as vectors. The system can thus be used to examine an individual object-level entity within the context other entities create by virtue of their existence. Because the coordinate system assigns a moral value to each entity it can express, it can be used for decision making. Obviously, it values morally good decisions over morally bad ones.
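As a rough sketch of just this vector layer (the particular numbers are invented for illustration, not taken from the model):

```python
# Entities as 2-D vectors; addition combines an entity with the context
# other entities create. A positive Y coordinate counts as rational.

def add(u, v):
    return (u[0] + v[0], u[1] + v[1])

def rational(v):
    return v[1] > 0

event = (2.0, -1.0)    # hypothetical entity: not rational on its own
context = (0.5, 3.0)   # hypothetical contextual entity
print(rational(event), rational(add(event, context)))  # False True
```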

Every entity in my system is an ordered pair of the form ${}^{x}_{y}p_{a} = ({}^{x}_{y}\&p_{b},\ {}^{x}_{y}{*}p_{c})$. Here x and y are propositional variables whose truth values can be -1 (false) or 1 (true): x denotes whether the entity is tangible and y whether it is placed within a rational epistemology. p is the entity. &p is the conceptual part of the entity (a philosopher would call that an "intension"). *p is the sensory part of the entity, i.e. what sensory input is considered to be the referent of the entity's conceptual part; a philosopher would call *p an extension. a, b and c are numerical values, which denote the value of the entity itself, of its intension, and of its extension, respectively.

The right side of the following formula (right of the equivalence operator) tells how b and c - written n and m below - are used to calculate a. The left side tells how the entity is converted to the vector a. The vector conversion allows both innate cognitive bias and object-level rationality to influence decision making within the same metasystem.

$$\vec{a} \;\Leftrightarrow\; {}^{x}_{y}p_{\frac{\min(m,n)}{\max(m,n)}(m+n)} = \left({}^{x}_{y}\&p_{n},\ {}^{x}_{y}{*}p_{m}\right)$$
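Spelled out numerically (my own reading of the formula above, assuming positive values for n and m):

```python
# The value a of an entity from the value n of its intension and the
# value m of its extension: a = min(m, n) / max(m, n) * (m + n).
# Assumes n and m are positive; mismatched values discount the sum.

def entity_value(n, m):
    return min(m, n) / max(m, n) * (m + n)

print(entity_value(2, 2))  # 4.0  - intension and extension agree in value
print(entity_value(1, 4))  # 1.25 - a large mismatch is heavily discounted
```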

If someone says it is just a hypothesis that this model works, I agree! But I'm eager to test it. However, that would require some teamwork.

If you compactify the plane correctly, the exterior of a circle is homeomorphic to a disk. This follows from the Jordan-Schoenflies theorem. Defining what something is is the same as defining what it is not.
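Concretely (a standard sketch, nothing specific to this thread): identify the plane with $\mathbb{C}$ and add a point at infinity, so $S^2 = \mathbb{C} \cup \{\infty\}$. Inversion then swaps the two sides of the unit circle:

$$f(z) = \frac{1}{z}, \qquad f(0) = \infty, \quad f(\infty) = 0, \qquad f\big(\{\,|z| > 1\,\} \cup \{\infty\}\big) = \{\,|z| < 1\,\},$$

a homeomorphism from the compactified exterior onto the open unit disk; Jordan-Schoenflies extends this from the round circle to an arbitrary Jordan curve.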

This whole conversation seems a little awkward now.

I apologize. I no longer feel a need to behave in the way I did.

Isn't this and its associated posts an account of meta-rationality?

That post in particular is a vague overview of meta-rationality, not a systematic account of it. It doesn't describe meta-rationality as something that qualifies as a theory; it just says there is such a thing without saying exactly what it is.

Sorry, I meant that that series of posts addresses the justification issue, if somewhat informally.

...but I don't want to be rational for deep philosophical reasons. My justification is that (instrumental) rationality is useful. To demonstrate that, one would have to look at outcomes for those behaving rationally and those behaving irrationally -- not necessarily easy, but definitely a tractable problem.

I am not talking about a prescriptive theory that tells whether one should be rational or not. I am talking about a rational theory that produces a taxonomy of different ways of being rational or irrational without taking a stance on which way should be chosen. Such a theory already implicitly advocates rationality, so it doesn't need to explicitly arrive at conclusions about whether one ought to be rational or not.

[anonymous]

buybuydandavis said:

Irrationality is just less instrumentally rational - less likely to win. You seem to have split rational and irrational into two categories, and I think this is just a methodological mistake. To understand and compare the two, you need to put both on the same scale, and then show how they have different measures on that scale.

Do you mean by "irrationality" something like a biased way of thinking whose existence can be objectively determined? I don't mean that by irrationality. I mean things whose existence has no rational justification, such as the stream of consciousness. Things like dreams. If, in a dream, you open your (working) wristwatch and find it contains coins instead of clockwork, and you behave as if that were normal, there is no rational justification for your doing so - at least none that you know of while dreaming.

Also, now that I look at more of your responses, it seems that you have your own highly developed theory, with your own highly developed language, and you're speaking that language to us. We don't speak your language. If you're going to try to talk to people in a new language, you need to start simple, like "this is a ball", so that we have some meaningful context from which to understand "I hit the ball."

You're perfectly right. I'd like to go for the dialogue option, but obviously, if it's too exhausting for you because my point of view is too remote, nobody will participate. That's all I'm offering right now, though - dialogue. Maybe something else later, maybe not. I've had some fun already despite losing a lot of "karma".

The problem with simple examples is that, for example, I'd have to start a discussion on what is "useful". It seems to me the question is almost the same as "What is Quality?" The Metaphysics of Quality insists that Quality is undefinable. Although I've noticed some on LW have liked Pirsig's book Zen and the Art of Motorcycle Maintenance, it seems this would already cause a debate in its own right. I'd prefer not to get stuck on that debate and risk missing the chance of saying what I actually wanted to say.

If that discussion, however, is necessary, then I'd like to point out that irrational behavior - that is, a somewhat uncritical habit of doing the first thing that pops into my mind - has been very useful for me. It has improved my efficiency at doing things I could rationally justify, despite my performing the justification only rarely. When I behave that way - without keeping any justifications in mind - I would say I am operating in the subjective or mystical continuum. When I do produce the justification, I do it in the objective or normative continuum, by having one of those emerge from the earlier subjective or mystical continuum via strong emergence. But I am not being rational before I have done this, in spite of ending up with results that later appear rationally good.

[This comment is no longer endorsed by its author]