Meta-rationality
I've seen there's discussion on LW about rationality - namely, about what it means. I don't think a satisfactory answer can be found without defining what rationality is not, and this seems to be a problem. As far as I know, rationality on LW does not include systematic methods for categorizing and analyzing irrational things. Instead, the discussion seems to draw a circle around rationality. Everyone on LW is expected to be inside this circle - think of it as a set in a Venn diagram. On the border of the circle there is a sign saying: "Here be dragons". And beyond the circle lies irrationality.
How can we differentiate the irrational from the rational, if we do not know what the irrational is?
But how can we approach the irrational, if we want to be rational?
It seems to me there is no way to give a satisfactory account of rationality from within rationality itself. If we presuppose rationality is the only way to attain justification, and then try to find justification for rationalism (the doctrine according to which we should strive for rationality), we are simply making a circular argument. We already presupposed rationalism before trying to find justification for doing so.
Therefore it seems to me we ought to build a metatheory of rationality in order to find out what is rational and what is irrational. The metatheory itself has to be as rational as possible. That would include having an analytically defined structure, which permits us at least to examine whether the metatheory is logically consistent or inconsistent. This would also allow us to examine whether the metatheory is mathematically elegant, or whether the same thing could be expressed in a simpler form. The metatheory should also correspond with our actual observations, so that we could figure out whether it contradicts empirical findings.
How much interest is there for such a metatheory?
Comments (40)
As luck would have it, I always land on the following page when I start typing "less..." in my browser. http://lesswrong.com/lw/31/what_do_we_mean_by_rationality/
I find it useful to consider epistemic rationality a subtype of instrumental rationality, and to identify other types of instrumental rationality, such as social rationality.
EDIT: I went on about this recently in: http://lesswrong.com/lw/eqn/the_useful_idea_of_truth/7jyn
A bit late to this, but I think I figured out what the basic problem here is: Robert Pirsig is an archer, while LW (and folk like Judea Pearl, Gary Drescher and Marcus Hutter) are building hot-air balloons. And we're talking about doing a Moon shot, building an artificial general intelligence, here.
Archers think that if they get their bowyery really good and train to shoot really, really well, they might eventually land an arrow on the Moon. Maybe they'll need to build some kind of ballista type thing that needs five people to draw, but archery is awesome at skewering all sorts of things, so it should definitely be the way to go.
Hot-air balloonists on the other hand are pretty sure bows and arrows aren't the way to go, despite balloons being a pretty recent invention while archery has been practiced for millennia and has a very distinguished pedigree of masters. Balloons seem to get you higher up than you can get things to go with any sort of throwing device, even one of those fancy newfangled trebuchet things. Sure, nobody has managed to land a balloon on the Moon either, despite decades of trying, so obviously we're still missing something important that nobody really has a good idea about.
But it does look like figuring out how stuff like balloons work and trying to think of something new along similar lines, instead of developing a really good archery style is the way to go if you want to actually land something on the Moon at some point.
Would you say a space rocket resembles either a balloon or an arrow, but not both?
I didn't imply something Pirsig wrote would, in and of itself, have much to do with artificial intelligence.
LessWrong is like a sieve that only collects stuff that looks like I need it, but on closer inspection I don't. You won't come until the table is already set. Fine.
The point is that the people who build it will resemble balloon-builders, not archers. People who are obsessed with getting machines to do things, not people who are obsessed with human performance.
My work is a type theory for AI for conceptualizing the input it receives via its artificial senses. If it weren't, I would have never come here.
The conceptualization faculty is accompanied by a formula for making moral evaluations, which is the basis of advanced decision making. Whatever the AI can conceptualize, it can also project as a vector on a Cartesian plane. The direction and magnitude of that vector are the data used in this decision making.
The actual decision making algorithm may begin by making random decisions and filtering good decisions from bad with the mathematical model I developed. Based on this filtering, the AI would begin to develop a self-modifying heuristic algorithm for making good decisions and, in general, for behaving in a good manner. What the AI would perceive as good behavior would of course, to some extent, depend on the environment in which the AI is placed.
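As a rough sketch of the filter-then-learn loop described above (the scoring function, the action set, and the preference table are my illustrative assumptions, not the author's actual model):

```python
import random

# Hypothetical stand-in for the author's mathematical model: score an
# action's projected vector on a Cartesian plane.
def moral_value(vector):
    x, y = vector
    return y  # assumption: "goodness" is read off one axis

def learn_by_filtering(actions, trials=1000, seed=0):
    """Make random decisions, score each one, and build a simple
    preference table - a toy heuristic, not the real algorithm."""
    rng = random.Random(seed)
    preferences = {}
    for _ in range(trials):
        action, vector = rng.choice(actions)
        score = moral_value(vector)
        # keep a running average of observed scores per action
        n, avg = preferences.get(action, (0, 0.0))
        preferences[action] = (n + 1, avg + (score - avg) / (n + 1))
    # filtering step: keep actions whose average score is positive
    return [a for a, (_, avg) in preferences.items() if avg > 0]

# hypothetical actions, each paired with its projected vector
actions = [("help", (1, 2)), ("harm", (1, -2)), ("wait", (0, 0.5))]
good_actions = learn_by_filtering(actions)
```

The random-exploration phase corresponds to the "baby learning" analogy below: behavior is initially unguided, and the preference table is the seed of a self-modifying heuristic.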
If you had an AI making random actions and changing its behavior according to heuristic rules, it could learn things in a similar way to how a baby learns. If you're not interested in that, I don't know what you're interested in.
I didn't come here to talk about some philosophy. I know you're not interested in that. I've done the math, but not the algorithm, because I'm not much of a coder. If you don't want to code a program that implements my mathematical model, that's no reason to give me -54 karma.
In any case, this "hot-air balloonist vs. archer" (POP!) comparison seems like some sort of ad hominem-type fallacy, and that's why I reacted with an ad hominem attack about legos and stuff. First of all, ad hominem is a fallacy, and does nothing to undermine my case. It does undermine the notion that you are being rational.
Secondly, if my person is that interesting, I'd say I resemble the mathematician C. S. Peirce more than Ramakrishna. It seems to me mathematics is not necessarily considered completely acceptable by the notion of rationality you are advocating, as pure mathematics is only concerned with rules regarding what you'd call "maps", not rules regarding what you'd call "territory". That's a weird problem, though.
I didn't intend it as much of an ad hominem, after all both groups in the comparison are so far quite unprepared for the undertaking they're trying to do. Just trying to find ways to try to describe the cultural mismatch that seems to be going on here.
I understand that math is starting to have some stuff dealing with how to make good maps from a territory. Only that's inside difficult and technical stuff like Jaynes' Probability Theory or Pearl's Causality, instead of somebody just making a nice new logical calculus with an operator for doing induction. There are already some actual philosophically interesting results, like an inductive learner needing to have innate biases in order to be able to learn anything.
That's a good result. However, the necessity of innate biases undermines the notion of rationality, unless we have a system for differentiating the rational cognitive faculty from the innately biased cognitive faculty. I am proposing that this differentiation faculty be rational, hence "Metarationality".
In the Cartesian coordinate system I devised object-level entities are projected as vectors. Vectors with a positive Y coordinate are rational. The only defined operation so far is addition: vectors can be added to each other. In this metasystem we are able to combine object-level entities (events, objects, "things") by adding them to each other as vectors. This system can be used to examine individual object-level entities within the context other entities create by virtue of their existence. Because the coordinate system assigns a moral value to each entity it can express, it can be used for decision making. Obviously, it values morally good decisions over morally bad ones.
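A minimal sketch of the entities-as-vectors idea just described - vector addition as the only operation, positive Y coordinate as the mark of rationality. The specific moral-value rule here is my illustrative assumption, not the author's formula:

```python
from typing import NamedTuple

class Entity(NamedTuple):
    """An object-level entity projected as a vector on a Cartesian plane."""
    x: float
    y: float

    def __add__(self, other):
        # the only defined operation: combining entities by vector addition
        return Entity(self.x + other.x, self.y + other.y)

    def is_rational(self):
        # vectors with a positive Y coordinate are rational
        return self.y > 0

    def moral_value(self):
        # assumption: magnitude, signed by rationality, stands in for
        # the moral value the coordinate system assigns to each entity
        magnitude = (self.x ** 2 + self.y ** 2) ** 0.5
        return magnitude if self.is_rational() else -magnitude

a = Entity(3, 4)    # a rational entity
b = Entity(1, -6)   # an irrational entity
combined = a + b    # examining a in the context b creates: Entity(4, -2)
```

Note how the combined entity lands below the X axis: adding an entity to another changes the context in which it is evaluated, which is the point of examining object-level entities together.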
Every entity in my system is an ordered pair. Here x and y are propositional variables whose truth values can be -1 (false) or 1 (true): x denotes whether the entity is tangible, and y whether it is placed within a rational epistemology. p is the entity. &p is the conceptual part of the entity (a philosopher would call that an "intension"). *p is the sensory part of the entity, i.e., what sensory input is considered to be the referent of the entity's conceptual part. A philosopher would call *p an extension. a, b and c are numerical values, which denote the value of the entity itself, of its intension, and of its extension, respectively.
The right side of the conversion formula (to the right of the equivalence operator) tells how b and c are used to calculate a. The left side of the formula tells how any entity is converted to the vector a. The vector conversion allows both innate cognitive bias and object-level rationality to influence decision making within the same metasystem.
If someone says it's just a hypothesis that this model works, I agree! But I'm eager to test it. However, this would require some teamwork.
If you compactify the plane correctly, the exterior of a circle is homeomorphic to a disk. This follows from the Jordan-Schoenflies theorem. Defining what something is is the same as defining what it is not.
None, unless you have compelling credentials, formal theorems, or empirical results so discussion is not wasted space & breath. Philosophers have been doing 'meta-rationality' forever... anytime they discuss epistemology or other standard topics.
Must ... not ... respond ...
If you respond to that letter, I will not engage in conversation, because the letter is a badly written outdated progress report of my work. The work is now done, it will be published as a book, and I already have a publisher. If you want to know when the book comes out, you might want to join this Facebook community.
This whole conversation seems a little awkward now.
I apologize. I no longer feel a need to behave in the way I did.
Isn't this and its associated posts an account of meta-rationality?
That post in particular is a vague overview of meta-rationality, not a systematic account of it. It doesn't describe meta-rationality as something that qualifies as a theory. It just says there is such a thing without telling exactly what it is.
Sorry, I meant that that series of posts addresses the justification issue, if somewhat informally.
...but I don't want to be rational for deep philosophical reasons. My justification is that (instrumental) rationality is useful. To demonstrate that, one would have to look at outcomes for those behaving rationally and those behaving irrationally -- not necessarily easy, but definitely a tractable problem.
I am not talking about a prescriptive theory that tells whether one should be rational or not. I am talking about a rational theory that produces a taxonomy of different ways of being rational or irrational, without taking a stance on which way should be chosen. Such a theory already implicitly advocates rationality, so it doesn't need to explicitly arrive at conclusions about whether one ought to be rational or not.
I can't reply to some of the comments, because they are below the threshold. Replies to downvoted comments are apparently "discouraged" but not banned, and I'm not on LW for any other reason than this, so let's give it a shot. I don't suppose I am simply required not to reply to a critical post about my own work.
First of all, thanks for the replies, and I no longer feel bad about the roughly -35 "karma" points I received. I could have tried to write some sort of a general introduction for you, but I've attempted to write such introductions earlier, and I've found dialogue to be a better way. The book I wrote is a general introduction, but it's 140 pages long. Furthermore, my publisher wouldn't want me to give it away for free, and the style isn't a good fit for LessWrong. I'd perhaps have to write another book and publish it for free as a series of LessWrong articles.
Mitchell_Porter said:
The contents of the normative and objective continua are relatively easily processed by an average LW user. The objective continuum consists of dialectic (classical quality) about sensory input. Sensory input is categorized as it is in Maslow's hierarchy of needs. I know there is some criticism of Maslow's theory, but can we accept it as a starting point? "Lower needs" include homeostasis, eating, sex, excretion and such. "Higher needs" include reputation, respect, intimacy and such. "Deliberation" includes Maslow's "self-actualization", that is, problem solving, creativity, learning and such. Sense-data is not included in Maslow's theory, but it could be assumed that humans have a need to have sensory experiences, and that this need is so easy to satisfy that it did not occur to Maslow to include it as the lowest need of his hierarchy.
The normative continuum is similarly split into a dialectic portion and a "sensory" portion. That is to say, a central thesis of the work is that there are some kinds of mathematical intuitions that are not language, but that are used to operate in the domain of pure math and logic. To demonstrate that "mathematical intuitions" really do exist, consider the case of a synesthetic savant who is able to evaluate numbers according to how they "feel", and use this feeling to determine whether a number is prime. The "feeling" is sense-data, but the correlation between the feeling and primality is some other kind of non-lingual intuition.
If synesthetic primality checks exist, it follows that mathematical ability is not entirely based on language. Synesthetic primality checks do exist for some people, and not for others. However, I believe we all experience mathematical intuitions - for most of us, the experiences are just not as clear as they are for synesthetic savants. If the existence of mathematical intuition is denied, synesthetic primality checks are dismissed as impossible out of mere metaphysical skepticism, in spite of plenty of evidence that they do exist and produce strikingly accurate results.
Does this make sense? If so, I can continue.
Mitchell_Porter also said:
I'm aware of that. Objectivity is just one continuum in the theory.
I'm not exactly in trouble. I have a publisher and I have people to talk with. I can talk with a mathematician I know and on LilaSquad. But given that Pirsig's legacy appears to be continental philosophy, nobody on LilaSquad can help me improve the formal approach, even though some are interested in it. I can talk about everything else with them. Likewise, the mathematician is only interested in the formal structure of the theory, and perhaps slightly in the normative continuum, but not in anything else. I wouldn't say I have something to prove or that I need something in particular. I'm mostly just interested to find out how you will react to this.
Something to that effect. This is another reason why I like talking with people. They express things I've thought about with a different wording. I could never make progress just stuck in my head.
I'd say the irrational continua do not have fixed notions of truth and falsehood. If something is "true" now, there is no guarantee it will persist as a rule in the future. There are no proof methods or methods of justification. In a sense, the notions of truth and falsehood are so distorted in the irrational continua that they hardly qualify as truth or falsehood - even if the Bible, operating in the subjective continuum, proclaims that it is "the truth" that Jesus is the Christ.
Mitchell asked:
As far as I know, the letter was never delivered to Pirsig. The insiders of MoQ-Discuss said their mailing list is strictly for discussing Pirsig's thoughts, not any derivative work. The only active member of Lila Squad who I presume to have Pirsig's e-mail address said Pirsig doesn't understand the Metaphysics of Quality himself anymore. It seemed pointless to press the issue of having the letter delivered to him. When the book is out, I can send it to him via his publisher and hope he'll receive it. The letter wasn't even very good - the book is better.
I thought Pirsig might want to help me with the development of the theory, but it turned out I didn't require his help. Now I only hope he'll enjoy reading the book.