I've seen there's discussion on LW about rationality, namely about what it means. I don't think a satisfactory answer can be found without defining what rationality is not, and this seems to be a problem. As far as I know, rationality on LW does not include systematic methods for categorizing and analyzing irrational things. Instead, the discussion seems to draw a circle around rationality. Everyone on LW is expected to be inside this circle; think of it as a set in a Venn diagram. On the border of the circle there is a sign saying "Here be dragons", and beyond the circle there is irrationality.
How can we differentiate the irrational from the rational if we do not know what the irrational is?
But how can we approach the irrational if we want to remain rational?
It seems to me there is no way to give a satisfactory account of rationality from within rationality itself. If we presuppose that rationality is the only way to attain justification, and then try to find a justification for rationalism (the doctrine that we should strive for rationality), we are simply arguing in a circle: we presupposed rationalism before trying to justify it.
Therefore it seems to me we ought to construct a metatheory of rationality in order to find out what is rational and what is irrational. The metatheory itself should be as rational as possible. That means it should have an analytically defined structure, which permits us to at least examine whether the metatheory is logically consistent. It would also allow us to examine whether the metatheory is mathematically elegant, or whether the same thing could be expressed in a simpler form. The metatheory should also correspond with our actual observations, so that we can check whether it contradicts empirical findings.
How much interest is there for such a metatheory?
My work is a type theory for an AI to conceptualize the input it receives through its artificial senses. If it weren't, I would never have come here.
The conceptualization faculty is accompanied by a formula for making moral evaluations, which is the basis of advanced decision making. Whatever the AI can conceptualize, it can also project as a vector on a Cartesian plane; the direction and magnitude of that vector are the data used in this decision making.
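To give a rough idea of the kind of structure I mean, here is a sketch in Python. The class name, the choice of axes, and the scoring rule are placeholders I made up for illustration; my actual formula is not reproduced here.

```python
import math
from dataclasses import dataclass

# Placeholder sketch: every concept the AI forms is mapped to a 2-D
# vector, and the vector's direction and magnitude feed the moral
# evaluation. The names and the scoring rule are hypothetical.

@dataclass
class Concept:
    x: float  # first evaluative axis (placeholder)
    y: float  # second evaluative axis (placeholder)

    @property
    def magnitude(self) -> float:
        # How strongly the concept weighs in the evaluation.
        return math.hypot(self.x, self.y)

    @property
    def direction(self) -> float:
        # Angle of the vector on the Cartesian plane, in radians.
        return math.atan2(self.y, self.x)

def moral_value(c: Concept) -> float:
    # One possible scoring rule: project the vector onto a designated
    # "good" direction, so both alignment and strength count.
    good_direction = 0.0  # assume "good" points along the positive x-axis
    return c.magnitude * math.cos(c.direction - good_direction)
```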
The actual decision-making algorithm might begin by making random decisions and filtering the good ones from the bad with the mathematical model I developed. Based on this filtering, the AI would begin to develop a self-modifying heuristic algorithm for making good decisions and, in general, for behaving in a good manner. What the AI perceives as good behavior would of course depend, to some extent, on the environment in which it is placed.
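Here is a sketch of such a loop, building on the Concept and moral_value placeholders above. The toy environment, the threshold, and the exact-match recall are stand-ins for illustration; the real filtering model is the mathematical part I mentioned.

```python
import random

class ToyEnvironment:
    """Two situations; in each, one of two actions yields a 'good' vector."""
    def __init__(self):
        self.situation = 0

    def observe(self) -> int:
        self.situation = random.randint(0, 1)
        return self.situation

    def actions(self):
        return [0, 1]

    def step(self, action: int) -> Concept:
        # The "right" action for a situation yields a vector pointing
        # along +x (good); the wrong one points along -x (bad).
        x = 1.0 if action == self.situation else -1.0
        return Concept(x=x, y=random.uniform(-0.2, 0.2))

def recall(heuristics, situation):
    # Placeholder lookup: reuse an action whose recorded situation
    # matches the current one exactly. A real system would generalize.
    for past_situation, action in heuristics:
        if past_situation == situation:
            return action
    return None

def learn(env, steps: int = 1000, threshold: float = 0.0):
    heuristics = []  # (situation, action) pairs the filter judged good
    for _ in range(steps):
        situation = env.observe()
        # Prefer a remembered good action; otherwise act at random,
        # the way a baby might.
        action = recall(heuristics, situation)
        if action is None:
            action = random.choice(env.actions())
        concept = env.step(action)  # the AI conceptualizes the outcome
        # Filter: keep only decisions the evaluation formula scores as good.
        if moral_value(concept) > threshold and (situation, action) not in heuristics:
            heuristics.append((situation, action))
    return heuristics
```

In this toy setup, learn(ToyEnvironment()) ends up holding exactly the situation-action pairs the filter scores as good, which is the filtering step in miniature.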
If you had an AI making random actions and changing its behavior according to heuristic rules, it could learn things in a way similar to how a baby learns. If you're not interested in that, I don't know what you're interested in.
I didn't come here to talk about some philosophy. I know you're not interested in that. I've done the math, but not the algorithm, because I'm not much of a coder. If you don't want to code a program that implements my mathematical model, that's no reason to give me -54 karma.