I've seen there's discussion on LW about rationality, namely about what it means. I don't think a satisfactory answer can be found without also defining what rationality is not. And this seems to be a problem. As far as I know, rationality on LW does not include systematic methods for categorizing and analyzing irrational things. Instead, the discussion seems to draw a circle around rationality. Everyone on LW is expected to be inside this circle - think of it as a set in a Venn diagram. On the border of the circle there is a sign saying: "Here be dragons". And beyond the circle there is irrationality.
How can we differentiate the irrational from the rational, if we do not know what the irrational is?
But how can we approach the irrational, if we want to be rational?
It seems to me there is no way to give a satisfactory account of rationality from within rationality itself. If we presuppose that rationality is the only way to attain justification, and then try to find a justification for rationalism (the doctrine that we should strive for rationality), we are simply arguing in a circle: we already presupposed rationalism before trying to justify it.
Therefore it seems to me we ought to build a metatheory of rationality in order to find out what is rational and what is irrational. The metatheory itself has to be as rational as possible. That would mean giving it an analytically defined structure, which permits us at least to examine whether the metatheory is logically consistent or inconsistent. It would also allow us to examine whether the metatheory is mathematically elegant, or whether the same thing could be expressed in a simpler form. The metatheory should also correspond with our actual observations, so that we can check whether it contradicts empirical findings.
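To make the consistency requirement concrete, here is a minimal sketch of the kind of mechanical check I have in mind: treat the metatheory as a small set of propositional axioms and brute-force every truth assignment. The propositions and axioms below are hypothetical placeholders invented for the example, not an actual metatheory of rationality.

```python
from itertools import product

# Toy propositional "metatheory": each axiom is a boolean function of a truth
# assignment. These axioms are made-up placeholders for illustration only.
PROPS = ["rational", "justified", "empirical"]

AXIOMS = [
    lambda v: (not v["rational"]) or v["justified"],   # rational -> justified
    lambda v: (not v["justified"]) or v["empirical"],  # justified -> empirical
    lambda v: v["rational"],                           # something is rational
]

def consistent(props, axioms):
    """Return True iff some truth assignment satisfies every axiom."""
    for values in product([False, True], repeat=len(props)):
        assignment = dict(zip(props, values))
        if all(axiom(assignment) for axiom in axioms):
            return True
    return False

print("consistent" if consistent(PROPS, AXIOMS) else "inconsistent")
```

A real metatheory would of course need quantifiers and a proper proof assistant rather than a truth-table check, but the point is that an analytically defined structure makes this kind of examination possible at all.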
How much interest is there for such a metatheory?
I really don't understand why you don't want a mathematical model of moral decision making, even for discussion. "Moral" is not a philosophical concept here. It is just the thing that makes some decisions better than others. I didn't have the formula when I came here in October. Now I have it. Maybe later I will have something more. And all you can do, with the exception of Risto, is give me -1. Can you recommend a transhumanist community?
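For discussion's sake, here is the barest sketch of what I mean by a model that makes some decisions better than others: any function that scores decisions, with the best decision being the one ranked highest. This is a generic placeholder, not the formula I mentioned above; the option names and weights are invented for the illustration.

```python
from typing import Callable, Iterable

def best_decision(decisions: Iterable[str],
                  score: Callable[[str], float]) -> str:
    """Pick the decision ranked highest by the given scoring model."""
    return max(decisions, key=score)

# Hypothetical example: three options scored by a made-up weighting.
weights = {"donate": 0.9, "ignore": 0.1, "defraud": -1.0}
print(best_decision(weights, lambda d: weights[d]))
```

Everything interesting is in the choice of the scoring function, which is exactly what the model is supposed to supply.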
How do you expect an AI to be rational, if you yourselves don't want to be metarational? Do you want some "pocket calculator" AI?
Too bad you don't like philosophical concepts. I thought you knew computer science is oozing into philosophy, which, as far as academia is concerned, has all but died on its feet.
One thing's for sure: you don't know jack about karma. The AI could actually differentiate karma, in the proper sense of the word, from "reputation". You keep playing with your Lego blocks until you grow up.
It would have been really neat to do this on LessWrong. It would have made for a good story. It would also have been practical. Academia isn't interested in this - there is no academic discipline for studying AI theory at this level of abstraction. I don't even have any AI expertise, and I didn't set out to develop a mathematical model for AI in the first place. That's just what I ended up with after working on this long enough.
I don't like stereotypical LessWrongians - I think they are boring and narrow-minded. Still, I think we could have had something to do together, even though our personalities don't make it easy for us to be friends. Almost anyone with AI expertise is competent enough to help me get started with this. You are not likely to get a better deal: a chance to get famous by doing so little work. But some deals of course seem too good to be true. So call me the "snake oil man" and go play with your Legos.