This whole conversation seems a little awkward now.
That's a good result. However, the necessity of innate biases undermines the notion of rationality, unless we have a system for differentiating the rational cognitive faculty from the innately biased cognitive faculty. I am proposing that this differentiation faculty be rational, hence "Metarationality".
In the Cartesian coordinate system I devised, object-level entities are projected as vectors. Vectors with a positive Y coordinate are rational. The only defined operation so far is addition: vectors can be added to each other. In this metasystem w...
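A minimal sketch of this coordinate system, as far as it is specified above. The entity names and coordinates are hypothetical illustrations, not part of the original description:

```python
# Sketch of the proposed metasystem: object-level entities projected as
# 2D vectors, with positive-Y vectors counted as rational, and vector
# addition as the only defined operation so far.

def add(v, w):
    """The only defined operation: componentwise vector addition."""
    return (v[0] + w[0], v[1] + w[1])

def is_rational(v):
    """A vector is rational iff its Y coordinate is positive."""
    return v[1] > 0

a = (2.0, 3.0)   # hypothetical entity with Y > 0: rational
b = (1.0, -5.0)  # hypothetical entity with Y < 0: not rational

print(is_rational(a))          # True
print(add(a, b))               # (3.0, -2.0)
print(is_rational(add(a, b)))  # False: the sum points into Y < 0
```

Note that under this definition rationality is not preserved by addition: the sum of a rational and an irrational vector can land on either side of the X axis.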
In any case, this "hot-air balloonist vs. archer" (POP!) comparison seems like some sort of ad hominem-type fallacy, and that's why I reacted with an ad hominem attack about Legos and such. First of all, ad hominem is a fallacy and does nothing to undermine my case. It does undermine the notion that you are being rational.
Secondly, if my person is that interesting, I'd say I resemble the mathematician C. S. Peirce more than Ramakrishna. It seems to me mathematics is not necessarily considered completely acceptable by the notion of ...
Why do you give me all the minuses? Just asking.
My work is a type theory for AI: a way for it to conceptualize the input it receives via its artificial senses. If it weren't, I would never have come here.
The conceptualization faculty is accompanied with a formula for making moral evaluations, which is the basis of advanced decision making. Whatever the AI can conceptualize, it can also project as a vector on a Cartesian plane. The direction and magnitude of that vector are the data used in this decision making.
The actual decision making algorithm may begin by making random decisions and filtering good decisions fr...
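The two comments above can be sketched as code. The scoring rule and the concrete options below are placeholder assumptions of mine, not part of the original description; only the shape (project a conceptualization as a vector, use its direction and magnitude for evaluation, start from random decisions and filter the good ones) comes from the text:

```python
import math
import random

def evaluate(v):
    """Project a conceptualization as a plane vector and extract the two
    data used in decision making: direction (angle) and magnitude."""
    angle = math.atan2(v[1], v[0])
    magnitude = math.hypot(v[0], v[1])
    return angle, magnitude

def decide(options, is_good):
    """Begin by making candidate decisions in random order, then filter
    the good decisions from the bad ones."""
    candidates = random.sample(options, k=len(options))
    return [v for v in candidates if is_good(*evaluate(v))]

# Hypothetical moral evaluation: keep decisions whose direction points
# into the upper half-plane (positive angle).
good = decide([(1, 2), (0, -1), (-2, 3)], lambda angle, mag: angle > 0)
```

Here `good` contains `(1, 2)` and `(-2, 3)` in some order; `(0, -1)` is filtered out because its angle is negative.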
Would you find a space rocket to resemble either a balloon or an arrow, but not both?
I didn't imply something Pirsig wrote would, in and of itself, have much to do with artificial intelligence.
LessWrong is like a sieve that only collects stuff that looks like I need it, but on a closer look I don't. You won't come until the table is already set. Fine.
I think the most you can hope for is a model of rationality and irrationality that can model mystics or religious people as well as rationalists. I don't think you can expect everyone to grok that model. That model may not be expressible in a mystic's model of reality.
Agreed. The Pirahã could not use my model, because abstract concepts are banned in their culture. I read in New Scientist that Westerners tried to teach them numbers so that they wouldn't be cheated in trade so much, but upon gaining some insight into what a number is, they refused to t...
If neither side accepts the other side's language as meaningful, why do you believe they would accept the new language?
Somehow related: http://xkcd.com/927/
That's a very good point; I'll give you +1 for it. The language, or type system, I am offering has the merit that no such type system has been devised before. I'll stick to this claim unless proven wrong.
Academic philosophy has its good sides. Rescher's "vagrant predicates" are an impressive and fairly recent invention. I also like confirmation holism. But as far as I know, nobody has tried to do an ...
The page says:
But this doesn't answer the legitimate philosophical dilemma: If every belief must be justified, and those justifications in turn must be justified, then how is the infinite recursion terminated?
I do not assume that every belief must be justified, except possibly within rationality.
Do the arguments against the meaningfulness of coincidence state that coincidences do not exist?
If you respond to that letter, I will not engage in conversation, because the letter is a badly written, outdated progress report of my work. The work is now done; it will be published as a book, and I already have a publisher. If you want to know when the book comes out, you might want to join this Facebook community.
The merit of this language is that it should allow you to converse about rationality with mystics or religious people so that you both understand what you are talking about.
How can we differentiate the irrational from the rational, if we do not know what the irrational is?
Ir...
I am not talking about a prescriptive theory that tells whether one should be rational or not. I am talking about a rational theory that produces a taxonomy of different ways of being rational or irrational without taking a stance on which way should be chosen. Such a theory already implicitly advocates rationality, so it doesn't need to explicitly arrive at conclusions about whether one ought to be rational or not.
That post in particular is a vague overview of meta-rationality, not a systematic account of it. It doesn't describe meta-rationality as something that qualifies as a theory. It just says there is such a thing without telling exactly what it is.
How is Buddhism tainted? Christianity could have been tainted during the purges in the early centuries, but I don't find Buddhism to have deviated from its original teachings in such a way that Buddhists themselves would no longer recognize them. There are millions of Buddhists in the world, so there are bound to be weirdos in that lot. But consider the question: "What is Buddhism, as defined by prominent Buddhists themselves, whose prominence is recognized by traditional institutions that uphold Buddhism?" It doesn't seem to me the answer to thi...
The rationalism-empiricism philosophical debate is somewhat dead. I see no problem in using "rationalism" to mean LW rationalism. "Rationality" (1989) by Rescher defines rationality in the way LW uses the word, but doesn't use "rationalism", ostensibly because of the risk of confusion with the rationalism-empiricism debate. Neither LW nor average people are subject to the same limitations as the academic Rescher, so I think it is prudent to overwrite the meaning of the word "rationalism" now.
Maybe "rationalism&qu...
According to Rationality (1989) by Nicholas Rescher, who is for all intents and purposes a rationalist in the sense LW (not academic philosophy) uses the word, LW rationality is a faith-based ideology. See Quine's confirmation holism, outlined in "Two Dogmas of Empiricism". Rationality is insufficient to justify rationality by rational means, because doing so would presuppose that all means of justification are rational, which already implicitly assumes rationality. Hence, it cannot be refuted that rationality is based on faith. Rescher urges people to accept rationality nevertheless.
Hehe. I'm a psych patient and I'm allowed to visit LessWrong.
Commenting the article:
"When artificial intelligence researchers attempted to capture everyday statements of inference using classical logic they began to realize this was a difficult if not impossible task."
I hope nobody's doing this anymore. It's obviously impossible. "Everyday statements of inference", whatever that might mean, are not exclusively statements of first-order logic, because Russell's paradox is simple enough to be formulated by talking about barbers. The liar paradox is also expressible with simple, practical language.
W...
Okay. In this case, the article does seem to begin to make sense. Its connection to the problem of induction is perhaps rather thin. The idea of using low Kolmogorov complexity as justification for an inductive argument cannot be deduced as a theorem of something that's "surely true", whatever that might mean. And if it were taken as an axiom, philosophers would say: "That's not an axiom. That's the conclusion of an inductive argument you made! You are begging the question!"
However, it seems like advancements in computation theory have ...
I've read some of this Universal Induction article. It seems to operate from flawed premises.
If we prescribe Occam’s razor principle [3] to select the simplest theory consistent with the training examples and assume some general bias towards structured environments, one can prove that inductive learning “works”. These assumptions are an integral part of our scientific method. Whether they admit it or not, every scientist, and in fact every person, is continuously using this implicit bias towards simplicity and structure to some degree.
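The quoted principle can be made concrete with a toy selection rule: among hypotheses consistent with the training examples, pick the simplest. True Kolmogorov complexity is uncomputable, so description length stands in for it here; the hypotheses and examples are invented for illustration:

```python
# Toy Occam's razor: select the simplest theory consistent with the
# training examples. "Simplest" is approximated by description length,
# a crude stand-in for Kolmogorov complexity (which is uncomputable).

def occam_select(hypotheses, examples):
    """hypotheses: list of (description, predict_fn) pairs.
    examples: list of (input, expected_output) pairs."""
    consistent = [
        (desc, fn) for desc, fn in hypotheses
        if all(fn(x) == y for x, y in examples)
    ]
    # Both hypotheses below fit the data; the shorter description wins.
    return min(consistent, key=lambda h: len(h[0]))

examples = [(1, 2), (2, 4), (3, 6)]
hypotheses = [
    ("2*x", lambda x: 2 * x),
    ("2*x if x<4 else 7", lambda x: 2 * x if x < 4 else 7),
]
best = occam_select(hypotheses, examples)  # picks "2*x"
```

Both hypotheses agree on all training examples, so the data alone cannot separate them; only the bias toward simplicity does, which is exactly the assumption the article says every scientist relies on.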
Suppose the brain...
The intelligent agent model still deals with deterministic machines that take input and produce output, but it incorporates the possibility of changing the agent's internal state: the output function takes the entire input history X* as its input when producing the latest output Y, so that a different history of inputs can lead to a different output on the latest input, just as it can with humans and more sophisticated machines.
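That model can be sketched directly: the agent is a pure function from the whole input history to the latest output, with no hidden mutable state. The concrete policy below is a hypothetical example of mine, chosen only to show two different histories producing different outputs for the same last input:

```python
from typing import Sequence

def agent(history: Sequence[str]) -> str:
    """Deterministic agent: the entire input history X* maps to the
    latest output Y. Two agents with different pasts can respond
    differently to the same most recent input."""
    if history.count("ping") >= 2:
        return "enough pings"
    return "pong" if history and history[-1] == "ping" else "idle"

print(agent(["ping"]))                  # "pong"
print(agent(["ping", "ping", "ping"]))  # "enough pings"
```

The last input is "ping" in both calls, yet the outputs differ, because the function sees the whole history rather than just the latest symbol.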
At first, I didn't quite understand this. But I'm reading Introduction to Automata Theory...
Okay. That sounds very good. And it would seem to be in accordance with this statement:
Reductionism is not a positive belief, but rather, a disbelief that the higher levels of simplified multilevel models are out there in the territory.
If reductionism does not entail that I must construct the notion of a territory and include it into my conceptualizations at all times, it's not a problem. I now understand even better why I was confused by this. This kind of reductionism is not reductive physicalism. It's hardly a philosophical statement at all, which i...
I got "reductionism" wrong, actually. I thought the author was using some nonstandard definition of reductionism, something to the effect of not having unnecessary declarations in a theory. I did not consider that the author could actually mean what he says, no bells and whistles, because I didn't expect reductionism to be taken seriously here. But that just means I misjudged. Of course I am not necessarily even supposed to be on this site. I am looking for people who might give useful ideas f...
But this is just the brain trying to efficiently compress an object that it cannot remotely begin to model on a fundamental level. The airplane is too large. Even a hydrogen atom would be too large. Quark-to-quark interactions are insanely intractable. You can't handle the truth.
Can you handle the truth then? I don't understand the notion of truth you are using. In everyday language, when a person states something as "true", it doesn't usually need to be grounded to logic in order to work for a practical purpose. But you are making extr...
I wrote a bunch of comments to this work while discussing with Risto_Saarelma. But I thought I should rather post them here. I came here to discuss certain theories that are on the border between philosophy and something which could be useful for the construction of AI. I've developed my own such theory based on many years of work on an unusual metaphysical system called the Metaphysics of Quality, which is largely ignored in the academy and deviates from the tradition. It's not very "old" stuff. The formation of that tradition of discussion bega...
Sorry if this feels like dismissing your stuff.
You don't have to apologize, because you have been useful already. I don't require you to go out of your way to analyze this stuff, but of course it would also be nice if we could understand each other.
...The reason I went on about the complexity of the DNA and the brain is that this is stuff that wasn't really known before the mid-20th century. Most of modern philosophy was being done when people had some idea that the process of life is essentially mechanical and not magical, but no real idea on just how c
According to the abstract, the scope of the theory you linked is a subset of RP. :D I find this hilarious, because the theory was described as "ridiculously broad". It seems to attempt to encompass all of O, and may contain interesting insight my work clearly does not contain. But RP defines a certain scope of things, and everything in this article seems to belong to O, with perhaps some N, without clearly differentiating the two. S is missing, which is rather typical in science. From the scientific point of view, it may be hard to understand wha...
It isn't answering the question of how you'd tell a computer how to be a mind, and that's the question I keep looking at this stuff with.
There are many ways to answer that question. I have a flowchart and formulae. The opposite of that would be something to the effect of having the source code. I'm not sure why you expect me to have that. Was it something I said?
I thought I've given you links to my actual work, but I can't find them. Did I forget? Hmm...
If you dislike ...
...A theory of mind that can actually do the work needs to build up the same sort of kernel evolution and culture have set up for people. For the human ballpark estimate, you'd have to fill something like 100 000 pages with math, all setting up the basic machinery you need for the mind to get going. A very abstracted out theory of mind could no doubt cut off an order of magnitude or two out of that, but something like Maxwell's equations on a single sheet of paper won't do. It isn't answering the question of how you'd tell a computer how to be a mind, and th
I don't find the Chinese room argument related to our work; besides, it seems to vaguely suggest that what we are doing can't be done. What I meant is that AI should be able to:
You probably have a much more grassroots-level understanding of the symbol grounding problem. I have only solved the symbol grounding problem to the extent that I have a formal understanding of its nature.
In any case, I am probably approaching AI from a point of view that is far from the symbol grounding problem. My theory does not need to be seen as a useful solution to that problem. But when a useful solution is created, I postulate it can be placed within RP. Such a solution would have to be an algorithm for creating S-type or O-type sets of members of R...
Of course the symbol grounding problem is rather important, so it doesn't really suffice to say that "set R is supposed to contain sensory input". The metaphysical idea of RP is something to the effect of the following:
Let n be 4.
R contains everything that could be used to ground the meaning of symbols.
It's not like your average "competent metaphysician" would understand Langan either. He might not even understand Wheeler. Langan's undoing is to have the goals of a metaphysician and the methods of a computer scientist. He is trying to construct a metaphysical theory which structurally resembles a programming language with dynamic type checking, as opposed to static typing. Now, metaphysicians do not tend to construct such theories, and computer scientists do not tend to be very familiar with metaphysics. Metaphysical theories tend to be ...
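The dynamic-vs-static distinction invoked here is easy to show concretely. In a dynamically checked language like Python, a value's type travels with the value and is checked only when an operation demands it; nothing is declared in advance. The snippet below is my illustration of that distinction, not anything from Langan's theory itself:

```python
# Dynamic type checking: types belong to values and are checked at
# run time, not declared and verified before the program runs.

x = 42
first_type = type(x).__name__   # "int"
x = "now a string"              # rebinding to another type is legal
second_type = type(x).__name__  # "str"

# A statically typed language would reject the mismatch below at
# compile time; here the check fires only when the operation runs.
try:
    result = "text" + 1  # raises TypeError at run time
except TypeError:
    result = "type error caught at run time"

print(first_type, second_type, result)
```

A statically typed counterpart would fix each name's type up front and refuse to compile the rebinding, which is roughly the contrast between building type structure into the theory versus checking it as the theory "runs".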
To clarify, I'm not the generic "skeptic" of philosophical thought experiments. I am not at all doubting the existence of the world outside my head. I am just an apparently competent metaphysician in the sense that I require a Wheeler-style reality theory to actually be a Wheeler-style reality theory with respect to not having arbitrary declarations.
That's not a critical flaw. In metaphysics, you can't take for granted that the world is not in your head. The only thing you really can do is to find an inconsistency, if you want to prove someone wrong.
Langan has no problems convincing me. His attempt at constructing a reality theory is serious and mature and I think he conducts his business about the way an ordinary person with such aims would. He's not a literary genius like Robert Pirsig, he's just really smart otherwise.
I've never heard anyone to present such criticism of the CTMU that would actually...
I apologize. I no longer feel a need to behave in the way I did.