All of Tuukka_Virtaperko's Comments + Replies

I apologize. I no longer feel a need to behave in the way I did.

This whole conversation seems a little awkward now.

2Tuukka_Virtaperko
I apologize. I no longer feel a need to behave in the way I did.

That's a good result. However, the necessity of innate biases undermines the notion of rationality, unless we have a system for differentiating the rational cognitive faculty from the innately biased cognitive faculty. I am proposing that this differentiation faculty be rational, hence "Metarationality".

In the Cartesian coordinate system I devised, object-level entities are projected as vectors. Vectors with a positive Y coordinate are rational. The only defined operation so far is addition: vectors can be added to each other. In this metasystem w... (read more)
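
A minimal sketch of that representation (illustrative names only; the metasystem itself defines more than this):

    # Illustrative sketch of the vector projection described above.
    from dataclasses import dataclass

    @dataclass
    class Concept:
        x: float  # object-level coordinate (interpretation left open here)
        y: float  # a positive y coordinate marks the rational side

        def is_rational(self) -> bool:
            return self.y > 0

        def __add__(self, other: "Concept") -> "Concept":
            # Addition is the only operation defined so far.
            return Concept(self.x + other.x, self.y + other.y)

    a = Concept(1.0, 2.0)
    b = Concept(0.5, -3.0)
    print((a + b).is_rational())  # False: the sum falls below the x-axis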

In any case, this "hot-air balloonist vs. archer" (POP!) comparison seems like some sort of ad hominem-type fallacy, and that's why I reacted with an ad hominem attack about legos and stuff. First of all, ad hominem is a fallacy and does nothing to undermine my case. It does, however, undermine the notion that you are being rational.

Secondly, if my person is that interesting, I'd say I resemble the mathematician C. S. Peirce more than Ramakrishna. It seems to me mathematics is not necessarily considered completely acceptable by the notion of ... (read more)

1Risto_Saarelma
I didn't intend it as much of an ad hominem; after all, both groups in the comparison are so far quite unprepared for the undertaking they're attempting. Just trying to find ways to describe the cultural mismatch that seems to be going on here. I understand that math is starting to have some stuff dealing with how to make good maps from a territory. Only that's inside the difficult and technical stuff like Jaynes' Probability Theory or Pearl's Causality, instead of somebody just making a nice new logical calculus with an operator for doing induction. There are already some actual philosophically interesting results, like an inductive learner needing to have innate biases to be able to learn anything.

Why do you give me all the minus? Just asking.

-4Tuukka_Virtaperko
Why do you give me all the minus? Just asking.
-6Tuukka_Virtaperko
-10Tuukka_Virtaperko

My work is a type theory for AI for conceptualizing the input it receives via its artificial senses. If it weren't, I would have never come here.

The conceptualization faculty is accompanied by a formula for making moral evaluations, which is the basis of advanced decision making. Whatever the AI can conceptualize, it can also project as a vector on a Cartesian plane. The direction and magnitude of that vector are the data used in this decision making.

The actual decision making algorithm may begin by making random decisions and filtering good decisions fr... (read more)
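
A rough sketch of that kind of filtering loop; the moral_value formula below is a placeholder, not the actual evaluation formula:

    import math
    import random

    def moral_value(vector):
        """Placeholder for the evaluation formula: combine the direction and
        magnitude of the projected vector into one score (here it reduces to
        the y-component, rewarding vectors that point 'up')."""
        x, y = vector
        return math.hypot(x, y) * math.sin(math.atan2(y, x))

    def choose(candidate_decisions, projections):
        """Filter randomly generated decisions, keeping the best-evaluated one."""
        return max(candidate_decisions, key=lambda d: moral_value(projections[d]))

    projections = {"help": (1.0, 2.0), "ignore": (2.0, -0.5)}
    candidates = random.sample(list(projections), k=2)  # "random decisions"
    print(choose(candidates, projections))  # "help"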

-7Tuukka_Virtaperko

Would you say a space rocket resembles either a balloon or an arrow, but not both?

I didn't imply something Pirsig wrote would, in and of itself, have much to do with artificial intelligence.

LessWrong is like a sieve that only collects stuff that looks like I need it, but on a closer look I don't. You won't come until the table is already set. Fine.

0Risto_Saarelma
The point is that the people who build it will resemble balloon-builders, not archers. People who are obsessed with getting machines to do things, not people who are obsessed with human performance.

I think the most you can hope for is a model of rationality and irrationality that can model mystics or religious people as well as rationalists. I don't think you can expect everyone to grok that model. That model may not be expressible in a mystic's model of reality.

Agree. The Pirahã could not use my model because abstract concepts are banned in their culture. I read in New Scientist that white men tried to teach them numbers so that they wouldn't be cheated in trade so much, but upon getting some insight into what a number is, they refused to t... (read more)

If neither side accepts the other side's language as meaningful, why do you believe they would accept the new language?

Somewhat related: http://xkcd.com/927/

That's a very good point. Gonna give you +1 on that. The language, or type system, I am offering has the merit that no such type system has been devised before. I stick to this unless proven wrong.

Academic philosophy has its good sides. Rescher's "vagrant predicates" are an impressive and pretty recent invention. I also like confirmation holism. But as far as I know, nobody has tried to do an ... (read more)

The page says:

But this doesn't answer the legitimate philosophical dilemma: If every belief must be justified, and those justifications in turn must be justified, then how is the infinite recursion terminated?

I do not assume that every belief must be justified, except possibly within rationality.

Do the arguments against the meaningfulness of coincidence state that coincidences do not exist?

If you respond to that letter, I will not engage in conversation, because the letter is a badly written, outdated progress report of my work. The work is now done, it will be published as a book, and I already have a publisher. If you want to know when the book comes out, you might want to join this Facebook community.

0TimS
Those are the labels used to describe the issue by the participants. But taking an outside view, the issue is inconsistent principles between the two sides. The fact that true religious believers reject the need for beliefs to pay rent in anticipated experience won't be solved by new vocabulary.

The merit of this language is that it should allow you to converse about rationality with mystics or religious people so that you both understand what you are talking about.

I think the most you can hope for is a model of rationality and irrationality that can model mystics or religious people as well as rationalists. I don't think you can expect everyone to grok that model. That model may not be expressible in a mystic's model of reality.

How can we differentiate the irrational from the rational, if we do not know what the irrational is?

Ir... (read more)

5Viliam_Bur
If neither side accepts the other side's language as meaningful, why do you believe they would accept the new language? Somewhat related: http://xkcd.com/927/
-2[anonymous]
Unless I missed something, I'm only seeing one out of the three things he was stating were necessary.
9gwern
Your theory seems completely arbitrary to me and I can only stare in perplexity at the graphs you build on top of it, but moving on: Really? Maybe you should restate it all in mainstream terms and you won't look crazier than a bug in a rug. Incidentally, would I be correct in guessing that Robert Pirsig never replied to you?
4novalis
I didn't vote on this article, as it happens. This post is another one of the ones I was talking about. I wasn't really paying attention to where in the sequences anything was (it's been so long since I read them that they're all blurred together in my mind). There are certainly strong arguments against the meaningfulness of coincidence (and I think the heuristics and biases program does address some of when and why people think coincidences are meaningful).

I am not talking about a prescriptive theory that tells whether one should be rational or not. I am talking about a rational theory that produces a taxonomy of different ways of being rational or irrational without taking a stance on which way should be chosen. Such a theory already implicitly advocates rationality, so it doesn't need to explicitly arrive at conclusions about whether one ought to be rational or not.

That post in particular is a vague overview of meta-rationality, not a systematic account of it. It doesn't describe meta-rationality as something that qualifies as a theory. It just says there is such a thing without telling exactly what it is.

2novalis
Sorry, I meant that that series of posts addresses the justification issue, if somewhat informally.

How is Buddhism tainted? Christianity could have been tainted during the purges in the early centuries, but I don't find Buddhism to have deviated from its original teachings in such a way that Buddhists themselves would no longer recognize them. There are millions of Buddhists in the world, so there are bound to be weirdos in that lot. But consider the question: "What is Buddhism, as defined by prominent Buddhists themselves, whose prominence is recognized by traditional institutions that uphold Buddhism?" It doesn't seem to me the answer to thi... (read more)

0Richard_Kennaway
This chap would disagree. There's rather a lot of words there, so briefly: Buddhism in the Western world -- what he calls "Consensus Buddhism" -- is for the most part an invention of the 19th and 20th centuries with more roots in European and American culture than in the countries it came from.

The rationalism-empiricism philosophical debate is somewhat dead. I see no problem in using "rationalism" to mean LW rationalism. "Rationality" (1989) by Rescher defines rationality in the way LW uses the word, but doesn't use "rationalism", ostensibly because of the risk of confusion with the rationalism-empiricism debate. Neither LW nor average people are subject to the same limitations as the academic Rescher, so I think it is prudent to overwrite the meaning of the word "rationalism" now.

Maybe "rationalism&qu... (read more)

According to Rationality (1989) by Nicholas Rescher, who is for all intents and purposes a rationalist in the sense LW (not academic philosophy) uses the word, LW rationality is a faith-based ideology. See confirmation holism by Quine, outlined in "Two Dogmas of Empiricism". Rationality is insufficient to justify rationality with rational means, because to do so would presuppose that all means of justification are rational, which already implicitly assumes rationality. Hence, it cannot be refuted that rationality is based on faith. Rescher urges people to accept rationality nevertheless.

Hehe. I'm a psych patient and I'm allowed to visit LessWrong.

4Alicorn
Do you have fascinating delusions you would like to let us try to do Bayes to?

Commenting on the article:

"When artificial intelligence researchers attempted to capture everyday statements of inference using classical logic they began to realize this was a difficult if not impossible task."

I hope nobody's doing this anymore. It's obviously impossible. "Everyday statements of inference", whatever that might mean, are not exclusively statements of first-order logic, because Russell's paradox is simple enough to be formulated by talking about barbers. The liar paradox is also expressible with simple, practical language.
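
Spelled out, the barber version is just the first-order sentence

    \exists b \,\forall x \,\bigl(\mathrm{Shaves}(b, x) \leftrightarrow \neg\,\mathrm{Shaves}(x, x)\bigr)

which refutes itself as soon as x is instantiated as b; the set-theoretic version, R = \{x : x \notin x\}, does the corresponding damage inside naive comprehension.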

W... (read more)

Okay. In this case, the article does seem to begin to make sense. Its connection to the problem of induction is perhaps rather thin. The idea of using low Kolmogorov complexity as justification for an inductive argument cannot be deduced as a theorem of something that's "surely true", whatever that might mean. And if it were taken as an axiom, philosophers would say: "That's not an axiom. That's the conclusion of an inductive argument you made! You are begging the question!"
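
As I understand it, the formalization being referred to is Solomonoff's universal prior, which weights hypotheses by the length of the programs that produce them:

    M(x) \;=\; \sum_{p \,:\, U(p) = x*} 2^{-\ell(p)}

where U is a universal prefix machine, \ell(p) is the length of program p in bits, and U(p) = x* means the output of p begins with x. Whether that weighting is an axiom or a theorem is exactly the question above.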

However, it seems like advancements in computation theory have ... (read more)

0Tuukka_Virtaperko
Commenting on the article:

"When artificial intelligence researchers attempted to capture everyday statements of inference using classical logic they began to realize this was a difficult if not impossible task."

I hope nobody's doing this anymore. It's obviously impossible. "Everyday statements of inference", whatever that might mean, are not exclusively statements of first-order logic, because Russell's paradox is simple enough to be formulated by talking about barbers. The liar paradox is also expressible with simple, practical language.

Wait a second. Wikipedia already knows this stuff is a formalization of Occam's razor. One article seems to attribute the formalization of that principle to Solomonoff, another one to Hutter. In addition, Solomonoff induction, which is essential for both, is not computable.

Ugh. So Hutter and Rathmanner actually have the nerve to begin that article by talking about the problem of induction, when the goal is obviously to introduce concepts of computation theory? And they are already familiar with Occam's razor, and aware of it having, at least probably, been formalized? Okay then, but this doesn't solve the problem of induction. They have not even formalized the problem of induction in a way that accounts for the logical structure of inductive inference and leaves room for various relevance operators to take place. Nobody else has done that either, though. I should get back to this later.

I've read some of this Universal Induction article. It seems to operate from flawed premises.

If we prescribe Occam’s razor principle [3] to select the simplest theory consistent with the training examples and assume some general bias towards structured environments, one can prove that inductive learning “works”. These assumptions are an integral part of our scientific method. Whether they admit it or not, every scientist, and in fact every person, is continuously using this implicit bias towards simplicity and structure to some degree.

Suppose the brain... (read more)

1Risto_Saarelma
The omitted information in this approach is information with a high Kolmogorov complexity, which is omitted in favor of information with low Kolmogorov complexity. A very rough analogy would be to describe humans as having a bias towards ideas expressible in few words of English in favor of ideas that need many words of English to express. Using Kolmogorov complexity for sequence prediction instead of English language for ideas in the construction gets rid of the very many problems of rigor involved in the latter, but the basic idea is pretty much the same. You look into things that are briefly expressible in favor of things that must be expressed at length. The information isn't permanently omitted, it's just deprioritized. The algorithm doesn't start looking at the stuff you need long sentences to describe before it has convinced itself that there are no short sentences that describe the observations it wants to explain in a satisfactory way.

One bit of context that is assumed is that the surrounding universe is somewhat amenable to being Kolmogorov-compressed. That is, there are some recurring regularities that you can begin to discover. The term "lawful universe" sometimes thrown around in LW probably refers to something similar. Solomonoff's universal induction would not work in a completely chaotic universe, where there are no regularities for Kolmogorov compression to latch onto. You'd also be unlikely to find any sort of native intelligent entities in such universes. I'm not sure if this means that the Solomonoff approach is philosophically untenable, but needing to have some discoverable regularities to begin with before discovering regularities with induction becomes possible doesn't strike me as that great a requirement.

If the problem of context is about exactly where you draw the data for the sequence which you will then try to predict with Solomonoff induction, in a lawless universe you wouldn't be able to infer things no matter which simple instrumenta
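
A toy illustration of the "shortest descriptions first" idea, using expression length as a crude stand-in for Kolmogorov complexity (which is itself uncomputable); the observations and hypotheses are made up:

    # Try hypotheses in order of increasing description length and keep the
    # first one that fits -- the depriorization scheme described above.
    observations = [1, 4, 9, 16, 25]

    hypotheses = ["n", "n + 1", "2 * n", "n * n", "n ** 3 - 2 * n"]

    def fits(expr, data):
        return all(eval(expr, {"n": i + 1}) == v for i, v in enumerate(data))

    for expr in sorted(hypotheses, key=len):      # shorter descriptions first
        if fits(expr, observations):
            print("simplest fitting hypothesis:", expr)  # -> "n * n"
            break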

The intelligent agent model still deals with deterministic machines that take input and produce output, but it incorporates the possibility of changing the agent's internal state by presenting the output function as just taking the entire input history X* as an input to the function that produces the latest output Y, so that a different history of inputs can lead to a different output on the latest input, just like it can with humans and more sophisticated machines.

At first, I didn't quite understand this. But I'm reading Introduction to Automata Theory... (read more)

1Risto_Saarelma
Yeah, that's probably where it comes from. The [A-Z] can be read as "the set of every possible English capital letter" just like X can be read as "the set of every possible perception to an agent", and the * denotes some ordered sequence of elements from the set exactly the same way in both cases.
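
In code, the same skeleton might look like this (a sketch only; the Perception and Action types and the agent's rule are made up):

    from typing import Sequence

    Perception = str   # stand-ins; X and Y could be arbitrarily rich types
    Action = str

    def agent(history: Sequence[Perception]) -> Action:
        """The whole input history X* maps to the latest output Y, so two agents
        with identical code can act differently if their histories differ."""
        if "insult" in history:
            return "sulk"
        return "greet"

    print(agent(["hello"]))             # greet
    print(agent(["hello", "insult"]))   # sulk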

Okay. That sounds very good. And it would seem to be in accordance with this statement:

Reductionism is not a positive belief, but rather, a disbelief that the higher levels of simplified multilevel models are out there in the territory.

If reductionism does not entail that I must construct the notion of a territory and include it in my conceptualizations at all times, it's not a problem. I now understand even better why I was confused by this. This kind of reductionism is not reductive physicalism. It's hardly a philosophical statement at all, which i... (read more)

I got "reductionism" wrong, actually. I thought the author was using some nonstandard definition of reductionism, which would have been something to the effect of not having unnecessary declarations in a theory. I did not take into account that the author could actually be what he says he is, no bells and whistles, because I didn't take into account that reductionism could be taken seriously here. But that just means I misjudged. Of course I am not necessarily even supposed to be on this site. I am looking for people who might give useful ideas f... (read more)

3DSimon
You seem to be overthinking this. Reductionism is "merely" a really useful cognition technique, because calculating everything at the finest possible level is hopelessly inefficient.

Perhaps a practical simple example is needed: An AI that can use reductionism can say "Oh, that collection of pixels within my current view is a dog, and this collection is a man, and the other collection is a leash", and go on to match against (and develop on its own) patterns about objects at the coarser-than-pixel size of dogs, men, and leashes. Without reductionism, it would be forced to do the pattern matching for everything, even for complex concepts like "Man walking a dog", directly at the pixel level, which is not impossible but is certainly a lot slower to run and harder to update.

If you've ever refactored a common element out of your code into its own module, or even if you've used a library or high-level language, you are also using reductionism. The non-reductionistic alternative would be something like writing every program from scratch, in machine code.
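
A minimal sketch of that dog/man/leash example, with hypothetical object labels standing in for a real vision pipeline:

    # Reduce the image to coarse objects once, then match patterns over objects
    # instead of re-matching "man walking a dog" against raw pixels every time.
    def detect_objects(pixels):
        """Stand-in for the expensive pixel-level step; labels are made up."""
        return {"man", "dog", "leash"}

    def matches_scene(objects, pattern):
        """Pattern matching at the object level is cheap set logic."""
        return pattern <= objects

    pixels = [[0] * 640 for _ in range(480)]   # dummy image
    objects = detect_objects(pixels)           # reduce once
    print(matches_scene(objects, {"man", "dog", "leash"}))   # True
    print(matches_scene(objects, {"cat"}))                   # False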

But this is just the brain trying to efficiently compress an object that it cannot remotely begin to model on a fundamental level. The airplane is too large. Even a hydrogen atom would be too large. Quark-to-quark interactions are insanely intractable. You can't handle the truth.

Can you handle the truth, then? I don't understand the notion of truth you are using. In everyday language, when a person states something as "true", it doesn't usually need to be grounded in logic in order to work for a practical purpose. But you are making extr... (read more)

0DSimon
Actually, this may be a good point for me to try to figure out what you mean by "realism", because here you seem to have connected that word to some but not all strategies of problem-solving. Can you give me some specific examples of problems which the mind tends to use realism in solving, and problems where it doesn't?
1DSimon
I don't follow why you claim that reductionism and realism are incompatible. I think this may be because I'm very confused when I try to figure out, from context, what you mean by "realism", and I strongly suspect that that's because you don't have a definition of that word which can be used in tests for updating predictions, which is the sort of thing LWers look for in a useful definition. Basically, I'm inclined to agree with you when you say: This is a really good reason in my experience for not getting into long discussions about "But what is reality, really?"
1thomblake
A belief is true when it corresponds to reality. Or equivalently, "X" is true iff X. In the map/territory distinction, reality is the territory. Less figuratively, reality is the thing that generates experimental results. From The Simple Truth:

I wrote a bunch of comments on this work while discussing it with Risto_Saarelma, but I thought I should rather post them here. I came here to discuss certain theories that are on the border between philosophy and something which could be useful for the construction of AI. I've developed my own such theory based on many years of work on an unusual metaphysical system called the Metaphysics of Quality, which is largely ignored in the academy and deviates from the tradition. It's not very "old" stuff. The formation of that tradition of discussion bega... (read more)

2Risto_Saarelma
I'll go with 61 pages and quite a few formulae.

Sorry if this feels like dismissing your stuff.

You don't have to apologize, because you have been useful already. I don't require you to go out of your way to analyze this stuff, but of course it would also be nice if we could understand each other.

The reason I went on about the complexity of the DNA and the brain is that this is stuff that wasn't really known before the mid-20th century. Most of modern philosophy was being done when people had some idea that the process of life is essentially mechanical and not magical, but no real idea on just how c

... (read more)

According to the abstract, the scope of the theory you linked is a subset of RP. :D I find this hilarious because the theory was described as "ridiculously broad". It seems to attempt to encompass all of O, and may contain interesting insight my work clearly does not contain. But the RP defines a certain scope of things, and everything in this article seems to belong to O, with perhaps some N without clearly differentiating the two. S is missing, which is rather usual in science. From the scientific point of view, it may be hard to understand wha... (read more)

It isn't answering the question of how you'd tell a computer how to be a mind, and that's the question I keep looking at this stuff with.

There are many ways to answer that question. I have a flowchart and formulae. The opposite of that would be something to the effect of having the source code. I'm not sure why you expect me to have that. Was it something I said?

I thought I've given you links to my actual work, but I can't find them. Did I forget? Hmm...

If you dislike ... (read more)

2Risto_Saarelma
I'm mostly writing this stuff trying to explain what my mindset, which I guess to be somewhat coincident with the general LW one, is like, and where it seems to run into problems with trying to understand these theories. My question about the assumptions is basically poking at something like "what's the informal explanation of why this is a good way to approach figuring out reality", which isn't really an easy thing to answer. I'm mostly writing about my own viewpoint instead of addressing the metaphysical theory, since it's easy to write about stuff I already understand, and a lot harder to try to understand something coming from a different tradition and make meaningful comments about it. Sorry if this feels like dismissing your stuff.

The reason I went on about the complexity of the DNA and the brain is that this is stuff that wasn't really known before the mid-20th century. Most of modern philosophy was being done when people had some idea that the process of life is essentially mechanical and not magical, but no real idea on just how complex the mechanism is. People could still get away with assuming that intelligent thought is not that formally complex around the time of Russell and Wittgenstein, until it started dawning just what a massive hairball of a mess human intelligence working in the real world is after the 1950s. Still, most philosophy seems to be following the same mode of investigation as Wittgenstein or Kant did, despite the sudden unfortunate appearance of a bookshelf full of volumes written by insane aliens between the realm of human thought and basic logic discovered by molecular biologists and cognitive scientists.

I'm not expecting people to rewrite the 100 000 pages of complexity into human mathematics, but I'm always aware that it needs to be dealt with somehow. For one thing, it's a reason to pay more attention to empiricism than philosophy has traditionally done. As in, actually do empirical stuff, not just go "ah, yes, empiricism is

A theory of mind that can actually do the work needs to build up the same sort of kernel evolution and culture have set up for people. For the human ballpark estimate, you'd have to fill something like 100 000 pages with math, all setting up the basic machinery you need for the mind to get going. A very abstracted out theory of mind could no doubt cut off an order of magnitude or two out of that, but something like Maxwell's equations on a single sheet of paper won't do. It isn't answering the question of how you'd tell a computer how to be a mind, and th

... (read more)

I don't find the Chinese room argument related to our work - besides, it seems to vaguely suggest that what we are doing can't be done. What I meant is that an AI should be able to:

  • Observe behavior
  • Categorize entities as deterministic machines which cannot take a metatheoretic approach to their data processing habits and alter them.
  • Categorize entities as agents which process information recursively and can consciously alter their own data processing or explain it to others.
  • Use this categorization ability to differentiate entities whose b
... (read more)
5Risto_Saarelma
About the classification thing: Agree that it's very important that a general AI be able to classify entities into "dumb machines" and things complex enough to be self-aware, warrant an intentional stance and require ethical consideration. Even putting aside the ethical concerns, being able to recognize complex agents with intentions and model their intentions instead of their most likely massively complex physical machinery is probably vital to any sort of meaningful ability to act in a social domain with many other complex agents (cf. Dennett's intentional stance).

I understood the existing image reconstruction experiments measure the activation on the visual cortex when the subject is actually viewing an image, which does indeed get you a straightforward mapping to a bitmap. This isn't the same as thinking about a cat: a person could be thinking about a cat while not looking at one, and they could have a cat in their visual field while daydreaming or suffering from hysterical blindness, so that they weren't thinking about a cat despite having a cat image correctly show up in their visual cortex scan. I don't actually know what the neural correlate of thinking about a cat, as opposed to having one's visual cortex activated by looking at one, would be like, but I was assuming interpreting it would require much more sophisticated understanding of the brain, basically at the level of difficulty of telling whether a brain scan correlates with thinking about freedom, a theory of gravity or reciprocality. Basically something that's entirely beyond current neuroscience and more indicative of some sort of Laplace's demon like thought experiment where you can actually observe and understand the whole mechanical ensemble of the brain.

Quines are maps that contain themselves. A quining system could reflect on its entire static structure, though it would have to run some sort of emulation slower than its physical substrate to predict its future states. Hofstadter's GEB links
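
For reference, a quine in Python takes only two lines:

    s = 's = %r\nprint(s %% s)'
    print(s % s)

Run on their own, those two lines print exactly those two lines; a system rich enough to do that can represent its own static structure, which is the property being pointed at here.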

You probably have a much more grassroots-level understanding of the symbol grounding problem. I have only solved the symbol grounding problem to the extent that I have a formal understanding of its nature.

In any case, I am probably approaching AI from a point of view that is far from the symbol grounding problem. My theory does not need to be seen as a useful solution to that problem. But when a useful solution is created, I postulate it can be placed within RP. Such a solution would have to be an algorithm for creating S-type or O-type sets of members of R... (read more)

4Risto_Saarelma
I don't really understand this part. "The scanner does not understand the information but the person does" sounds like some variant of Searle's Chinese Room argument when presented without further qualifiers. People in AI tend to regard Searle as a confused distraction.

The intelligent agent model still deals with deterministic machines that take input and produce output, but it incorporates the possibility of changing the agent's internal state by presenting the output function as just taking the entire input history X* as an input to the function that produces the latest output Y, so that a different history of inputs can lead to a different output on the latest input, just like it can with humans and more sophisticated machines.

I suppose the idea here is that there is some difference depending on whether there is a human being sitting in the scanner or, say, a toy robot with a state of two bits, where one is "I am thinking about cats" and the other is "I am broken and will lie about thinking about cats". With the robot, we could just check the "broken" bit as well from the scan when the robot is disagreeing with the scanner, and if it is set, conclude that the robot is broken. I'm not seeing how humans must be fundamentally different. The scanner can already do the extremely difficult task of mapping a raw brain state to the act of thinking about a cat; it should also be able to tell from the brain state whether the person has something going on in their brain that will make them deny thinking about a cat.

Things being deterministic and predictable from knowing their initial state doesn't mean they can't have complex behavior, reacting to a long history of sensory inputs accompanied by a large amount of internal processing that might correspond quite well to what we think of as reflection or understanding.

Sorry I keep skipping over your formalism stuff, but I'm still not really grasping the underlying assumptions behind this approach. (The underlying approach in the compute
2Risto_Saarelma
I'll address the rest in a bit, but about the notation: T -> U is a function from set T to set U. P* means a list of elements in set P, where the difference from a set is that elements in a list are in a specific order. The notation as a whole was a somewhat fudged version of intelligent agent formalism.

The idea is to set up a skeleton for modeling any sort of intelligent entity, based on the idea that the entity only learns things from its surroundings through a series of perceptions, which might for example be a series of matrices corresponding to the images a robot's eye camera sees, and can only affect its surroundings by choosing an action it is capable of, such as moving a robotic arm or displaying text to a terminal. The agent model is pretty all-encompassing, but also not that useful except as the very first starting point, since all of the difficulty is in the exact details of the function that turns the most likely massive amount of data in the perception history into a well-chosen action that efficiently furthers the goals of the AI.

Modeling AIs as the function from a history of perceptions to an action is also related to thought experiments like Ned Block's Blockhead, where a trivial AI that passes the Turing test with flying colors is constructed by merely enumerating every possible partial conversation up to a certain length, and writing up the response a human would make at that point of that conversation. Scott Aaronson's Why philosophers should care about computational complexity proposes to augment the usual high-level mathematical frameworks with some limits to the complexity of the black box functions, to make the framework reject cases like Blockhead, which seem to be very different from what we'd like to have when we're looking for a computable function that implements an AI.
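
A caricature of Blockhead as literal code, with a toy table standing in for the astronomically large one Block imagines:

    # Blockhead as a lookup table: an index from conversation-so-far to a canned
    # human-like reply. It "passes" any test whose transcripts it has enumerated
    # while doing nothing we'd want to call thinking -- which is why limits on
    # the complexity of the function matter.
    BLOCKHEAD_TABLE = {
        ("Hello!",): "Hi there, nice to meet you.",
        ("Hello!", "Hi there, nice to meet you.", "What's 2+2?"): "Four, of course.",
    }

    def blockhead(conversation_so_far):
        return BLOCKHEAD_TABLE.get(tuple(conversation_so_far), "Hmm, tell me more.")

    print(blockhead(["Hello!"]))
    print(blockhead(["Hello!", "Hi there, nice to meet you.", "What's 2+2?"]))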

Of course the symbol grounding problem is rather important, so it doesn't really suffice to say that "set R is supposed to contain sensory input". The metaphysical idea of RP is something to the effect of the following:

Let n be 4.

R contains everything that could be used to ground the meaning of symbols.

  • R1 contains sensory perceptions
  • R2 contains biological needs such as eating and sex, and emotions
  • R3 contains social needs such as friendship and respect
  • R4 contains mental needs such as perceptions of symmetry and beauty (the latter is sometimes
... (read more)
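
A bare-bones type sketch of the R1-R4 sets just listed (the members shown are placeholders, not theory content):

    from enum import Enum
    from typing import Optional

    class Ground(Enum):
        R1 = "sensory perceptions"
        R2 = "biological needs and emotions"
        R3 = "social needs"
        R4 = "mental needs (symmetry, beauty)"

    # R is the union of the four subsets; a grounded symbol points into some R_i.
    R = {
        Ground.R1: {"red patch", "loud noise"},
        Ground.R2: {"hunger"},
        Ground.R3: {"respect"},
        Ground.R4: {"symmetry"},
    }

    def ground(symbol: str) -> Optional[Ground]:
        """Return which part of R a symbol is grounded in, if any."""
        return next((g for g, members in R.items() if symbol in members), None)

    print(ground("hunger"))  # Ground.R2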
6Risto_Saarelma
We were talking about applying the metaphysics system to making an AI earlier in IRC, and the symbol grounding problem came up there as a basic difficulty in binding formal reasoning systems to real-time actions. It doesn't look like this was mentioned here before.

I'm assuming I'd want to actually build an AI that needs to deal with symbol grounding, that is, it needs to usefully match some manner of declarative knowledge it represents in its internal state to the perceptions it receives from the outside world and to the actions it performs on it. Given this, I'm getting almost no notion of what useful work this theory would do for me.

Mathematical descriptions can be useful for people, but it's not given that they do useful work for actually implementing things. I can define a self-improving friendly general artificial intelligence mathematically by defining

  • FAI = <S, P*> as an artificial intelligence instance, consisting of its current internal state S and the history of its perceptions up to the present P*,
  • a: FAI -> A* as a function that gives the list of possible actions for a given FAI instance,
  • u: A -> Real as a function that gives the utility of each action as a real number, with higher numbers given to actions that advance the purposes of the FAI better based on its current state and perception history, and
  • f: FAI * A -> S, P as an update function that takes an action and returns a new FAI internal state with any possible self-modifications involved in the action applied, and a new perception item that contains whatever new observations the FAI made as a direct result of its action.

And there's a quite complete mathematical description of a friendly artificial intelligence, you could probably even write a bit of neat pseudocode using the pieces there, but that's still not likely to land me a cushy job supervising the rapid implementation of the design at SIAI, since I don't have anything that does actual work there. All I did was push all the
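
The promised "neat pseudocode", for what it's worth; everything below the dataclass is a deliberately unimplemented stub, which is exactly the point:

    from dataclasses import dataclass, field

    @dataclass
    class FAI:
        state: object = None                              # S: current internal state
        perceptions: list = field(default_factory=list)   # P*: perception history

    def possible_actions(fai):        # a: FAI -> A*
        raise NotImplementedError("this is where all the actual work would go")

    def utility(fai, action):         # u: A -> Real
        raise NotImplementedError("and this is where Friendliness would go")

    def transition(fai, action):      # f: FAI * A -> (S, P)
        raise NotImplementedError("self-modification and new observations go here")

    def step(fai):
        """One round of the loop the definitions describe: pick the highest-utility
        action, apply it, and fold the resulting perception into the history."""
        action = max(possible_actions(fai), key=lambda a: utility(fai, a))
        new_state, new_perception = transition(fai, action)
        return FAI(new_state, fai.perceptions + [new_perception])

Filling in those three stubs is the entire problem; the skeleton itself buys nothing.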

It's not like your average "competent metaphysician" would understand Langan either. He wouldn't possibly even understand Wheeler. Langan's undoing is to have the goals of a metaphysician and the methods of a computer scientist. He is trying to construct a metaphysical theory which structurally resembles a programming language with dynamic type checking, as opposed to static typing. Now, metaphysicians do not tend to construct such theories, and computer scientists do not tend to be very familiar with metaphysics. Metaphysical theories tend to be ... (read more)

To clarify, I'm not the generic "skeptic" of philosophical thought experiments. I am not at all doubting the existence of the world outside my head. I am just an apparently competent metaphysician in the sense that I require a Wheeler-style reality theory to actually be a Wheeler-style reality theory with respect to not having arbitrary declarations.

6Risto_Saarelma
There might not be many people here who are sufficiently up to speed on philosophical metaphysics to have any idea what a Wheeler-style reality theory, for example, is. My stereotypical notion is that the people at LW have been pretty much ignoring philosophy that isn't grounded in mathematics, physics or cognitive science from Kant onwards, and won't bother with stuff that doesn't seem readable from this viewpoint. The tricky thing that would help would be to somehow translate the philosopher-speak into lesswronger-speak. Unfortunately this'd require some fluency in both.

That's not a critical flaw. In metaphysics, you can't take for granted that the world is not in your head. The only thing you really can do is to find an inconsistency, if you want to prove someone wrong.

Langan has no problems convincing me. His attempt at constructing a reality theory is serious and mature, and I think he conducts his business about the way an ordinary person with such aims would. He's not a literary genius like Robert Pirsig; he's just really smart otherwise.

I've never heard anyone present criticism of the CTMU that would actually... (read more)

0Tuukka_Virtaperko
To clarify, I'm not the generic "skeptic" of philosophical thought experiments. I am not at all doubting the existence of the world outside my head. I am just an apparently competent metaphysician in the sense that I require a Wheeler-style reality theory to actually be a Wheeler-style reality theory with respect to not having arbitrary declarations.