Comment author: Risto_Saarelma 11 February 2012 04:56:44PM 1 point [-]

The omitted information in this approach is information with a high Kolmogorov complexity, which is omitted in favor of information with low Kolmogorov complexity. A very rough analogy would be to describe humans as having a bias towards ideas expressible in a few words of English over ideas that need many words of English to express. Using Kolmogorov complexity for sequence prediction instead of English for ideas gets rid of the very many problems of rigor involved in the latter, but the basic idea is pretty much the same. You look into things that are briefly expressible before things that must be expressed at length. The information isn't permanently omitted, it's just deprioritized. The algorithm doesn't start looking at the stuff you need long sentences to describe before it has convinced itself that there are no short sentences that describe the observations it wants to explain in a satisfactory way.
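
As a toy sketch of that "short descriptions first" search order (the hypothesis pool and its encoding are invented here for illustration; real Solomonoff induction enumerates all programs of a universal machine and weights them by length):

```python
# Toy sketch of "short descriptions first", with an invented hypothesis
# pool. The character count of a Python expression stands in for program
# length; this is not real Solomonoff induction.

def shortest_consistent(observations, hypotheses):
    """Return the shortest hypothesis expression reproducing the observations."""
    for expr in sorted(hypotheses, key=len):  # shortest description first
        f = eval("lambda n: " + expr)
        if all(f(n) == y for n, y in enumerate(observations)):
            return expr
    return None  # only now would longer descriptions be worth generating

hyps = ["n", "n*n", "2*n", "n*n*n", "2*n+1", "n*n+n+1"]
print(shortest_consistent([0, 2, 4, 6], hyps))  # -> 2*n
print(shortest_consistent([0, 1, 4, 9], hyps))  # -> n*n
```

The longer candidates are never even evaluated once a short consistent one is found, which is the deprioritization described above.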

One bit of context that is assumed is that the surrounding universe is somewhat amenable to being Kolmogorov-compressed. That is, there are some recurring regularities that you can begin to discover. The term "lawful universe" sometimes thrown around in LW probably refers to something similar.

Solomonoff's universal induction would not work in a completely chaotic universe, where there are no regularities for Kolmogorov compression to latch onto. You'd also be unlikely to find any sort of native intelligent entities in such universes. I'm not sure if this means that the Solomonoff approach is philosophically untenable, but needing to have some discoverable regularities to begin with before discovering regularities with induction becomes possible doesn't strike me as that great a requirement.

If the problem of context is about exactly where you draw the data for the sequence which you will then try to predict with Solomonoff induction, then in a lawless universe you wouldn't be able to infer things no matter which simple instrumentation you picked, while in a lawful universe you could pick all sorts of instruments, tracking the change of light over time, tracking temperature, or tracking the luminosity of the Moon, for simple examples, and you'd start getting Kolmogorov-compressible data in which the induction system could start picking out repeating periods.

The core thing "independent of context" in all this is that all the universal induction systems are reduced to taking a series of numbers as input and trying to develop an efficient predictor for what the next number will be. The argument in the paper is that this construction is basically sufficient for all the interesting things an induction solution could do, and that the various real-world cases where induction is needed can be reduced to such a system by describing the instrumentation which turns real-world input into a time series of numbers.

Comment author: Tuukka_Virtaperko 15 February 2012 08:35:33PM 1 point [-]

Okay. In this case, the article does seem to begin to make sense. Its connection to the problem of induction is perhaps rather thin. The idea of using low Kolmogorov complexity as justification for an inductive argument cannot be deduced as a theorem of something that's "surely true", whatever that might mean. And if it were taken as an axiom, philosophers would say: "That's not an axiom. That's the conclusion of an inductive argument you made! You are begging the question!"

However, it seems like advancements in computation theory have made people able to do at least remotely practical work in areas that bear resemblance to more inert philosophical ponderings. That's good, and this article might even be used as justification for my theory RP, given that the use of Kolmogorov complexity is accepted. I was not familiar with the concept of Kolmogorov complexity despite having heard of it a few times, but my intuitive goal was to minimize the theory's Kolmogorov complexity by removing arbitrary declarations and favoring symmetry.

I would say that there are many ways of solving the problem of induction. Whether a theory is a solution to the problem of induction depends on whether it covers the entire scope of the problem. I would say this article covers half of the scope. The rest is not covered, to my knowledge, by anyone other than Robert Pirsig and experts on Buddhism, but these writings are very difficult to approach analytically. Regrettably, I am still unable to publish the relativizability article, which is intended to succeed in the analytic approach.

In any case, even though the widely rejected "statistical relevance" and this "Kolmogorov complexity relevance" share the same flaw if presented as an explanation of inductive justification, the approach is interesting. Perhaps this paper should even be titled "A Formalization of Occam's Razor Principle", because that's what it surely seems to be. And I think it's actually an achievement to formalize that principle - an achievement more than sufficient to justify the writing of the article.

Comment author: Risto_Saarelma 14 January 2012 06:39:28PM 3 points [-]

We have managed to create such a sophisticated brain scanner, that it can tell whether a person is thinking of a cat or not. Someone is put into the machine, and the machine outputs that the person is not thinking of a cat. The person objects and says that he is thinking of a cat. What will the observing AI make of that inconsistency? What part of the observation is broken and results in nonconformity of the whole?

1) The brain scanner is broken

2) The person is broken

In order to solve this problem, the AI may have to be able to conceptualize the fact that the brain scanner is a deterministic machine which simply accepts X as input and outputs Y. The scanner does not understand the information it is processing, and the act of processing information does not alter its structure. But the person is different.

I don't really understand this part.

"The scanner does not understand the information but the person does" sounds like some variant of Searle's Chinese Room argument when presented without further qualifiers. People in AI tend to regard Searle as a confused distraction.

The intelligent agent model still deals with deterministic machines that take input and produce output, but it incorporates the possibility of changing the agent's internal state by presenting the output function as just taking the entire input history X* as an input to the function that produces the latest output Y, so that a different history of inputs can lead to a different output on the latest input, just like it can with humans and more sophisticated machines.
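
A minimal sketch of that model (the history-dependent rule below is invented purely for illustration):

```python
# Sketch of "output = f(entire input history X*)": the same latest input
# can produce different outputs depending on what came before, without
# any magic beyond ordinary deterministic computation.

def agent(history):
    """Compute the latest output y from the whole input history."""
    *earlier, latest = history
    return "familiar" if latest in earlier else "new: " + latest

print(agent(["cat"]))                # -> new: cat
print(agent(["cat", "dog"]))         # -> new: dog
print(agent(["cat", "dog", "cat"]))  # -> familiar (same input, different output)
```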

I suppose the idea here is that there is some difference between whether there is a human being sitting in the scanner or, say, a toy robot with a state of two bits, where one is I am thinking about cats and the other is I am broken and will lie about thinking about cats. With the robot, we could just check the "broken" bit as well from the scan when the robot is disagreeing with the scanner, and if it is set, conclude that the robot is broken.

I'm not seeing how humans must be fundamentally different. The scanner can already do the extremely difficult task of mapping a raw brain state to the act of thinking about a cat; it should also be able to tell from the brain state whether the person has something going on in their brain that will make them deny thinking about a cat. Things being deterministic and predictable from knowing their initial state doesn't mean they can't have complex behavior, reacting to a long history of sensory inputs accompanied by a large amount of internal processing that might correspond quite well to what we think of as reflection or understanding.

Sorry I keep skipping over your formalism stuff, but I'm still not really grasping the underlying assumptions behind this approach. (The underlying assumptions in the computer science approach are, roughly, "the physical world exists, and is made of lots of interacting, simple, Turing-computable stuff and nothing else", "animals and humans are just clever robots made of the stuff", and "magical souls aren't involved, not even if they wear a paper bag that says 'conscious experience' on their head".)

The whole philosophical theory-of-everything project does remind me of this strange thing from a year ago, though there the building blocks for the theory were made out of the nowadays more fashionable category theory rather than set theory.

Comment author: Tuukka_Virtaperko 08 February 2012 11:59:46AM 1 point [-]

I've read some of this Universal Induction article. It seems to operate from flawed premises.

If we prescribe Occam’s razor principle [3] to select the simplest theory consistent with the training examples and assume some general bias towards structured environments, one can prove that inductive learning “works”. These assumptions are an integral part of our scientific method. Whether they admit it or not, every scientist, and in fact every person, is continuously using this implicit bias towards simplicity and structure to some degree.

Suppose the brain uses algorithms - an uncontroversial supposition. From a computational point of view, the above quotation is like saying: "In order for a computer to not run a program, such as Indiana Jones and the Fate of Atlantis, the computer must be executing some command to the effect of DoNotExecuteProgram('IndianaJonesAndTheFateOfAtlantis')."

That's not how computers operate. They just don't run the program. They don't need a special process for not running the program. Instead, not running the program is "implicitly contained" in the state of affairs that the computer is not running it. But this notion of implicit containment makes no sense for the computer. There are infinitely many programs the computer is not running at a given moment, so it can't process the state of affairs that it is not running any of them.

Likewise, the use of an implicit bias towards simplicity cannot be meaningfully conceptualized by humans. In order to know how this bias simplifies everything, one would have to know what information regarding "everything" is omitted by the bias. But if we knew that, the bias would not exist in the sense the author intends it to exist.

Furthermore:

This is in some way a contradiction to the well-known no-free-lunch theorems which state that, when averaged over all possible data sets, all learning algorithms perform equally well, and actually, equally poorly [11]. There are several variations of the no-free-lunch theorem for particular contexts but they all rely on the assumption that for a general learner there is no underlying bias to exploit because any observations are equally possible at any point. In other words, any arbitrarily complex environments are just as likely as simple ones, or entirely random data sets are just as likely as structured data. This assumption is misguided and seems absurd when applied to any real world situations. If every raven we have ever seen has been black, does it really seem equally plausible that there is equal chance that the next raven we see will be black, or white, or half black half white, or red etc. In life it is a necessity to make general assumptions about the world and our observation sequences and these assumptions generally perform well in practice.

The author says that there are variations of the no-free-lunch theorem for particular contexts. But he goes on to generalize that the no-free-lunch theorem means something independent of context. What could that possibly be? Also, such notions as "arbitrary complexity" or "randomness" seem intuitively meaningful, but what is their context?
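
For what it's worth, the averaging claim in the quoted passage can be given a fully concrete context and checked exhaustively in a toy setting (a sketch; the three predictors are invented for illustration): over all binary sequences of a fixed length, weighted equally, every deterministic next-bit predictor has the same average accuracy.

```python
from itertools import product

# Exhaustive toy check of the no-free-lunch averaging claim: over ALL
# binary sequences of length n, each equally likely, any deterministic
# next-bit predictor is right exactly half the time on average.

def average_accuracy(predict, n=6):
    """Fraction of correct next-bit predictions, averaged over all 2^n sequences."""
    hits = sum(predict(seq[:i]) == seq[i]
               for seq in product((0, 1), repeat=n)
               for i in range(n))
    return hits / (n * 2 ** n)

always_zero = lambda prefix: 0
repeat_last = lambda prefix: prefix[-1] if prefix else 0
majority = lambda prefix: int(sum(prefix) * 2 > len(prefix))

for p in (always_zero, repeat_last, majority):
    print(average_accuracy(p))  # -> 0.5 every time
```

The context here is the uniform weighting over all sequences; drop that assumption and the predictors are no longer tied.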

The problem is, if there is no context, the solution cannot be proven to address the problem of induction. But if there is a context, it addresses the problem of induction only within that context. Then philosophers will say that the context was arbitrary, and formulate the problem again in another context where previous results will not apply.

In a way, this makes the problem of induction seem like a waste of time. But the real problem is about formalizing the notion of context in such a way that it becomes possible to identify ambiguous assumptions about context. That would be what separates scientific thought from poetry. In science, ambiguity is not desired and should therefore be identified. But philosophers tend to place little emphasis on this, and rather spend time dwelling on problems they should, in my opinion, recognize as unsolvable due to ambiguity of context.

Comment author: Tuukka_Virtaperko 19 January 2012 10:38:46PM *  1 point [-]

The intelligent agent model still deals with deterministic machines that take input and produce output, but it incorporates the possibility of changing the agent's internal state by presenting the output function as just taking the entire input history X* as an input to the function that produces the latest output Y, so that a different history of inputs can lead to a different output on the latest input, just like it can with humans and more sophisticated machines.

At first, I didn't quite understand this. But I'm reading Introduction to Automata Theory, Languages and Computation. Are you using the * in the same sense here as it is used in the following UNIX-style regular expression?

  • '[A-Z][a-z]*'

This expression is intended to refer to all words that begin with a capital letter and do not contain any surprising characters such as ö or -. Examples: "Jennifer", "Washington", "Terminator". The * means [a-z] may have an arbitrary number of iterations.
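
A quick check of what the quoted expression matches (assuming Python's re semantics, where fullmatch tests the whole string):

```python
import re

# [A-Z] matches exactly one capital letter; the Kleene star in [a-z]*
# allows zero or more lowercase letters after it.

pattern = re.compile(r"[A-Z][a-z]*")

for word in ["Jennifer", "Washington", "Terminator", "X", "öljy"]:
    print(word, bool(pattern.fullmatch(word)))
# "X" matches because * permits zero iterations; "öljy" does not,
# since ö is outside both character classes.
```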

Comment author: DSimon 17 January 2012 04:21:02AM *  2 points [-]

The point is that I'm trying to construct a framework theory for AI that is not grounded on anything else than sensory (or emotional etc.) perception - all the abstract parts are defined recursively. Structurally, the theory is intended to resemble a programming language with dynamic typing, as opposed to static typing. [...] The main recursion loop of the theory, in its current form, will not create any information if only reduction is allowed.

You seem to be overthinking this. Reductionism is "merely" a really useful cognition technique, because calculating everything at the finest possible level is hopelessly inefficient. Perhaps a simple practical example is needed:

An AI that can use reductionism can say "Oh, that collection of pixels within my current view is a dog, and this collection is a man, and the other collection is a leash", and go on to match against (and develop on its own) patterns about objects at the coarser-than-pixel size of dogs, men, and leashes. Without reductionism, it would be forced to do the pattern matching for everything, even for complex concepts like "Man walking a dog", directly at the pixel level, which is not impossible but is certainly a lot slower to run and harder to update.

If you've ever refactored a common element of your code out into its own module, or even if you've used a library or a high-level language, you are also using reductionism. The non-reductionistic alternative would be something like writing every program from scratch, in machine code.

In response to comment by DSimon on Reductionism
Comment author: Tuukka_Virtaperko 17 January 2012 11:02:51AM 0 points [-]

Okay. That sounds very good. And it would seem to be in accordance with this statement:

Reductionism is not a positive belief, but rather, a disbelief that the higher levels of simplified multilevel models are out there in the territory.

If reductionism does not entail that I must construct the notion of a territory and include it in my conceptualizations at all times, it's not a problem. I now understand even better why I was confused by this. This kind of reductionism is not reductive physicalism. It's hardly a philosophical statement at all, which is good. I would say that "the notion of higher levels being out there in the territory" is meaningless, but expressing disbelief in that notion is apparently intended to convey approximately the same meaning.

RP doesn't yet actually include reduction; it's next on the to-do list. Currently it includes an emergence loop that is based on the power set function. The function produces a staggering amount of information in just a few cycles. It seems to me that this is because instead of accounting for the emergence relations the mind actually performs, it accounts for all defined emergence relations the mind could perform. So the theory is clearly still under construction, and it doesn't yet have any kind of an algorithm part. I'm not much of a coder, so I need to work with someone who is. I already know one mathematician who likes to do this stuff with me. He's not interested in the metaphysical part of the theory, and even said he doesn't want to know too much about it. :) I'm not guaranteeing RP can be used for anything at all, but it's interesting.
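
To illustrate the blow-up (a hypothetical sketch; RP's actual loop isn't specified here), iterating the power set from a two-element seed:

```python
from itertools import chain, combinations

# Each cycle replaces the set with its power set, whose size is 2 raised
# to the previous size, so the information count explodes immediately.

def powerset(s):
    """All subsets of s, each wrapped as a frozenset so it can be nested."""
    items = list(s)
    return {frozenset(c) for c in chain.from_iterable(
        combinations(items, r) for r in range(len(items) + 1))}

current = {1, 2}
sizes = []
for _ in range(3):
    current = powerset(current)
    sizes.append(len(current))
print(sizes)  # -> [4, 16, 65536]; a fourth cycle would need 2**65536 elements
```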

Comment author: DSimon 17 January 2012 12:45:23AM *  0 points [-]

Because humans are not realists all the time. Their mind has a lot of features, and the metaphysical assumption of realism is usually only constructed when it is needed to perform some task.

Actually, this may be a good point for me to try to figure out what you mean by "realism", because here you seem to have connected that word to some but not all strategies of problem-solving. Can you give me some specific examples of problems which the mind tends to use realism in solving, and problems where it doesn't?

In response to comment by DSimon on Reductionism
Comment author: Tuukka_Virtaperko 17 January 2012 03:19:22AM 1 point [-]

I got "reductionism" wrong, actually. I thought the author was using some nonstandard definition of reductionism, something to the effect of not having unnecessary declarations in a theory. I did not take into account that the author could actually be what he says he is, no bells and whistles, because I didn't take into account that reductionism could be taken seriously here. But that just means I misjudged. Of course, I am not necessarily even supposed to be on this site. I am looking for people who might give ideas for theoretical work that could be useful for constructing AI, and I'm trying to check whether my approach is deemed intelligible here.

"Realism" is the belief that there is an external world, usually thought to consist of quarks, leptons, forces and such. It is typically thought of as a belief or a doctrine that is somehow true, instead of just an assumption an AI or a human makes because it needs to. Depending on who labels themselves a realist and what mood they are in, this can entail that everybody who is not a realist is considered mistaken.

An example of a problem whose solution does not need to involve realism is: "John is a small kid who seems to emulate his big brother almost all the time. Why is he doing this?" Possible answers would be: "He thinks his brother is cool", "He wants to annoy his brother", or "He doesn't emulate his brother, they are just very similar". Of course you could just brain scan John. But if you really knew John, that's not what you would do, unless brain scanners were about as common and inexpensive as laptops, and had much better functionality than they currently do.

In the John problem, there's no need to construct the assumption of a physical world, because the problem would be intelligible even if you met John in a dream. You can't take a physical brain scanner with you into a dream, so you can't brain scan John. But you can analyze John's behavior with the same criteria by which you would analyze him had you met him while awake.

I'm not trying to impose any views on you, because I'm basically just trying to find out whether someone is interested in this kind of stuff. The point is that I'm trying to construct a framework theory for AI that is not grounded on anything other than sensory (or emotional etc.) perception - all the abstract parts are defined recursively. Structurally, the theory is intended to resemble a programming language with dynamic typing, as opposed to static typing. The theory would be pretty much both philosophy and AI.

The problem I see now is this. My theory, RP, is founded on the notion that important parts of thinking are based on metaphysical emergence. The main recursion loop of the theory, in its current form, will not create any information if only reduction is allowed. I would allow both, but if the people on LW are reductionists, I would suppose that the logical consequence is that they believe my theory cannot work. And that's why I'm a bit troubled by the notion that you might accept reductionism as some sort of an axiom, because you don't want to have a long philosophical conversation and would prefer to settle down with something that currently seems reasonable. So should I expect you to not want to consider other options? It's strange that I should go elsewhere with my project, because that would amount to you rejecting an AI theory on the grounds that it contradicts your philosophical assumptions. Yet my common sense expectation would be that you'd find AI more important than philosophy.

In response to Reductionism
Comment author: Tuukka_Virtaperko 16 January 2012 10:31:07PM *  1 point [-]

But this is just the brain trying to efficiently compress an object that it cannot remotely begin to model on a fundamental level. The airplane is too large. Even a hydrogen atom would be too large. Quark-to-quark interactions are insanely intractable. You can't handle the truth.

Can you handle the truth, then? I don't understand the notion of truth you are using. In everyday language, when a person states something as "true", it doesn't usually need to be grounded in logic in order to work for a practical purpose. But you are making extremely abstract statements here. They just don't mean anything unless you define truth and solve the symbol grounding problem. You have criticized philosophy in other threads, yet here you are making dubious arguments. The arguments are dubious because they are not clearly mere rhetoric, and not clearly philosophy. If someone tries to require you to explain their meaning, you could say you're not interested in philosophy, so philosophical counterarguments are irrelevant to you. But you can't be uninterested in philosophy if you make philosophical claims like that and actually consider them important.

I don't like contemporary philosophy either, but I would suppose you are in trouble with these things, and I wonder if you are open to a solution? If not, fine.

But the way physics really works, as far as we can tell, is that there is only the most basic level - the elementary particle fields and fundamental forces. You can't handle the raw truth, but reality can handle it without the slightest simplification. (I wish I knew where Reality got its computing power.)

But you haven't defined reality. As long as you haven't done so, "reality" will be a metaphorical, vague concept, which frequently changes its meaning in use. This means if you state something to be "reality" in one discussion, logical analysis would probably reveal you didn't use it in the same meaning in another discussion.

You can have a deterministic definition of reality, but that will be arbitrary. Then people will start having completely pointless debates with you, and to make matters worse, you will perceive these debates as people trying to unjustify what you are doing. That's a problem caused by you not realizing you didn't have to justify your activities or approach in the first place. You didn't need to make these philosophical claims, and I don't suppose you would have done so had you not felt threatened by something, such as religion or mysticism or people imposing their views on you.

This, as I see it, is the thesis of reductionism. Reductionism is not a positive belief, but rather, a disbelief that the higher levels of simplified multilevel models are out there in the territory.

If you categorize yourself as a reductionist, why don't you go all the way? You can't be both a reductionist and a realist. I.e., you can't believe in reductionism and in the existence of a territory at the same time. You have to drop either one of them. But which one?

Drop the one you need to drop. I'm serious. You don't need this metaphysical nonsense to justify something you are doing. Neither reductionism nor realism is "true" in any meaningful way. You are not doing anything wrong if you are a reductionist for 15 minutes, then switch to realism (i.e. the belief in a "territory") for ten seconds, then switch again to reductionism and then maybe to something else. And that is also the way you really live your life. I mean, think about your mind. I suppose it's somewhat similar to mine. You don't think about that metaphysical nonsense when you're actually doing something practical. So you are not a metaphysicist when you're riding a bike and enjoying the wind or something.

It's just some conception you have of yourself, under which you have defined yourself as an advocate of "reductionism and realism". This conception is true only when you indeed are either one of those; it's not true when you're neither of those. Suppose someone says to you that you're not a "reductionist and a realist" when you are, for example, in intense pain for some reason and are very unlikely to think about philosophy. Well, even in that case you could remind yourself of your own conception of yourself, that is, as a "reductionist and a realist", and argue that the person who said you are not was wrong. But why would you want to do so? The only reasons I see are naive or egoistic or defensive ones, such as:

  • You are afraid the person who said you're not a "reductionist or realist" will try to waste your time by presenting stupid arguments according to which you may or may not or should or should not do something.
  • You believe your image of yourself as a "reductionist and realist" is somehow "true". But you are able to decide at will whether that image is true. It is true when you are thinking in a certain way, and false when you are not thinking that way. So the statement conveys no useful information, except maybe on something you would like to be or something like that. But that is no longer philosophy.
  • You have some sort of a need to never get caught uttering something that's not true. But in philosophy, it's a really bad idea to want to make true statements all the time. Metaphysical theories in and of themselves are neither true nor false. Instead, they are used to define truth and falsehood. They can be contradictory or silly or arbitrary, but they can't be true or false.

If you state that you regard one state of mind or one theory, such as realism or reductionism, as some sort of an ultimate truth, you are simply putting yourself into a prison of words for no reason except that you apparently perceive some sort of safety in that prison or something like that. But it's not safe. It exposes you to philosophical criticism you previously were invulnerable to, because before you went to that prison, you didn't even participate in that game.

If you actually care about philosophy, great. But I haven't yet gotten such an impression. It seems like philosophy is an unpleasant chore to you. You want to use philosophy to obtain justification, a sense of entitlement, or something, and then throw it away because you think you're already finished with it - that you've obtained a framework theory which already suits your needs, and you can now focus on the needs. But you're not a true reductionist in the sense you defined reductionism, unless you also scrap the belief in the territory. I don't care what you choose as long as you're fine with it, but I don't want you to contradict yourself.

There is no way to express the existence of the "territory" as a meaningfully true statement. Or if there is, I haven't heard of it. It is a completely arbitrary declaration you use to create a framework for the rest of the things you do. You can't construct a "metatheory of reality" which is about the territory, which you suppose to exist, and have that same territory prove the metatheory is right. The territory may contain empirical evidence that the metatheory is okay, but no algorithm can use that evidence to produce proof for the metatheory, because:

  • From "territory's" point of view, the metatheory is undefined.
  • But the notion of gathering empirical evidence is meaningless if the metatheory, according to which the "territory" exists, is undefined.

Therefore, you have to define it if you want to use it for something, and just accept the fact that you can't prove it to be somehow true, much less use its alleged truth to prove something else false. You can believe what you want, but you can't make an AI that would use "territory" to construct a metatheory of territory, if it's somehow true to the AI that territory is all there is. The AI can't even construct a metatheory of "map and territory", if it's programmed to hold as somehow true that map and territory are the only things that exist. This entails that the AI cannot conceptualize its own metaphysical beliefs even as well as you can. It could not talk about them at all. To do so, it would have to be able to construct arbitrary metatheories on its own. This can only be done if the AI holds no metaphysical belief as infallible, that is, the AI is a reductionist in your meaning of the word.

I've seen some interest towards AI on LW. If you really would like to one day construct a very human-like AI, you will have problems if you cannot program an AI that can conceptualize the structure of its own cognitive processes also in terms that do not include realism. Because humans are not realists all the time. Their mind has a lot of features, and the metaphysical assumption of realism is usually only constructed when it is needed to perform some task. So if you want to have that assumption around all the time, you'll just end up adding unnecessary extra baggage to the AI which will probably also make the code very difficult to comprehend. You don't want to lug the assumption around all the time just because it's supposed to be true in some way nobody can define.

You could as well have a reductionist theory, which only constructs realism (i.e. the declaration that an external world exists) under certain conditions. Now, philosophy doesn't usually include such theories, because the discipline is rather outdated, but there's no inherent reason why it can't be done. Realism is neither true nor false in any meaningful and universal way. You are free to state that it exists if you are going to use that statement for something. But if you just say it, as if it meant something in and of itself, you are not saying anything meaningful.

I hope you were interested in my rant.

Comment author: Risto_Saarelma 16 January 2012 08:11:08AM *  2 points [-]

I'm mostly writing this stuff trying to explain what my mindset, which I guess to be somewhat coincident with the general LW one, is like, and where it seems to run into problems with trying to understand these theories. My question about the assumptions is basically poking at something like "what's the informal explanation of why this is a good way to approach figuring out reality", which isn't really an easy thing to answer. I'm mostly writing about my own viewpoint instead of addressing the metaphysical theory, since it's easy to write about stuff I already understand, and a lot harder to try to understand something coming from a different tradition and make meaningful comments about it. Sorry if this feels like dismissing your stuff.

The reason I went on about the complexity of the DNA and the brain is that this is stuff that wasn't really known before the mid-20th century. Most of modern philosophy was being done when people had some idea that the process of life is essentially mechanical and not magical, but no real idea on just how complex the mechanism is. People could still get away with assuming that intelligent thought is not that formally complex around the time of Russell and Wittgenstein, until it started dawning just what a massive hairball of a mess human intelligence working in the real world is after the 1950s. Still, most philosophy seems to be following the same mode of investigation as Wittgenstein or Kant did, despite the sudden unfortunate appearance of a bookshelf full of volumes written by insane aliens between the realm of human thought and basic logic discovered by molecular biologists and cognitive scientists.

I'm not expecting people to rewrite the 100 000 pages of complexity into human mathematics, but I'm always aware that it needs to be dealt with somehow. For one thing, it's a reason to pay more attention to empiricism than philosophy has traditionally done. As in, actually do empirical stuff, not just go "ah, yes, empiricism is indeed a thing, it goes in that slot in the theory". You can't understand raw DNA much, but you can poke people with sticks, see what they do, and get some clues on what's going on with them.

For another thing, being aware of the evolutionary history of humans and the current physical constraints of human cognition and DNA can guide making an actual theory of mind from the ground up. The kludged up and sorta-working naturally evolved version might be equal to 100 000 pages of math, which is quite a lot, but also tells us that we should be able to get where we want without having to write 1 000 000 000 pages of math. A straight-up mysterian could just go, yeah, the human intelligence might be infinitely complex and you'll never come up with the formal theory. Before we knew about DNA, we would have had a harder time coming up with a counterargument.

I keep going on about the basic science stuff, since I have the feeling that the LW style of approaching things basically starts from mid-20th century computer science and natural science, not from the philosophical tradition going back to antiquity, and there's some sort of slight mutual incomprehension between it and modern traditional philosophy. It's a bit like C.P. Snow's Two Cultures thing. Many philosophers seem to be from Culture One, while LW is people from Culture Two trying to set up a philosophy of their own. Some key posts about LW's problems with philosophy are probably Against Modal Logics and A Diseased Discipline. Also there's the book Good and Real, which is philosophy being done by a computer scientist and which LW folk seem to find approachable.

The key ideas in the LW approach are that you're running on top of a massive hairball of junky evolved cognitive machinery that will trip you up at every chance it gets, so you'll need to practice empirical science to figure out what's actually going on with life, plain old thinking hard won't help since that'll just lead to your broken head machinery tripping you up again, and that the end result of what you're trying to do should be a computable algorithm. None of these things show up in traditional philosophy, since traditional philosophy got started before there was computer science or cognitive science or molecular biology. So LessWrongers will be confused about non-empirical attempts to get to the bottom of real-world stuff, and they will be confused if the get-to-the-bottom attempt doesn't look like it will end up being an algorithm.

I'm not saying this approach is better. Philosophers obviously spend a long time working through their stuff, and what I am doing here is basically just picking low-hanging fruit from science that's so recent that it hasn't percolated into the cultural background thought yet. But we are living in interesting times, when philosophers can keep mulling over conceptual analysis, and then all of a sudden scientists will barge in and go, hey, we were doing some empirical stuff with machines, and it turns out counterfactual worlds are actually sort of real.

Comment author: Tuukka_Virtaperko 16 January 2012 06:18:01PM 0 points [-]
In response to Against Modal Logics
Comment author: Tuukka_Virtaperko 16 January 2012 06:16:00PM *  2 points [-]

I wrote a bunch of comments to this work while discussing with Risto_Saarelma. But I thought I should rather post them here. I came here to discuss certain theories that are on the border between philosophy and something which could be useful for the construction of AI. I've developed my own such theory based on many years of work on an unusual metaphysical system called the Metaphysics of Quality, which is largely ignored in the academy and deviates from the tradition. It's not very "old" stuff. The formation of that tradition of discussion began in 1974. So that's my background.

The kind of work that I try to do is not about language. It is about reducing mentalistic models to purely causal models, about opening up black boxes to find complicated algorithms inside, about dissolving mysteries - in a word, about cognitive science.

What would I answer to the question whether my work is about language? I'd say it's both about language and algorithms, but it's not some Chomsky-style stuff. It does account for the symbol grounding problem in a way that is not typically expected of language theory. But the point is, and I think this is important: even the mentalistic models do not currently exist in a coherent manner. So how are people going to reduce something undefined to purely causal models? Well, that doesn't sound very possible, so I'd say the goals of RP are relevant.

But this kind of reductionism is hard work.

I would imagine mainstream philosophy to be hard work, too. This work, unfortunately, would, to a great extent, consist of making correct references to highly illegible works.

Modern philosophy doesn't enforce reductionism, or even strive for it.

Well... I wouldn't say RP enforces reductionism or that it doesn't enforce reductionism. It kinda ruins RP if you develop a metatheory where theories are classified either as reductionist or nonreductionist. You can do that - it's not a logical contradiction - but the point of RP is to be such a theory that, even though we could construct such metatheoretic approaches to it, we don't want to do so, because it's not only useless, but also complicates things for no apparent benefit. Unless, of course, we are not interested in AI but are trying to devise some very grand philosophy of which I'm not sure what it could be used for. My intention is that things like "reductionism" are placed within RP instead of placing RP into a box labeled "reductionism".

RP is supposed to define things recursively. That is not, to my knowledge, impossible. So I'm not sure why the definition would necessarily have to be reductive in some sense. LISP, to my knowledge, is not reductive. But I'm not sure what Eliezer means with "reductive". It seems like yet another philosophical concept. I'd better check if it's defined somewhere on LW...

And then they publish it and say, "Look at how precisely I have defined my language!"

I'm not a fetishist. Not in this matter, at least. I want to define things formally because the structure of the theory is very hard to understand otherwise. The formal definitions make it easier to find out things I would not have otherwise noticed. That's why I want to understand the formal definitions myself despite sometimes having other people practically do them for me.

Consider the popular philosophical notion of "possible worlds". Have you ever seen a possible world?

I think that's pretty cogent criticism. I've found the same kind of things troublesome.

Philosophers keep telling me that I should look at philosophy. I have, every now and then. But the main reason I look at philosophy is when I find it desirable to explain things to philosophers.

I understand how Eliezer feels. I guess I don't even tell people they need to look at philosophy for its own sake. How should I know what someone else wants to do for its own sake? But it's not so simple with RP, because it could actually work for something. The good philosophy is simply hard to find, and if I hadn't studied the MOQ, I might very well now be laughing at Langan's CTMU with many others, because I wouldn't understand what that thing is he is a bit awkwardly trying to express.

I'd like to illustrate the stagnation of academic philosophy with the following thought experiment. Let's suppose someone has solved the problem of induction. What is the solution like?

  • Ten pages?
  • Hundred pages?
  • Thousand pages?
  • Does it contain no formulae or few formulae?
  • Does it contain a lot of formulae?

I've read academic publications to the point that I don't believe there is any work the academic community would, generally speaking, regard as a solution to the problem of induction. I simply don't believe many scholars think there really can be such a thing. They are interested in "refining" the debate somehow. They don't treat it as some matter that needs to be solved because it actually means something.

This example might not ring a bell for someone completely unfamiliar with academic philosophy, but I think it does illustrate how the field is flawed.

Comment author: Tuukka_Virtaperko 16 January 2012 01:01:36PM *  1 point [-]

Sorry if this feels like dismissing your stuff.

You don't have to apologize, because you have been useful already. I don't require you to go out of your way to analyze this stuff, but of course it would also be nice if we could understand each other.

The reason I went on about the complexity of the DNA and the brain is that this is stuff that wasn't really known before the mid-20th century. Most of modern philosophy was being done when people had some idea that the process of life is essentially mechanical and not magical, but no real idea on just how complex the mechanism is. People could still get away with assuming that intelligent thought is not that formally complex around the time of Russell and Wittgenstein, until it started dawning just what a massive hairball of a mess human intelligence working in the real world is after the 1950s. Still, most philosophy seems to be following the same mode of investigation as Wittgenstein or Kant did, despite the sudden unfortunate appearance of a bookshelf full of volumes written by insane aliens between the realm of human thought and basic logic discovered by molecular biologists and cognitive scientists.

That's a good point. The philosophical tradition of discussion I belong to was started in 1974 as a radical deviation from contemporary philosophy, which makes it pretty fresh. My personal opinion is that within decades or centuries, the largely obsolete mode of investigation you referred to will be mostly replaced by something that resembles what I and a few others are currently doing. This is because the old mode of investigation does not produce results. Despite intense scrutiny for 300 years, it has not provided an answer to such a simple philosophical problem as the problem of induction. Instead, it has corrupted the very writing style of philosophers. When one is reading philosophical publications by authors with academic prestige, every other sentence seems somehow defensive, and the writer seems to be squirming in the inconvenience caused by his intuitive understanding that what he's doing is barren, but he doesn't know of a better option. It's very hard for a distinguished academic to go into the freaky realm and find out whether someone made sense but had a very different approach from the academic one. Aloof but industrious young people, with lots of ability but little prestige, are more suitable for that.

Nowadays the relatively simple philosophical problem of induction (the proof of the Poincare conjecture is, relatively speaking, extremely complex) has been portrayed as such a difficult problem that if someone devises a theoretic framework which facilitates a relatively simple solution to the problem, academic people are very inclined to state that they don't understand the solution. I believe this is because they insist the solution should be something produced by several authors working together for a century. Something that will make theoretical philosophy again appear glamorous. It's not that glamorous, and I don't think it was very glamorous to invent 0 either - whoever did that - but it was pretty important.

I'm not sure what good this ranting of mine is supposed to do, though.

I'm not expecting people to rewrite the 100 000 pages of complexity into human mathematics, but I'm always aware that it needs to be dealt with somehow. For one thing, it's a reason to pay more attention to empiricism than philosophy has traditionally done. As in, actually do empirical stuff, not just go "ah, yes, empiricism is indeed a thing, it goes in that slot in the theory". You can't understand raw DNA much, but you can poke people with sticks, see what they do, and get some clues on what's going on with them.

The metaphysics of quality, of which my RP is a much-altered instance, is an empiricist theory, written by someone who has taught creative writing in Uni, but who has also worked writing technical documents. The author has a pretty good understanding of evolution, social matters, computers, stuff like that. Formal logic is the only thing in which he does not seem proficient, which maybe explains why it took so long for me to analyze his theories. :)

If you want, you can buy his first book, Zen and the Art of Motorcycle Maintenance, from Amazon at the price of a pint of beer. (Tap me on the shoulder if this is considered inappropriate advertising.) You seem to be logically rather demanding, which is good. It means I should tell you that in order to attain an understanding of the MOQ that explains a lot more of the metaphysical side of RP, you should also read his second book. They are also available in every Finnish public library I have checked (maybe three or four libraries).

What more to say... Pirsig is extremely critical of the philosophical tradition starting from antiquity. I already know LW does not think highly of contemporary philosophy, and that's why I thought we might have something in common in the first place. I think we belong to the same world, because I'm pretty sure I don't belong to Culture One.

The key ideas in the LW approach are that you're running on top of a massive hairball of junky evolved cognitive machinery that will trip you up at any chance you get

Okay, but nobody truly understands that hairball, if it's the brain.

the end result of what you're trying to do should be a computable algorithm.

That's what I'm trying to do! But it is not my only goal. I'm also trying to have at least some discourse with Culture One, because I want to finish a thing I began. My friend is currently in the process of writing a formal definition related to that thing, and I won't get far with the algorithm approach before he's finished that and is available for something else. But we are actually planning that. I'm not bullshitting you or anything. We have been planning to do that for some time already. And it won't be fancy at first, but I suppose it could get better and better the more we work on it, or the approach would maybe prove a failure, but that, again, would be an interesting result. Our approach is maybe not easily understood, though...

My friend understands philosophy pretty well, but he's not extremely interested in it. I have this abstract model of how this algorithm thing should be done, but I can't prove to anyone that it's correct. Not right now. It's just something I have developed by analyzing an unusual metaphysical theory for years. The reason my friend wants to do this apparently is that my enthusiasm is contagious and he does enjoy maths for the sake of maths itself. But I don't think I can convince people to do this with me on grounds that it would be useful! And some time ago, people thought number theory was a completely useless but somehow "beautiful" form of mathematics. Now the products of number theory are used in top-secret military encryption, but the point is, nobody who originally developed number theory could have convinced anyone the theory would have such use in the future. So, I don't think I can have people working with me in hopes of attaining grand personal success. But I think I could meet someone who finds this kind of activity very enjoyable.

The "state basic assumptions" approach is not good in the sense that it would go all the way to explaining RP. It's maybe a good starter, but I can't really transform RP into something that could be understood from an O point of view. That would be like me needing to express equation x + 7 = 20 to you in such terms that x + y = 20. You couldn't make any sense of that.

I really have to go now, actually I'm already late from somewhere...

Comment author: Risto_Saarelma 14 January 2012 06:39:28PM 3 points [-]

We have managed to create such a sophisticated brain scanner, that it can tell whether a person is thinking of a cat or not. Someone is put into the machine, and the machine outputs that the person is not thinking of a cat. The person objects and says that he is thinking of a cat. What will the observing AI make of that inconsistency? What part of the observation is broken and results in nonconformity of the whole?

1) The brain scanner is broken
2) The person is broken

In order to solve this problem, the AI may have to be able to conceptualize the fact that the brain scanner is a deterministic machine which simply accepts X as input and outputs Y. The scanner does not understand the information it is processing, and the act of processing information does not alter its structure. But the person is different.

I don't really understand this part.

"The scanner does not understand the information but the person does" sounds like some variant of Searle's Chinese Room argument when presented without further qualifiers. People in AI tend to regard Searle as a confused distraction.

The intelligent agent model still deals with deterministic machines that take input and produce output, but it incorporates the possibility of changing the agent's internal state by presenting the output function as just taking the entire input history X* as an input to the function that produces the latest output Y, so that a different history of inputs can lead to a different output on the latest input, just like it can with humans and more sophisticated machines.
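To make the point concrete, here's a minimal Python sketch of that equivalence; the agent and its behavior are my own toy example, not any particular formalism:

```python
# A stateful agent with mutable internal state, and the equivalent
# "history" view where the latest output is a function of the entire
# input history X*. Both are deterministic input->output machines.

def stateful_agent():
    """An agent that keeps internal state across inputs."""
    total = 0
    def step(x):
        nonlocal total
        total += x          # internal state changes with each input
        return total
    return step

def history_agent(history):
    """The history-based view: output = f(entire input history)."""
    return sum(history)

agent = stateful_agent()
outputs = [agent(x) for x in [1, 2, 3]]

# The two formulations agree: a different history of inputs can lead
# to a different output on the same latest input.
histories = [[1], [1, 2], [1, 2, 3]]
assert outputs == [history_agent(h) for h in histories]
```

The point is just that "has internal state" and "is a function of the whole input history" describe the same class of deterministic machines, for humans as much as for robots.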

I suppose the idea here is that there is some difference whether there is a human being sitting in the scanner, or, say, a toy robot with a state of two bits where one is I am thinking about cats and the other is I am broken and will lie about thinking about cats. With the robot, we could just check the "broken" bit as well from the scan when the robot is disagreeing with the scanner, and if it is set, conclude that the robot is broken.
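For what it's worth, the toy-robot version of the disagreement is easy to write down; the field names and the decision rule below are illustrative inventions of mine, not anything from the scanner scenario as originally stated:

```python
# A two-bit toy robot: one bit for "thinking about cats", one for
# "broken, and will lie about thinking about cats". When the robot's
# report contradicts the scan, we check the broken bit from the scan.

def resolve_disagreement(scan, robot_report):
    """Decide whom to trust when the robot's report contradicts the scan."""
    if scan["thinking_about_cats"] == robot_report:
        return "scan and report agree"
    if scan["broken"]:
        return "robot is broken; trust the scanner"
    return "scanner may be broken; distrust the scan"

# Scan says the robot is thinking about cats and its broken bit is set,
# but the robot denies thinking about cats:
verdict = resolve_disagreement(
    {"thinking_about_cats": True, "broken": True},
    robot_report=False,
)
```

Here `verdict` comes out as "robot is broken; trust the scanner", which is all the next paragraph needs: the same move works in principle for humans, if the scanner can read the relevant state.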

I'm not seeing how humans must be fundamentally different. The scanner can already do the extremely difficult task of mapping a raw brain state to the act of thinking about a cat, it should also be able to tell from the brain state whether the person has something going on in their brain that will make them deny thinking about a cat. Things being deterministic and predictable from knowing their initial state doesn't mean they can't have complex behavior reacting to a long history of sensory inputs accompanied by a large amount of internal processing that might correspond quite well to what we think of as reflection or understanding.

Sorry I keep skipping over your formalism stuff, but I'm still not really grasping the underlying assumptions behind this approach. (The underlying assumptions in the computer science approach are, roughly, "the physical world exists, and is made of lots of interacting, simple, Turing-computable stuff and nothing else", "animals and humans are just clever robots made of the stuff", "magical souls aren't involved, not even if they wear a paper bag that says 'conscious experience' on their head")

The whole philosophical theory of everything thing does remind me of this strange thing from a year ago, where the building blocks for the theory were made out of nowadays more fashionable category theory rather than set theory though.

Comment author: Tuukka_Virtaperko 15 January 2012 11:39:03PM *  0 points [-]

According to the abstract, the scope of the theory you linked is a subset of RP. :D I find this hilarious because the theory was described as "ridiculously broad". It seems to attempt to encompass all of O, and may contain interesting insight my work clearly does not contain. But the RP defines a certain scope of things, and everything in this article seems to belong to O, with perhaps some N without clearly differentiating the two. S is missing, which is rather usual in science. From the scientific point of view, it may be hard to understand what Buddhists could conceivably believe to achieve by meditation. They have practiced it for millennia, yet they did not do brain scans that would have revealed its beneficial effects, nor did they hand out questionnaires and compile the results into statistics. But they believed it is good to meditate, and were not very interested in knowing why it is good. That belongs to the realm of S.

In any case, this illustrates an essential feature of RP. It's not so much a theory about "things", you know, cars, flowers, finances, as a theory about what the most basic kinds of things are, or about what kinds of options for the scope of any theory or statement are intelligible. It doesn't currently do much more because the algorithm part is missing. It's also not necessarily perfect or anything like that. If something apparently coherent cannot be included in the scope of RP in a way that makes sense, maybe the theory needs to be revised.

Perhaps I could give a weird link in return. This is written by someone who is currently a Professor of Analytic Philosophy at the University of Melbourne. I find the theory to mathematically outperform that of Langan in that it actually has mathematical content instead of some sort of a sketch. The writer expresses himself coherently and appears to understand in what style people expect to read that kind of text. But the theory does not recurse in interesting ways. It seems quite naive and simple to me and ignores the symbol grounding problem. It is practically an N-type theory, which only allegedly has S or O content. The writer also seems to make exaggerated interpretations of what Nagarjuna said. These exaggerated interpretations lead to making the same assumptions which are the root of the contradiction in CTMU, but The Structure of Emptiness is not described as a Wheeler-style reality theory, so in that paper, the assumptions do not lead to a contradiction, although they still seem to misunderstand Nagarjuna.

By the way, I have thought about your way of asking for basic assumptions. I guess I initially confused it with you asking for some sort of axioms, but since you weren't interested in the formalisms, I didn't understand what you wanted. But now I have the impression that you asked me to make general statements of what the theory can do that are readily understood from the O viewpoint, and I think it has been an interesting approach for me, because I didn't use that in the MOQ community, which would have been unlikely to request that approach.
