Comment author: Risto_Saarelma 15 January 2012 02:54:17PM *  3 points [-]

About the classification thing: I agree that it's very important for a general AI to be able to classify entities into "dumb machines" and things complex enough to be self-aware, warrant an intentional stance and require ethical consideration. Even putting aside the ethical concerns, being able to recognize complex agents with intentions, and to model those intentions rather than their most likely massively complex physical machinery, is probably vital to any meaningful ability to act in a social domain with many other complex agents (cf. Dennett's intentional stance).

The latter has, to my knowledge, never been done. Arguably, the latter task requires a different ability which the scanner may not have. The former requires acquiring a bitmap and using image recognition. It has already been done with simple images such as parallel black and white lines, but I don't know whether bitmaps or image recognition were involved in that. If the cat is a problem, let's simplify the image to the black and white lines.

I understood that the existing image reconstruction experiments measure the activation in the visual cortex while the subject is actually viewing an image, which does indeed get you a straightforward mapping to a bitmap. This isn't the same as thinking about a cat: a person could be thinking about a cat while not looking at one, and they could have a cat in their visual field while daydreaming or suffering from hysterical blindness, so that they weren't thinking about a cat despite having a cat image correctly show up in their visual cortex scan.

I don't actually know what the neural correlate of thinking about a cat, as opposed to having one's visual cortex activated by looking at one, would be like, but I was assuming that interpreting it would require a much more sophisticated understanding of the brain, basically at the level of difficulty of telling whether a brain scan correlates with thinking about freedom, a theory of gravity or reciprocity. Basically something that's entirely beyond current neuroscience and more indicative of some sort of Laplace's-demon-like thought experiment where you can actually observe and understand the whole mechanical ensemble of the brain.

But RP is about the understanding a consciousness could attain of itself. Such an understanding could not be deterministic within the viewpoint of that consciousness. That would be like trying to have a map contain itself.

Quines are maps that contain themselves. A quining system could reflect on its entire static structure, though it would have to run some sort of emulation slower than its physical substrate to predict its future states. Hofstadter's GEB links quines to reflection in AI.
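
To make the quine idea concrete, here's the classic self-printing construction in Python (the comment line aside, the program's output is exactly its own source; this is just an illustration, not anything specific to GEB):

    # The two lines below print themselves.
    s = 's = %r\nprint(s %% s)'
    print(s % s)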

Well, that's supposed to be a good thing, because there are supposed to be none. But saying that might not help. If you don't know what consciousness or the experience of reality mean in my use (perhaps because you would reduce such experiences to theoretical models of physical entities and states of neural networks), you will probably not understand what I'm doing. That would suggest you cannot conceptualize an idealistic ontology or that you believe "mind" to refer to an empty set.

"There aren't any assumptions" is just a plain non-starter. There's the natural language we're using to present the theory and ground its concepts, and natural language carries an enormous amount of baggage: a billion years of evolution leading to the three-billion-base-pair human genome loaded with accidental complexity, and on top of that something from ten to a hundred thousand years of human cultural evolution with even more accidental complexity. That probably gets us something in the ballpark of 100 megabytes of irreducible complexity from the human DNA that you need to build up a newborn brain, and another 100 megabytes (going by the heuristic of one bit of permanently learned knowledge per second) for the kernel of cultural stuff a human needs to learn from their perceptions to be able to competently deal with concepts like "income tax" or "calculus". You get both of those for free when talking with other people, and neither when trying to build an AGI-grade theory of the mind.

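A rough back-of-the-envelope check of those ballpark figures in Python, using the numbers mentioned above (the ~30 years of waking time is an extra assumption of the sketch):

    # Genome: 3e9 base pairs at 2 bits per base pair, before discounting redundancy.
    genome_megabytes = 3e9 * 2 / 8 / 1e6
    print(genome_megabytes)      # ~750 MB raw; ~100 MB is the ballpark after heavy discounting

    # Learned kernel: one permanently learned bit per waking second for ~30 years.
    waking_seconds = 30 * 365.25 * 16 * 3600
    learned_megabytes = waking_seconds / 8 / 1e6
    print(learned_megabytes)     # ~79 MB, i.e. the same ~100 MB ballpark
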
This is also why I spelled out the trivial basic assumptions I'm working from (and probably did a very poor job at actually conveying the whole idea complex). When you start doing set theory, I assume we're dealing with things at the complexity of mathematical objects. Then you throw in something like "anthropology" as an element in a set, and I, still in math mode, start going, whaa, you need humans before you have anthropology, and you need the billion years of evolution leading to the accidental complexity in humans to have humans, and you need physics to have the humans live and run the societies for anthropology to study, and you need the rest of the biosphere for the humans to not just curl up and die in the featureless vacuum and, and.. and that's a lot of math. While the actual system with the power sets looks just like uniform, featureless soup to me. Sure, there are all the labels, which make my brain do the above i-don't-get-it dance, but the thing I'm actually looking for is the mathematical structure. And that's just really simple, nowhere near what you'd need to model a loose cloud of hydrogen floating in empty space, not to mention something many orders of magnitude more complex like a society of human beings.

My confusion about the assumptions is basically that I get the sense that analytic philosophers seem to operate like they could just write the name of some complex human concept, like "morality", then throw in some math notation like modal logic, quantified formulas and set memberships, and call it a day. But what I'm expecting is something that teaches me how to program a computer to do mind-stuff, and a computer won't have the corresponding mental concept for the word "morality" like a human has, since the human has the ~200M special sauce kernel which gives them that. And I hardly ever see philosophers talking about this bit.

A theory of mind that can actually do the work needs to build up the same sort of kernel evolution and culture have set up for people. For the human ballpark estimate, you'd have to fill something like 100 000 pages with math, all setting up the basic machinery you need for the mind to get going. A very abstracted out theory of mind could no doubt cut off an order of magnitude or two out of that, but something like Maxwell's equations on a single sheet of paper won't do. It isn't answering the question of how you'd tell a computer how to be a mind, and that's the question I keep looking at this stuff with.

Comment author: Tuukka_Virtaperko 15 January 2012 08:07:15PM *  1 point [-]

It isn't answering the question of how you'd tell a computer how to be a mind, and that's the question I keep looking at this stuff with.

There are many ways to answer that question. I have a flowchart and formulae. The opposite of that would be something to the effect of having the source code. I'm not sure why you expect me to have that. Was it something I said?

I thought I'd given you links to my actual work, but I can't find them. Did I forget? Hmm...

If you dislike metaphysics, only the latter is for you. I can't paste the content, because the formatting on this website apparently does not permit HTML formulae. Wait a second, it does permit formulae, but only LaTeX. I know LaTeX, but the formulae aren't in that format right now. I should maybe convert them.

You won't understand the flowchart if you don't want to discuss metaphysics. I don't think I can prove that something could be useful to you when you don't know what it is. You would have to know what it is and judge for yourself. If you don't want to know, that's OK.

I am currently not sure why you would want to discuss this thing at all, given that you do not seem quite interested in the formalisms, but you do not seem interested in metaphysics either. You seem to expect me to explain this stuff to you in terms of something that is familiar to you, yet you don't seem very interested in having a discussion where I would actually do that. If you don't know why you are having this discussion, maybe you would like to do something else?

There are quite probably others on LessWrong who would be interested in this, because there has been prior discussion of the CTMU. People interested in fringe theories, unfortunately, are not always the brightest of the lot, and I respect your ability to casually name-drop a bunch of things I will probably spend days thinking about.

But I don't know why you wrote so much about billions of years, babies, human cultural evolution, 100 megabytes and such. I am troubled by the thought that you might think I'm some loony hippie who actually needs a recap on those things. I am not yet feeling very comfortable in this forum because I perceive myself as vulnerable to being misrepresented as some sort of a fool by people who don't understand what I'm doing.

I'm not trying to change LessWrong. But if this forum has people criticizing the CTMU without having a clue what it is, then I attain a certain feeling of entitlement. You can't just go badmouthing people and their theories and not expect any consequences if you are mistaken. You don't need to defend yourself either, because I'm here to tell you what recursive metaphysical theories such as the CTMU are about, or to recommend that you stop talking about the CTMU if you are not interested in metaphysics. I'm not here to bloat my ego by portraying other people as fools with witty rhetoric, and if you Google the CTMU, you'll find a lot of people doing precisely that to it, and you will understand why I fear that I, too, could be treated in such a way.

Comment author: Tuukka_Virtaperko 15 January 2012 06:12:59PM *  0 points [-]

A theory of mind that can actually do the work needs to build up the same sort of kernel evolution and culture have set up for people. For the human ballpark estimate, you'd have to fill something like 100 000 pages with math, all setting up the basic machinery you need for the mind to get going. A very abstracted out theory of mind could no doubt cut off an order of magnitude or two out of that, but something like Maxwell's equations on a single sheet of paper won't do. It isn't answering the question of how you'd tell a computer how to be a mind, and that's the question I keep looking at this stuff with.

You want a sweater. I give you a baby sheep, and it is the only baby sheep you have ever seen that is not completely lame or retarded. You need wool to produce the sweater, so why are you disappointed? Look, the mathematical part of the theory is something we wrote less than a week ago, and it is already better than any theory of this type I have ever heard of (three or four). The point is not that this would be excruciatingly difficult. The point is that for some reason, almost nobody is doing this. It probably has something to do with the severe stagnation in the field of philosophy. The people who could develop philosophy find the academic discipline so revolting they don't.

I did not come to LessWrong to tell everyone I have solved the secrets of the universe, or that I am very smart. My ineptitude in math is the greatest single obstacle in my attempts to continue development. If I didn't know exactly one person who is good at math and wants to do this kind of work with me, I might be in an insane asylum, but no more about that. I came here because this is my life... and even though I greatly value the MOQ community, everyone on those mailing lists is apparently even less proficient in maths and logic than I am. Maybe someone here thinks this is fun and wants to have a fun creative process with me.

I would like to write a few of those 100 000 pages that we need. I don't get your point. You seem to require me to have written them before I have written them.

My confusion about the assumptions is basically that I get the sense that analytic philosophers seem to operate like they could just write the name of some complex human concept, like "morality", then throw in some math notation like modal logic, quantified formulas and set memberships, and call it a day. But what I'm expecting is something that teaches me how to program a computer to do mind-stuff, and a computer won't have the corresponding mental concept for the word "morality" like a human has, since the human has the ~200M special sauce kernel which gives them that. And I hardly ever see philosophers talking about this bit.

Do you expect to build the digital sauce kernel without any kind of a plan - not even a tentative one? If not, a few pages of extremely abstract formulae is all I have now, and frankly, I'm not happy about that either. I can't teach you much of anything you seem interested in, but I could really use some discussion with interested people. And you have already been helpful. You don't need to consider me someone who is aggressively imposing his views on individual people. I would love to find people who are interested in these things because there are so few of them.

I had a hard time figuring out what you mean by basic assumptions, because I've been doing this for such a long time I tend to forget what kind of metaphysical assumptions are generally held by people who like science but are uninterested in metaphysics. I think I've now caught up with you. Here are some basic assumptions.

  • RP is about definable things. It is not supposed to make statements about undefinable things - not even that they don't exist, as you seem to believe.
  • Humans come before anthropology in RP: the former is in O2 and the latter in O4. I didn't know how to tell you that, because I didn't know that this was the part of the theory you wanted to hear in order not to go whaaa. I'd need to tell you everything, but that would involve a lot of metaphysics. But the theory is not a theory of the history of the world, if "world" is something that begins with the Big Bang.
  • From your empirical scientific point of view, I suppose it would be correct to state that RP is a theory of how the self-conscious part of one person evolves during his lifetime.
  • At least in the current simple instance of RP, you don't need to know anything about the metaphysical content to understand the math. You don't need to go out of math-mode, because there are no nonstandard metaphysical concepts among the formulae.
  • If you do go out of math mode and want to know what the symbols stand for, I think that's very good. But this can only be explained to you in terms of metaphysics, because empirical science simply does not account for everything you experience. Suppose you stop by the grocery store. Where's the empirical theory that accounts for that? Maybe some general sociological theory would. But my point is, no such empirical theory is actually implemented. You don't acquire a scientific explanation for the things you did in the store. Still you remember them. You experienced them. They exist in your self-conscious mind in some way, which is not dependent on your conceptions of what the relationship between topology and model theory is, or on your understanding of why fission of iron does not produce energy, or of how one investor could single-handedly significantly affect whether a country joins the Euro. From your personal, what you might perhaps call "subjective", point of view, it does not even depend on your conception of cognitive science, unless you actually apply that knowledge to it. You probably don't do that all the time, although you do that sometimes.
  • I don't subscribe to any kind of "subjectivism", whatever that might be in this context, or idealism, in the sense that something like that would be "true" in a meaningful way. But you might agree that when trying to develop the theory underlying self-conscious phenomenal and abstract experience, you can't begin from the Big Bang, because you weren't there.
  • You could use RP to describe a world you experience in a dream, and the explanation would work as well as when you are awake. Physical theories don't work in that world. For example, if you look at your watch in a dream, then look away, and look at it again, the watch may display a completely different time. Or the watch may function, but when you take it apart, you find that instead of clockwork, it contains something a functioning mechanical watch will not contain, such as coins.
  • RP is intended to relate abstract thought (O, N, S) to sensory perceptions, emotions and actions (R), but to define all relations between abstract entities to other abstract entities recursively.
  • One difference between RP and the empirical theories of cosmology and such that you mentioned is that the latter will not describe the ability of person X to conceptualize his own cognitive processes in a way that can actually be used right now to describe what, or rather how, some person is thinking with respect to abstract concepts. RP does that.
  • RP can be used to estimate the metaphysical composure of other people. You seem to place most of the questions you label "metaphysical" or "philosophical" in O.
  • I don't yet know if this forum tolerates much metaphysical discussion, but my theory is based on about six years of work on the Metaphysics of Quality. That is not mainstream philosophy and I don't know how people here will perceive it. I have altered the MOQ a lot. Its latest "authorized" variant, from 1991, decisively included mostly just the O patterns. Analyzing the theory was very difficult for me in general. But maybe I will confuse people if I say nothing about the metaphysical side. So I'll think what to say...
  • RP is not an instance of relativism (except in the Buddhist sense), absolutism, determinism, indeterminism, realism, antirealism or solipsism. Also, I consider all those theories to be some kind of figures of speech, because I can't find any use for them except to illustrate a certain point in a certain discussion in a metaphorical fashion. In logical analysis, these concepts do not necessarily retain the same meaning when they are used again in another discussion. These concepts acquire definable meaning only when detached from the philosophical use and placed within a specific context.
  • Structurally RP resembles what I believe computer scientists call context-free languages, or programming languages with dynamic typing. I am not yet sure what the exact definition of the former is, but having written a few programs, I do understand what it means to do typing at run-time (sketched just below this list). The Western mainstream philosophical tradition does not seem to include any theories that would be analogues of these computer science topics.
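
For what it's worth, here is a minimal Python illustration of what typing at run-time means (nothing RP-specific, just the general mechanism):

    def describe(x):
        # The type of x is inspected only while the program runs,
        # not when it is written or compiled.
        if isinstance(x, int):
            return f"{x} is an integer"
        if isinstance(x, str):
            return f"{x!r} is a string"
        return f"{x!r} is something else"

    for value in [42, "cat", [1, 2, 3]]:
        print(describe(value))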

I have read GEB but don't remember much. I'll recap what a quine is. I tend to need to discuss mathematical things with someone face to face before I understand them, which slows down progress.

The cat/line thing is not very relevant, but apparently I didn't remember the experiment right. However, if the person and the robot could not see the lines at the same time for some reason - such as the robot needing to operate the scanner and thus not seeing inside the scanner - the robot could alter the person's brain to produce a very strong response to parallel lines in order to verify that the screen inside the scanner, which displays the lines, does not malfunction, is not unplugged, the person is not blind, etc. There could be more efficient ways of finding such things out, but if the robot has replaceable hardware and can thus live indefinitely, it has all the time in the world...

Comment author: Risto_Saarelma 14 January 2012 06:39:28PM 3 points [-]

We have managed to create such a sophisticated brain scanner, that it can tell whether a person is thinking of a cat or not. Someone is put into the machine, and the machine outputs that the person is not thinking of a cat. The person objects and says that he is thinking of a cat. What will the observing AI make of that inconsistency? What part of the observation is broken and results in nonconformity of the whole?

1) The brain scanner is broken
2) The person is broken

In order to solve this problem, the AI may have to be able to conceptualize the fact that the brain scanner is a deterministic machine which simply accepts X as input and outputs Y. The scanner does not understand the information it is processing, and the act of processing information does not alter its structure. But the person is different.

I don't really understand this part.

"The scanner does not understand the information but the person does" sounds like some variant of Searle's Chinese Room argument when presented without further qualifiers. People in AI tend to regard Searle as a confused distraction.

The intelligent agent model still deals with deterministic machines that take input and produce output, but it incorporates the possibility of changing the agent's internal state by presenting the output function as just taking the entire input history X* as an input to the function that produces the latest output Y, so that a different history of inputs can lead to a different output on the latest input, just like it can with humans and more sophisticated machines.
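
A minimal sketch of that agent model in Python (purely illustrative; the toy policy and names are made up):

    from typing import List

    def agent_output(input_history: List[str]) -> str:
        """Map the entire input history X* to the latest output Y."""
        # Toy policy: the reply to the latest input depends on what came before,
        # so the same latest input can produce different outputs.
        latest = input_history[-1]
        if "cat" in input_history[:-1]:
            return f"reply to {latest!r}: still thinking about that cat"
        return f"reply to {latest!r}: nothing in particular on my mind"

    print(agent_output(["cat", "hello"]))   # same latest input "hello"...
    print(agent_output(["dog", "hello"]))   # ...different output, because the history differs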

I suppose the idea here is that there is some difference between whether there is a human being sitting in the scanner, or, say, a toy robot with a state of two bits, where one is "I am thinking about cats" and the other is "I am broken and will lie about thinking about cats". With the robot, we could just check the "broken" bit as well from the scan when the robot is disagreeing with the scanner, and if it is set, conclude that the robot is broken.
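
The two-bit robot is easy to spell out (a toy sketch, with made-up attribute names):

    class ToyRobot:
        def __init__(self, thinking_about_cats: bool, broken: bool):
            self.thinking_about_cats = thinking_about_cats   # bit 1
            self.broken = broken                             # bit 2

        def report(self) -> bool:
            # A broken robot lies about whether it is thinking about cats.
            return (not self.thinking_about_cats) if self.broken else self.thinking_about_cats

    robot = ToyRobot(thinking_about_cats=True, broken=True)
    scanned = robot.thinking_about_cats          # what the scanner reads from the state
    if robot.report() != scanned:
        # The disagreement is resolved by reading the broken bit from the same scan.
        print("report disagrees with scan; broken bit is", robot.broken)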

I'm not seeing how humans must be fundamentally different. The scanner can already do the extremely difficult task of mapping a raw brain state to the act of thinking about a cat; it should also be able to tell from the brain state whether the person has something going on in their brain that will make them deny thinking about a cat. Things being deterministic and predictable from knowing their initial state doesn't mean they can't have complex behavior reacting to a long history of sensory inputs accompanied by a large amount of internal processing that might correspond quite well to what we think of as reflection or understanding.

Sorry I keep skipping over your formalism stuff, but I'm still not really grasping the underlying assumptions behind this approach. (The underlying assumptions in the computer science approach are, roughly, "the physical world exists, and is made of lots of interacting, simple, Turing-computable stuff and nothing else", "animals and humans are just clever robots made of the stuff", "magical souls aren't involved, not even if they wear a paper bag that says 'conscious experience' on their head")

The whole philosophical theory-of-everything thing does remind me of this strange thing from a year ago, where the building blocks for the theory were made out of the nowadays more fashionable category theory rather than set theory, though.

Comment author: Tuukka_Virtaperko 15 January 2012 10:56:55AM 1 point [-]

I don't find the Chinese room argument related to our work - besides, it seems to vaguely suggest that what we are doing can't be done. What I meant is that the AI should be able to:

  • Observe behavior
  • Categorize entities into deterministic machines which cannot take a metatheoretic approach to their data processing habits and alter them.
  • Categorize entities into agencies who process information recursively and can consciously alter their own data processing or explain it to others.
  • Use this categorization ability to differentiate entities whose behavior can be corrected or explained by means of social interaction.
  • Use the differentiation ability to develop the "common sense" view that, given permission by the owner of the scanner and if it deemed this interesting, the robot could not ask the brain scanner itself for consent before taking it apart and fixing it.
  • Understand that even if the robot were capable of performing incredibly precise neurosurgery, the person will understand the notion that the robot wishes to use surgery to alter his thoughts to correspond with the result of the brain scanner, and could consent to this or deny consent.
  • Possibly try to have a conversation with the person in order to find out why they said that they were not thinking of a cat.

Failure to understand this could make the robot naively both take machines apart and cut people's brains in order to experimentally verify which approach produces better results. Of course there are also other things to consider when the robot tries to figure out what to do.

I don't consider robots and humans fundamentally different. If the AI were complex enough to understand the aforementioned things, it would also understand the notion that someone wants to take it apart and reprogram it, and could consent or object.

The scanner can already do the extremely difficult task of mapping a raw brain state to the act of thinking about a cat; it should also be able to tell from the brain state whether the person has something going on in their brain that will make them deny thinking about a cat.

The latter has, to my knowledge, never been done. Arguably, the latter task requires a different ability which the scanner may not have. The former requires acquiring a bitmap and using image recognition. It has already been done with simple images such as parallel black and white lines, but I don't know whether bitmaps or image recognition were involved in that. If the cat is a problem, let's simplify the image to the black and white lines.

Things being deterministic and predictable from knowing their initial state doesn't mean they can't have complex behavior reacting to a long history of sensory inputs accompanied by a large amount of internal processing that might correspond quite well to what we think of as reflection or understanding.

Even the simplest entities, such as irrational numbers or cellular automata, can have complex behavior. Humans, too, could be deterministic and predictable given that the one analyzing a human has enough data and computing power. But RP is about the understanding a consciousness could attain of itself. Such an understanding could not be deterministic within the viewpoint of that consciousness. That would be like trying to have a map contain itself. Every iteration of the map representing itself also needs to be included in the map, resulting in a requirement that the map should contain an infinite amount of information. Only an external observer could make a finite map, but that's not what I had in mind when beginning this RP project. I do consider the goals of RP somehow relevant to AI, because I don't think it's OK for a robot to be unable to conceptualize its own thought very elaborately, if it is intended to be as human as possible, and maybe even able to write novels.

I am interested in the ability to genuinely understand the worldviews of other people. For example, the gap between scientific and religious people. In the extreme, these people think of each other in such a derogatory way that it is as if they viewed each other as having failed the Turing test. I would like robots to also understand the goals and values of religious people.

I'm still not really grasping the underlying assumptions behind this approach.

Well, that's supposed to be a good thing, because there are supposed to be none. But saying that might not help. If you don't know what consciousness or the experience of reality mean in my use (perhaps because you would reduce such experiences to theoretical models of physical entities and states of neural networks), you will probably not understand what I'm doing. That would suggest you cannot conceptualize an idealistic ontology or that you believe "mind" to refer to an empty set.

I see here the danger of rather trivial debates, such as whether I believe an AI could "experience" consciousness or reality. I don't know what such a question would even mean. I am interested in whether it can conceptualize them in ways a human could.

(The underlying assumptions in the computer science approach are, roughly, "the physical world exists, and is made of lots of interacting, simple, Turing-computable stuff and nothing else"

The CTMU also states something to the effect of this. In that case, Langan is making a mistake, because he believes the CTMU to be a Wheeler-style reality theory, which contradicts the earlier statement. In your case, I guess it's just an opinion, and I don't feel a need to say you should believe otherwise. But I suppose I can present a rather cogent argument against that within a few days. The argument would be in the language of formal logic, so you should be able to understand it. Stay tuned...

, "animals and humans are just clever robots made of the stuff", "magical souls aren't involved, not even if they wear a paper bag that says 'conscious experience' on their head")

I don't wish to be impolite, but I consider these topics boring and obvious. Hopefully I haven't missed anything important when making this judgement.

Your strange link is very intriguing. I very much like being given this kind of link. Thank you.

Comment author: Risto_Saarelma 14 January 2012 11:00:20AM 4 points [-]

We were talking about applying the metaphysics system to making an AI earlier in IRC, and the symbol grounding problem came up there as a basic difficulty in binding formal reasoning systems to real-time actions. It doesn't look like this was mentioned here before.

I'm assuming I'd want to actually build an AI that needs to deal with symbol grounding, that is, it needs to usefully match some manner of declarative knowledge it represents in its internal state to the perceptions it receives from the outside world and to the actions it performs on it. Given this, I'm getting almost no notion of what useful work this theory would do for me.

Mathematical descriptions can be useful for people, but it's not given that they do useful work for actually implementing things. I can define a self-improving friendly general artificial intelligence mathematically by defining

  • FAI = <S, P*> as an artificial intelligence instance, consisting of its current internal state S and the history of its perceptions up to the present P*,
  • a: FAI -> A* as a function that gives the list of possible actions for a given FAI instance
  • u: A -> Real as a function that gives the utility of each action as a real number, with higher numbers given to actions that advance the purposes of the FAI better based on its current state and perception history and
  • f: FAI * A -> S, P as an update function that takes an action and returns a new FAI internal state with any possible self-modifications involved in the action applied, and a new perception item that contains whatever new observations the FAI made as a direct result of its action.

And there's a quite complete mathematical description of a friendly artificial intelligence; you could probably even write a bit of neat pseudocode using the pieces there, but that's still not likely to land me a cushy job supervising the rapid implementation of the design at SIAI, since I don't have anything that does actual work there. All I did was push all the complexity into the black boxes of u, a and f.
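
A hedged sketch of what that neat pseudocode might look like in Python; the names are placeholders, and all the real difficulty still hides inside the black boxes a, u and f:

    def run_fai(state, perceptions, a, u, f):
        """state is S, perceptions is the history P*, and a, u, f are the black boxes above."""
        while True:
            actions = a(state, perceptions)     # a: FAI -> A*, enumerate the possible actions
            # u: A -> Real; its dependence on the current state and history is made explicit here.
            best = max(actions, key=lambda act: u(act, state, perceptions))
            # f: FAI * A -> S, P; apply the action, which may include self-modification.
            state, new_perception = f(state, perceptions, best)
            perceptions = perceptions + [new_perception]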

With how I decided to split up the definition, I also implied a computational approach where the system enumerates every possible action, evaluates them all and then picks a winner. This is mathematically expedient, given that in mathematics any concerns of computation time can be pretty much waved off, but appears rather naive computationally, as it is likely that both coming up with possible actions and evaluating them can get extremely expensive in the artificial general intelligence domain.

With the metaphysics thing, beyond not getting a sense of it doing any work, I'm not even seeing where the work would hide. I'm not seeing black box functions that need to do an unknowable amount of work, just sets with strange elements being connected to other sets with strange elements. What should you be able to do with this thing?

Comment author: Tuukka_Virtaperko 14 January 2012 01:24:11PM *  1 point [-]

You probably have a much more grassroots-level understanding of the symbol grounding problem. I have only solved the symbol grounding problem to the extent that I have a formal understanding of its nature.

In any case, I am probably approaching AI from a point of view that is far from the symbol grounding problem. My theory does not need to be seen as a useful solution to that problem. But when a useful solution is created, I postulate it can be placed within RP. Such a solution would have to be an algorithm for creating S-type or O-type sets of members of R.

More generally, I would find RP to be useful as an extremely general framework of how AI or parts of AI can be constructed in relation to each other, especially with regard to understanding language and the notion of consciousness. This doesn't necessarily have anything to do with some more atomistic AI projects, such as trying to make a robot vacuum cleaner find its way back to the charging dock.

At some point, philosophical questions and AI will collide. Consider the following thought experiment:

We have managed to create such a sophisticated brain scanner, that it can tell whether a person is thinking of a cat or not. Someone is put into the machine, and the machine outputs that the person is not thinking of a cat. The person objects and says that he is thinking of a cat. What will the observing AI make of that inconsistency? What part of the observation is broken and results in nonconformity of the whole?

  • 1) The brain scanner is broken
  • 2) The person is broken

In order to solve this problem, the AI may have to be able to conceptualize the fact that the brain scanner is a deterministic machine which simply accepts X as input and outputs Y. The scanner does not understand the information it is processing, and the act of processing information does not alter its structure. But the person is different.

RP should help with such problems because it is intended as an elegant, compact and flexible way of defining recursion while allowing the solution of the symbol grounding problem to be contained in the definition in a nontrivial way. That is, RP as a framework of AI is not something that says: "Okay, this here is RP. Just perform the function RP(sensory input) and it works, voilà." Instead, it manages to express two different ways of solving the symbol grounding problem and to define their accuracy as a natural number n. In addition, many emergence relations in RP are logical consequences of the way RP solves the symbol grounding problem (or, if you prefer, "categorizes the parts of the actual solution to the symbol grounding problem").

In the previous thought experiment, the AI should manage to understand that the scanner deterministically performs the operation ℘(R) ⊆ S, and does not define S in terms of anything else. The person, on the other hand, is someone whose information processing is based on RP or something similar.

But what you read from moq.fi is something we wrote just a few days ago. It is by no means complete.

  • One problem is that ℘(T) does not seem to define actual emergences, but only all possible emergences.
  • We should define functions for "generalizing" and "specifying" sets or predicates, in which generalization would create a new set or predicate from an existing one by adding members, and specification would do so by removing members.
  • We should add a discard order to sets: sets that are used often have a high discard order, but sets that are never used end up erased from memory (see the sketch after this list). This is similar to unused pathways in the brain dying out, and often-used pathways becoming stronger.
  • The theory does not yet have an algorithmic part, but it should have. That's why it doesn't yet do anything.
  • ℘(Rn) should be defined to include a metatheoretic approach to the theory itself, facilitating modification of the theory with the yet-undefined generalizing and specifying functions.
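
Not part of RP itself, but as a concrete analogy for the discard order idea, here is a usage-counted store in Python that discards sets which are never used (all names are placeholders):

    class DiscardOrderStore:
        """Keeps at most `capacity` sets; rarely used ones are discarded first."""
        def __init__(self, capacity: int):
            self.capacity = capacity
            self.use_counts = {}          # set name -> number of times it has been used

        def use(self, name: str):
            self.use_counts[name] = self.use_counts.get(name, 0) + 1
            if len(self.use_counts) > self.capacity:
                # Discard the least-used set, like an unused pathway dying out.
                least_used = min(self.use_counts, key=self.use_counts.get)
                del self.use_counts[least_used]

    store = DiscardOrderStore(capacity=2)
    for name in ["R1", "R1", "O1", "S3"]:
        store.use(name)
    print(store.use_counts)   # "R1" survives; one of the rarely used sets was discarded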

Questions to you:

  • Is T -> U the Cartesian product of T and U?
  • What is *?

I will not guarantee having discussions with me is useful for attaining a good job. ;)

Comment author: Risto_Saarelma 10 January 2012 06:34:42PM 4 points [-]

There might not be many people here who are sufficiently up to speed on philosophical metaphysics to have any idea what a Wheeler-style reality theory, for example, is. My stereotypical notion is that the people at LW have been pretty much ignoring philosophy that isn't grounded in mathematics, physics or cognitive science from Kant onwards, and won't bother with stuff that doesn't seem readable from this viewpoint. The tricky thing that would help would be to somehow translate the philosopher-speak into lesswronger-speak. Unfortunately this'd require some fluency in both.

Comment author: Tuukka_Virtaperko 13 January 2012 01:25:40PM *  0 points [-]

Of course the symbol grounding problem is rather important, so it doesn't really suffice to say that "set R is supposed to contain sensory input". The metaphysical idea of RP is something to the effect of the following:

Let n be 4.

R contains everything that could be used to ground the meaning of symbols.

  • R1 contains sensory perceptions
  • R2 contains biological needs such as eating and sex, and emotions
  • R3 contains social needs such as friendship and respect
  • R4 contains mental needs such as perceptions of symmetry and beauty (the latter is sometimes reducible to the Golden ratio)

N contains relations of purely abstract symbols.

  • N1 contains the elementary abstract entities, such as symbols and their basic operations in a formal system
  • N2 contains functions of symbols
  • N3 contains functions of functions. In mathematics I suppose this would include topology.
  • N4 contains information about the limits of the system, such as completeness or consistency. This information forms the basis of what "truth" is like.

Let ℘(T) be the power set of T.

Solving the symbol grounding problem requires R and N to be connected. Let us assume that ℘(Rn) ⊆ Rn+1. R5 hasn't been defined, though. If we don't assume subsets of R to emerge from each other, we'll have to construct a lot more complicated theories that are more difficult to understand.

This way we can assume there are two ways of connecting R and N. One is to connect them in the same order, and one in the inverse order. The former is set O and the latter is set S.

The O set includes the "realistic" theories, which assume the existence of an "objective reality".

  • ℘(R1) ⊆ O1 includes theories regarding sensory perceptions, such as physics.
  • ℘(R2) ⊆ O2 includes theories regarding biological needs, such as the theory of evolution
  • ℘(R3) ⊆ O3 includes theories regarding social affairs, such as anthropology
  • ℘(R4) ⊆ O4 includes theories regarding rational analysis and judgement of the way in which social affairs are conducted

The relationship between O and N:

  • N1 ⊆ O1 means that physical entities are the elementary entities of the objective portion of the theory of reality. Likewise:
  • N2 ⊆ O2
  • N3 ⊆ O3
  • N4 ⊆ O4

The S set includes "solipsistic" ideas in which the "mind focuses on itself".

  • ℘(R4) ⊆ S1 includes ideas regarding what one believes
  • ℘(R3) ⊆ S2 includes ideas regarding learning, that is, adoption of new beliefs from one's surroundings. Here social matters such as prestige, credibility and persuasiveness affect which beliefs are adopted.
  • ℘(R2) ⊆ S3 includes ideas regarding judgement of ideas. Here, ideas are mostly judged by how they feel. For example, if a person is revolted by the idea of creationism, they are inclined to reject it even without rational grounds, and if it makes them happy, they are inclined to adopt it.
  • ℘(R1) ⊆ S4 includes ideas regarding the limits of the solipsistic viewpoint. Sensory perceptions of objectively existing physical entities obviously present some kind of a challenge to it.

The relationship between S and N:

  • N4 ⊆ S1 means that beliefs are the elementary entities of the solipsistic portion of the theory of reality. Likewise:
  • N3 ⊆ S2
  • N2 ⊆ S3
  • N1 ⊆ S4
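
The whole arrangement can be summarized compactly; the following Python lines just encode the inclusions listed above and print them back (an illustration, nothing more):

    levels = [1, 2, 3, 4]

    # O connects R and N in the same order; S connects them in the inverse order.
    O = {i: {"power_set_of": f"R{i}", "element_level": f"N{i}"} for i in levels}
    S = {i: {"power_set_of": f"R{5 - i}", "element_level": f"N{5 - i}"} for i in levels}

    for i in levels:
        print(f"℘({O[i]['power_set_of']}) ⊆ O{i},  {O[i]['element_level']} ⊆ O{i}")
        print(f"℘({S[i]['power_set_of']}) ⊆ S{i},  {S[i]['element_level']} ⊆ S{i}")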

That's the metaphysical portion in a nutshell. I hope someone was interested!

Comment author: Tuukka_Virtaperko 13 January 2012 01:02:05AM 1 point [-]

It's not like your average "competent metaphysician" would understand Langan either. He quite possibly wouldn't even understand Wheeler. Langan's undoing is to have the goals of a metaphysician and the methods of a computer scientist. He is trying to construct a metaphysical theory which structurally resembles a programming language with dynamic type checking, as opposed to static typing. Now, metaphysicians do not tend to construct such theories, and computer scientists do not tend to be very familiar with metaphysics. Metaphysical theories tend to be deterministic instead of recursive, and to have a finite preset number of states that an object can have. I find the CTMU paper a bit sketchy and missing important content besides having the mistake. If you're interested in the mathematical structure of a recursive metaphysical theory, here's one: http://www.moq.fi/?p=242

Formal RP doesn't require metaphysical background knowledge. The point is that because the theory includes a cycle of emergence, represented by the power set function, any state of the cycle can be defined in relation to other states and prior cycles, and the number of possible states is infinite. The power set function will generate a staggering amount of information in just a few cycles, though. Set R is supposed to contain sensory input and thus solve the symbol grounding problem.
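
To see how quickly the power set cycle blows up, it's enough to track the sizes (a trivial Python illustration):

    # |P(S)| = 2 ** |S|, so starting from a two-element set the sizes go
    # 2, 4, 16, 65536, 2**65536, ... -- astronomical within a few cycles.
    size = 2
    for cycle in range(4):
        print(f"cycle {cycle}: a set of {size} elements")
        size = 2 ** size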

Comment author: Tuukka_Virtaperko 10 January 2012 03:28:52PM 0 points [-]

To clarify, I'm not the generic "skeptic" of philosophical thought experiments. I am not at all doubting the existence of the world outside my head. I am just an apparently competent metaphysician in the sense that I require a Wheeler-style reality theory to actually be a Wheeler-style reality theory with respect to not having arbitrary declarations.

Comment author: pjeby 02 November 2010 06:40:25PM 2 points [-]

Yeah, I know what it looks like: meta-physical rubbish.

It is. I got as far as this paragraph of the introduction to his paper before I found a critical flaw:

Of particular interest to natural scientists is the fact that the laws of nature are a language. To some extent, nature is regular; the basic patterns or general aspects of structure in terms of which it is apprehended, whether or not they have been categorically identified, are its “laws”. The existence of these laws is given by the stability of perception.

At this point, he's already begging the question, i.e. presupposing the existence of supernatural entities. These "laws" he's talking about are in his head, not in the world.

In other words, he hasn't even got done presenting what problem he's trying to solve, and he's already got it completely wrong, and so it's doubtful he can get to correct conclusions from such a faulty premise.

Comment author: Tuukka_Virtaperko 05 January 2012 10:04:40PM *  0 points [-]

That's not a critical flaw. In metaphysics, you can't take for granted that the world is not in your head. The only thing you really can do is to find an inconsistency, if you want to prove someone wrong.

Langan has no problems convincing me. His attempt at constructing a reality theory is serious and mature and I think he conducts his business about the way an ordinary person with such aims would. He's not a literary genius like Robert Pirsig; he's just really smart otherwise.

I've never heard anyone present criticism of the CTMU that would actually imply understanding of what Langan is trying to do. The CTMU has a mistake. It's that Langan believes (p. 49) the CTMU to satisfy the Law Without Law condition, which states: "Concisely, nothing can be taken as given when it comes to cosmogony." (p. 8)

According to the Mind Equals Reality Principle, the CTMU is comprehensive. This principle "makes the syntax of this theory comprehensive by ensuring that nothing which can be cognitively or perceptually recognized as a part of reality is excluded for want of syntax". (p. 15) But undefinable concepts can neither be proven to exist nor proven not to exist. This means the Mind Equals Reality Principle must be assumed as an axiom. But to do so would violate the Law Without Law condition.

The Metaphysical Autology Principle could be stated as an axiom, which would entail the nonexistence of undefinable concepts. This principle "tautologically renders this syntax closed or self-contained in the definitive, descriptive and interpretational senses". (p. 15) But it would be arbitrary to have such an axiom, and the CTMU would again fail to fulfill Law Without Law.

If that makes the CTMU rubbish, then Russell's Principia Mathematica is also rubbish, because it has a similar problem which was pointed out by Gödel. EDIT: Actually the problem is somewhat different than the one addressed by Gödel.

Langan's paper can be found here. EDIT: Fixed link.
