About the classification thing: Agree that it's very important that a general AI be able to classify entities into "dumb machines" and things complex enough to be self-aware, warrant an intentional stance and require ethical consideration. Even putting aside the ethical concerns, being able to recognize complex agents with intentions, and to model their intentions instead of their most likely massively complex physical machinery, is probably vital to any sort of meaningful ability to act in a social domain with many other complex agents (cf. Dennett's intentional stance).
The latter has, to my knowledge, never been done. Arguably, the latter task requires a different ability, one the scanner may not have. The former requires acquiring a bitmap and running image recognition on it. That has already been done with simple images such as parallel black-and-white lines, though I don't know whether bitmaps or image recognition were involved. If the cat is a problem, let's simplify the image to the black-and-white lines.
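Once you have the bitmap, the recognition step for the simplified image really is nearly trivial. A minimal sketch in Python with NumPy; the threshold and the toy image are my own assumptions, not details from any actual scanner experiment:

```python
import numpy as np

def looks_like_horizontal_stripes(bitmap):
    """Crude test: horizontal stripes vary from row to row
    but are nearly constant within each row."""
    within_row_var = bitmap.var(axis=1).mean()   # variation along each row
    across_row_var = bitmap.mean(axis=1).var()   # variation between rows
    return across_row_var > 10 * within_row_var + 1e-9

# Toy "reconstructed bitmap": alternating black/white bands, 4 px wide.
img = np.kron((np.arange(32) // 4 % 2).reshape(-1, 1), np.ones((1, 32)))
print(looks_like_horizontal_stripes(img))    # True
print(looks_like_horizontal_stripes(img.T))  # False: stripes are vertical
```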
My understanding is that the existing image-reconstruction experiments measure activation in the visual cortex while the subject is actually viewing an image, which does indeed give you a straightforward mapping to a bitmap. That isn't the same as thinking about a cat: a person could be thinking about a cat while not looking at one, and they could have a cat in their visual field while daydreaming or suffering from hysterical blindness, so that they weren't thinking about a cat despite a cat image correctly showing up in their visual cortex scan.
I don't actually know what the neural correlate of thinking about a cat, as opposed to having one's visual cortex activated by looking at one, would be like, but I was assuming that interpreting it would require a much more sophisticated understanding of the brain, basically at the level of difficulty of telling whether a brain scan correlates with thinking about freedom, a theory of gravity, or reciprocity. That's something entirely beyond current neuroscience, more indicative of some sort of Laplace's-demon-style thought experiment where you can actually observe and understand the whole mechanical ensemble of the brain.
But RP is about the understanding a consciousness could attain of itself. Such an understanding could not be deterministic within the viewpoint of that consciousness. That would be like trying to have a map contain itself.
Quines are maps that contain themselves. A quining system could reflect on its entire static structure, though it would have to run some sort of emulation slower than its physical substrate to predict its future states. Hofstadter's GEB links quines to reflection in AI.
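To make that concrete, here is the classic construction in Python: two lines whose output is exactly those two lines, the standard quine trick and nothing more.

```python
s = 's = %r\nprint(s %% s)'
print(s % s)
```

The self-reference is indirect: the program holds a template of itself and fills in its own representation. That's roughly the move a reflective system has to make, inspecting a representation of itself rather than itself directly, which is also why predicting its own future states takes an emulation running slower than the substrate executing it.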
Well, that's supposed to be a good thing, because there are supposed to be none. But saying that might not help. If you don't know what consciousness or the experience of reality mean in my usage (perhaps because you would reduce such experiences to theoretical models of physical entities and states of neural networks), you will probably not understand what I'm doing. That would suggest you cannot conceptualize an idealistic ontology, or that you believe "mind" refers to an empty set.
"There aren't any assumptions" is just a plain non-starter. There's the natural language we're using that's used to present the theory and ground the concepts in the theory, and natural language basically carries a billion years of evolution leading to the three billion base pair human genome loaded with accidental complexity, leading to something from ten to a hundred thousand years of human cultural evolution with even more accidental complexity that probably gets us something in the ballpark of 100 megabytes irreducible complexity from the human DNA that you need to build up a newborn brain and another 100 megabytes (going by the heuristic of one bit of permanently learned knowledge per one second) for the kernel of the cultural stuff a human needs to learn from their perceptions to be able to competently deal with concepts like "income tax" or "calculus". You get both of those for free when talking with other people, and neither when trying to build an AGI-grade theory of the mind.
This is also why I spelled out the trivial basic assumptions I'm working from (and probably did a very poor job of actually conveying the whole idea complex). When you start doing set theory, I assume we're dealing with things at the complexity of mathematical objects. Then you throw in something like "anthropology" as an element in a set, and I, still in math mode, start going: whaa, you need humans before you have anthropology; you need the billion years of evolution leading to the accidental complexity in humans to have humans; you need physics for the humans to live and run the societies anthropology studies; you need the rest of the biosphere so the humans don't just curl up and die in the featureless vacuum; and, and... and that's a lot of math. Meanwhile the actual system with the power sets looks like uniform, featureless soup to me. Sure, there are all the labels, which make my brain do the above I-don't-get-it dance, but the thing I'm actually looking for is the mathematical structure. And that's just really simple: nowhere near what you'd need to model a loose cloud of hydrogen floating in empty space, never mind something many orders of magnitude more complex like a society of human beings.
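Here's a toy version of the labels-versus-structure point, purely my own illustration: relabel the atoms and you get an isomorphic set-theoretic object, so the labels carry no mathematical weight.

```python
# Two presentations of the same tiny structure, {{a}, {a, b}}.
labeled = frozenset({frozenset({"anthropology"}),
                     frozenset({"anthropology", "physics"})})
opaque = frozenset({frozenset({0}), frozenset({0, 1})})

# The bijection anthropology -> 0, physics -> 1 carries one onto the
# other; all the apparent content lives in labels the math never sees.
relabel = {"anthropology": 0, "physics": 1}
mapped = frozenset(frozenset(relabel[x] for x in s) for s in labeled)
print(mapped == opaque)   # True
```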
My confusion about the assumptions is basically that I get the sense that analytic philosophers operate as if they could just write down the name of some complex human concept, like "morality", throw in some math notation (modal logic, quantified formulas, set memberships), and call it a day. But what I'm expecting is something that teaches me how to program a computer to do mind-stuff, and a computer won't have the mental concept corresponding to the word "morality" the way a human does, since the human has the ~200 MB special-sauce kernel that provides it. And I hardly ever see philosophers talking about this bit.
A theory of mind that can actually do the work needs to build up the same sort of kernel that evolution and culture have set up for people. For the human ballpark estimate, you'd have to fill something like 100,000 pages with math, all of it setting up the basic machinery the mind needs to get going. A heavily abstracted theory of mind could no doubt cut an order of magnitude or two off that, but something like Maxwell's equations on a single sheet of paper won't do. It doesn't answer the question of how you'd tell a computer how to be a mind, and that's the question I keep looking at this stuff with.
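The page count is just the kernel estimate restated, if you assume a kilobyte or two of effective content per page (the per-page figure is my assumption):

```python
pages = 100_000
for kb_per_page in (1, 2):
    print(f"{pages * kb_per_page / 1000:.0f} MB")   # 100 MB, 200 MB
```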
There are many ways to answer that question. I have a flowchart and formulae. The opposite of that would be something to the effect of having the source code. I'm not sure why you expect me to have that. Was it something I said?
I thought I'd given you links to my actual work, but I can't find them. Did I forget? Hmm...
If you dislike metaphysics, only the latter is for you. I can't paste the content, because the formatting on this website apparently does not permit HTML formulae. Wait, it does permit formulae, but only in LaTeX. I know LaTeX, but the formulae aren't in that format yet. Maybe I should convert them.
You won't understand the flowchart if you don't want to discuss metaphysics. I don't think I can prove that something could be useful to you when you don't know what it is. You would have to know what it is and judge for yourself. If you don't want to know, that's okay.
I am currently not sure why you would want to discuss this thing at all, given that you do not seem particularly interested in the formalisms, but you do not seem interested in metaphysics either. You seem to expect me to explain this stuff in terms of something familiar to you, yet you don't seem very interested in having a discussion where I would actually do that. If you don't know why you are having this discussion, maybe you would rather be doing something else?
There are quite probably others on LessWrong who would be interested in this, because there has been prior discussion of the CTMU. People interested in fringe theories, unfortunately, are not always the brightest of the lot, and I respect your ability to casually namedrop a bunch of things I will probably spend days thinking about.
But I don't know why you wrote so much about billions of years, babies, human cultural evolution, 100 megabytes and such. I am troubled by the thought that you might think I'm some loony hippie who actually needs a recap of those things. I am not yet feeling very comfortable on this forum, because I perceive myself as vulnerable to being misrepresented as some sort of fool by people who don't understand what I'm doing.
I'm not trying to change LessWrong. But if this forum has people criticizing the CTMU without having a clue what it is, then I feel a certain entitlement. You can't just go badmouthing people and their theories and not expect consequences if you are mistaken. You don't need to defend yourself either, because I'm here to tell you what recursive metaphysical theories such as the CTMU are about, or to recommend you stop talking about the CTMU if you are not interested in metaphysics. I'm not here to inflate my ego by portraying other people as fools with witty rhetoric, and if you Google the CTMU, you'll find a lot of people doing precisely that to it, and you will understand why I fear that I, too, could be treated in such a way.