Mass_Driver comments on Dreams of AIXI - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Sorry, what is AIXI? It was not clear to me from the linked abstract.
Sorry, I should have linked to a quick overview of AIXI. It's basically an algorithm for ultimate universal intelligence, together with a theorem showing that the algorithm is optimal. It shows what a universal intelligence could or should be like in the limit, given vast amounts of computation.
Interesting.
(1) What do you mean by "intelligence?"
(2) Why would "actually running such an algorithm on an infinite Turing Machine...have the interesting side effect of actually creating all such universes?"
The AIXI algorithm amounts to a formal mathematical definition of intelligence, but in plain English we can just say intelligence is the capacity to model and predict one's environment.
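For the curious (added for reference; this is Hutter's standard formulation, not part of the original comment): AIXI picks each action by expectimax over every program \(q\) for a universal Turing machine \(U\) that reproduces the interaction history, weighting each environment by its simplicity \(2^{-\ell(q)}\):

```latex
a_t \;=\; \arg\max_{a_t} \sum_{o_t r_t} \,\cdots\, \max_{a_m} \sum_{o_m r_m}
\bigl[\, r_t + \cdots + r_m \,\bigr]
\sum_{q \,:\, U(q,\, a_1 \ldots a_m) \,=\, o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}
```

Here \(a\), \(o\), \(r\) are actions, observations, and rewards, and \(\ell(q)\) is the length of program \(q\); the inner sum is what makes AIXI equivalent to simulating all computable environments consistent with what it has seen.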
This relates to the computability of physics and the materialist computationalist assumption in the SA itself. If we figured out the exact math underlying the universe (and our current theories are pretty close), and ran that program on an infinite or near-infinite computer, the resulting system would be indistinguishable from the universe itself from the perspective of observers inside the simulation. Thus it would recreate the universe (albeit embedded in a parent universe). If you looked inside that simulated universe, it would contain entire galaxies, planets, and humans or aliens pondering their consciousness, writing on websites, and so on.
I worry that there may be an instance of the Mind Projection Fallacy involved here. You are assuming there is a one-place predicate E(X) <=> {X has real existence}. But maybe the right way of thinking about it is as a two-place predicate J(A,X)<=> {Agent A judges that X has real existence}.
Example: In this formulation, Descartes's "cogito ergo sum" might best be expressed as leading to the conclusion J(me, me). Perhaps I can also become convinced of J(you, you) and perhaps even J(sim-being, sim-being). But getting from there to E(me) seems to be Mind Projection; getting to J(me, you) seems difficult; and getting to J(me, sim-being) seems very difficult. Especially if I can't also get to J(sim-being, me).
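The distinction between the two predicate shapes can be made explicit in type-theoretic terms (a hypothetical formalization; the names are mine, not the commenter's):

```lean
-- The types of agents and of candidate existents (assumed, for illustration).
variable (Agent Entity : Type)

-- One-place predicate: E X means "X has real existence", observer-independent.
def OnePlaceExistence := Entity → Prop

-- Two-place predicate: J A X means "agent A judges that X has real existence".
def JudgedExistence := Agent → Entity → Prop
```

The Mind Projection worry is then that no function of type `JudgedExistence → OnePlaceExistence` is available: collecting judgments, however many, never discharges the agent argument.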
Very coherent; thank you.
Do your claims really depend on the optimality of AIXI? It seems to me that, using your logic, if I ran the exact math underlying the universe on, say, Wolfram Alpha, or a TI-86 graphing calculator, the simulated inhabitants would still have realistic experiences; they would just have them more slowly relative to our current frame of reality's time-stream.
No, computationalism is separate, and was more or less assumed. I discussed AIXI as interesting just because it shows that universal intelligence is in fact simulation, and so future hyperintelligences will create beings like us just by thinking/simulating our time period (in sufficient detail). And moreover, they won't have much of a choice (if they really want to deeply understand it).
As to your second thought, Turing machines are Turing machines, so it doesn't matter what form the machine takes as long as it has sufficient space and time. Of course, that rules out your examples: you'd need something just a tad bigger than a TI-86 or Wolfram Alpha (on today's machines) to simulate anything on the scale of a planet, let alone a single human brain.
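A toy illustration of the substrate-independence point (my example, not from the thread): Rule 110 is an elementary cellular automaton known to be Turing-complete, and its update rule is pure information; a supercomputer, a calculator, and pencil and paper differ only in how much space and time they can afford.

```python
# One step of elementary cellular automaton Rule 110 (Turing-complete).
# Each cell's next state is looked up from the bits of the number 110,
# indexed by the 3-cell neighborhood pattern; wraparound boundary.
RULE = 110

def step(cells):
    n = len(cells)
    out = []
    for i in range(n):
        left, center, right = cells[(i - 1) % n], cells[i], cells[(i + 1) % n]
        pattern = (left << 2) | (center << 1) | right
        out.append((RULE >> pattern) & 1)
    return out

row = [0] * 10 + [1]  # single live cell
for _ in range(5):
    row = step(row)
```

Whatever physically stores the cells, applying the same rule table yields the same evolution, state for state.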
I think I'm finally starting to understand your article. I will probably have to go back and vote it up; it's a worthwhile point.
Do you have the link for that? I think there's an article somewhere, but I can't remember what it's called.
If there isn't one, why do you assume computationalism? I find it stunningly implausible that the mere specification of formal relationships among abstract concepts is sufficient to reify those concepts, i.e., to cause them to actually exist. For me, the very definitions of "concept," "relationship," and "exist" are almost enough to justify an assumption of anti-computationalism. A "concept" is something that might or might not exist; it is merely potential existence. A "relationship" is a set of concepts. I either don't know of or don't understand any of the insights that would suggest that everything that potentially exists and is computed therefore actually exists -- computing, to me, just sounds like a way of manipulating concepts, or, at best, of moving a few bits of matter around, perhaps LED switches or a Turing tape, in accordance with a set of concepts. How could moving LED switches around make things real?
By "real," I mean made of "stuff." I get through a typical day and navigate my ordinary world by assuming that there is a distinction between "stuff" (matter-energy) and "ideas" (ways of arranging the matter-energy in space-time). Obviously thinking about an idea will tend to form some analog of the idea in the stuff that makes up my brain, and, if my brain were so thorough and precise as to resemble AIXI, the analog might be a very tight analog indeed, but it's still an analog, right? I mean, I don't take you to mean that an AIXI 'brain' would literally form a class-M planet inside its CPU so as to better understand the sentient beings on that planet. The AIXI brain would just be thinking about the ideas that govern the behavior of the sentient beings...and thinking about ideas, even very precisely, doesn't make the ideas real.
I might be missing something here; I'd appreciate it if you could point out the flaw(s) in my logic.
Substrate independence, functionalism, even the generalized anti-zombie principle--all of these have been covered in some depth on Less Wrong before. Much of it is in the Sequences, like nonperson predicates and some of the links from it.
If you don't believe an emulated mind can be conscious, do you believe that your mind is noncomputable or that meat has special computational properties?
I buy that. That sort of model could probably exist.
That sort of zombie can't possibly exist.
It's not that I don't believe an emulated mind can be conscious. Perhaps it could. What boggles my mind is the assertion that emulation is sufficient to make a mind conscious -- that there exists a particular bunch of equations and algorithms such that when they are written on a piece of paper they are almost certainly non-conscious, but when they are run through a Turing machine they are almost certainly conscious.
I have no opinion about whether my mind is computable. It seems likely that a reasonably good model of my mind might be computable.
I'm not sure what to make of the proposition that meat has special computational properties. I wouldn't put it that way, especially since I don't like the connotation that brains are fundamentally physically different from rocks. My point isn't that brains are special; my point is that matter-energy is special. Existence, in the physical sense, doesn't seem to me to be a quality that can be specified in an equation or an algorithm. I can solve Maxwell's equations all day long and never create a photon from scratch.
That doesn't necessarily mean that photons have special computational properties; it just means that even fully computable objects don't come into being by virtue of their having been computed. I guess I don't believe in substrate independence?
There are several reasons this is mind-boggling, but they stem from a false intuition pump: consciousness like your own requires vastly more information than could be written down on a piece of paper.
Here is a much better way of thinking about it. From physics, neuroscience, and related fields we know that the pattern identity of human-level consciousness (consciousness isn't a simple boolean quality) is essentially encoded in the synaptic junctions, and corresponds to roughly 10^15 bits. Those bits are you.
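A back-of-envelope sketch of where a figure like 10^15 bits could come from (the per-neuron and per-synapse numbers are rough assumptions of mine, not from the comment):

```python
import math

# Order-of-magnitude assumptions: ~8.6e10 neurons, ~1e4 synapses per neuron,
# ~1 bit of pattern identity per synapse.
neurons = 8.6e10
synapses_per_neuron = 1e4
bits_per_synapse = 1

total_bits = neurons * synapses_per_neuron * bits_per_synapse
print(f"~10^{round(math.log10(total_bits))} bits")  # prints "~10^15 bits"
```

The exact per-synapse information content is debatable, but the product lands around 10^15 bits across a wide range of plausible inputs, which is far more than fits on a piece of paper.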
Now if we paused your brain activity with chemicals, or we froze it, you would cease to be conscious, but would still exist because there is the potential to regain conscious activity in the future. So consciousness as a state is an active computational process that requires energy.
So at the end of the day, consciousness is a particular computational process (energy) running on a particular arrangement of bits (matter).
There are many other equivalent ways of representing that particular arrangement, and the generality of Turing machines is such that a sufficiently powerful computer is an arrangement of mass (bits) that, with sufficient energy (computation), can represent any other system that can possibly exist. Anything. Including human consciousness.
I think you've successfully analyzed your beliefs, as far as you've gone: it does seem that "substrate independence" is something you don't believe in. However, "substrate independence" is not an indivisible unit; it's composed of parts which you do seem to believe in.
For instance, you seem to accept that a highly detailed model of EY, whether that means functionally emulating his neurons and glial cells or actually computing his Hamiltonian, will claim to be him, for much the same reason he does. If we then simulate, at whatever level is appropriate to our simulated EY, a highly detailed model of his house and neighborhood that evolves according to the same rules the real-life versions do, he will think the same things about them that the real-life EY does.
If we go on to simulate the rest of the universe, including all the other people in it, with the same degree of fidelity, no observation or piece of evidence other than the anthropic could tell them they're in a simulation.
Bear in mind that nothing magic happens when these equations go from paper to computer: if you had the time, a low enough mathematical error rate, and enough notebook space to sit down and work everything out on paper, the consequences would be the same. It's a slippery concept to work one's intuition around, but xkcd #505 gives as good an intuition pump as I've seen.
I don't think you can make this distinction meaningful. After all, what's an electron? Just a pattern in the electron field...
This isn't actually what I meant by computationalism (although I was using the word from memory, and my concept may differ from the philosopher's definition).
The idea that the mere specification of formal relationships, that mere math in theory, can cause worlds to exist is a separate position from basic computationalism, and I don't buy it.
A formal mathematical system needs to actually be computed to be real. That is what causes time to flow in the child virtual universe. And in our physics, that requires energy in the parent universe; it also requires mass to represent bits. So computation can't just arise out of nothing: it requires computational elements in a parent universe organized in the right way.
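One way to make "computation requires energy" concrete is Landauer's principle: erasing a bit dissipates at least kT ln 2 of energy. A minimal sketch, reusing the ~10^15-bit figure from earlier in the thread (the temperature and bit count are illustrative assumptions):

```python
import math

# Landauer bound: minimum energy to erase N bits at temperature T is N*k*T*ln(2).
k_B = 1.380649e-23   # Boltzmann constant, J/K
T = 300.0            # assumed room temperature, K
bits = 1e15          # the brain-state estimate discussed above

min_energy = bits * k_B * T * math.log(2)
print(f"{min_energy:.1e} J")  # on the order of microjoules
```

The bound is tiny in everyday terms, but it is strictly positive: any parent universe hosting the computation must pay it, which is the point being made here.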
khafra's replies are delving deeper into the philosophical background, so I don't need to add much more.