jacob_cannell comments on Anthropomorphic AI and Sandboxed Virtual Universes - Less Wrong

-3 Post author: jacob_cannell 03 September 2010 07:02PM


Comment author: jacob_cannell 03 September 2010 11:33:24PM *  9 points [-]

I'm not too concerned about the karma - more about the lack of interesting replies and the generally unjustified holier-than-thou attitude. This idea is different from "that alien message", and I didn't find a discussion of it on LW (not that one doesn't exist - I just didn't find it).

  1. This is not my first post.
  2. I posted this after I brought up the idea in a comment which at least one person found interesting.
  3. I have spent significant time reading LW and associated writings before I ever created an account.
  4. I've certainly read the AI-in-a-box posts, and the posts theorizing about the nature of smarter-than-human-intelligence. I also previously read "that alien message", and since this is similar I should have linked to it.
  5. I have a knowledge background that leads to somewhat different conclusions about A. the nature of intelligence itself, B. what 'smarter' even means, and so on.
  6. Different backgrounds mean different assumptions, so I listed my background and starting assumptions, as they differ somewhat from the LW norm.

Back to 3:

Remember, the whole plot device of "that alien message" revolved around a large and obvious grand reveal by the humans. If information can only flow into the sim world once (during construction), and then ever after can only flow out of the sim world, that plot device doesn't work.

Trying to keep an AI boxed up where the AI knows that you exist is a fundamentally different problem than a box where the AI doesn't even know you exist, doesn't even know it is in a box, and may provably not even have enough information to know for certain whether it is in a box.

For example, I think the simulation argument holds water (we are probably in a sim), but I don't believe there is enough information in our universe for us to discover much of anything about the nature of a hypothetical outside universe.

This of course doesn't prove that my weak or strong Mind Prison conjectures are correct, but it at least reduces the problem down to "can we build a universe sim as good as this?"

Comment author: orthonormal 04 September 2010 12:46:00AM 1 point [-]

My apologies on assuming this was your first post, etc. (I still really needed to add that bit to the Welcome post, though.)

In short, faking atheism requires a very simple world-seed (anything more complicated screams "designer of a certain level" once they start thinking about it). I very much doubt we could find such a seed which could be run from start to civilization in a feasible number of computations. (If we cheated and ran forward approximations to the simulation, for instance, they could find the traces of the difference.)

Similarly, faking Omegahood requires a very finely optimized programmed world, because one thing Omega is less likely to do than a dumb creator is program sloppily.

Comment author: jacob_cannell 04 September 2010 06:59:27PM 1 point [-]

I very much doubt we could find such a seed which could be run from start to civilization in a feasible number of computations.

We have the seed - it's called physics, and we certainly don't need to run it from start to civilization!

On the one hand I was discussing sci-fi scenarios that have an intrinsic explanation for a small human population (such as a sleeper ship colony encountering a new system).

And on the other hand you can do big partial simulations of our world, and if you don't have enough AIs to play all the humans you could use simpler simulacra to fill in.

Eventually with enough Moore's Law you could run a large sized world on its own, and run it considerably faster than real time. But you still wouldn't need to start that long ago - maybe only a few generations.

(If we cheated and ran forward approximations to the simulation, for instance, they could find the traces of the difference.)

Could != would. You grossly underestimate how impossibly difficult this would be for them.

Again - how do you know you are not in a sim?

Comment author: orthonormal 04 September 2010 10:44:55PM 1 point [-]

Again - how do you know you are not in a sim?

You misunderstand me. What I'm confident about is that I'm not in a sim written by agents who are dumber than me.

Comment author: wedrifid 05 September 2010 01:06:20AM 1 point [-]

You misunderstand me. What I'm confident about is that I'm not in a sim written by agents who are dumber than me.

Not even agents with really fast computers?

Comment author: orthonormal 06 September 2010 01:20:56AM 0 points [-]

You're right, of course. I'm not in a sim written by agents dumber than me in a world where computation has noticeable costs (negentropy, etc).

Comment author: jacob_cannell 04 September 2010 10:52:00PM *  1 point [-]

How do you measure that intelligence?

What I'm trying to show is a set of techniques where a civilization could spawn simulated sub-civilizations such that the total effective intelligence capacity is mainly in the simulations. That doesn't have anything to do with the maximum intelligence of individuals in the sim.

Intelligence is not magic. It has strict computational limits.

A small population of guards can control a much larger population of prisoners. The same principle applies here. It's all about leverage. And creating an entire sim universe is a massive, massive lever of control. Ultimate control.

Comment author: Houshalter 04 September 2010 03:00:18AM 1 point [-]

In short, faking atheism requires a very simple world-seed (anything more complicated screams "designer of a certain level" once they start thinking about it). I very much doubt we could find such a seed which could be run from start to civilization in a feasible number of computations. (If we cheated and ran forward approximations to the simulation, for instance, they could find the traces of the difference.)

Similarly, faking Omegahood requires a very finely optimized programmed world, because one thing Omega is less likely to do than a dumb creator is program sloppily.

Well the simulation can make the rules of the universe extremely simple, not close approximations to something like our universe where they could see the approximations and catch on.

Or you could use close monitoring, possibly by another less dangerous, less powerful AI trained to detect bugs, bug abuse, and AIs that are catching on. Humans would also monitor the sim. The most important thing is that the AIs are misled as much as possible and given little or no input that could give them a picture of the real world and their actual existence.

And lastly, they should be kept dumb. A large number of not-too-bright AIs is far less dangerous, easier to monitor, and faster to simulate than a massive singular AI. The large group is also a closer approximation to humanity, which I believe was the original intent of this simulation.

Comment author: jacob_cannell 04 September 2010 06:55:16PM 2 points [-]

Well the simulation can make the rules of the universe extremely simple, not close approximations to something like our universe where they could see the approximations and catch on

So actually, even today with computer graphics we have the tech to trace light and approximate all of the important physical interactions down to the level where a human observer in the sim could not tell the difference. It's too computationally expensive today, but it is only, say, a decade or two away, perhaps less.

You can't see the approximations because you just don't have enough sensing resolution in your eyes, and because in this case these beings will have visual systems that will have grown up inside the Matrix.

It will be much easier to fool them. It's actually not even necessary to strictly approximate our reality - if the AIs' visual systems have grown up completely in the Matrix, they will be tuned to the statistical patterns of the Matrix, not our reality.

And lastly, they should be kept dumb.

I don't see how they need to be kept dumb. Intelligent humans (for the most part) do not suddenly go around thinking that they are in a sim and trying to break free. It's not really a function of intelligence.

If the world is designed to be as realistic and, more importantly, as consistent as our universe, AIs will not have enough information to speculate on our universe. It would be pointless - like arguing about god.

Comment author: Houshalter 04 September 2010 08:41:13PM 0 points [-]

So actually, even today with computer graphics we have the tech to trace light and approximate all of the important physical interactions down to the level where a human observer in the sim could not tell the difference. Its too computationally expensive today, but it is only say a decade or two away, perhaps less.

Maybe not on your laptop, but I think we do have the resources today to pull it off, especially considering the entities in the simulation do not see time pass in the real world. The simulation could pause for a day to compute some massive event in the simulated world, or skip through a century in seconds because the entities in the simulation weren't doing much.
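The decoupling of simulated time from wall-clock time can be made concrete with a toy event-driven loop (an editor's sketch; the events and costs below are invented purely for illustration):

```python
import heapq

def run(events):
    """Advance simulated time event by event; only the compute cost of
    each event - not the span of simulated time it covers - costs
    real-world work."""
    sim_time = 0.0
    wall_ops = 0
    heap = list(events)                # (sim_timestamp, cost_in_ops, label)
    heapq.heapify(heap)
    while heap:
        sim_time, cost, _label = heapq.heappop(heap)
        wall_ops += cost               # a massive event stalls wall time...
    return sim_time, wall_ops          # ...a quiet century is nearly free

events = [(1.0, 10, "quiet day"), (100.0, 10_000, "massive event")]
print(run(events))                     # (100.0, 10010)
```

The point of the sketch: a century of simulated quiet costs almost nothing, while one expensive event can stall the outside clock for a day, and the inhabitants can't tell the difference either way.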

And this is why I keep bringing up using an AI to create/monitor the simulation in the first place. A massive project like this undertaken by human programmers is bound to contain dangerous bugs. More importantly, humans won't be able to optimize the program very well. Methods of improving program performance we have today, like hashing, caching, pipelining, etc., are not optimal by any means. You can safely let an AI in a box optimize the program without it exploding or anything.

I don't see how they need to be kept dumb. Intelligent humans (for the most part) do not suddenly go around thinking that they are in a sim and trying to break free. It's not really a function of intelligence.

If the world is designed to be as realistic and, more importantly, as consistent as our universe, AIs will not have enough information to speculate on our universe. It would be pointless - like arguing about god.

"Dumb" as in at human level or lower, as opposed to a massive singular super entity. It is much easier to monitor the thoughts of a bunch of AIs than a single one. Arguably it would still be impossible, but at the very least you know they can't do much on their own; they would have to communicate with one another, communication you can monitor. Multiple entities are also very similar and redundant, saving you a lot of computation.

So you can make them all as intelligent as Einstein, but not as intelligent as Skynet.

Comment author: jacob_cannell 04 September 2010 10:17:48PM 1 point [-]

The simulation could pause for a day to compute some massive event in the simulated world, or skip through a century in seconds because the entities in the simulation weren't doing much.

This is an interesting point - time flow would be quite nonlinear - but the simulation's utility is closely correlated with its speed. In fact, if we can't run it at least at real-time average speed, it's not all that useful.

You bring me round to an interesting idea though: in the simulated world, the distribution of intelligence could be much tighter or shifted compared to our world.

I expect it will be very interesting and highly controversial in our world when we, say, reverse engineer the brain and find a large variation in the computational cost of AI mind-sims of equivalent capability. A side effect of reverse engineering the brain will be a much more exact and precise understanding of IQ-type correlates, for example.

And this is why I keep bringing up using an AI to create/monitor the simulation in the first place.

This is surely important, but it defeats the whole point if the monitor AI approaches the complexity of the sim AI. You need a multiplier effect.

And just as a small number of guards can control a huge prison population in a well designed prison, the same principle should apply here - a smaller intelligence (that controls the sim directly) could indirectly control a much larger total sim intelligence.

"Dumb" as in at human level or lower, as opposed to a massive singular super entity.

A massive singular super entity, as sometimes implied on this site, I find not only improbable but actually physically impossible (at least until you get to black-hole-computer levels of technology).

Arguably it would still be impossible, but at the very least you know they can't do much on their own; they would have to communicate with one another, communication you can monitor.

I think you underestimate how (relatively) easy the monitoring aspect would be (compared to other aspects). Combine dumb-AI systems to automatically turn internal monologue into text (or audio if you wanted), feed it into future Google-type search and indexing algorithms - and you have the entire sim-world's thoughts at your fingertips. Using this kind of lever, one human-level intelligent operator could monitor a vast number of other intelligences.

Heck, the CIA is already trying to do a simpler version of this today.
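The leverage described here is essentially that of a search engine over thought transcripts. A toy sketch of the idea (an editor's illustration; the mind names and monologues are invented):

```python
from collections import defaultdict

def build_index(monologues):
    """Invert mind -> monologue into word -> set of minds, so a single
    operator can query every sim-mind's recorded thoughts at once."""
    index = defaultdict(set)
    for mind, text in monologues.items():
        for word in text.lower().split():
            index[word].add(mind)
    return index

monologues = {
    "mind_001": "the harvest was good this year",
    "mind_002": "is this world a simulation",
}
index = build_index(monologues)
print(sorted(index["simulation"]))   # ['mind_002'] - flagged for review
```

Building the index is linear in the total volume of monologue; querying it is nearly free, which is where the one-guard-to-many-prisoners multiplier comes from.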

So you can make them all as intelligent as Einstein, but not as intelligent as Skynet.

A Skynet-type intelligence is a fiction anyway, and I think if you really look at the limits of intelligence and AGI, a bunch of accelerated high-IQ human-ish brains are much closer to those limits than most here would give credence to.

Comment author: wedrifid 05 September 2010 12:53:49AM *  3 points [-]

A Skynet-type intelligence is a fiction anyway, and I think if you really look at the limits of intelligence and AGI, a bunch of accelerated high-IQ human-ish brains are much closer to those limits than most here would give credence to.

I can confirm the part about the credence. I think this kind of reverence for the efficacy of the human brain is comical.

Human technological civilisation exploded roughly speaking in an evolutionary heartbeat from the time it became capable of doing so. The chance that this capability opened up at just the moment when human intelligence was at even the maximum that DNA encoded ape descended brains could reach is negligible.

Comment author: jacob_cannell 05 September 2010 02:11:02AM *  -2 points [-]

EDIT: Improved politeness.

I think this kind of reverence for the efficacy of the human brain is comical.

The acknowledgement and analysis of the efficacy of the single practical example of general intelligence that we do have does not imply reverence. Efficacy is a relative term. Do we have another example of a universal intelligence to compare to?

Perhaps you mean efficacy in comparison to a hypothetical optimal universal intelligence. We have only AIXI and its variants, which are optimal only in terms of maximum intelligence in the limit, but are grossly inferior in terms of practicality and computational efficiency.

There is a route to analyzing the brain's efficacy: it starts with analyzing it as a computational system and comparing its performance to the best known algorithms.

The problem is the brain has a circuit with ~10^14-10^15 circuit elements - about the same amount of storage - and it only cycles at around 100 Hz. That is 10^16 to 10^17 net switches/second.

A current desktop GPU has > 10^9 circuit elements and a speed of over 10^9 cycles per second. That is > 10^18 net switches/second.
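The raw-rate arithmetic above checks out on its own terms (the figures below are the comment's rough estimates, not measurements):

```python
# Brain: ~10^14-10^15 circuit elements (synapses) cycling at ~100 Hz.
brain_elements = (1e14, 1e15)
brain_rate_hz = 100
brain_switches = tuple(n * brain_rate_hz for n in brain_elements)
print(brain_switches)        # (1e+16, 1e+17) net switches/second

# 2010-era desktop GPU: ~10^9 elements at ~10^9 cycles/second.
gpu_switches = 1e9 * 1e9
print(gpu_switches)          # 1e+18 - already ahead in raw switch rate
```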

And yet we have no algorithm, running even on a supercomputer, which can beat the best humans at Go. Let alone read a book, pilot a robotic body at human level, write a novel, come up with a funny joke, patent an idea, or even manage a McDonald's.

For one particular example, take the case of the game Go and compare to potential parallel algorithms that could run on a 100 Hz computer, that have zero innate starting knowledge of Go, and can beat human players by simply learning the game.

Go is one example, but if you go from checkers to chess to Go and keep going in that direction, you get into the large exponential search spaces where the brain's learning algorithms appear to be especially efficient.

Human technological civilisation exploded roughly speaking in an evolutionary heartbeat from the time it became capable of doing so

Your assumption seems to be that civilization and intelligence are somehow coded in our brains.

According to the best current theory I have found, our brains are basically just upsized ape brains with one new extremely important trick: we became singing apes (a few other species sing), but then got a lucky break when the vocal control circuit for singing connected to a general simulation-thought circuit (the task-negative and task-positive paths) - thus allowing us to associate song patterns with visual/audio objects.

It's also important to point out that some songbirds appear to be just on the cusp of this capability, with much smaller brains. It's not really a size issue.

Technology and all that is a result of language - memetics - culture. It's not some miracle of our brains. They appear to be just large ape brains with perhaps one new critical trick.

Some whale species have much larger brains and in some sense probably have a higher intrinsic genetic IQ. But this doesn't really matter, because intelligence depends on memetic knowledge.

If Einstein had been a feral child raised by wolves, he would have the exact same brain but would be literally mentally retarded on our scale of intelligence.

Genetics can limit intelligence, but it doesn't provide it.

The chance that this capability opened up at just the moment when human intelligence was at even the maximum that DNA encoded ape descended brains could reach is negligible.

In 3 separate lineages - whales, elephants, and humans - the mammalian brain grew to about the same upper capacity and then petered out (100 to 200 billion neurons). The likely hypothesis is that we are near some asymptotic limit in neural-net brain space: a sweet spot. Increasing size further would have too many negative drawbacks - such as the speed hit due to slow maximum signal transmission.

Comment author: timtyler 05 September 2010 02:16:09AM 2 points [-]

I think this kind of reverence for the efficacy of the human brain is comical.

When we have a computer go champion, your comment will become slightly more sensical.

You seriously can't see that one coming?

Comment author: jacob_cannell 05 September 2010 02:27:49AM 0 points [-]

I'd bet it's 5 years away, perhaps? But it only illustrates my point - because by some measures computers are already more powerful than the brain, which makes its wiring all the more impressive.

Comment author: Vladimir_Nesov 05 September 2010 08:32:48AM 0 points [-]

Come back when you have an algorithm that runs on a 100hz computer, that has zero starting knowledge of go, and can beat human players by simply learning about go.

Demand for particular proof.

Comment author: jacob_cannell 05 September 2010 07:09:04PM *  0 points [-]

The original comment was:

I think this kind of reverence for the efficacy of the human brain is comical

Which is equivalent to saying "I think this kind of reverence for the efficacy of Google is comical", and saying or implying you can obviously do better.

So yes, when there is a clear reigning champion, to say or imply it is 'inefficient' is nonsensical, and making that claim stick requires something of substance, not just congratulatory back-patting and cryptic references to unrelated posts.

Comment author: jacob_cannell 05 September 2010 06:10:46PM -2 points [-]

Demand what? A proof that the brain runs at ~100 Hz? This is well known - see Wikipedia on neurons.

Comment author: Perplexed 05 September 2010 03:17:16AM 1 point [-]

In 3 separate lineages - whales, elephants, and humans, the mammalian brain all grew to about the same upper size and then petered out. The likely hypothesis is that we are near some asymptotic limit in neural-net brain space. Increasing size further would have too much of a speed hit.

Could you expand on this, and provide a link, if you have one?

Comment author: timtyler 05 September 2010 03:23:50AM *  1 point [-]

Brain size across all animals is pretty variable.

Comment author: jacob_cannell 05 September 2010 04:09:03AM *  0 points [-]

Tim fetched some size data below, but you also need to compare cortical surface area - and the most accurate comparison should use neuron and synapse counts in the cortex. The human brain had a much stronger size constraint that would tend to make neurons smaller (to the extent possible), and shrink-optimize everything - due to our smaller body size.

The larger a brain, the more time it takes to coordinate circuit trips around it. Humans (and I presume other mammals) can make some decisions in 100-200 ms - which is just a dozen or so neuron firings. That severely limits the circuit path length. Neuron signals do not move anywhere near the speed of light.
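The timing claim is easy to sanity-check (the conduction-speed figure below is a textbook-range assumption of mine, not from the comment):

```python
# A 100-200 ms decision at ~100 Hz leaves room for only 10-20 sequential firings.
decision_ms = (100, 200)
cycle_ms = 10                        # one neuron cycle at ~100 Hz
firings = tuple(t // cycle_ms for t in decision_ms)
print(firings)                       # (10, 20) - "a dozen or so"

# Fast myelinated axons conduct at roughly 100 m/s - nowhere near light speed.
brain_width_m = 0.15
axon_speed_mps = 100
crossing_ms = brain_width_m / axon_speed_mps * 1000
print(crossing_ms)                   # ~1.5 ms per brain crossing
```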

Wikipedia has a page comparing brain neuron counts

It estimates whales and elephants at 200 billion neurons and humans at around 100 billion. There is a large range of variability in human brain sizes, and the upper end of the human scale may be 200 billion.

this page has some random facts

Of interest: Average number of neurons in the brain (human) = 100 billion; in the cerebral cortex = 10 billion

Total surface area of the cerebral cortex(human) = 2,500 cm2 (2.5 ft2; A. Peters, and E.G. Jones, Cerebral Cortex, 1984)

Total surface area of the cerebral cortex (cat) = 83 cm2

Total surface area of the cerebral cortex (African elephant) = 6,300 cm2

Total surface area of the cerebral cortex (Bottlenosed dolphin) = 3,745 cm2 (S.H. Ridgway, The Cetacean Central Nervous System, p. 221)

Total surface area of the cerebral cortex (pilot whale) = 5,800 cm2

Total surface area of the cerebral cortex (false killer whale) = 7,400 cm2

In whale brains at least, it appears the larger size is more related to extra glial cells and other factors:

http://www.scientificamerican.com/blog/60-second-science/post.cfm?id=are-whales-smarter-than-we-are

Also keep in mind that the core cortical circuit that seems to do all the magic was invented in rats or their precursors and has been preserved in all these lineages with only minor variations.

Comment author: timtyler 05 September 2010 01:15:50AM *  2 points [-]

A Skynet-type intelligence is a fiction anyway, and I think if you really look at the limits of intelligence and AGI, a bunch of accelerated high-IQ human-ish brains are much closer to those limits than most here would give credence to.

What - no Jupiter brains?!? Why not? Do you need a data center tour?

Comment author: jacob_cannell 05 September 2010 02:23:50AM *  2 points [-]

I like the data center tour :) - I've actually used that in some of my posts elsewhere.

And no, I think Jupiter Brains are ruled out by physics.

The locality of physics - the speed of light - really limits the size of effective computational systems. You want them to be as small as possible.

Given the choice between a planet-sized computer and one that was 10^10 times smaller, the latter would probably be the better option.

The maximum bits, and thus storage, is proportional to the mass, but the maximum efficiency is inversely proportional to the radius. Larger systems lose efficiency in transmission, have trouble radiating heat, and waste vast amounts of time on speed-of-light delays.
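To put rough numbers on the radius penalty (the two radii below are illustrative choices of the editor, not figures from the comment):

```python
C = 3.0e8                           # speed of light in vacuum, m/s

def crossing_time_s(radius_m):
    """One-way, light-speed signal time across a sphere of the given radius."""
    return 2 * radius_m / C

planet = crossing_time_s(6.4e6)     # Earth-radius computer: ~43 ms per crossing
small = crossing_time_s(0.5)        # meter-scale device: ~3 ns per crossing
print(planet / small)               # ~1.3e7: that many more signal round
                                    # trips per second for the small machine
```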

As an interesting side note, in three very separate lineages (human, elephant, cetacean), mammalian brains all grew to around the same size and then stopped - most likely because of diminishing returns. Human brains are expensive for our body size, but whales have similar-sized brains and it would be very cheap for them to make them bigger - but they don't. It's a scaling issue: any bigger and the speed loss doesn't justify the extra memory.

There are similar scaling issues with body sizes. Dinosaurs and prehistoric large mammals represent an upper limit - mass increases with volume, but bone strength increases only with cross-sectional area - so eventually the body becomes too heavy for any reasonable bones to support.

Similar 3d/2d scaling issues limited the maximum size of tanks, and they also apply to computers (and brains).
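The square-cube argument in the last two paragraphs reduces to one line of algebra:

```python
def stress_ratio(scale):
    """Relative skeletal stress after multiplying every linear dimension
    by `scale`: weight grows with volume (scale**3), bone strength with
    cross-sectional area (scale**2), so stress grows linearly with size."""
    return scale ** 3 / scale ** 2

assert stress_ratio(2.0) == 2.0     # twice the height, twice the stress
print(stress_ratio(10.0))           # 10.0 - why giant bodies give out
```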

Comment author: timtyler 05 September 2010 02:33:54AM *  1 point [-]

The maximum bits and thus storage is proportional to the mass, but the maximum efficiency is inversely proportional to radius. Larger systems lose efficiency in transmission, have trouble radiating heat, and waste vast amount of time because of speed of light delays.

So: why think memory and computation capacity aren't important? The data centre that will be needed to immerse 7 billion humans in VR is going to be huge - and why stop there?

The 22 milliseconds it takes light to get from one side of the Earth to the other is tiny - light speed delays are a relatively minor issue for large brains.

For heat, ideally, you use reversible computing, digitise the heat and then pipe it out cleanly. Heat is a problem for large brains - but surely not a show-stopping one.

The demand for extra storage seems substantial. Do you see any books or CDs when you look around? The human brain isn't big enough to handle the demand, and so it is outsourcing its storage and computing needs.

Comment author: jacob_cannell 05 September 2010 03:02:05AM 0 points [-]

So: why think memory and computation capacity aren't important?

So memory is important, but it scales with mass, and that usually scales with volume, so there is a tradeoff. And computational capacity is not directly related to size; it's more related to energy. But of course you can only pack so much energy into a small region before it melts.

The data centre that will be needed to immerse 7 billion humans in VR is going to be huge - and why stop there?

Yeah - I think the size argument is more against a single big global brain. But sure, data centers with huge numbers of AIs eventually - makes sense.

The 22 milliseconds it takes light to get from one side of the Earth to the other is tiny - light speed delays are a relatively minor issue for large brains.

Hmm, 22 milliseconds? Light travels a little slower through fiber, and there are always delays. But regardless, the bigger problem is you are assuming the slow human thought rate - 100 Hz. If you want to think at the limits of silicon and get thousands or millions of times accelerated, then suddenly the subjective speed of light becomes very slow indeed.
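The subjective-slowdown point in numbers (the speedup factors are illustrative assumptions; the delay is the figure quoted upthread):

```python
delay_s = 0.022                      # round-the-Earth figure quoted upthread

for speedup in (1, 1_000, 1_000_000):
    subjective_s = delay_s * speedup
    print(f"{speedup:>9}x thought rate: delay feels like {subjective_s:,.3f} s")
# At a millionfold speedup, 22 ms of light lag feels like ~6 subjective hours.
```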

Comment author: Houshalter 04 September 2010 11:27:16PM 1 point [-]

A massive singular super entity, as sometimes implied on this site, I find not only improbable but actually physically impossible (at least until you get to black-hole-computer levels of technology).

A Skynet-type intelligence is a fiction anyway, and I think if you really look at the limits of intelligence and AGI, a bunch of accelerated high-IQ human-ish brains are much closer to those limits than most here would give credence to.

On the one hand you have extremely limited AIs that can't communicate with each other. They would be extremely redundant and waste a lot of resources, because each would have to do the exact same processing and discover the exact same things on its own.

On the other hand you have a massive singular AI individual made up of thousands of computing systems, each of which is devoted to storing separate information and doing a separate task. Basically it's a human-like brain distributed over all available resources. This will inevitably fail as well; operations done on one side of the system could be light years away (we don't know how big the AI will get or what the constraints of its situation will be, but AGI has to adapt to every possible situation) from where the data is needed.

The best is a combination of the two: as much communication through the network as possible, but with areas of resources specialized for different purposes. This could lead to Skynet-like intelligences, or it could lead to a very individualistic AI society where the AI isn't a single entity but a massive variety of individuals in different states working together. It probably wouldn't be much like human civilization, though. Human society evolved to fit a variety of restrictions that aren't present in AI. That means it could adopt a very different structure; things like morals (as we know them, anyway) may not be necessary.

Comment author: Mass_Driver 04 September 2010 07:03:26AM 0 points [-]

I wish I could vote up this comment more than once.

Comment author: jacob_cannell 04 September 2010 06:00:21PM 1 point [-]

Thanks. :)