timtyler comments on Anthropomorphic AI and Sandboxed Virtual Universes - Less Wrong

-3 Post author: jacob_cannell 03 September 2010 07:02PM


Comment author: jacob_cannell 04 September 2010 10:17:48PM 1 point

Whether the simulation pauses for a day to compute some massive event in the simulated world, or it skips through a century in seconds because the entities in the simulation weren't doing much.

This is an interesting point: time flow would be quite nonlinear. But the simulation's utility is closely correlated with its speed - in fact, if we can't run it at real-time average speed or better, it's not all that useful.

You bring me round to an interesting idea, though: in the simulated world, the distribution of intelligence could be much tighter or shifted compared to our world.

I expect it will be very interesting, and highly controversial in our world, when we (say) reverse engineer the brain and find a large variation in the computational cost of AI mind-sims of equivalent capability. A side effect of reverse engineering the brain will be a much more exact and precise understanding of IQ-type correlates, for example.

And this is why I keep bringing up using an AI to create/monitor the simulation in the first place.

This is surely important, but it defeats the whole point if the monitor AI approaches the complexity of the sim AI. You need a multiplier effect.

And just as a small number of guards can control a huge prison population in a well-designed prison, the same principle should apply here - a smaller intelligence (one that controls the sim directly) could indirectly control a much larger total sim intelligence.

"Dumb" as in at human level or lower, as opposed to a massive singular super entity.

A massive singular super entity, as sometimes implied on this site, I find not only improbable but physically impossible (at least until you get to black-hole-computer levels of technology).

Arguably it would still be impossible, but at the very least you know they can't do much on their own and would have to communicate with one another - communication you can monitor.

I think you underestimate how (relatively) easy the monitoring aspect would be (compared to other aspects). Combine dumb-AI systems to automatically turn internal monologue into text (or audio if you wanted), feed it into future Google-type search and indexing algorithms - and you have the entire sim-world's thoughts at your fingertips. Using this kind of lever, one human-level intelligent operator could monitor a vast number of other intelligences.

Heck, the CIA is already trying to do a simpler version of this today.

So you can make them all as intelligent as Einstein, but not as intelligent as Skynet.

A Skynet-type intelligence is a fiction anyway, and I think if you really look at the limits of intelligence and AGI, a bunch of accelerated high-IQ human-ish brains are much closer to those limits than most here would give credence to.

Comment author: timtyler 05 September 2010 01:15:50AM *  2 points

A Skynet-type intelligence is a fiction anyway, and I think if you really look at the limits of intelligence and AGI, a bunch of accelerated high-IQ human-ish brains are much closer to those limits than most here would give credence to.

What - no Jupiter brains?!? Why not? Do you need a data center tour?

Comment author: jacob_cannell 05 September 2010 02:23:50AM *  2 points

I like the data center tour :) - I've actually used that in some of my posts elsewhere.

And no, I think Jupiter Brains are ruled out by physics.

The locality of physics - the speed of light - really limits the size of effective computational systems. You want them to be as small as possible.

Given the choice between a planet sized computer and one that was 10^10 smaller, the latter would probably be a better option.

The maximum number of bits, and thus storage, is proportional to the mass, but the maximum efficiency is inversely proportional to the radius. Larger systems lose efficiency in transmission, have trouble radiating heat, and waste vast amounts of time because of speed-of-light delays.
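To put rough numbers on the speed-of-light tradeoff: a signal crossing a machine of radius r takes at least 2r/c. A minimal sketch, assuming Earth's radius for the planet-sized machine (the comparison and function name are illustrative, not from the comment):

```python
C = 299_792_458.0        # speed of light, m/s
EARTH_RADIUS = 6.371e6   # m

def crossing_delay(radius_m):
    """Minimum one-way light delay across a machine of the given radius (seconds)."""
    return 2 * radius_m / C  # signal traverses the full diameter

planet = crossing_delay(EARTH_RADIUS)          # planet-sized computer
compact = crossing_delay(EARTH_RADIUS / 1e10)  # 10^10 times smaller

print(f"planet-sized:  {planet * 1e3:.1f} ms per crossing")
print(f"10^10 smaller: {compact * 1e12:.2f} ps per crossing")
```

The planet-sized machine pays tens of milliseconds per internal round trip; the compact one pays picoseconds.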

As an interesting side note, in three quite separate lineages (human, elephant, cetacean), mammalian brains all grew to around the same size and then stopped - most likely because of diminishing returns. Human brains are expensive for our body size, but whales have similar-sized brains and it would be very cheap for them to make them bigger - yet they don't. It's a scaling issue: any bigger and the speed loss doesn't justify the extra memory.

There are similar scaling issues with body sizes. Dinosaurs and prehistoric large mammals represent an upper limit - mass increases with volume, but bone strength increases only with cross-sectional area - so eventually the body becomes too heavy for any reasonable bones to support.

Similar 3D/2D scaling issues limited the maximum size of tanks, and they also apply to computers (and brains).
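The square-cube argument above can be put in rough numbers (the scale factors below are arbitrary illustrations, not from the comment): mass grows as L^3 while load-bearing cross-section grows as L^2, so structural stress grows linearly with L.

```python
def relative_stress(scale):
    """Relative stress on the supporting structure when every linear dimension
    is multiplied by `scale`: mass ~ scale**3, cross-section ~ scale**2,
    so stress = mass / area ~ scale."""
    mass = scale ** 3
    area = scale ** 2
    return mass / area

for s in (1, 2, 10):
    print(f"scale x{s}: stress x{relative_stress(s):g}")
```

Double every dimension and the bones bear twice the stress; ten times bigger, ten times the stress - which is why size runs into a wall well before material limits would suggest.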

Comment author: timtyler 05 September 2010 02:33:54AM *  1 point

The maximum number of bits, and thus storage, is proportional to the mass, but the maximum efficiency is inversely proportional to the radius. Larger systems lose efficiency in transmission, have trouble radiating heat, and waste vast amounts of time because of speed-of-light delays.

So: why think memory and computation capacity aren't important? The data centre that will be needed to immerse 7 billion humans in VR is going to be huge - and why stop there?

The 22 milliseconds it takes light to get from one side of the Earth to the other is tiny - light speed delays are a relatively minor issue for large brains.

For heat, ideally, you use reversible computing, digitise the heat and then pipe it out cleanly. Heat is a problem for large brains - but surely not a show-stopping one.

The demand for extra storage seems substantial. Do you see any books or CDs when you look around? The human brain isn't big enough to handle the demand, and so it outsources its storage and computing needs.

Comment author: jacob_cannell 05 September 2010 03:02:05AM 0 points

So: why think memory and computation capacity aren't important?

Memory is important, but it scales with mass, and mass usually scales with volume, so there is a tradeoff. And computational capacity is not directly related to size - it's more related to energy. But of course you can only pack so much energy into a small region before it melts.
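One way to make "computation scales with energy, not size" concrete is Landauer's bound: each irreversible bit erasure at temperature T costs at least kT ln 2 of energy. A rough sketch (the 1 W / 300 K figures are arbitrary example values, not from the comment):

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def max_bit_erasures_per_sec(power_watts, temp_kelvin=300.0):
    """Landauer bound: upper limit on irreversible bit erasures per second
    for a given power budget at a given temperature."""
    return power_watts / (K_B * temp_kelvin * math.log(2))

print(f"{max_bit_erasures_per_sec(1.0):.2e} bit erasures/s at 1 W, 300 K")
```

Roughly 3.5 x 10^20 irreversible operations per second per watt at room temperature - a budget set by energy and temperature, with no radius anywhere in the formula.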

The data centre that will be needed to immerse 7 billion humans in VR is going to be huge - and why stop there?

Yeah - I think the size argument is more an argument against a single big global brain. But sure, data centers with huge numbers of AIs eventually - makes sense.

The 22 milliseconds it takes light to get from one side of the Earth to the other is tiny - light speed delays are a relatively minor issue for large brains.

Hmm, 22 milliseconds? Light travels a little slower through fiber, and there are always routing delays. But regardless, the bigger problem is that you are assuming a slow human thought rate - around 100 Hz. If you want to think at the limits of silicon and run thousands or millions of times accelerated, then suddenly the subjective speed of light becomes very slow indeed.
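The subjective-time point can be put in rough numbers. A minimal sketch (the speedup factors are illustrative assumptions; the ~22 ms figure is taken from the parent comment):

```python
def subjective_delay(physical_delay_s, speedup):
    """How long a physical delay feels to a mind running `speedup` times
    faster than a baseline (~100 Hz) biological brain."""
    return physical_delay_s * speedup

# the ~22 ms cross-planet figure from the parent comment, at various speedups
for speedup in (1, 10_000, 1_000_000):
    print(f"{speedup:>9}x speedup: "
          f"{subjective_delay(0.022, speedup):,.3f} subjective seconds")
```

At a millionfold speedup, a 22 ms planetary crossing feels like roughly six subjective hours - which is the sense in which light becomes "very slow indeed" for a large, fast brain.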