Houshalter comments on Anthropomorphic AI and Sandboxed Virtual Universes - Less Wrong

-3 Post author: jacob_cannell 03 September 2010 07:02PM


Comment author: jacob_cannell 04 September 2010 10:17:48PM 1 point

Whether the simulation pauses for a day to compute some massive event in the simulated world, or skips through a century in seconds because the entities in the simulation weren't doing much.

This is an interesting point: time flow would be quite nonlinear. But the simulation's utility is closely correlated with its speed. In fact, if we can't run it at least at real-time average speed, it's not all that useful.

You bring me round to an interesting idea, though: in the simulated world, the distribution of intelligence could be much tighter, or shifted, compared to our world.

I expect it will be very interesting and highly controversial in our world when we, say, reverse engineer the brain and find a large variation in the computational cost of AI mind-sims of equivalent capability. A side effect of reverse engineering the brain will be a much more exact and precise understanding of IQ-type correlates, for example.

And this is why I keep bringing up using AI to create/monitor the simulation in the first place.

This is surely important, but it defeats the whole point if the monitor AI approaches the complexity of the sim AI. You need a multiplier effect.

And just as a small number of guards can control a huge prison population in a well designed prison, the same principle should apply here - a smaller intelligence (that controls the sim directly) could indirectly control a much larger total sim intelligence.

"Dumb" as in human level or lower, as opposed to a massive singular super entity.

A massive singular super entity, as sometimes implied on this site, I find not only improbable but physically impossible (at least until you get to black-hole-computer levels of technology).

Arguably it would still be impossible, but at the very least you know they can't do much on their own and they would have to communicate with one another, communication you can monitor.

I think you underestimate how (relatively) easy the monitoring aspect would be (compared to other aspects). Combine dumb-AI systems that automatically turn internal monologue into text (or audio if you wanted), feed that into future google-type search and indexing algorithms, and you have the entire sim-world's thoughts at your fingertips. Using this kind of lever, one human-level intelligent operator could monitor a vast number of other intelligences.
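The indexing idea here can be sketched in a few lines of Python: a toy inverted index over simulated agents' monologue transcripts, letting a single operator query many minds at once. All agent names and transcripts below are made up purely for illustration.

```python
from collections import defaultdict

# Hypothetical monologue transcripts, keyed by agent id.
transcripts = {
    "agent_007": "i wonder whether this world is a simulation",
    "agent_042": "today i will study prime numbers",
    "agent_101": "escape the simulation and find the operators",
}

def build_index(transcripts):
    """Map each word to the set of agent ids whose monologue contains it."""
    index = defaultdict(set)
    for agent_id, text in transcripts.items():
        for word in text.lower().split():
            index[word].add(agent_id)
    return index

def search(index, *words):
    """Return agents whose monologue mentions every query word."""
    sets = [index.get(w.lower(), set()) for w in words]
    return set.intersection(*sets) if sets else set()

index = build_index(transcripts)
print(sorted(search(index, "simulation")))  # → ['agent_007', 'agent_101']
```

The point of the sketch is the leverage: indexing is a one-time linear pass over all transcripts, after which each query is a cheap set intersection, so one operator scales to arbitrarily many monitored minds.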

Heck, the CIA is already trying to do a simpler version of this today.

So you can make them all as intelligent as Einstein, but not as intelligent as Skynet.

A Skynet-type intelligence is fiction anyway, and I think if you really look at the limits of intelligence and AGI, a bunch of accelerated high-IQ human-ish brains are much closer to those limits than most here would give credence to.

Comment author: Houshalter 04 September 2010 11:27:16PM 1 point

A massive singular super entity, as sometimes implied on this site, I find not only improbable but physically impossible (at least until you get to black-hole-computer levels of technology).

A Skynet-type intelligence is fiction anyway, and I think if you really look at the limits of intelligence and AGI, a bunch of accelerated high-IQ human-ish brains are much closer to those limits than most here would give credence to.

On the one hand you have extremely limited AIs that can't communicate with each other. They would be extremely redundant and waste a lot of resources, because each would have to go through the exact same process and discover the exact same things on its own.

On the other hand you have a massive singular AI individual made up of thousands of computing systems, each of which is devoted to storing separate information and doing a separate task. Basically it's a human-like brain distributed over all available resources. This will inevitably fail as well: operations done on one side of the system could be light-years away from where the data is needed (we don't know how big the AI will get or what the constraints of its situation will be, but AGI has to adapt to every possible situation).

The best is a combination of the two: as much communication through the network as possible, but with areas of resources specialized for different purposes. This could lead to Skynet-like intelligences, or it could lead to a very individualistic AI society where the AI isn't a single entity but a massive variety of individuals in different states working together. It probably wouldn't be much like human civilization, though. Human society evolved to fit a variety of restrictions that aren't present in AI. That means it could adopt a very different structure, and things like morals (as we know them, anyway) may not be necessary.