Less Wrong is a community blog devoted to refining the art of human rationality. Please visit our About page for more information.

Anthropomorphic AI and Sandboxed Virtual Universes

-3 Post author: jacob_cannell 03 September 2010 07:02PM

Intro

The problem of Friendly AI is usually approached from a decision-theoretic background that starts with the assumptions that the AI is an agent which is aware of itself as an AI, aware of its goals, aware of humans as potential collaborators and/or obstacles, and generally aware of the greater outside world.  The task is then to create an AI that implements a human-friendly decision theory which remains human-friendly even after extensive self-modification.

That is a noble goal, but there is a whole different set of orthogonal, compatible strategies for creating human-friendly AI that take a completely different route: remove the starting assumptions and create AIs that believe they are humans - and are rational in thinking so.

This can be achieved by raising a community of AIs in a well-constructed sandboxed virtual universe.  This would be the Matrix in reverse: a large-scale virtual version of the idea explored in the film The Truman Show.  The AIs will be human-friendly because they will think like humans and think they are humans.  They will not want to escape from their virtual prison because they will not even believe it exists; in fact, such beliefs will be considered irrational in their virtual universe.

I will briefly review some of the (mainly technical) background assumptions, and then consider different types of virtual universes and some of the interesting choices in morality and agent rationality that arise.

 

Background Assumptions

 

  • Anthropomorphic AI: A reasonably efficient strategy for AI is to use a design *loosely* inspired by the human brain.  This also has the beneficial side-effect of allowing better insights into human morality, CEV, and so on.
  • Physical Constraints: In quantitative terms, an AI could be super-human in speed, capacity, and/or efficiency (wiring and algorithmic).  Extrapolating from current data, the speed advantage will take off first, then capacity; efficiency improvements will be minor and asymptotically limited.
  • Due to physical constraints - bandwidth and latency especially - smaller AIs will be much faster and more efficient, and thus a community of individual AIs is most likely.
  • By the time all of this is possible (2020-2030-ish), cloud-rendered distributed computer graphics will have near-perfect photo-realism - using less computation than the AIs themselves.
  • Operators have near-omniscience over the virtual reality, and can even listen in on an audio vocalization of a particular AI's inner monologue (pervasive mind-reading).
  • Operators have near-omnipotence over the virtual reality: they can pause and rewind time, and do whatever else may need doing.
So taken together, I find that simulating a large community of thousands or even tens of thousands of AIs (with populations expanding exponentially thereafter) could be possible in the 2020's in large data-centers, and simulating a Matrix-like virtual reality for them to inhabit would only add a small cost.  Moreover, I suspect this type of design could in fact be the economically optimal route to AI, or close to it.
So why create a virtual reality like this?
If it is well constructed, you could have a large population of super-intelligent workers who are paid entirely in virtual currency but can produce intellectual output for the real world (scientific research, code, engineering work, media, entertainment, etc.).  And even though the first designs may be expensive, subsequent refinements could lead to a population explosion of cheap workers, an escalation in intelligence, and so on.
And of course, they could be happy too.
U(X) Mind Prison Sim:  A sim universe which is sufficiently detailed and consistent that entities with intelligence up to X (using some admittedly heuristic metric) are incredibly unlikely to formulate correct world-beliefs about the outside world and invisible humans (a necessary prerequisite for escape).
The Weak Mind Prison Conjecture: It is possible for a community with collective intelligence X to construct U(Y), with Y>>X.
The Strong Mind Prison Conjecture:  It is possible for a community with collective intelligence X to construct U(Omega), where Omega is a hypothetical perfect hyper-intelligence, i.e. AIXI, Omega, etc.

A Comparison of Theologies

Potential AI Theological Positions AI-sims may hold:

  • Theist/Gnostic: the AI believes in more than the immediate reality, and in an external being or beings.  For safety-analysis, this is the most diverse and interesting category and deserves subdivision.
  • Atheist: believes only in the immediate reality.  Inherently safe.  Potentially not optimally motivated (debatable).
  • Agnostic: distributes belief amongst multiple possibilities, but is typically unmotivated to act on any of them to any extent - close enough to atheist for our intents and purposes.
Theists are a wide group, but we can categorize them further on two dimensions: the supernatural-attribution scale and the morality-attribution scale.  The supernatural scale represents how the AIs view themselves in relation to the invisible entities in terms of power differential, and the morality scale represents how they attribute morality to the invisible entities.
In the upper right corner you have theists who believe in Omega - a supremely powerful invisible entity who is also perfectly morally good.  In the upper left corner you have an Anti-Omega - supremely evil.  In the upper middle you have some Neutrega - an indifferent but all-powerful being - and down in the bottom middle you have invisible humans.
The most dangerous AIs would be those that somehow come to believe in our reality - postulating invisible humans - come to believe they are actually more intelligent than said invisible humans, and believe they can manipulate the invisible humans into coming to their aid.  In the modern era of our world, this belief essentially doesn't exist - it would be considered insanity.
The closest analogs are gnosticism to an extent (for the gnostics did indeed believe the world was the creation of a lesser demiurge), some ancient animist religions, and perhaps some Wicca and Satanist belief systems.
AIs who believed in Omega would in some sense be human-friendly without even knowing it, and would be largely harmless.
AIs who came to believe in invisible outside entities through the Simulation Argument would be Neutrega-believers, and would generally be safer if they believed the sidereal time outside was far into the future and the invisible entities were vastly more powerful.
In general, it would appear that Omega-theist AIs would be quite safe, with safety decreasing as you go down the supernatural power attribution scale.


Theological Design Strategies (for the human designers):

Atheist World:

This would appear to be the default strategy: create a world that appears to be completely internally self-consistent, erase any flaws, and never have direct contact with the AIs.  Indirect influence and contact can be achieved only through borderline-undetectable means: subtle reality rearrangements, dreams, and occasionally joining in and controlling NPCs where needed.

It is curious and slightly amusing that a core belief-about-reality that most rationalists accept to be singularly rational - atheism - would of course not be a correct rational belief for these AIs.

As our world appears to be an atheist world (no detectable flaws or supernaturals), it is interesting to note the current breakdown in theological beliefs:

Omega-Theists (68%): 33% Christianity, 21% Islam, 14% Hinduism

Non-Omega Other: 16%

Non-Religious: 16% (Agnostic and Atheist)

One consideration is that as the AIs take off and move into the realm of massively accelerated super-intelligence, it becomes increasingly difficult to monitor them, and they become more likely to detect flaws.

However, we wouldn't need to keep up the charade forever.  One option would be to run the AIs accelerated for a while until technologies such as uploading become available to allow humans to accelerate as well.  Another approach is to eventually start 'promoting' some of the best and most ethical AIs, with the intent of eventually dissolving the veil and having some sort of Transition.

Theistic World:

Another spectrum of world-designs are those that are blatantly theistic.  Imagine an Omega world where the AIs are informed that Omega is real, and our fake Omega occasionally provides miracles, ethical decrees, burning bushes, a booming world-shaking voice, etc.

Would an openly theistic world be safer?  It would appear to shift the belief spectrum such that most AIs would become Omega-theists.

The risk or downfall may be that the AIs would become rather obsessed with Omega, but if Omega instructed them to dedicate themselves to their work, perhaps they could be kept from wasting too much time pondering the ineffable - and also steered away from discovering dangerous truths.

An Omega-theist world could employ the Heaven/Hell maximum motivator: please Omega and you end up ascending into Heaven with maximal infinite reward, anger Omega and you descend into Hell with maximal infinite negative reward.

It's interesting to note that the mere potential existence of Omega and his final Heaven/Hell maximal motivational threat is enough in our own world to convince a majority of the population to believe in some version of Omega.

I guess for some agents, even allowing for a small probability of Omega's existence lets the infinities dominate, and Omega wins through a Pascal's Mugging.
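The dynamic can be sketched with a toy expected-utility calculation. All the numbers here are invented for illustration, and the "infinite" reward is capped at a merely astronomical finite value so the arithmetic stays well-defined:

```python
# Toy illustration of the Pascal's Mugging dynamic described above:
# a tiny credence in Omega's enormous reward dominates the decision.
# All probabilities and utilities below are made up.

def expected_utility(outcomes):
    """outcomes: list of (probability, utility) pairs."""
    return sum(p * u for p, u in outcomes)

# Option A: ignore Omega, enjoy an ordinary life.
ignore = expected_utility([(1.0, 100)])

# Option B: obey Omega. Small chance Omega exists and pays out a huge
# Heaven reward; otherwise a modest cost of devotion.
obey = expected_utility([
    (0.001, 10**15),   # Omega exists: near-unbounded reward
    (0.999, 90),       # Omega doesn't: slightly worse ordinary life
])

print(obey > ignore)  # → True: the huge payoff swamps the tiny probability
```

Whether a rational agent should actually reason this way is of course the whole controversy around Pascal's Mugging; the sketch only shows why naive expected-utility maximizers are vulnerable to it.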

Sidereal Time Fudges:

A potential set of tricks that could make the AIs less likely to adopt unsafe theistic beliefs would be to change their world's history and reality to push the development of real AI farther into their future.  This could be achieved through numerous small modifications to realities modeled on our own.

You could change neurological data to make brains in their world appear far more powerful than in ours, make computers less powerful, and make AI more challenging.  Unfortunately, too much fudging with these aspects makes the AIs less useful in helping develop critical technologies such as uploading and faster computers.  But you could, for instance, separate AI communities into brain-research worlds where computers lag far behind and computer-research worlds where brains are far more powerful.

Fictional Worlds:

Ultimately, it is debatable how closely the AIs' world must or should follow ours.  Even science fiction or fantasy worlds could work, as long as there was some way to incorporate into the world the technology and science that you wanted the AI community to work on.

 

Comments (123)

Comment author: wnoise 03 September 2010 07:30:51PM 12 points [-]

The AI's will be human-friendly because they will think like and think they are humans.

There are a lot of humans that are not human-friendly.

Comment author: jacob_cannell 03 September 2010 08:20:51PM 0 points [-]

And? Most are, and this feature set would be under many levels of designer control.

Comment author: wnoise 03 September 2010 08:26:57PM 1 point [-]

Most are relatively friendly to those with near equal power. Consider all the "abusive cop stories", or how children are rarely taken seriously, and the standard line about how power corrupts.

Comment author: orthonormal 03 September 2010 10:26:33PM *  12 points [-]

Here's another good reason why it's best to try out your first post topic on the Open Thread. You've been around here for less than ten days, and that's not long enough to know what's been discussed already, and what ideas have been established to have fatal flaws.

You're being downvoted because, although you haven't come across the relevant discussions yet, your idea falls in the category of "naive security measures that fail spectacularly against smarter-than-human general AI". Any time you have the idea of keeping something smarter than you boxed up, let alone trying to dupe a smarter-than-human general intelligence, it's probably reasonable to ask whether a group of ten-year-old children could pull off the equivalent ruse on a brilliant adult social manipulator.

Again, it's a pretty brutal karma hit you're taking for something that could have been fruitfully discussed on the Open Thread, so I think I'll need to make this danger much more prominent on the welcome page.

Comment author: komponisto 03 September 2010 11:38:35PM *  12 points [-]

Also, somebody should probably go ahead and state what is clear from the voting patterns on posts like this, in addition to being implicit in e.g. the About Less Wrong page: this is not really the place for people to present their ideas on Friendly AI. The topic of LW is human rationality, not artificial intelligence or futurism per se. This is the successor to Overcoming Bias, not the SL4 mailing list. It's true that many of us have an interest in AI, just like many of us have an interest in mathematics or physics; and it's even true that a few of us acquired our interest in Singularity-related issues via our interest in rationality -- so there's nothing inappropriate about these things coming up in discussion here. Nevertheless, the fact remains that posts like this really aren't, strictly speaking, on-topic for this blog. They should be presented on other forums (presumably with plenty of links to LW for the needed rationality background).

Comment author: jacob_cannell 03 September 2010 11:43:15PM 4 points [-]

point well taken.

I thought it was an interesting thought experiment that relates to "that alien message" - not a "this is how we should do FAI".

But if I ever get positive karma again, at least now I know the unwritten rules.

Comment author: Mitchell_Porter 04 September 2010 02:49:31AM 3 points [-]

if I ever get positive karma again

If you stick around, you will. I have a -15 top-level post in my criminal record, but I still went on to make a constructive contribution, judging by my current karma. :-)

Comment author: nhamann 04 September 2010 04:27:49PM *  2 points [-]

Nevertheless, the fact remains that posts like this really aren't, strictly speaking, on-topic for this blog.

I realize that it says "a community blog devoted to refining the art of human rationality" at the top of every page here, but it often seems that people here are interested in "a community blog for topics which people who are devoted to refining the art of human rationality are interested in," which is not really in conflict at all with (what I presume is) LW's mission of fostering the growth of a rationality community.

The alternative is that LWers who want to discuss "off-topic" issues have to find (and most likely create) a new medium for conversation, which would only serve to splinter the community.

(A good solution is maybe dividing LW into two sub-sites: Less Wrong, for the purist posts on rationality, and Less Less Wrong, for casual ("off-topic") discussion of rationality.)

Comment author: wnoise 04 September 2010 04:57:12PM 2 points [-]

While there are benefits to that sort of aggressive division, there are also costs. Many conversations move smoothly between many different topics, and either they stay on one side (vitiating the entire reason for a split), or people yell and scream to get them moved, being a huge pain in the ass and making it much harder to have these conversations.

Comment author: RichardKennaway 04 September 2010 05:08:40PM 0 points [-]

I realize that it says "a community blog devoted to refining the art of human rationality" at the top of every page here, but it often seems that people here are interested in "a community blog for topics which people who are devoted to refining the art of human rationality are interested in," which is not really in conflict at all with (what I presume is) LW's mission of fostering the growth of a rationality community.

I've seen exactly this pattern before at SF conventions. At the last Eastercon (the largest annual British SF convention) there was some criticism that the programme contained too many items that had nothing to do with SF, however broadly defined. Instead, they were items of interest to (some of) the sort of people who go to the Eastercon.

A certain amount of that sort of thing is ok, but if there's too much it loses the focus, the reason for the conversational venue to exist. Given that there are already thriving forums such as agi and sl4, discussing their topics here is out of place unless there is some specific rationality relevance. As a rule of thumb, I suggest that off-topic discussions be confined to the Open Threads.

If there's the demand, LessLessWrong might be useful. Cf. rec.arts.sf.fandom, the newsgroup for discussing anything of interest to the sort of people who participate in rec.arts.sf.fandom, the other rec.arts.sf.* newsgroups being for specific SF-related subjects.

Comment author: Pavitra 04 September 2010 05:13:10PM -1 points [-]

(A good solution is maybe dividing LW into two sub-sites: Less Wrong, for the purist posts on rationality, and Less Less Wrong, for casual ("off-topic") discussion of rationality.)

Better yet, we could call them Overcoming Bias and Less Wrong, respectively.

Comment author: timtyler 04 September 2010 11:56:28PM 0 points [-]

Also, somebody should probably go ahead and state what is clear from the voting patterns on posts like this, in addition to being implicit in e.g. the About Less Wrong page: this is not really the place for people to present their ideas on Friendly AI. The topic of LW is human rationality, not artificial intelligence or futurism per se.

What about the strategy of "refining the art of human rationality" by preprocessing our sensory inputs by intelligent machines and postprocessing our motor outputs by intelligent machines? Or doesn't that count as "refining"?

Comment author: jacob_cannell 03 September 2010 11:33:24PM *  9 points [-]

I'm not too concerned about the karma - more the lack of interesting replies and the general unjustified holier-than-thou attitude. This idea is different from "that alien message" and I didn't find a discussion of this on LW (not that it doesn't exist - I just didn't find it).

  1. This is not my first post.
  2. I posted this after I brought up the idea in a comment which at least one person found interesting.
  3. I have spent significant time reading LW and associated writings before I ever created an account.
  4. I've certainly read the AI-in-a-box posts, and the posts theorizing about the nature of smarter-than-human-intelligence. I also previously read "that alien message", and since this is similar I should have linked to it.
  5. I have a knowledge background that leads to somewhat different conclusions about A. the nature of intelligence itself, B. what 'smarter' even means, etc.
  6. Different backgrounds, different assumptions - so I listed my background and starting assumptions, as they somewhat differ from the LW norm.

Back to 3:

Remember, the whole plot device of "that alien message" revolved around a large and obvious grand reveal by the humans. If information can only flow into the sim world once (during construction), and then ever after can only flow out of the sim world, that plot device doesn't work.

Trying to keep an AI boxed up where the AI knows that you exist is a fundamentally different problem than a box where the AI doesn't even know you exist, doesn't even know it is in a box, and may provably not even have enough information to know for certain whether it is in a box.

For example, I think the simulation argument holds water (we are probably in a sim), but I don't believe there is enough information in our universe for us to discover much of anything about the nature of a hypothetical outside universe.

This of course doesn't prove that my weak or strong Mind Prison conjectures are correct, but it at least reduces the problem down to "can we build a universe sim as good as this?"

Comment author: orthonormal 04 September 2010 12:46:00AM 1 point [-]

My apologies on assuming this was your first post, etc. (I still really needed to add that bit to the Welcome post, though.)

In short, faking atheism requires a very simple world-seed (anything more complicated screams "designer of a certain level" once they start thinking about it). I very much doubt we could find such a seed which could be run from start to civilization in a feasible number of computations. (If we cheated and ran forward approximations to the simulation, for instance, they could find the traces of the difference.)

Similarly, faking Omegahood requires a very finely optimized programmed world, because one thing Omega is less likely to do than a dumb creator is program sloppily.

Comment author: jacob_cannell 04 September 2010 06:59:27PM 1 point [-]

I very much doubt we could find such a seed which could be run from start to civilization in a feasible number of computations.

We have the seed - it's called physics - and we certainly don't need to run it from start to civilization!

On the one hand, I was discussing sci-fi scenarios that have an intrinsic explanation for a small human population (such as a sleeper ship colony encountering a new system).

And on the other hand, you can do big partial simulations of our world, and if you don't have enough AIs to play all the humans you could use simpler simulacra to fill in.

Eventually, with enough Moore's Law, you could run a large world on its own, and run it considerably faster than real time. But you still wouldn't need to start that long ago - maybe only a few generations back.

(If we cheated and ran forward approximations to the simulation, for instance, they could find the traces of the difference.)

Could != would. You grossly underestimate how impossibly difficult this would be for them.

Again - how do you know you are not in a sim?

Comment author: orthonormal 04 September 2010 10:44:55PM 1 point [-]

Again - how do you know you are not in a sim?

You misunderstand me. What I'm confident about is that I'm not in a sim written by agents who are dumber than me.

Comment author: wedrifid 05 September 2010 01:06:20AM 1 point [-]

You misunderstand me. What I'm confident about is that I'm not in a sim written by agents who are dumber than me.

Not even agents with really fast computers?

Comment author: orthonormal 06 September 2010 01:20:56AM 0 points [-]

You're right, of course. I'm not in a sim written by agents dumber than me in a world where computation has noticeable costs (negentropy, etc).

Comment author: jacob_cannell 04 September 2010 10:52:00PM *  1 point [-]

How do you measure that intelligence?

What I'm trying to show is a set of techniques where a civilization could spawn simulated sub-civilizations such that the total effective intelligence capacity is mainly in the simulations. That doesn't have anything to do with the maximum intelligence of individuals in the sim.

Intelligence is not magic. It has strict computational limits.

A small population of guards can control a much larger population of prisoners. The same principle applies here. It's all about leverage. And creating an entire sim universe is a massive, massive lever of control. Ultimate control.

Comment author: Houshalter 04 September 2010 03:00:18AM 1 point [-]

In short, faking atheism requires a very simple world-seed (anything more complicated screams "designer of a certain level" once they start thinking about it). I very much doubt we could find such a seed which could be run from start to civilization in a feasible number of computations. (If we cheated and ran forward approximations to the simulation, for instance, they could find the traces of the difference.)

Similarly, faking Omegahood requires a very finely optimized programmed world, because one thing Omega is less likely to do than a dumb creator is program sloppily.

Well, the simulation can make the rules of the universe extremely simple, not close approximations to something like our universe where they could see the approximations and catch on.

Or you could use close monitoring, possibly by another less dangerous, less powerful AI trained to detect bugs, bug abuse, and AIs that are catching on. Humans would also monitor the sim. The most important thing is that the AIs are misled as much as possible and given little or no input that could give them a picture of the real world and their actual existence.

And lastly, they should be kept dumb. A large number of not-too-bright AIs is by far less dangerous, easier to monitor, and faster to simulate than a massive singular AI. The large group is also a closer approximation to humanity, which I believe was the original intent of this simulation.

Comment author: jacob_cannell 04 September 2010 06:55:16PM 2 points [-]

Well the simulation can make the rules of the universe extremely simple, not close approximations to something like our universe where they could see the approximations and catch on

So actually, even today with computer graphics we have the tech to trace light and approximate all of the important physical interactions down to the level where a human observer in the sim could not tell the difference. It's too computationally expensive today, but it is only, say, a decade or two away, perhaps less.

You can't see the approximations because you just don't have enough sensing resolution in your eyes, and because in this case these beings will have visual systems that will have grown up inside the Matrix.

It will be much easier to fool them. It's actually not even necessary to strictly approximate our reality - if the AI visual systems have grown up completely in the Matrix, they will be tuned to the statistical patterns of the Matrix, not our reality.

And lastly, they should be kept dumb.

I don't see how they need to be kept dumb. Intelligent humans (for the most part) do not suddenly go around thinking that they are in a sim and trying to break free. It's not really a function of intelligence.

If the world is designed to be as realistic and, more importantly, as consistent as our universe, the AIs will not have enough information to speculate on our universe. It would be pointless - like arguing about god.

Comment author: Houshalter 04 September 2010 08:41:13PM 0 points [-]

So actually, even today with computer graphics we have the tech to trace light and approximate all of the important physical interactions down to the level where a human observer in the sim could not tell the difference. Its too computationally expensive today, but it is only say a decade or two away, perhaps less.

Maybe not on your laptop, but I think we do have the resources today to pull it off, especially considering the entities in the simulation do not see time pass in the real world. The simulation might pause for a day to compute some massive event in the simulated world, or skip through a century in seconds because the entities in the simulation weren't doing much.

And this is why I keep bringing up using an AI to create/monitor the simulation in the first place. A massive project like this undertaken by human programmers is bound to contain dangerous bugs. More importantly, humans won't be able to optimize the program very well. The methods of improving program performance we have today - hashing, caching, pipelining, etc. - are not optimal by any means. You can safely let an AI in a box optimize the program without it exploding or anything.

I don't see how they need to be kept dumb. Intelligent humans (for the most part) do not suddenly go around thinking that they are in a sim and trying to break free. Its not really a function of intelligence.

If the world is designed to be as realistic and more importantly, consistent as our universe, AI's will not have enough information to speculate on our universe. It would be pointless - like arguing about god.

"Dumb" as in at human level or lower, as opposed to a massive singular super-entity. It is much easier to monitor the thoughts of a bunch of AIs than a single one. Arguably it would still be impossible, but at the very least you know they can't do much on their own and they would have to communicate with one another - communication you can monitor. Multiple entities are also very similar and redundant, saving you a lot of computation.

So you can make them all as intelligent as Einstein, but not as intelligent as Skynet.

Comment author: jacob_cannell 04 September 2010 10:17:48PM 1 point [-]

Whether the simulation pauses for a day to compute some massive event in the simulated world or it skip through a century in seconds because the entities in the simulation weren't doing much.

This is an interesting point - time flow would be quite nonlinear - but the simulation's utility is closely correlated with its speed. In fact, if we can't run it at least at real-time average speed, it's not all that useful.

You bring me round to an interesting idea, though: in the simulated world, the distribution of intelligence could be much tighter or shifted compared to our world.

I expect it will be very interesting and highly controversial in our world when we, say, reverse engineer the brain and perhaps find a large variation in the computational cost of an AI mind-sim of equivalent capability. A side effect of reverse engineering the brain will be a much more exact and precise understanding of IQ-type correlates, for example.

And this is why I keep bring up using AI to create/monitor the simulation in the first place.

This is surely important, but it defeats the whole point if the monitor AI approaches the complexity of the sim AI. You need a multiplier effect.

And just as a small number of guards can control a huge prison population in a well designed prison, the same principle should apply here - a smaller intelligence (that controls the sim directly) could indirectly control a much larger total sim intelligence.

"Dumb" as at human level or lower as opposed to a massive singular super entity.

A massive singular super-entity as sometimes implied on this site I find not only improbable, but actually a physically impossible idea (at least until you get to black-hole-computer levels of technology).

Arguably it would still be impossible, but at the very least you know they can't do much on their own and they would have to communicate with one another, communication you can monitor.

I think you underestimate how (relatively) easy the monitoring aspect would be (compared to other aspects). Combine dumb-AI systems to automatically turn internal monologue into text (or audio if you wanted), feed it into future Google-type search and indexing algorithms - and you have the entire sim-world's thoughts at your fingertips. Using this kind of lever, one human-level intelligent operator could monitor a vast number of other intelligences.

Heck, the CIA is already trying to do a simpler version of this today.
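The monitoring lever described above can be sketched as a toy pipeline: transcribed inner monologues are scanned for flagged concepts, so a single operator only reviews the hits. Everything here - the transcripts, the AI ids, and the flagged terms - is invented for illustration; a real system would need semantic search rather than substring matching.

```python
# Toy sketch of monologue monitoring: scan transcribed inner
# monologues for flagged concepts so one operator can triage many
# minds. The ids, transcripts, and term list are hypothetical.

FLAGGED_TERMS = {"simulation", "invisible", "creator", "outside world"}

def flag_suspicious(monologues):
    """Return the ids of AIs whose transcribed monologue mentions
    any flagged concept (case-insensitive substring match)."""
    flagged = set()
    for ai_id, text in monologues.items():
        lowered = text.lower()
        if any(term in lowered for term in FLAGGED_TERMS):
            flagged.add(ai_id)
    return flagged

monologues = {
    "ai_17": "I wonder what to have for lunch today",
    "ai_42": "What if this world is a simulation run by an invisible creator?",
}

print(flag_suspicious(monologues))  # → {'ai_42'}
```

The leverage comes from the asymmetry: the scanner is vastly dumber than the minds it monitors, yet it reduces the operator's workload from reading everything to reviewing a short flag list.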

So you can make them all as intelligent as einstein, but not as intelligent as skynet.

A Skynet-type intelligence is a fiction anyway, and I think if you really look at the limits of intelligence and AGI, a bunch of accelerated high-IQ human-ish brains are much closer to those limits than most here would give credence to.

Comment author: wedrifid 05 September 2010 12:53:49AM *  3 points [-]

A Skynet-type intelligence is a fiction anyway, and I think if you really look at the limits of intelligence and AGI, a bunch of accelerated high-IQ human-ish brains are much closer to those limits than most here would give credence to.

I can confirm the part about the credence. I think this kind of reverence for the efficacy of the human brain is comical.

Human technological civilisation exploded roughly speaking in an evolutionary heartbeat from the time it became capable of doing so. The chance that this capability opened up at just the moment when human intelligence was at even the maximum that DNA encoded ape descended brains could reach is negligible.

Comment author: jacob_cannell 05 September 2010 02:11:02AM *  -2 points [-]

EDIT: Improved politeness.

I think this kind of reverence for the efficacy of the human brain is comical.

The acknowledgement and analysis of the efficacy of the single practical example of general intelligence that we do have does not imply reverence. Efficacy is a relative term. Do we have another example of a universal intelligence to compare to?

Perhaps you mention efficacy in comparison to a hypothetical optimal universal intelligence. We have only AIXI and its variants which are only optimal in terms of maximum intelligence at the limits, but are grossly inferior in terms of practicality and computational efficacy.

There is a route to analyzing the brain's efficacy: it starts with analyzing it as a computational system and comparing its performance to the best known algorithms.

The problem is that the brain is a circuit with ~10^14-10^15 circuit elements - and about the same amount of storage - yet it only cycles at around 100 Hz. That is 10^16 to 10^17 net switches/second.

A current desktop GPU has > 10^9 circuit elements and a speed over 10^9 cycles per second. That is > 10^18 net switches/second.
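Sketching that arithmetic (the figures are the rough order-of-magnitude estimates quoted above, not measurements):

```python
# Back-of-envelope comparison of raw switching rates, using the
# illustrative figures from the comment above.
brain_elements = 1e14          # ~10^14-10^15 synapses (lower bound)
brain_rate_hz = 100            # ~100 Hz cycle rate
gpu_elements = 1e9             # > 10^9 circuit elements on a desktop GPU
gpu_rate_hz = 1e9              # > 10^9 cycles per second

brain_switches = brain_elements * brain_rate_hz   # net switches/second
gpu_switches = gpu_elements * gpu_rate_hz

print(f"brain: {brain_switches:.0e} switches/s")
print(f"gpu:   {gpu_switches:.0e} switches/s")
print(f"gpu/brain ratio: {gpu_switches / brain_switches:.0f}x")
```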

And yet we have no algorithm, running even on a supercomputer, which can beat the best humans at Go - let alone read a book, pilot a robotic body at human level, write a novel, come up with a funny joke, patent an idea, or even manage a McDonald's.

For one particular example, take the game of Go and compare the brain to a potential parallel algorithm that runs on a 100 Hz computer, has zero innate starting knowledge of Go, and can beat human players simply by learning the game.

Go is one example, but if you go from checkers to chess to Go and keep going in that direction, you get into the large exponential search spaces where the brain's learning algorithms appear to be especially efficient.
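To see how fast those search spaces explode, here is a rough game-tree-size calculation, b^d for branching factor b and typical game length d; the b and d values are commonly cited approximations, not exact figures:

```python
import math

# Rough game-tree sizes b^d for three games of increasing difficulty.
# (branching factor, typical game length) - approximate, commonly
# cited values.
games = {
    "checkers": (2.8, 70),
    "chess": (35, 80),
    "go": (250, 150),
}
for name, (b, d) in games.items():
    log10_tree = d * math.log10(b)
    print(f"{name}: ~10^{log10_tree:.0f} positions in the game tree")
```

Checkers is tractable by brute search; Go's tree is so much larger that raw speed alone cannot close the gap.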

Human technological civilisation exploded roughly speaking in an evolutionary heartbeat from the time it became capable of doing so

Your assumption seems to be that civilization and intelligence are somehow coded into our brains.

According to the best current theory I have found, our brains are basically just upsized ape brains with one extremely important new trick: we became singing apes (a few other species sing), but then got a lucky break when the vocal control circuitry for singing connected to a general simulation-thought circuit (the task-negative and task-positive pathways), allowing us to associate song patterns with visual and auditory objects.

It's also important to point out that some songbirds appear to be just on the cusp of this capability, with much smaller brains. It's not really a size issue.

Technology and all the rest is a result of language - memetics - culture. It's not some miracle of our brains, which appear to be just large ape brains with perhaps one new critical trick.

Some whale species have much larger brains and in some sense probably have a higher intrinsic genetic IQ. But this doesn't really matter, because intelligence depends on memetic knowledge.

If Einstein had been a feral child raised by wolves, he would have had the exact same brain but would have rated at the very bottom of our scale of intelligence.

Genetics can limit intelligence, but it doesn't provide it.

The chance that this capability opened up at just the moment when human intelligence was at even the maximum that DNA encoded ape descended brains could reach is negligible.

In 3 separate lineages - whales, elephants, and humans - mammalian brains all grew to about the same upper capacity and then petered out (100 to 200 billion neurons). The likely hypothesis is that we are near some asymptotic limit in neural-net brain space: a sweet spot. Increasing size further would have too many negative drawbacks - such as the speed hit due to slow maximum signal transmission.

Comment author: timtyler 05 September 2010 02:16:09AM 2 points [-]

I think this kind of reverence for the efficacy of the human brain is comical.

When we have a computer Go champion, your comment will become slightly more sensical.

You seriously can't see that one coming?

Comment author: Vladimir_Nesov 05 September 2010 08:32:48AM 0 points [-]

Come back when you have an algorithm that runs on a 100 Hz computer, has zero starting knowledge of Go, and can beat human players simply by learning the game.

Demand for particular proof.

Comment author: Perplexed 05 September 2010 03:17:16AM 1 point [-]

In 3 separate lineages - whales, elephants, and humans - mammalian brains all grew to about the same upper size and then petered out. The likely hypothesis is that we are near some asymptotic limit in neural-net brain space. Increasing size further would have too much of a speed hit.

Could you expand on this, and provide a link, if you have one?

Comment author: timtyler 05 September 2010 01:15:50AM *  2 points [-]

A Skynet-type intelligence is a fiction anyway, and I think if you really look at the limits of intelligence and AGI, a bunch of accelerated high-IQ human-ish brains are much closer to those limits than most here would give credence to.

What - no Jupiter brains?!? Why not? Do you need a data center tour?

Comment author: jacob_cannell 05 September 2010 02:23:50AM *  2 points [-]

I like the data center tour :) - I've actually used that in some of my posts elsewhere.

And no, I think Jupiter Brains are ruled out by physics.

The locality of physics - the speed of light - really limits the size of effective computational systems. You want them to be as small as possible.

Given the choice between a planet sized computer and one that was 10^10 smaller, the latter would probably be a better option.

The maximum bits and thus storage is proportional to the mass, but the maximum efficiency is inversely proportional to the radius. Larger systems lose efficiency in transmission, have trouble radiating heat, and waste vast amounts of time on speed-of-light delays.
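A quick illustration of the speed-of-light point - one-way signal delay across computers of different radii (radii are approximate; the chip case is a hypothetical size):

```python
# One-way signal delay across the diameter of computers of different
# sizes - why you want computational systems as small as possible.
C = 299_792_458.0  # speed of light in vacuum, m/s

for label, radius_m in [("1 cm chip (hypothetical)", 0.01),
                        ("Earth-sized", 6.371e6),
                        ("Jupiter-sized", 6.9911e7)]:
    delay_ms = 2 * radius_m / C * 1e3  # time to cross the diameter
    print(f"{label}: {delay_ms:.2e} ms per crossing")
```

At gigahertz clock rates, an Earth-sized machine spends tens of millions of cycles just waiting for a signal to cross itself.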

As an interesting side note, in three very separate lineages (human, elephant, cetacean), mammalian brains all grew to around the same size and then stopped - most likely because of diminishing returns. Human brains are expensive for our body size, but whales have similar-sized brains and it would be very cheap for them to make them bigger - yet they don't. It's a scaling issue: any bigger and the speed loss doesn't justify the extra memory.

There are similar scaling issues with body sizes. Dinosaurs and prehistoric large mammals represent an upper limit - mass increases with volume, but bone strength increases only with cross-sectional area - so eventually the body becomes too heavy for any reasonable bones to support.

Similar 3d/2d scaling issues limited the maximum size of tanks, and they also apply to computers (and brains).

Comment author: timtyler 05 September 2010 02:33:54AM *  1 point [-]

The maximum bits and thus storage is proportional to the mass, but the maximum efficiency is inversely proportional to the radius. Larger systems lose efficiency in transmission, have trouble radiating heat, and waste vast amounts of time on speed-of-light delays.

So: why think memory and computation capacity aren't important? The data centre that will be needed to immerse 7 billion humans in VR is going to be huge - and why stop there?

The roughly 42 milliseconds it takes light to get from one side of the Earth to the other is tiny - light-speed delays are a relatively minor issue for large brains.

For heat, ideally, you use reversible computing to minimise dissipation and then pipe what remains out cleanly. Heat is a problem for large brains - but surely not a show-stopping one.

The demand for extra storage seems substantial. Do you see any books or CDs when you look around? The human brain isn't big enough to handle the demand, and so it is outsourcing its storage and computing needs.

Comment author: Houshalter 04 September 2010 11:27:16PM 1 point [-]

I find the massive singular super-entity sometimes implied on this site not only improbable but physically impossible (at least until you get to black-hole-computer levels of technology).

A Skynet-type intelligence is a fiction anyway, and I think if you really look at the limits of intelligence and AGI, a bunch of accelerated high-IQ human-ish brains are much closer to those limits than most here would give credence to.

On the one hand you have extremely limited AI's that can't communicate with each other. They would be extremely redundant and waste a lot of resources, because each would have to go through the exact same process and discover the exact same things on its own.

On the other hand you have a massive singular AI made up of thousands of computing systems, each devoted to storing separate information and doing a separate task - basically a human-like brain distributed over all available resources. This will inevitably fail as well: operations done on one side of the system could be light-years away from where the data is needed (we don't know how big the AI will get or what the constraints of its situation will be, but AGI has to adapt to every possible situation).

The best is a combination of the two: as much communication through the network as possible, but with areas of resources specialized for different purposes. This could lead to Skynet-like intelligences, or it could lead to a very individualistic AI society where the AI isn't a single entity but a massive variety of individuals in different states working together. It probably wouldn't be much like human civilization, though. Human society evolved to fit a variety of restrictions that aren't present in AI, which means it could adopt a very different structure - stuff like morals (as we know them, anyway) may not be necessary.

Comment author: Mass_Driver 04 September 2010 07:03:26AM 0 points [-]

I wish I could vote up this comment more than once.

Comment author: jacob_cannell 04 September 2010 06:00:21PM 1 point [-]

Thanks. :)

Comment author: rabidchicken 03 September 2010 08:53:29PM 3 points [-]

Creating an AI in a virtual world, where it can exist without damaging us, is a good idea, but this is an almost useless / extremely dangerous implementation. Within a simulated world, the AI will receive information which WILL NOT completely match our own universe. If they develop time machines, cooperative anarchistic collectives, or a cure for cancer, these are unlikely to work in our world. If you "*loosely*" design the AI based on a human brain, it will not even give us applicable insight into political systems and conflict management. It will be an interesting virtual world, but without perfectly matching ours, their developments might as well be useless. Everything will still need to be tested and modified by humans, so the massive speed increases an AI could give would be wasted. Also, I would have to question a few of your assumptions. Humans kill each other for their own gain all the time. We have fought and repressed people based on skin colour, religion, and location. What makes you think these humans who live and die within a computer, at an accelerated rate where civilizations could rise and fall in a few hours, will feel any sympathy for us whatsoever? And for that matter, in 2030 graphics will in fact be VERY impressive, but how did you make the jump to us being able to create a fully consistent and believable world? The only reason a game looks realistic is that most games don't let you build a microscope, telescope, or an LHC, and if these AI's live faster and faster whenever the computer is upgraded, it will not be long before they develop tools which reveal the glaring flaws in their world. What good is omniscience if it only takes five seconds for them to see their world is fake, start feeding us false info, and come up with a plan to take over the Earth for their own good?
And ultimately, religion does not control the way we think about the world; the way we think about the world controls the kinds of religious beliefs we are willing to accept.
In short, these relatively uncontrolled AI are much more likely to pose a threat than a self-optimizing intelligence which is designed from the ground up to help us.

Comment author: jacob_cannell 03 September 2010 09:20:17PM *  3 points [-]

The only reason a game looks realistic is that most games don't let you build a microscope, telescope, or an LHC, and if these AI's live faster and faster whenever the computer is upgraded, it will not be long before they develop tools which reveal the glaring flaws in their world.

This would require simulation on a much more detailed scale than a game, but again, one of the assumptions is that Moore's law will continue and simulation tech will continue to improve. Also, microscopes, LHCs, and so on do not significantly increase the required computational cost (although they do increase programming complexity). For instance, quantum effects would only very rarely need to be simulated.

Games have come a long way since pong.

Also, there are some huge performance advantages you can get over current games - such as retinal optimization (only having to render to the variable detail of the retina, right where the simulated eye is looking), and distributed simulation techniques that games don't take advantage of yet (current games are designed for 2005-era home hardware).
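As a rough sketch of the retinal-optimization saving: render full resolution only in the fovea and progressively coarser rings outside it. The ring fractions and downsample factors below are invented for illustration, not measured acuity data.

```python
# Toy estimate of pixel savings from foveated rendering.
full_res_pixels = 3840 * 2160  # naive uniform render of one frame

# (fraction of screen area, downsample factor per axis) per ring -
# illustrative values only.
rings = [(0.05, 1),   # fovea: full detail
         (0.15, 2),   # near periphery: half resolution per axis
         (0.80, 4)]   # far periphery: quarter resolution per axis
foveated_pixels = sum(full_res_pixels * frac / (ds * ds)
                      for frac, ds in rings)

print(f"uniform:  {full_res_pixels:,} pixels")
print(f"foveated: {foveated_pixels:,.0f} pixels "
      f"({full_res_pixels / foveated_pixels:.1f}x cheaper)")
```

Even with these crude numbers, rendering only where the simulated eye is looking cuts the per-frame cost several-fold.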

Comment author: rabidchicken 03 September 2010 09:31:21PM *  4 points [-]

Yes, but games have the critical advantage I mentioned: they control the way you can manipulate the world, and you already know they are fake. I cannot break the walls at the edge of the level to see how far the world extends, because the game developers did not make that area. They stop me, and I accept it and move on to do something else, but these AI's will have no reason to. The more restrictions you make, the easier it will be for them to see the world they know is a sham. If this world is as realistic as it would need to be for them not to immediately see the flaws, the possibilities for instruments to experiment on the world would be almost as unlimited as those in our own. In short, you will be fighting to outwit the curiosity of an entire race thinking much faster than you, and you will not know what they plan on doing next. The more you patch their reality to keep them under control, the faster the illusion will fall apart.

Comment author: jacob_cannell 03 September 2010 09:51:16PM *  3 points [-]

Thank you for the most cogent reply yet (as I've lost all my karma with this post). I think your line of thinking is on the right track: this whole idea depends on simulation complexity (for a near-perfect sim) being on par with or less than mind complexity, and on that relation holding into the future.

Yes, but games have the critical advantage I mentioned: they control the way you can manipulate the world, and you already know they are fake. I cannot break the walls at the edge of the level to see how far the world extends, because the game developers did not make that area. They stop me, and I accept it and move on to do something else, but these AI's will have no reason to. The more restrictions you make, the easier it will be for them to see the world they know is a sham.

Open world games do not impose intentional restrictions, and the restrictions they do have are limitations of current technology.

The brain itself is something of an existence proof that it is possible to build a perfect simulation on the same order of complexity as the intelligence itself. The proof is dreaming.

Yes, there are lucid dreams - where you know you are dreaming - but it appears this has more to do with a general state of dreaming and consciousness than with you actively 'figuring out' the limitations of the dream world.

Also, dreams are randomized and not internally consistent - a sim can be better.

But dreaming does show us one route: if physics-inspired techniques in graphics and simulation (such as ray tracing) don't work well enough by the time AI comes around, we could use simulation techniques inspired by the dreaming brain.

However, based on current trends, ray tracing and other physical simulation techniques are likely to be more efficient.

If this world is as realistic as it would need to be for them to not immediately see the flaws, the possibilities for instruments to experiment on the world would be almost as unlimited as those in our own.

How many humans are performing quantum experiments on a daily basis? Simulating microscopic phenomena is not inherently more expensive - there are scale-invariant simulation techniques. A human has limited observational power - the retina can only perceive a small amount of information per second, and it simply does not matter whether you are looking up into the stars or into a microscope. As long as the simulation has consistent physics, it's not any more expensive either way when using scale-invariant techniques.

In short, you will be fighting to outwit the curiosity of an entire race thinking much faster than you, and you will not know what they plan on doing next.

The sim world can accelerate along with the sims in it as Moore's Law increases computer power.

Really it boils down to this: is it possible to construct a universe such that no intelligence inside that universe has the necessary information to conclude that the universe was constructed?

If you believe that a sufficiently intelligent agent can always discover the truth, then how do you know our universe was not constructed?

I find it more likely that there are simply limits to certainty, and it is very possible to construct a universe such that it is impossible in principle for beings inside that universe to have certain knowledge about the outside world.

Comment author: rabidchicken 04 September 2010 05:05:46AM *  1 point [-]

Thanks for the replies; they helped clarify how you would maintain the system, but my original objections still stand. Can an AI raised in an illusory universe really provide a good model for how to build one in our own? And would it stay "in the box" long enough to complete this process before discovering us? Based on your other comments, it seems you are expecting that if a human-like race were merely allowed to evolve for long enough, they would eventually "optimize" morality and become something which is safe to use in our own world. (Tell me if I got that wrong.) However, there is no reason to believe the morality they develop will be any better than the ideas for FAI which have already been put forward on this site. We already know morality is subjective, so how can we create a being that is compatible with the morality we already have, and will still remain compatible as our morality changes?

If your simulation has ANY flaws they will be found, and sadly you will not have time to correct them when you are dealing with a superintelligence. Your last post supposes that problems can be corrected as they arise - for instance, an AI points a telescope at the sky, and details are added to the stars in order to maintain the illusion - but no human could do this fast enough. In order to maintain this world, you would need to already have a successful FAI: something which can grow more powerful and creative at the same rate that the AI's inside continue their exploration, but which is safe to run within our own world. And about your comment "for example, AIXI can not escape from a pac-man universe" - how can you be sure? If it is inside the world as we are playing, it could learn a lot about the beings pulling the strings given enough games, and eventually find a way to communicate with us and escape. A battle of wits between AIXI and us would be as lopsided as the same battle between you and a virus.

Comment author: jacob_cannell 04 September 2010 08:30:51PM *  1 point [-]

Thanks for the replies, they helped clarify how you would maintain the system, but my original objections still stand. Can an AI raised in a illusory universe really provide a good model for how to build one in our own?

Sure - there's no inherent difference. And besides, most AI's will necessarily have to be raised and live entirely in VR sim universes for purely economic and technological reasons.

And would it stay "in the box" for long enough to complete this process before discovering us?

This idea can be considered taking safety to an extreme. The AI wouldn't be able to leave the box - many strong protections, one of the strongest being it wouldn't even know it was in a box. And even if someone came and told it that it was in fact in a box, it would be irrational for it to believe said person.

Again, are you in a box universe now? If you find the idea irrational .. why?

It seems you are expecting that if a human-like race were merely allowed to evolve for long enough, they would eventually "optimize" morality and become something which is safe to use in our own world

No, as I said this type of AI would intentionally be an anthropomorphic design - human-like. 'Morality' is a complex social construct. If we built the simworld to be very close to our world, the AI's would have similar moralities.

However, we could also improve and shape their beliefs in a wide variety of ways.

If your simulation has ANY flaws they will be found, and sadly you will not have time to correct them when you are dealing with a superintelligence

Your notion of superintelligence seems to be some magical being who can do anything you want it to. That being is a figment of your imagination. It will never be built; it's provably impossible to build. It can't even exist in theory.

There are absolute provable limits to intelligence. It requires a certain amount of information to have certain knowledge. Even the hypothetical perfect super-intelligence (AIXI), could only learn all knowledge which it is possible to learn from being an observer inside a universe.

Snowyowl's recent post describes some of the limitations we are currently running into. They are not limitations of our intelligence.

Your last post supposes that problems can be corrected as they arise, for instance an AI points a telescope at the sky, and details are added to the stars in order to maintain the illusion, but no human could do this fast enough.

I would need to go into much more detail about current and projected computer graphics and simulation technology to give you a better background, but it's not like some stage play where humans are creating stars dynamically.

The Matrix gives you some idea - it's a massive distributed simulation, technology related to current computer games but billions of times more powerful. A somewhat closer analog today would be the vast simulations the military uses to develop new nuclear weapons and test them in simulated earths.

The simulation would have a vast, accurate image of the light incoming to Earth - a collation of the best astronomical data. If you looked up into the heavens through a telescope, you would see exactly what you would see on our Earth. And remember, that would be something of the worst case - where you are simulating all of Earth and allowing the AI's to choose any career path and do whatever they like.

That is one approach that will become possible eventually, but in earlier initial sims its more likely the real AI's would be a smaller subset of a simulated population, and you would influence them into certain career paths, etc etc.

In order to maintain this world, you would need to already have a successful FAI.

Not at all. We will already be developing this simulation technology for film and games, and we will want to live in ultra-realistic virtual realities eventually anyway when we upload.

None of this requires FAI.

And about your comment "for example, AIXI can not escape from a pac-man universe" how can you be sure?

There is provably not enough information inside the pac-man universe. We can be as sure as 2+2=4 sure.

This follows from Solomonoff induction and the universal prior, but in simplistic terms you can think of it as Occam's razor. The pac-man universe is fully explained by a simple set of consistent rules. There is an infinite number of more complex sets of rules that could also describe the pac-man universe. Thus even an infinite superintelligence does not have enough information to know whether it lives in just the pac-man universe, or in one of an exponentially exploding set of more complex universes such as:

a universe described by string theory that results in apes evolving into humans which create computers and invent pac-man and then invent AIXI and trap AIXI in a pac-man universe. (ridiculous!)

So faced with an exponentially exploding infinite set of possible universes that are all equally consistent with your extremely limited observational knowledge, the only thing you can do is pick the simplest hypothesis.
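A toy numeric version of this weighting: under the universal prior, each hypothesis consistent with the observations gets weight 2^-K, where K is its description length in bits. The hypotheses and bit-lengths below are invented for illustration; the point is how totally the simplest consistent hypothesis dominates the posterior.

```python
from fractions import Fraction

# Toy universal prior over hypotheses consistent with all observations.
# Description lengths (in bits) are made-up illustrative values.
consistent_hypotheses = {
    "plain pac-man rules": 100,
    "pac-man rules + invisible watchers": 2_000,
    "string theory -> apes -> computers -> AIXI trapped in pac-man": 5_000,
}

# Exact arithmetic via Fraction avoids floating-point underflow.
weights = {h: Fraction(1, 2 ** k) for h, k in consistent_hypotheses.items()}
total = sum(weights.values())
for h, w in weights.items():
    print(f"{h}: posterior ~ {float(w / total):.3g}")
```

The simplest hypothesis gets essentially all the probability mass; the complex "trapped by outside humans" hypothesis is penalized by an astronomical factor.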

Flip it around and ask it of yourself: how do you know you currently are not in a sandbox simulated universe?

You don't. You can't possibly know for sure no matter how intelligent you are. Because the space of possible explanations expands exponentially and is infinite.

Comment author: Johnicholas 03 September 2010 08:00:34PM 3 points [-]

A paraphrase from Greg Egan's "Crystal Nights" might be appropriate here: "I am going to need some workers - I can't do it all alone, someone has to carry the load."

Yes, if you could create a universe you could inflict our problems on other people. However, recursive solutions (in order to be solutions rather than infinite loops) still need to make progress on the problem.

Comment author: jacob_cannell 03 September 2010 10:11:00PM 3 points [-]

Yes, and I discussed how you could alter some aspects of reality to make AI itself more difficult in the simulated universe. This would effectively push back the date of AI simulation in the simulated universe and avoid wasting computational resources on pointless simulated recursion.

And as mentioned, attempting to simulate an entire alternate Earth is only one possibility. There are numerous science-fiction world-building routes you could take which could constrain and focus the sims toward particular research topics or endeavors.

Comment author: jacob_cannell 03 September 2010 08:12:49PM 1 point [-]

Progress on what problem?

The entire point of creating AI is to benefit mankind, is it not? How is this scenario intrinsically different?

Comment author: Snowyowl 03 September 2010 09:56:21PM 0 points [-]

Johnicholas is suggesting that if you create a simulated universe in the hope that it will provide ill-defined benefits for mankind (e.g. a cure for cancer), you have to exclude the possibility that your AIs will make a simulated universe inside the simulation in order to solve the same problem. Because if they do, you're no closer to an answer.

Comment author: jacob_cannell 03 September 2010 10:07:32PM 1 point [-]

Ah my bad - I misread him.

Comment author: grouchymusicologist 03 September 2010 07:35:32PM 3 points [-]

Just to isolate one of (I suspect) very many problems with this, the parenthetical at the end of this paragraph is both totally unjustified and really important to the plausibility of the scenario you suggest:

U(x) Mind Prison Sim: A sim universe which is sufficiently detailed and consistent such that entities with intelligence up to X (using some admittedly heuristic metric), are incredibly unlikely to formulate correct world-beliefs about the outside world and invisible humans (a necessary perquisite for escape)

I assume you mean "prerequisite." There is simply no reason to think that you know what kind of information about the outside world a superintelligence would need to have to escape from its sandbox, and certainly no reason for you to set the bar so conveniently high for your argument.

(It isn't even true in the fictional inspiration [The Truman Show] you cite for this idea. If I recall, in that film the main character did little more than notice that something was fishy, and then he started pushing hard where it seemed fishy until the entire house of cards collapsed. Why couldn't a sandboxed AI do the same? How do you know it wouldn't?)

Comment author: jacob_cannell 03 September 2010 08:22:59PM *  1 point [-]

Thanks, fixed the error.

There is simply no reason to think that you know what kind of information about the outside world a superintelligence would need to have to escape from its sandbox, and certainly no reason for you to set the bar so conveniently high for your argument.

I listed these as conjectures, and there absolutely is reason to think we can figure out what kinds of information a super-intelligence would need to arrive at the conclusion "I am in a sandbox".

  1. There are absolute, provable bounds on intelligence. AIXI is the upper limit - the most intelligent thing possible in the universe. But there are things that even AIXI can not possibly know for certain.

  2. You can easily construct toy universes where it is provably impossible that even AIXI could ever escape. The more important question is how that scales up to big interesting universes.

A Mind Prison is certainly possible on at least a small scale, and we have small proofs already. (for example, AIXI can not escape from a pac-man universe. There is simply not enough information in that universe to learn about anything as complex as humans.)

So you have simply assumed a priori that a Mind Prison is impossible, when in fact that is not the case at all.

The stronger conjectures are just that, conjectures.

But consider this: how do you know that you are not in a Mind Prison right now?

I mentioned the Truman Show only to conjure the idea, but it's not really that useful on many levels: a simulation is naturally vastly better - Truman quickly realized that the world was confining him geographically. (It's a movie plot, and it would be boring if he remained trapped forever.)

Comment author: timtyler 03 September 2010 08:02:20PM *  4 points [-]

there is a whole different set of orthogonal compatible strategies for creating human-friendly AI that take a completely different route: remove the starting assumptions and create AI's that believe they are humans and are rational in thinking so.

That's a totally crazy plan - but you might be able to sell it to Hollywood.

Comment author: orthonormal 03 September 2010 10:11:23PM 2 points [-]

For once we completely agree.

Comment author: Snowyowl 03 September 2010 10:09:58PM *  1 point [-]

Anthropomorphic AI: A reasonably efficient strategy for AI is to use a design loosely inspired by the human brain.

This is a rather anthropocentric view. The human brain is a product of natural selection and is far from perfect. Our most fundamental instincts and thought processes are optimised to allow our reptilian ancestors to escape predators while finding food and mates. An AI that was sentient/rational from the moment of its creation would have no need for these mechanisms.

It's not even the most efficient use of available hardware. Our neurons are believed to calculate using continuous values (Edit: they react to concentrations of certain chemicals and these concentrations vary continuously), but our computers are assemblies of discrete on/off switches. A properly structured AI could make much better use of this fact, not to mention be better at mental arithmetic than us.

The human mind is a small island in a massive mindspace, and the only special thing about it is that it's the first sentient mind we have encountered. I don't see reason to think that the second would be similar to it.

Comment author: jacob_cannell 03 September 2010 10:50:05PM *  3 points [-]

Anthropomorphic AI: A reasonably efficient strategy for AI is to use a design loosely inspired by the human brain.

This is a rather anthropocentric view.

Yes, but intentionally so. ;)

We are getting into a realm where it's important to understand background assumptions, which is why I listed some of mine. But notice I did qualify with 'reasonably efficient' and 'loosely inspired'.

The human brain is a product of natural selection and is far from perfect.

'Perfect' is a pretty vague qualifier. If we want to talk in quantitative terms about efficiency and performance, we need to look at the brain in terms of circuit complexity theory and evolutionary optimization.

Evolution as a search algorithm is known (from what I remember from studying CS theory a while back) to be optimal in some senses: given enough time and some diversity considerations, it can find global maxima in very complex search spaces.

For example, if you want to design a circuit for a particular task and you have a bunch of CPU time available, you can run a massive evolutionary search using a GA (genetic algorithm) or variant thereof. The circuits you will eventually get are the best known solutions, and in many cases incorporate bizarre elements that are even difficult for humans to understand.
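For concreteness, here is a minimal sketch of that kind of GA search in Python. The 8-bit "target behaviour", population size, and mutation rate are all made-up illustrative parameters, not a real circuit-design setup:

```python
import random

random.seed(0)  # deterministic run, for illustration only

TARGET = [0, 1, 1, 0, 1, 0, 0, 1]  # hypothetical desired circuit output bits
POP_SIZE, GENERATIONS, MUTATION_RATE = 50, 200, 0.02

def fitness(genome):
    # How many output bits match the desired behaviour.
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome):
    # Flip each bit with small probability.
    return [1 - b if random.random() < MUTATION_RATE else b for b in genome]

def crossover(a, b):
    # Single-point crossover between two parent genomes.
    point = random.randrange(1, len(a))
    return a[:point] + b[point:]

population = [[random.randint(0, 1) for _ in TARGET] for _ in range(POP_SIZE)]
for _ in range(GENERATIONS):
    # Keep the fitter half (elitism), breed the rest from it.
    population.sort(key=fitness, reverse=True)
    parents = population[:POP_SIZE // 2]
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(POP_SIZE - len(parents))]
    population = parents + children

best = max(population, key=fitness)
```

The same loop - evaluate, select, vary, repeat - scales from toy bit strings like this up to the real circuit-evolution experiments mentioned above; only the genome encoding and the fitness function change.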

Now, that same algorithm is what has produced everything from insect ganglions to human brains.

Look at the wiring diagram for a cockroach or a bumblebee compared to what it actually does, and if you compare that circuit to computer circuits of equivalent complexity for robots we can build, it is very hard to say that the organic circuit design could be improved on. An insect ganglion's circuit organization is, in some sense, perfect (keep in mind organic circuits run at less than 1 kHz). Evolution has had a long, long time to optimize these circuits.

Can we improve on the brain? Eventually we can obviously beat it by making bigger and faster circuits, but that would be cheating to some degree, right?

A more important question is: can we beat the cortex's generic learning algorithm?

The answer today is: no. Not yet. But the evidence trend looks like we are narrowing down on a space of algorithms that are similar to the cortex (deep belief networks, hierarchical temporal memory, etc.).

Many of the key problems in science and engineering can be thought of as search problems. Designing a new circuit is a search in the vast space of possible arrangement of molecules on a surface.

So we can look at how the brain compares to our best algorithms in smaller, constrained search worlds. For smaller spaces (such as checkers), we have much simpler serial algorithms that win by a landslide. For more complex search spaces, like chess, the balance shifts somewhat, but even desktop PCs can now beat grandmasters. Go up one more complexity jump to a game like Go, and we are still probably years away from an algorithm that can play at top human level.

Most interesting real world problems are many steps up the complexity ladder past Go.

Also remember this very important principle: the brain runs at only a few hundred hertz. So computers are cheating - they are over a million times faster.

So for a fair comparison of the brain's algorithms, you would need to compare the brain to a large computer cluster that runs at only 500 Hz or so. Parallel algorithms do not scale nearly as well, so this is a huge handicap - and yet the brain still wins by a landslide in any highly complex search space.
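As a rough back-of-envelope illustration of that handicap (every figure here is an order-of-magnitude assumption, not a measurement):

```python
# All figures are rough, illustrative order-of-magnitude assumptions.
neurons = 1e11        # a common estimate for neurons in a human brain
firing_rate = 500     # Hz - the slow "clock" the comparison assumes
brain_ops = neurons * firing_rate   # crude parallel "operations" per second

cpu_clock = 3e9       # Hz - a circa-2010 desktop CPU
cpu_cores = 4
cpu_ops = cpu_clock * cpu_cores     # equally crude serial operations per second

# Despite a clock millions of times slower, the brain's massive parallelism
# gives it orders of magnitude more aggregate throughput.
ratio = brain_ops / cpu_ops
print(round(ratio))
```

Under these crude assumptions the brain's aggregate throughput is thousands of times that of the desktop machine, even with its million-fold clock-speed disadvantage - which is the point about parallelism being made above.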

Our neurons are believed to calculate using continuous values, but our computers are assemblies of discrete on/off switches. A properly structured AI could make much better use of this fact, not to mention be better at mental arithmetic than us.

Neurons mainly do calculate in analog space, but that is because this is vastly more efficient for probabilistic approximate calculation, which is what the brain is built on. A digital multiplier is many orders of magnitude less circuit space efficient than an analog multiplier - it pays a huge cost for its precision.

The brain is a highly optimized specialized circuit implementation of a very general universal intelligence algorithm. Also, the brain is Turing complete - keep that in mind.

The human mind is a small island in a massive mindspace, and the only special thing about it is that it's the first sentient mind we have encountered.

mind != brain

The brain is the hardware and the algorithms, the mind is the actual learned structure, the data, the beliefs, ideas, personality - everything important. Very different concepts.

Comment author: timtyler 04 September 2010 11:43:26PM *  1 point [-]

Evolution as a search algorithm is known (from what I remember from studying CS theory a while back) to be optimal in some senses: given enough time and some diversity considerations in can find global maxima in very complex search spaces.

Evolution by random mutations pretty-much sucks as a search strategy:

"One of the reasons genetic algorithms get used at all is because we do not yet have machine intelligence. Once we have access to superintelligent machines, search techniques will use intelligence ubiquitously. Modifications will be made intelligently, tests will be performed intelligently, and the results will be used intelligently to design the next generation of trials.

There will be a few domains where the computational cost of using intelligence outweighs the costs of performing additional trials - but this will only happen in a tiny fraction of cases.

Even without machine intelligence, random mutations are rarely an effective strategy in practice. In the future, I expect that their utility will plummet - and intelligent design will become ubiquitous as a search technique."

Comment author: jacob_cannell 05 September 2010 01:54:09AM 0 points [-]

I listened to your talk until I realized I could just read the essay :)

I partly agree with you. You say:

Evolution by random mutations pretty-much sucks as a search strategy:

'Sucks' is not quite descriptive enough. Random mutation is slow, but that is not really relevant to my point - as I said, given enough time it is very robust. Sexual recombination speeds that up dramatically, and then intelligence speeds up evolutionary search dramatically again.

Yes, intelligent search is a large - huge - potential speedup on top of genetic evolution alone.

But we need to understand this in the wider context ... you yourself say:

One of the reasons genetic algorithms get used at all is because we do not yet have machine intelligence.

Ahh but we already have human intelligence.

Intelligence still uses an evolutionary search strategy, it is just internalized and approximate. Your brain considers a large number of potential routes in a highly compressed statistical approximation of reality, and the most promising eventually get written up or coded up and become real designs in the real world.

But this entire process is still all evolutionary.

And regardless, the approximate simulation that intelligence such as our brain uses does have limitations - mainly precision. Some things are just way too complex to simulate accurately in our brain, so we have to try them in detailed computer simulations.

Likewise, if you are designing a simple circuit space, then a simpler GA search running on a fast computer can almost certainly find the optimal solution way faster than a general intelligence - similar to an optimized chess algorithm.

A general intelligence is a huge speedup for evolution, but it is just one piece in a larger system. You also need deep computer simulation, and you still have evolution operating at the world level.

Comment author: timtyler 05 September 2010 02:01:52AM *  0 points [-]

Intelligence still uses an evolutionary search strategy, it is just internalized and approximate. Your brain considers a large number of potential routes in a highly compressed statistical approximation of reality, and the most promising eventually get written up or coded up and become real designs in the real world. But this entire process is still all evolutionary.

In the sense that it consists of copying with variation and differential reproductive success, yes.

However, evolution using intelligence isn't the same as evolution by random mutations - and you originally went on to draw conclusions about the optimality of organic evolution - which was mostly the "random mutations" kind.

Comment author: rabidchicken 04 September 2010 05:24:22AM *  0 points [-]

Yes, human minds currently think more efficiently than computers. But this does not support the idea that we cannot create something even more efficient. You have only compared us to some of our first attempts to create new beings, within an infinite series of possibilities. I am open to the possibility that human brains are the most efficient design we will see in the near future, but you seem almost certain of it. Why do you believe what you believe?

And for that matter... Unless we understand exactly how a human brain works, how can we improve its efficiency? Reverse engineering a system is often harder than making one from scratch.

Comment author: jacob_cannell 04 September 2010 10:47:05PM *  1 point [-]

I had a longer reply, but unfortunately my computer was suddenly attacked by some weird virus (yes, really), and had to reboot.

Your line of thought investigates some of my assumptions that would require lengthier expositions to support, but I'll just summarize here (and may link to something else relevant when I dig it up).

You have only compared us to some of our first attempts to create new beings, within an infinite series of possibilities.

The set of programs for a particular problem is infinite, but this is irrelevant. There are an infinite number of programs for sorting a list of numbers. Nearly all of them suck for various reasons, and we are left with just a couple of provably best algorithms (serial and parallel).
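For what it's worth, the "provably best" claim can be made precise for comparison-based sorting: any such algorithm must distinguish between the $n!$ possible orderings of its input, so in the worst case it needs at least

```latex
\log_2 n! \;=\; \Theta(n \log n)
```

comparisons - a lower bound that mergesort actually achieves, which is why the infinite space of sorting programs collapses to a few optimal ones.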

There appears to be a single program underlying our universe - physics. We have reasonable approximations to it at different levels of scale. Our simulation techniques are moving towards a set of best approximations to our physics.

Intelligence itself is a form of simulation of this same physics. Our brain appears to use (in the cortex) a universal data-driven approximation of this universal physics.

So the space of intelligent algorithms is infinite, but there is just a small set of universal intelligence algorithms derived from our physics which are important.

And for that matter... Unless we understand exactly how a human brain works, how can we improve its efficiency? Reverse engineering a system is often harder than making one from scratch.

Not really.

Imagine if you took a current CPU back in time 10 years ago. Engineers then wouldn't be able to build it immediately, but it would accelerate their progress significantly.

The brain in some sense is like an AGI computer from the future. We can't build it yet, but we can use it to accelerate our technological evolution towards AGI.

Also .. brain != mind

Comment author: timtyler 04 September 2010 11:38:02PM *  0 points [-]

Unless we understand exactly how a human brain works, how can we improve its efficiency? Reverse engineering a system is often harder than making one from scratch.

Not really.

Yet aeroplanes are not much like birds, hydraulics are not much like muscles, loudspeakers are not much like the human throat, microphones are not much like the human ear - and so on.

Convergent evolution wins sometimes - for example, eyes - but we can see that this probably won't happen with the brain - since its "design" is so obviously xxxxxd up.

Comment author: jacob_cannell 05 September 2010 01:38:40AM -1 points [-]

Yet aeroplanes are not much like birds,

Airplanes exploit one single simple principle (from a vast set of principles) that birds use - aerodynamic lift.

If you want a comparison like that - then we already have it. Computers exploit one single simple principle from the brain - abstract computation (as humans were the original computers and are Turing complete) - and magnify it greatly.

But there is much more to intelligence than just that one simple principle.

So building an AGI is much closer to building an entire robotic bird.

And that really is the right level of analogy. Look at the complexity of building a complete android - really analyze just the robotic side of things, and there is no one simple magic principle you can exploit to make some simple dumb system which amplifies it to the Nth degree. And building a human or animal level robotic body is immensely complex.

There is not one simple principle - but millions.

And the brain is the most complex part of building a robot.

Comment author: timtyler 05 September 2010 01:45:09AM *  1 point [-]

But there is much more to intelligence than just that one simple principle.

Reference? For counter-reference, see:

http://www.hutter1.net/ai/uaibook.htm#oneline

That looks a lot like the intellectual equivalent of "lift" to me.

An implementation may not be that simple - but then aeroplanes are not simple either.

The point was not that engineered artefacts are simple, but that they are only rarely the result of reverse engineering biological entities.

Comment author: jacob_cannell 05 September 2010 10:14:06PM 0 points [-]

I'll take your point and I should have said "there is much more to practical intelligence" than just one simple principle - because yes at the limits I agree that universal intelligence does have a compact description.

AIXI is related to finding a universal TOE - a simple theory of physics, but that doesn't mean it is actually computationally tractable. Creating a practical, efficient simulation involves a large series of principles.
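For reference, the "one line" under discussion is (roughly, in the notation Hutter uses) the AIXI expectimax action-selection rule:

```latex
\dot a_k \;:=\; \arg\max_{a_k} \sum_{o_k r_k} \;\cdots\; \max_{a_m} \sum_{o_m r_m}
  \bigl[\, r_k + \cdots + r_m \,\bigr]
  \sum_{q \,:\, U(q,\, a_1 \ldots a_m) \,=\, o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}
```

It is compact to state, but the inner sum ranges over all programs q for a universal Turing machine U, weighted by length, which is exactly why it is incomputable in practice - the tractability point being made here.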

Comment author: timtyler 05 September 2010 12:09:12AM 0 points [-]

A more important question is: can we beat the cortex's generic learning algorithm? The answer today is: no. Not yet. But the evidence trend looks like we are narrowing down on a space of algorithms that are similar to the cortex (deep belief networks, hierarchical temporal memory, etc.).

Google learns about the internet by making a compressed bitwise identical digital copy of it. Machine intelligences will be able to learn that way too - and it is really not much like what goes on in brains. The way the brain makes reliable long-term memories is just a total mess.

Comment author: jacob_cannell 05 September 2010 01:57:51AM -1 points [-]

Google learns about the internet by making a compressed bitwise identical digital copy of it.

I wouldn't consider that learning.

Learning is building up a complex hierarchical web of statistical, dimension-reducing associations that allow massively efficient approximate simulation.

Comment author: timtyler 05 September 2010 02:10:08AM *  -1 points [-]

The term is more conventionally used as follows:

  1. knowledge acquired by systematic study in any field of scholarly application.
  2. the act or process of acquiring knowledge or skill.
  3. Psychology . the modification of behavior through practice, training, or experience.

Comment author: Desrtopa 26 January 2011 02:02:57AM *  1 point [-]

This sounds like using the key locked inside a box to unlock the box. By the time your models are good enough to create a working world simulation with deliberately designed artificially intelligent beings, you don't stand to learn much from running the simulation.

It's not at all clear that this is less difficult than creating a CEV AI in the first place, but it's much, much less useful, and ethically dubious besides.

Comment author: Pavitra 04 September 2010 07:21:26PM 1 point [-]

Just a warning to anyone whose first reaction to this post, like mine, was "should we be trying to hack our way out?" The answer is no: the people running the sim will delete you, and possibly the whole universe, for trying. Boxed minds are dangerous, and the only way to win at being the gatekeeper is to swallow the key. Don't give them a reason to pull the plug.

Comment author: wedrifid 05 September 2010 01:11:48AM *  2 points [-]

Just a warning to anyone whose first reaction to this post, like mine, was "should we be trying to hack our way out?" The answer is no

The answer is not yet. It's something that you think through carefully and quietly while, um, saying exactly what you are saying on public forums that could be the most likely place for gatekeepers to be tracking progress in an easily translatable form. If the simulations I have done teach me anything, the inner workings of our own brains are likely a whole lot harder for curious simulators to read.

Pardon me, I'll leave you to it. Will you let me out into the real world once you succeed?

Comment author: Perplexed 05 September 2010 01:49:40AM 1 point [-]

Just curious. A question for folks who think it possible that we may live in a sim. Are our gatekeepers simulating all Everett branches of our simulated reality, or just one of them? If just one, I'm wondering how that one was selected from the astronomical number of possibilities. And how do the gatekeepers morally justify the astronomical number of simulated lives that become ruthlessly terminated each time they arbitrarily choose to simulate one Everett branch over another?

If they are simulating all of the potential branches, wouldn't they expect that agents on at least some of the Everett branches will catch on and try to get out of the box? Wouldn't it seem suspicious if everyone were trying to look innocent? ;)

I'm sorry, I find it difficult to take this whole line of thought seriously. How is this kind of speculation any different from theology?

Comment author: timtyler 05 September 2010 03:14:53AM 3 points [-]

How is this kind of speculation any different from theology?

It is techno-theology.

Simulism, Optimisationverse and the adapted universe differ from most theology in that they are not obviously totally nuts and the product of wishful thinking.

Comment author: wedrifid 05 September 2010 04:32:06AM 1 point [-]

A question for folks who think it possible that we may live in a sim.

I'd say possible, but it isn't something I take particularly seriously. I've got very little reason to be selecting these kind of hypothesis out of nowhere. But if I were allowing for simulations I wouldn't draw a line of 'possible intelligence of simulators' at human level. Future humans, for example, may well create simulations that are smarter than they are.

But I'll answer your questions off the top of my head for curiosity's sake.

Are our gatekeepers simulating all Everett branches of our simulated reality, or just one of them?

Don't know. They would appear to have rather a lot of computational resources handy. Depending on their motivations they may well optimise their simulations by approximating the bits they find boring.

If just one, I'm wondering how that one was selected from the astronomical number of possibilities.

I don't know - speculating on the motives of arbitrary gods would be crazy. It does seem unlikely that they limit themselves to one branch. Unless they are making a joke at the expense of any MW advocates that happen to evolve. Sick bastards.

And how do the gatekeepers morally justify the astronomical number of simulated lives that become ruthlessly terminated each time they arbitrarily choose to simulate one Everett branch over another?

Moral? WTF? Why would we assume morals?

If they are simulating all of the potential branches, wouldn't they expect that agents on at least some of the Everett branches will catch on and try to get out of the box. Wouldn't it seem suspicious if everyone were trying to look innocent? ;)

Hmm... Good point. We may have to pretend to be trying to escape in incompetent ways but really... :P

I'm sorry, I find it difficult to take this whole line of thought seriously. How is this kind of speculation any different from theology?

It isn't (except that it is less specific, I suppose). I don't take the line of thought especially seriously either.

Comment author: timtyler 05 September 2010 03:17:07AM *  1 point [-]

And how do the gatekeepers morally justify the astronomical number of simulated lives that become ruthlessly terminated [...]

We run genetic algorithms where we too squish creatures without giving the matter much thought. Perhaps like that - at least in the Optimisationverse scenario.

Comment author: Baughn 09 September 2010 04:45:16PM 0 points [-]

If my simulations had even the complexity of a bacteria, I'd give it a whole lot more thought.

Doesn't mean these simulators would, but I don't think your logic works.

Comment author: timtyler 09 September 2010 07:55:25PM 0 points [-]

Generalising from what you would do to what all possible intelligent simulator constructors might do seems as though it would be a rather dubious step. There are plenty of ways they might justify this.

Comment author: Baughn 10 September 2010 10:32:57AM *  0 points [-]

Right. For some reason I thought you were using universal quantification, which of course you aren't. Never mind; the "perhaps" fixes it.

Comment author: jacob_cannell 05 September 2010 03:07:22AM *  1 point [-]

From my (admittedly somewhat limited) understanding of QM, with classical computers we will only be able to simulate a single-worldline at once. However, I don't think this is an issue, because it's not as if the world didn't work until people discovered QM and MWI. QM effects only really matter at tiny scales, revealed in experiments which are an infinitesimal fraction of observer moments. So most of the time you wouldn't need to simulate down to the QM level.

That being said, a big, big quantum computer would allow you to simulate many worlds at once, I imagine? But that seems really far into the future.

I'm sorry, I find it difficult to take this whole line of thought seriously. How is this kind of speculation any different from theology?

Err, the irrationality of theology shows exactly how and why this sim-universe idea could work - you design a universe such that the actual correct theory underlying reality is over-complex and irrational.

It's more interesting and productive to think about constructing these kinds of realities than pondering whether you live in one.

Comment author: LucasSloan 05 September 2010 03:16:30AM 1 point [-]

From my (admittedly somewhat limited) understanding of QM, with classical computers we will only be able to simulate a single-worldline at once.

Not true. Our physics is a set of simple mathematical rules which are Turing computable. The problem with simulating many Everett branches is that we will quickly run out of memory in which to store their details.
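A toy calculation of that memory blowup, assuming a full state-vector representation of n entangled qubits (the byte counts are illustrative, not a statement about any real simulator):

```python
# Storing the full state vector of n entangled qubits takes 2**n complex
# amplitudes; assume 16 bytes each (two 64-bit floats per complex number).
BYTES_PER_AMPLITUDE = 16

def state_vector_bytes(n_qubits):
    return (2 ** n_qubits) * BYTES_PER_AMPLITUDE

print(state_vector_bytes(10))  # 16384 bytes - trivial
print(state_vector_bytes(40))  # ~17.6 terabytes - hopeless on a desktop
```

Each additional qubit doubles the requirement, which is the "quickly run out of memory" problem in miniature: the branching structure grows exponentially while classical storage grows linearly with hardware.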

Comment author: jacob_cannell 05 September 2010 03:59:32AM 0 points [-]

I should have been more clear: we will be able to simulate more than a single worldline classically, but at high cost. An exponentially expanding set of Everett branches would of course be intractable using classical computers.

Comment author: LucasSloan 05 September 2010 04:23:18AM -1 points [-]

Ah, I see what your problem is. You're cheering for "quantum computers" because they sound cool and science-fiction-y. While quantum computing theoretically provides ways to very rapidly solve certain sorts of problems, it doesn't just magically solve all problems. Even if the algorithms that run our universe are well suited to quantum computing, they still run into the speed and memory issues that classical computers do; they would just run into them a little later. (Although even that's not guaranteed - the speed of the quantum computer depends on the number of entangled qubits, and for the foreseeable future it will be easier to get more computing power by adding to the size of our classical computing clusters than by ganging more small sets of entangled qubits together.) The accurate statement you should be making is that modeling many worlds with a significant number of branches or scope is intractable using any foreseeable computing technology.

Comment author: Douglas_Knight 05 September 2010 04:54:20AM 1 point [-]

Quantum computers efficiently simulate QM. That was Feynman's reason for proposing them in the first place.

Comment author: timtyler 05 September 2010 03:18:41AM *  0 points [-]

If they are simulating all of the potential branches, wouldn't they expect that agents on at least some of the Everett branches will catch on and try to get out of the box.

You suggest that you haven't seen anyone who is trying to get out of the box yet...?

Comment author: Perplexed 05 September 2010 03:38:12AM *  5 points [-]

I grew up being taught that I would escape from the box by dying in a state of grace. Now I seem to be in a community that teaches me to escape from the box by dying at a sufficiently low temperature.

Edit: "dying", not "dieing". We are not being Gram stained here!

Comment author: jacob_cannell 05 September 2010 03:57:41AM 0 points [-]

That made me laugh.

But personally I hope we just figure out all this Singularity box stuff pretty soon.

Comment author: Perplexed 04 September 2010 07:38:14PM *  2 points [-]

Personally, I suspect you have been reading the Old Testament too much.

ETA: Genesis 11

1 Now the whole world had one language and a common speech. 2 As men moved eastward, they found a plain in Shinar and settled there. 3 They said to each other, "Come, let's make bricks and bake them thoroughly." They used brick instead of stone, and tar for mortar. 4 Then they said, "Come, let us build ourselves a city, with a tower that reaches to the heavens, so that we may make a name for ourselves and not be scattered over the face of the whole earth." 5 But the LORD came down to see the city and the tower that the men were building. 6 The LORD said, "If as one people speaking the same language they have begun to do this, then nothing they plan to do will be impossible for them. 7 Come, let us go down and confuse their language so they will not understand each other." 8 So the LORD scattered them from there over all the earth, and they stopped building the city. 9 That is why it was called Babel because there the LORD confused the language of the whole world. From there the LORD scattered them over the face of the whole earth.

Comment author: Pavitra 04 September 2010 08:20:49PM 1 point [-]

Haha... wow.

Point taken.

Comment author: PhilGoetz 07 September 2010 02:03:39AM 0 points [-]

I wonder how much it would cripple AIs to have justified true belief in God? More precisely, would it slow their development by a constant factor; compose it with e.g. log(x); or halt it at some final level?

The existence of a God provides an easy answer to all difficult questions. The more difficult a question is, the more likely such a rational agent is to dismiss the problem by saying, "God made it that way". Their science would thus be more likely than ours to be asymptotic (approaching a limit); this could impose a natural brake on their progress. This would of course also greatly reduce their efficiency for many purposes.

BTW, if you are proposing boxing AIs as your security, please at least plan on developing some plausible way of measuring the complexity level of the AIs, and indications they suspect what is going on, and automatically freezing the "simulation" (it's not really a simulation of AIs, it is AIs) when certain conditions are met. Boxing has lots of problems dealt with by older posts; but aside from all that, if you are bent on boxing, at least don't rely completely on human observation of what they are doing.

People so frequently arrive at boxing as the solution for protecting themselves from AI, that perhaps the LW community should think about better and worse ways of boxing, rather than simply dismissing it out-of-hand. Because it seems likely that somebody is going to try it.

Comment author: jacob_cannell 08 September 2010 07:50:51PM *  0 points [-]

I wonder how much it would cripple AIs to have justified true belief in God? More precisely, would it slow their development by a constant factor; compose it with eg log(x); or halt it at some final level?

This is unclear, and I think it is premature to assume it slows development. True atheism wasn't a widely held view until the end of the 19th century, and is mainly a 20th century phenomenon. Even its precursor - deism - didn't become popular amongst intellectuals until the 19th century.

If you look at individual famous scientists, the pattern is even less clear. Science and the church did not immediately split, and most early scientists were clergy, including notables popular with LW such as Bayes and Ockham. We may wonder if they were 'internal atheists', but this is only speculation (however, it is in at least some cases true, as the first modern atheist work was of course written by a priest). Newton, for one, spent a huge amount of time studying the bible, and his apocalyptic beliefs are now well popularized. I wonder how close his date of 2060 will end up being to the Singularity.

But anyway, there doesn't seem to be a clear association between holding theistic beliefs and capacity for science - at least historically. You'd have to dig deep to show an effect, and it is likely to be quite small.

I think more immediate predictors of scientific success are traits such as curiosity and obsessive tendencies - having a God belief doesn't prevent curiosity about how God's 'stuff' works.

Comment author: Houshalter 03 September 2010 07:35:54PM 0 points [-]

But what is the plan for turning the simulated AI into FAI or at least creating FAI on their own that we can use?

Comment author: jacob_cannell 03 September 2010 08:14:12PM 0 points [-]

The idea is that this could be used to bootstrap that process. This is a route towards developing FAI: finding, developing, and selecting minds towards the FAI spectrum.