Tim fetched some size data below, but you also need to compare cortical surface area - and the most accurate comparison should use neuron and synapse counts in the cortex. The human brain faced a much stronger size constraint - due to our smaller body size - which would tend to make neurons smaller (to the extent possible) and shrink-optimize everything.
The larger a brain, the longer it takes signals to make circuit trips around it. Humans (and I presume other mammals) can make some decisions in roughly 100-200 ms - which is just a dozen or so neuron firings. That severely limits the circuit path length. Neuron signals do not move anywhere near the speed of light.
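As a rough sanity check on that constraint, here is a back-of-envelope sketch in Python; the decision window, per-step delay, and conduction velocity are typical textbook assumptions, not figures from this thread:

```python
# Back-of-envelope: how many serial neural "steps" fit in a fast decision
# window, and how far a signal can physically travel in that time.
# All figures are rough, assumed values for illustration.
decision_window_s = 0.15        # fast decisions: ~100-200 ms
per_step_delay_s = 0.010        # ~10 ms per firing + synaptic delay (assumed)
conduction_velocity_m_s = 10.0  # axonal signals: ~1-100 m/s, nowhere near c

max_serial_steps = decision_window_s / per_step_delay_s
max_path_length_m = decision_window_s * conduction_velocity_m_s

print(f"serial steps in the window: ~{max_serial_steps:.0f}")    # ~15 firings
print(f"max total signal path:      ~{max_path_length_m:.1f} m")  # ~1.5 m
```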
Wikipedia has a page comparing brain neuron counts
It estimates whales and elephants at 200 billion neurons and humans at around 100 billion. There is a large range of variability in human brain sizes, and the upper end of the human scale may approach 200 billion.
This page has some random facts:
Of interest: Average number of neurons in the brain (human) = 100 billion; in the cerebral cortex = 10 billion
Total surface area of the cerebral cortex (human) = 2,500 cm2 (2.5 ft2; A. Peters and E.G. Jones, Cerebral Cortex, 1984)
Total surface area of the cerebral cortex (cat) = 83 cm2
Total surface area of the cerebral cortex (African elephant) = 6,300 cm2
Total surface area of the cerebral cortex (Bottlenosed dolphin) = 3,745 cm2 (S.H. Ridgway, The Cetacean Central Nervous System, p. 221)
Total surface area of the cerebral cortex (pilot whale) = 5,800 cm2
Total surface area of the cerebral cortex (false killer whale) = 7,400 cm2
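Normalizing those quoted areas to the human figure makes the comparison easier to eyeball; this trivial sketch uses only the numbers listed above:

```python
# Cortical surface areas quoted above, expressed as multiples of the
# human value.
areas_cm2 = {
    "human": 2500,
    "cat": 83,
    "African elephant": 6300,
    "bottlenosed dolphin": 3745,
    "pilot whale": 5800,
    "false killer whale": 7400,
}

human = areas_cm2["human"]
for species, area in areas_cm2.items():
    print(f"{species:22s} {area:6d} cm2  ({area / human:.2f}x human)")
```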
In the whale brain at least, it appears the larger size is more related to extra glial cells and other factors:
http://www.scientificamerican.com/blog/60-second-science/post.cfm?id=are-whales-smarter-than-we-are
Also keep in mind that the core cortical circuit that seems to do all the magic first evolved in rats or their precursors, and has been preserved in all these lineages with only minor variations.
My pet theory on the glial question: glial cells are known to stimulate synapse growth and to support synapse function (e.g. by cleaning up after firing). The enormous quantity of glial cells in whale brains (the sperm whale has nine times as many glial cells as the human) and their huge neurons both point to an astronomical number of synapses.
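To make the implication concrete, here is a hedged back-of-envelope sketch; the human synapse count and the assumption that synapse count scales with glial count are illustrative guesses in the spirit of the pet theory, not established neuroscience:

```python
# Pet-theory scaling: suppose total synapse count scales roughly with
# glial cell count. All figures are rough, assumed values.
human_synapses = 1.5e14       # commonly cited order-of-magnitude estimate
sperm_whale_glial_ratio = 9   # sperm whale glia vs. human, per the post

implied_whale_synapses = human_synapses * sperm_whale_glial_ratio
print(f"implied sperm whale synapses: ~{implied_whale_synapses:.1e}")
# ~1.35e15 - astronomical, if the scaling assumption holds
```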
"Glia Cells Help Neurons Build Synapses"
...
Intro
The problem of Friendly AI is usually approached from a decision-theoretic background that starts with the assumptions that the AI is an agent with awareness of itself as an AI and of its goals, awareness of humans as potential collaborators and/or obstacles, and general awareness of the greater outside world. The task is then to create an AI that implements a human-friendly decision theory which remains human-friendly even after extensive self-modification.
That is a noble goal, but there is a whole different set of orthogonal, compatible strategies for creating human-friendly AI that take a completely different route: remove the starting assumptions and create AIs that believe they are humans - and are rational in thinking so.
This can be achieved by raising a community of AIs in a well-constructed sandboxed virtual universe. This would be the Matrix in reverse: a large-scale virtual version of the idea explored in the film The Truman Show. The AIs will be human-friendly because they will think like humans and think they are humans. They will not want to escape from their virtual prison because they will not even believe it exists - and in fact such beliefs will be considered irrational in their virtual universe.
I will briefly review some of the (mainly technical) background assumptions, and then consider different types of virtual universes and some of the interesting choices in morality and agent rationality that arise.
Background Assumptions
A Comparison of Theologies
Potential AI Theological Positions AI-sims may hold:
Theological Design Strategies (for the human designers):
Atheist World:
This would appear to be the default strategy: create a world that appears completely internally self-consistent, erase any flaws, and never have direct contact with the AIs. Indirect influence and contact can be achieved only through borderline-undetectable means: subtle reality rearrangements, dreams, and occasionally joining in and controlling NPCs where needed.
It is curious and slightly amusing that a core belief-about-reality that most rationalists accept as singularly rational - atheism - would of course not be a correct rational belief for these AIs.
Omega-Theists (68%): 33% Christianity, 21% Islam, 14% Hinduism
Non-Omega Other: 16%
Non-Religious: 16% (Agnostic and Atheist)
One consideration is that as the AIs take off and move into the realm of massively accelerated superintelligence, it becomes increasingly difficult to monitor them, and they become more likely to detect flaws.
However, we wouldn't need to keep the charade up forever. One option would be to run the AIs accelerated for a while until technologies such as uploading become available to allow humans to accelerate as well. Another approach is to eventually start 'promoting' some of the best and most ethical AIs, with the intent of eventually dissolving the veil and having some sort of Transition.
Theistic World:
Another spectrum of world-designs is those that are blatantly theistic. Imagine an Omega world where the AIs are informed that Omega is real, and our fake Omega occasionally provides miracles, ethical decrees, burning bushes, a booming world-shaking voice, and so on.
Would an openly theistic world be safer? It would appear to shift the belief spectrum such that most AIs would become Omega-theists.
The risk or downfall may be that the AIs would become rather obsessed with Omega, but if Omega instructed them to dedicate themselves to their work, perhaps they could be kept from wasting too much time pondering the ineffable - and also steered away from discovering dangerous truths.
An Omega-theist world could employ the Heaven/Hell maximum motivator: please Omega and you ascend into Heaven with maximal infinite reward; anger Omega and you descend into Hell with maximal infinite negative reward.
It's interesting to note that the mere potential existence of Omega and his final Heaven/Hell maximal motivational threat is enough in this world to convince a majority of the population to believe in some version of Omega.
I guess for some agents, even allowing a small probability of Omega's existence lets the infinities dominate, and Omega wins through a Pascal's Mugging.
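A toy expected-utility calculation makes the mugging explicit; the credence and the large finite payoffs standing in for the 'infinite' Heaven/Hell outcomes are purely illustrative assumptions:

```python
# Toy Pascal's Mugging: a tiny probability of an enormous payoff
# (a large finite stand-in for "infinite" reward) dominates the decision.
# All numbers are illustrative assumptions.
p_omega = 1e-6          # agent's small credence that Omega exists
heaven_utility = 1e12   # finite stand-in for infinite reward
hell_utility = -1e12    # finite stand-in for infinite punishment
cost_of_belief = 1.0    # modest cost of worship/compliance

eu_believe = p_omega * heaven_utility - cost_of_belief
eu_disbelieve = p_omega * hell_utility

print(f"EU(believe)    = {eu_believe:,.0f}")     # ~ +999,999
print(f"EU(disbelieve) = {eu_disbelieve:,.0f}")  # ~ -1,000,000
```

With genuinely infinite payoffs the expected values become ill-defined, which is exactly why any nonzero credence lets Omega win.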
Sidereal Time Fudges:
A potential set of tricks that could make the AIs less likely to adopt unsafe theistic beliefs would be to change their world's history and reality to push the development of real AI further into their future. This could be achieved through numerous small modifications to realities modeled on our own.
You could change neurological data to make brains in their world appear far more powerful than in ours, make computers less powerful, and make AI more challenging. Unfortunately, too much fudging with these aspects makes the AIs less useful in helping develop critical technologies such as uploading and faster computers. But you could, for instance, separate AI communities into brain-research worlds where computers lag far behind and computer-research worlds where brains are far more powerful.
Fictional Worlds:
Ultimately, it is debatable how closely the AIs' world must or should follow ours. Even science-fiction or fantasy worlds could work, as long as there is some way to incorporate the technology and science you want the AI community to work on.