http://www.xuenay.net/Papers/DigitalAdvantages.pdf

Abstract: I survey four categories of factors that might give a digital mind, such as an upload or an artificial general intelligence, an advantage over humans. The categories are hardware advantages, self-improvement advantages, co-operative advantages and human handicaps. The shape of hardware growth curves as well as the ease of modifying minds are found to be some of the core influences on how quickly a digital mind may take advantage of these factors.

Still a bit of a rough draft (could use a bunch of tidying up, my references aren't in a consistent format, etc.), but I wanted to finally get this posted somewhere public so I could get further feedback.

  • where'd you run across this? I thought I was the only one on LW who knew it:

    'The President's Council of Advisors on Science and Technology (2010) mentions that performance on a benchmark production planning model improved by a factor of 43 million between 1988 and 2003. Out of the improvement, a factor of roughly 1,000 was due to better hardware and a factor of roughly 43,000 was due to improvements in algorithms. Also mentioned is an algorithmic improvement of roughly 30,000 for mixed integer programming between 1991 and 2008.'

  • oh, and as far as algorithmic improvement goes, integer factorization is apparently even more impressive than the linear programming improvements, but I haven't been able to re-find my reference for that
  • this may not matter since you're submitting it to Goertzel, but for a more general academic audience, I think Chalmers's singularity paper would be much better than Yudkowsky's
  • also, your human biases section could use examples of zany computer solutions. eg http://lesswrong.com/lw/2l0/should_i_believe_what_the_siai_claims/2fsh

Overall, the paper seems kind of lacking in meat to me.

where'd you run across this? I thought I was the only one on LW who knew it:

It was on Slashdot.

this may not matter since you're submitting it to Goertzel, but for a more general academic audience, I think Chalmers's singularity paper would be much better than Yudkowsky's

Good point, I'd forgotten about Chalmers. I'll work in a couple of cites to him.

also, your human biases section could use examples of zany computer solutions. eg http://lesswrong.com/lw/2l0/should_i_believe_what_the_siai_claims/2fsh

Those are good examples, I'll work in some of that and other examples besides. Thanks.

For example, in the hardware section you could bring up ASICs and FPGAs as technologies that vastly speed up particular algorithms - not an option ever available to humans except indirectly as tools.

In the mind section, you could point out the ability of an upload to wirehead itself, eliminating motivation and akrasia issues. (Perhaps a separate copy of the mind could be in charge of judging when the 'real' mind deserves a reward for taking care of a task.)

Or you could raise the possibility of entirely new sensory modalities, like the 'code modality' I think Eliezer proposed in LOGI - regular humans can gain new modalities with buzzing compass belts and electrical prickles onto the tongue and whatnot, but it'd be difficult to figure out a way more direct than 2D images for code. An upload could just feed the binary bits into an appropriate area of simulated neurons and let the network figure it out and adapt (like in the real-world examples of new sensory modalities).

In a previous version of the paper, I had the following paragraphs. I deleted them when I added the current explanation of mental modules because I felt these became redundant. Do you think I should add them, or parts of them, back?

A digital mind could achieve qualitative improvements over human reasoning by designing new kinds of mental modules. As an example of a mental module providing a qualitative advantage, children aged two understand the meaning of the word ”one”, but not that of other numbers. Six to nine months later, they learn what ”two” means. Some months later they learn the meaning of ”three”, and shortly thereafter they induce counting in general (Carey 2004). If we had general intelligence but no grasp of numbers, we would be incapable of thinking about mathematics, and therefore incapable of thinking many of the kinds of thoughts that are the basis of science.

There are a number of conditions in which humans lose various qualitative reasoning abilities, without the rest of their general intelligence being impaired. Dyslexia involves difficulty with reading and spelling, and manifests itself in people of all levels of intelligence (Shaywitz 1998). In neglect, patients lose awareness of part of their visual field. Ramachandran and Blakeslee (1998) report on a neglect patient who was shown, via a mirror, a pen on her neglected left side. While she consciously recognized the mirror as such and knew what it did, when asked to grab the pen she would claim it to be behind the mirror and attempt to reach through the mirror. Anosognosia patients (Cutting 1978) have a bodily disorder such as blindness or a disabled arm, but are unable to believe this, and instead confabulate explanations of why they happen to bump into things or how the disabled arm isn't really theirs. They falsely believe themselves to be fully healthy, yet their reasoning is otherwise intact.

What kinds of modules could provide a qualitative reasoning improvement over humans? Brooks (1987) mentions invisibility as an essential difficulty in software engineering. Software cannot be visualized in the same way physical products can be, and any visualization can only cover a small part of the software product. Yudkowsky (2007) discusses the notion of a codic cortex designed to natively visualize code in the same way the human visual cortex evolved to natively model the world around us.

A codic cortex can be considered a special case of directly integrating complex models into a mind. Humans employ various complex external models, such as weather simulations, which are not directly integrated into our minds. We can only study a small portion of the model at a time, which makes it difficult to detect subtle errors. For better comprehension, we re-create partial models in our minds (insert some cite here), where they are directly accessible and integrated with the rest of our minds. The ability to directly integrate external models into our minds could make it possible for us to e.g. directly pick up on all the relevant details of a weather simulation, in the same way that we can very quickly pick up all the relevant details of a picture presented to us.

Well, it's a start and better than nothing. If I were bringing in numbers here, I wouldn't focus on counting but bring in blind mathematicians and geometry, and I'd also focus on the odd sensory modality of subitization.

Substitute

increase the morality rate -> increase the mortality rate

Thanks, I'll fix that.

I'm having a lot of trouble understanding the second paragraph in section 2.1.2, especially the sentence "Amdahl's law assumes that the size of the problem stays constant as the number of processors increases, but Gustafson (1988) notes that in practice the problem size scales with the number of processors." Can you expand on what you mean here?

Edit: Also there's a typo in 4.1- "practicioners".

I think the point is that when you increase the data set, you then expose more work for the parallelism to handle.

If I have a 1kb dataset and I have a partially parallel algorithm to run on it, I will very quickly 'run out of parallelism' and find that 1000 processors are as good as 2 or 3. Whereas, if I have a 1pb dataset of the same kind of data and run the same algorithm, I will be able to add processors for a long time before I finally run out of parallelism.

gwern's explanation is right. Gustafson's law. I'll clarify that.
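
To make the contrast concrete, here's a minimal sketch comparing the two speedup formulas; the parallel fraction of 0.95 and the processor counts are hypothetical values chosen only for illustration, not numbers from the paper:

```python
# Minimal sketch: Amdahl's law (fixed problem size) vs. Gustafson's law
# (problem size grows with the processor count). The parallel fraction
# p = 0.95 and the processor counts below are hypothetical illustration values.

def amdahl_speedup(p: float, n: int) -> float:
    """Fixed-size speedup: the serial fraction (1 - p) caps the gain as n grows."""
    return 1.0 / ((1.0 - p) + p / n)

def gustafson_speedup(p: float, n: int) -> float:
    """Scaled speedup: the parallel part of the workload expands with n."""
    return (1.0 - p) + p * n

if __name__ == "__main__":
    p = 0.95
    for n in (2, 10, 100, 1000):
        print(f"n={n:>4}  Amdahl: {amdahl_speedup(p, n):7.1f}x   "
              f"Gustafson: {gustafson_speedup(p, n):7.1f}x")
```

With these assumed numbers, Amdahl's speedup saturates just under 20x no matter how many processors are added, while Gustafson's scaled speedup keeps growing, because the larger dataset exposes more parallel work.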

Are you trying to keep this intentionally conservative and subdued? Because I found the parts about improvements to human uploads rather... very unimaginative.

(then again, the circumstances around me imagining that might've been less... err... very conducive to imagining things as a human. If you know what I mean. I'm a bit disappointed because I didn't learn anything new from this article.)

[This comment is no longer endorsed by its author]