to successfully emulate a brain, we might need to emulate this wild neural randomness. However, that seems to remove the possibility that the emulation will continue on as the original person. Perhaps our very effort to emulate a specific human brain results in our producing an entirely different person.
If noise is essential to central processes in everyday human cognition, then (as a result of this noise alone) an emulation would be no more different from its original mind than any person is from his or her (recent) former self.
I feel like Shores missed important subtleties, which resulted in his getting sidetracked from the actually important bits: subtleties like emergence not being magic.
Okay, not subtleties... what's the word...
Note that, if one accepts Moravec's premise that mind depends on pattern-identity rather than body-identity, it remains a further assumption that the analog nature of brain processes doesn't matter: if brain processes really are analog, that's a feature of the pattern, not just the body. Digital processes that generate equally effective problem-solving results may or may not do so via the same mental events. Or maybe that falls into the semantic gray zone of mental terminology.
If all you want to do is hire an effective engineer, it doesn't matter. (Assuming gwern is correct that the learning advantage of analog systems is imaginary.) If you want to create fun, it might.
“neural irregularities as pink noise, which is also called 1/f noise”
A few minutes of fooling around with a color tool will show you that the spectrum of pink light is flat (white) with a notch at green, while a 1/f spectrum is brown - nothing at all resembling pink. The misnomer of “pink” for 1/f noise seems to come from two misconceptions: that flat plus a pole at red is pink (it's not; it's red), and that such a spectrum is 1/f (it's not; it's flat with a pole at red).
It is a pity this idea has gotten so much traction in the English language, as it is so horribly wrong; it's one of those things Pauli would have described as “not even wrong.”
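Whatever one thinks of the color analogy, the signal-processing usage is at least precise and checkable: “pink” noise simply names noise whose power spectral density falls off as 1/f. A minimal sketch (assuming NumPy; the log-log slope fit is rough, not a careful estimator) that synthesizes such noise from white noise and confirms the exponent:

```python
# Shape white Gaussian noise in the frequency domain so that power ~ 1/f,
# then fit the PSD slope on a log-log scale to verify it is near -1.
import numpy as np

rng = np.random.default_rng(0)
n = 2**18

white = rng.standard_normal(n)
spectrum = np.fft.rfft(white)
freqs = np.fft.rfftfreq(n)
spectrum[1:] /= np.sqrt(freqs[1:])   # amplitude ~ 1/sqrt(f) => power ~ 1/f
pink = np.fft.irfft(spectrum, n)

psd = np.abs(np.fft.rfft(pink))**2
slope, _ = np.polyfit(np.log(freqs[1:]), np.log(psd[1:]), 1)
print(f"fitted PSD slope: {slope:.2f} (ideal 1/f noise: -1)")
```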
“Misbehaving Machines: The Emulated Brains of Transhumanist Dreams”, by Corry Shores (grad student; Twitter, blog), is another recent JET paper. Abstract:
1 Introduction
2 We are such stuff as digital dreams are made on
3 Encoding all the sparks of nature
4 Arising minds
5 Mental waves and pulses: analog vs. digital computation
At this point, Shores summarizes an argument from Fred Dretske which seems to me so blatantly false that it could not possibly be what Dretske meant, so I will refrain from excerpting it and passing on my own misconceptions. Continuing on:
This is worth examining in more depth. The citation Murray 1991 is “Analogue noise-enhanced learning in neural network circuits”; the PDF is not online and the abstract is not particularly helpful:
Fortunately, Shores has excerpted key bits on his blog in “Deleuze’s & Guattari’s Neurophysiology and Neurocomputation”; I will reproduce them below:
While it’s hard to say without the full paper, this seems to be the exact same analogue argument as before: analogue systems supposedly enjoy more degrees of freedom and so can answer more questions about similarly free systems, and the brain may be such a free system. Parts of the quotes undermine the idea that analogue offers any additional power in concrete practice (“the learning process is sufficiently slow to effectively ‘see through’ the noise in an analogue system”), and extending this to brains is unwarranted for the same anti-analogue reasons as before - in a quantized universe, you only need more bits to get as much precision as exists.
In the paper, the neural network apparently uses 8-bit or 16-bit words; perhaps 32 bits would be enough to reach the point where quantization error is only as bad as the analogue noise, and if not, then perhaps 64 bits (now standard on commodity computers in 2011) or 128 bits would be (commodity computers already provide 128-bit special-purpose vector registers, and past and present architectures have used them).
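To put rough numbers on the “more bits” point: quantization error halves with each added bit, so it quickly drops below any fixed analogue noise floor. A toy comparison (the 1% RMS analogue-noise figure is my assumption for illustration, not a number from Murray 1991):

```python
# Compare RMS quantization error at several word lengths against an
# assumed analogue noise floor. (Beyond ~52 bits, float64 arithmetic
# itself becomes the limit of this toy, so we stop at 32.)
import numpy as np

rng = np.random.default_rng(0)
signal = rng.uniform(-1.0, 1.0, 100_000)   # toy signal on [-1, 1]
analogue_noise_rms = 0.01                  # assumed 1% analogue noise

for bits in (8, 16, 32):
    step = 2.0 / (2**bits)                 # quantization step on [-1, 1]
    quantized = np.round(signal / step) * step
    q_rms = float(np.std(signal - quantized))
    verdict = "below" if q_rms < analogue_noise_rms else "above"
    print(f"{bits:2d} bits: quantization RMS {q_rms:.1e} "
          f"({verdict} the {analogue_noise_rms:.0e} analogue floor)")
```

At these settings even 8 bits already lands below a 1% noise floor, which only strengthens the suspicion that word length is not the binding constraint.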
6 Even while men’s minds are wild?
The pink noise point seems entirely redundant with the previous 2 paragraphs. If the PRNGs are adequate for the latter kinds of noise, they are adequate for the former, which is merely one of many noise types statisticians simulate all the time besides the ‘normal distribution’. As well, PRNGs are something of a red herring here: genuine quantum random-number generators for computers are old hat, and other kinds of hardware can produce staggering quantities of randomness. (I read a few years ago of laser RNGs producing a gigabyte or two per second.)
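Distribution shape is never the bottleneck for a PRNG: given uniform deviates, inverse-transform sampling yields any distribution with an invertible CDF. A minimal sketch using only Python’s standard library (the exponential is just one example; the same trick covers heavier-tailed choices):

```python
# Inverse-transform sampling: map uniform deviates through the inverse CDF.
import math
import random

random.seed(42)                      # deterministic PRNG, no hardware needed

def exponential(rate: float) -> float:
    """Inverse-transform sample: F^-1(u) = -ln(1-u)/rate."""
    u = random.random()              # uniform on [0, 1)
    return -math.log1p(-u) / rate

samples = [exponential(2.0) for _ in range(100_000)]
print(f"sample mean: {sum(samples)/len(samples):.3f} (theory: 0.500)")
```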
If the inputs are the same to a perfect emulation, the outputs will be the same by definition. This is just retreading the old question of divergence; 1/f noise adds nothing new to the question. If there is a difference in the emulation or in the two inputs, of course there may be arbitrarily small or large differences in outputs. This is easy to see with simple thought-experiments that owe nothing to noise.
Imagine that the brain is completely devoid of chaos or noise or anything previously suggested. We can still produce arbitrarily large divergences based on arbitrarily small differences, right down to a single bit. Here’s an example thought-experiment: the subject resolves, before an uploading procedure, that he will recall a certain memory in which he looks at a bit; if the bit is 1 he will become a Muslim, and if it is 0 he will become an atheist. He is uploaded, and the procedure goes perfectly except for one particular bit - which happens to be the very bit in that memory; the original and the upload then, per their resolution, examine their memories and become an atheist and a Muslim respectively. One proceeds to blow himself up in the local mall and the other spends its time ranting online about the idiocy of theism. Quite a divergence, but one can imagine greater divergences if one must. Now, are they the same people after carrying out their resolution? Or different? An answer to this would seem to cover noise just as well.
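The thought-experiment is deterministic enough to mechanize. A toy sketch (the function name and bit index are hypothetical, purely for illustration):

```python
# Two bit-identical deterministic "minds" branch on a single memory bit;
# no noise or chaos anywhere, yet one flipped bit produces an arbitrarily
# large behavioral divergence.
def life_path(memory):
    fateful_bit = memory[7]          # the bit the subject resolved to check
    return "becomes a Muslim" if fateful_bit else "becomes an atheist"

original = [0] * 16                  # the pre-upload memory, as bits
upload = original.copy()
upload[7] ^= 1                       # the one bit the procedure got wrong

print("original:", life_path(original))   # -> becomes an atheist
print("upload:  ", life_path(upload))     # -> becomes a Muslim
```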
And let’s not forget the broader picture. We obviously want the upload to initially be as close as possible to the original, but if there were never to be any difference, that would completely eliminate the desirability of uploads: even if we mirror the upload and original in lockstep, what do we do with the upload when the original dies? Faithfully emulate the process of dying and then erase it, to preserve the moment-to-moment isomorphism? Of course not - we’d keep running it, at which point it diverges quite a bit. (“What do you mean, I’m an upload and the original has just died?!”) One could say, with great justice, that for transhumanists, divergence is not an obstacle to personal identity/continuity but the entire reason uploading is desirable in the first place.
7 Further reading
Currently there doesn’t seem to be any discussion of Shores’s paper besides J.S. Milam’s “Digital Beings Are Different Beings”, which is just a mess. For example:
Perhaps it’s just me, but I think the author doesn’t grasp Cantorian cardinality at all (different-sized infinities? No, the infinite sets at issue are all of the same cardinality, in the same way ‘all powers of 2’ is the same size as ‘the even integers’), and the rest doesn’t read much more sensibly.
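To spell out the parenthetical (my sketch, not anything in Milam’s post): the two sets are the same size because each admits a bijection with the naturals:

```latex
% Both sets are countably infinite, hence of identical cardinality:
\[
  f\colon \mathbb{N}\to\{2^n : n\in\mathbb{N}\},\quad f(n)=2^n
  \qquad
  g\colon \mathbb{N}\to\{2n : n\in\mathbb{N}\},\quad g(n)=2n
\]
% f and g are bijections, so g composed with f^{-1} maps the powers of 2
% one-to-one onto the even (non-negative) integers, and both sets have
% cardinality aleph_0. (Allowing negative evens changes nothing, since
% Z is countable too.)
```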