AndrewHickey comments on Malthusian copying: mass death of unhappy life-loving uploads - Less Wrong Discussion
You are viewing a comment permalink. View the original post to see all comments and the full post content.
A century is a very long time indeed. Think back to 1912.
I used to work on a program that was designed to run binaries compiled for one processor on another. It was only meant to run the binaries compiled for a single minor revision of a GNU/Linux distro on one processor on the same minor revision of the same distro on another processor.
We had access to the source code of the distro -- and got some changes made to make our job easier. We had access to the full chip design of one chip (to which, again, there were changes made for our benefit), and to the published spec of the other.
We managed to get the product out of the door, but every single code change -- even, at times, changes to non-functional lines of code like comments -- would cause major problems (mention the phrase "Java GUI" to me even now, a couple of years later, and I'll start to twitch). We would only support a limited subset of functionality, it would run at a fraction of the speed, and even that took a hell of a lot of work to do at all.
Now, that was just making binaries compiled for a distro for which we had the sources to run on a different human-designed von Neumann-architecture chip.
Given my experience of doing even that, I'd say the amount of time it would take (even assuming continued progress in processor speeds and storage capacity, which is a huge assumption) to get human brain emulation to the point where an emulated brain can match a real one for reliability and speed is in the region of a couple of hundred years, yes.
Building emulators is hard. But I think it isn't quite so hard as that, these days. Apple has now done it twice, and been able to run a really quite large subset of Mac software after each transition. Virtual machines are reasonably straightforward engineering at this point. Things like the JVM or the Microsoft common language runtime are basically emulators for an abstract virtual machine -- and they're quite robust these days with very small performance penalties. All these are certainly very large software engineering projects -- but they're routine engineering, not megaprojects, at this stage.
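To make the "abstract virtual machine" point concrete: the core of any such runtime is a dispatch loop that interprets instructions for a machine the engineers themselves defined. Here's a toy sketch of that idea — the opcodes and encoding are invented for illustration, not real JVM or CLR bytecode:

```python
# Toy stack-machine interpreter, in the spirit of the JVM/CLR: an
# "emulator" for an abstract machine whose instruction set the
# implementers get to design. Opcodes here are purely illustrative.

def run(program):
    """Execute a list of (opcode, arg) pairs and return the top of stack."""
    stack = []
    for op, arg in program:
        if op == "PUSH":       # push a constant onto the operand stack
            stack.append(arg)
        elif op == "ADD":      # pop two operands, push their sum
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "MUL":      # pop two operands, push their product
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
        else:
            raise ValueError(f"unknown opcode {op!r}")
    return stack.pop()

# (2 + 3) * 4
result = run([("PUSH", 2), ("PUSH", 3), ("ADD", None),
              ("PUSH", 4), ("MUL", None)])
print(result)  # → 20
```

The key asymmetry the next comment raises: this is easy precisely because the target machine was specified on purpose, with an interpreter in mind — unlike a physical chip, let alone a brain.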
Further, I suspect the human brain is less sensitive than software to minor details of underlying platform. Probably small changes in the physics model correspond to small changes in temperature, chemical content, etc. And an emulation that's as good as a slightly feverish and drunk person would still be impressive and even useful.
" Apple has now done it twice,"
No they didn't. At least one of those times was actually the software I described above, bought from the company I worked for. So I know exactly how hard it was to create.
"Things like the JVM or the Microsoft common language runtime are basically emulators for an abstract virtual machine" -- which the engineers themselves get to specify, design and implement,
"Further, I suspect the human brain is less sensitive than software to minor details of underlying platform. " I would love to live in a world where re-implementing an algorithm that runs on meat, so it runs on silicon instead, amounted to a 'minor detail of underlying platform'. I live i this one, however.
I had assumed we were talking about low-level emulation: the program explicitly models each neuron, and probably at a lower level than that. Physical simulation is a well-understood problem, and my impression is that chemists are pretty good at it.
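As a sense of what "explicitly models each neuron" might look like at its very simplest, here is a leaky integrate-and-fire neuron — a standard textbook model, and far cruder than anything a real brain emulation would need. All parameter values are illustrative, not calibrated to biology:

```python
# Leaky integrate-and-fire neuron: the simplest example of the kind of
# explicit per-neuron model a low-level emulation might use. A real
# emulation would need vastly more detail; parameters here are illustrative.

def simulate_lif(i_input, dt=0.1, tau=10.0, v_rest=-65.0,
                 v_thresh=-50.0, v_reset=-70.0, r_m=10.0):
    """Integrate dV/dt = (-(V - V_rest) + R_m * I) / tau over an input
    current trace i_input (one sample per dt ms). Returns the membrane
    voltage trace and the list of spike times (ms)."""
    v = v_rest
    trace, spikes = [], []
    for step, i in enumerate(i_input):
        v += dt * (-(v - v_rest) + r_m * i) / tau  # Euler step
        if v >= v_thresh:            # threshold crossed: emit a spike
            spikes.append(step * dt)
            v = v_reset              # reset the membrane potential
        trace.append(v)
    return trace, spikes

# Drive the neuron with a constant 2 nA current for 100 ms.
trace, spikes = simulate_lif([2.0] * 1000)
print(len(spikes))  # the neuron fires periodically
```

Even this trivial model shows why "minor details of underlying platform" matter less here than in software: the dynamics are continuous, so a small error in a parameter shifts the firing rate slightly rather than crashing the whole computation.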
I agree that trying to do a clever white-box reimplementation of the algorithm is probably intractable or worse. The emulation will be very far from the optimal implementation of the mind-program in question.