
RobinHanson comments on Malthusian copying: mass death of unhappy life-loving uploads - Less Wrong Discussion

Post author: Stuart_Armstrong | 02 July 2012 04:37PM | 12 points



You are viewing a single comment's thread.

Comment author: [deleted] 06 July 2012 12:01:06AM 4 points

I used to work on a program designed to run binaries compiled for one processor on another. It was only meant to run binaries from a single minor revision of a GNU/Linux distro on one processor, on the same minor revision of the same distro on another processor.

We had access to the source code of the distro -- and got some changes made to make our job easier. We had access to the full chip design of one chip (to which, again, there were changes made for our benefit), and to the published spec of the other.

We managed to get the product out of the door, but every single code change -- even, at times, changes to non-functional lines of code like comments -- would cause major problems (mention the phrase "Java GUI" to me even now, a couple of years later, and I'll start to twitch). We would only support a limited subset of functionality, it would run at a fraction of the speed, and even that took a hell of a lot of work to do at all.

Now, that was just making binaries compiled for a distro for which we had the sources to run on a different human-designed von Neumann-architecture chip.
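To give a flavour of what running one chip's binaries on another involves at its very simplest, here is a toy fetch/decode/dispatch loop in Python. This is purely illustrative: the instruction set, opcodes, and register names are all invented for the sketch, and real binary translation of the kind described above is vastly more involved (real ISAs, memory models, syscalls, timing).

```python
def emulate(program, registers=None):
    """Interpret a toy instruction set (hypothetical, for illustration).

    Each instruction is a tuple: ("LOAD", reg, imm), ("ADD", dst, src),
    ("JNZ", reg, target), or ("HALT",). Returns the final register file.
    """
    regs = dict(registers or {})
    pc = 0  # program counter
    while pc < len(program):
        op, *args = program[pc]
        if op == "LOAD":              # LOAD reg, immediate
            regs[args[0]] = args[1]
            pc += 1
        elif op == "ADD":             # ADD dst, src  (dst += src)
            regs[args[0]] += regs[args[1]]
            pc += 1
        elif op == "JNZ":             # JNZ reg, target (jump if reg != 0)
            pc = args[1] if regs[args[0]] != 0 else pc + 1
        elif op == "HALT":
            break
        else:
            raise ValueError(f"unknown opcode: {op}")
    return regs

# Example guest program: sum 3 + 2 + 1 into r0 by looping.
prog = [
    ("LOAD", "r0", 0),    # accumulator
    ("LOAD", "r1", 3),    # loop counter
    ("LOAD", "r2", -1),   # decrement constant
    ("ADD", "r0", "r1"),  # r0 += r1
    ("ADD", "r1", "r2"),  # r1 -= 1
    ("JNZ", "r1", 3),     # loop back while r1 != 0
    ("HALT",),
]

final = emulate(prog)  # final["r0"] == 6
```

Even this toy loop hints at the cost: every guest instruction becomes several host operations, which is one reason emulated code runs at a fraction of native speed.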

Given my experience of doing even that, I'd say the amount of time it would take (even assuming continued progress in processor speeds and storage capacity, which is a huge assumption) to get human brain emulation to the point where an emulated brain can match a real one for reliability and speed is in the region of a couple of hundred years, yes.

Comment author: RobinHanson 06 July 2012 10:15:32AM 1 point

Yes, emulation can be hard. But even so, writing software with the full power of the human brain from scratch seems much harder. If you agree, then you should still expect emulations to be the first AI to arrive.

Comment author: [deleted] 06 July 2012 11:13:32AM 0 points

I disagree. In general I think that, once the principles involved are fully understood, writing a program from scratch that performs the same generic tasks as the human brain would be easier than emulating a specific human brain.

In fact I suspect that the code for an AI itself, if one is ever created, will be remarkably compact -- possibly the kind of thing that could be knocked up in a few lines of Perl once someone has the correct insights into the remaining problems. AIXI, for example, would be a trivially short program to write, if one had the computing power necessary to make it workable (which is not going to happen, obviously).
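To illustrate the "compact core" intuition, here is a one-step simplification of AIXI's decision rule in Python. This is a sketch under heavy assumptions: AIXI proper mixes over all computable environments under Solomonoff's incomputable universal prior and plans over a full horizon, whereas this toy replaces that with a hand-picked finite set of environment models and a single-step lookahead. All the names and the model class are invented for illustration.

```python
def choose_action(envs, history, actions):
    """One-step simplification of AIXI's expectimax rule.

    `envs` is a list of (complexity_bits, reward_fn) pairs; each model
    is weighted 2**-complexity_bits, echoing the shape of the universal
    prior (simpler hypotheses get more weight). Picks the action with
    the highest prior-weighted expected reward. Purely illustrative,
    not a real AIXI implementation.
    """
    def weighted_reward(action):
        return sum(2.0 ** -bits * reward_fn(history, action)
                   for bits, reward_fn in envs)
    return max(actions, key=weighted_reward)

# Two toy models of the world: a simple one (2 bits of "complexity")
# says action 1 pays off; a complex one (10 bits) says action 0 does.
# The simpler model dominates the mixture, so action 1 is chosen.
envs = [
    (2,  lambda h, a: 1.0 if a == 1 else 0.0),
    (10, lambda h, a: 1.0 if a == 0 else 0.0),
]

best = choose_action(envs, [], [0, 1])  # best == 1
```

The decision rule itself really is only a few lines; as noted above, all the difficulty is hidden in the prior and in the computing power the full version would demand.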

My view (and it is mostly a hunch) is that implementing generic intelligence will be a much, much easier task than implementing a copy of a specific intelligence that runs on different hardware. It's much like writing a racing game: it's far easier to implement a car that has only the properties the game needs than to emulate an entire existing car down to the emissions coming out of the exhaust pipe and a model of the screwed-up McDonald's wrapper under the seat. The latter would be 'easy' in the sense of copying what's already there rather than creating something from basic principles, but I doubt it would be easier to do in practice.